# [Official] Vega Frontier / RX Vega Owners Thread



## dagget3450

I noticed there has been a lot of talk about Vega clocks and benchmarks. I am currently playing around with mine; I am of course no expert, so hopefully we will get a few experts around to help out.
Here is some quick testing I did on my air-cooled Vega FE to try to stop clock throttling. I am using mine for gaming, where performance is a bit underwhelming right now.













My GPU-Z shot for now.


Wanted to add something I've noticed while testing my Vega FE: the load/TACH LEDs on the GPU do not seem to match the GPU load shown in, say, MSI Afterburner.

What I mean is that with stock settings in WattMan, for example, the clocks throttle a LOT. MSI AB will show 99/100% GPU usage, but I guess that is relative to the current clock speed. The TACH LEDs actually show anywhere from one to three LEDs unlit or blinking. I have been used to the TACH LEDs since the Fury X; they have been a great way for me to tell whether the load is really full. Even when MSI AB shows 100% GPU usage, the TACH LEDs disagree if the clocks are down from 1600. At a 1440 clock under full load there is usually one or even two LEDs unlit; sometimes I've seen two blank and a third blinking. I need to do more testing, but I can definitely see where this thing needs water cooling like the Fury X did. In fact I may end up getting universal blocks for the Vega FE IF drivers can improve it.


----------



## brucethemoose

I'd be interested in seeing some benchmarks at 1050Mhz (Stock Fury X clocks), just for reference.


----------



## Caldeio

Anyone want to test some folding? Even like 10 minutes on a workunit to get a rough ppd count would be great!









It's this or the 1080. I'm assuming rx vega will be priced under 1080 no matter performance?


----------



## brucethemoose

Quote:


> Originally Posted by *Caldeio*
> 
> Anyone want to test some folding? Even like 10 minutes on a workunit to get a rough ppd count would be great!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's this or the 1080. I'm assuming rx vega will be priced under 1080 no matter performance?


I'm still subscribing to the "bad drivers" theory, but not because that was the expectation.

A few benchmarks I've seen basically make Vega FE a Fury X with higher clocks, which makes no sense: it's a ~500 mm² 14nm die. A die-shrunk Fiji on this node would be around 350 mm², so where did that extra transistor budget go? It can't all be FP16.


----------



## Y0shi

I've made some benches, look here: http://www.planet3dnow.de/vbulletin/threads/428323-Usertest-AMD-Radeon-Vega-Frontier-Edition-Bilder-Benchmarks-Energiemessungen

4.8 -> Collatz, Einstein, Milkyway (Collatz had optimized Fury Settings in xml, the other two were stock)


----------



## Evil Penguin

Anyone running into throttling+shutdown when switching to gaming mode and running an intensive game?

I had that happen consistently but I found restoring default settings plus a system restart sorted the issue.

I thought the PSU was crapping out at first but nope, it appears to be a software issue.

Edit: Seems like changing any settings within WattMan causes erratic behavior even if it's just upping the fan speed. Anyone else running into this?


----------



## dagget3450

Quote:


> Originally Posted by *Y0shi*
> 
> I've made some benches, look here: http://www.planet3dnow.de/vbulletin/threads/428323-Usertest-AMD-Radeon-Vega-Frontier-Edition-Bilder-Benchmarks-Energiemessungen
> 
> 4.8 -> Collatz, Einstein, Milkyway (Collatz had optimized Fury Settings in xml, the other two were stock)


Interesting; a good amount of work went into that.
Quote:


> Originally Posted by *Evil Penguin*
> 
> Anyone running into throttling+shutdown when switching to gaming mode and running an intensive game?
> I had that happen consistently but I found restoring default settings plus a system restart sorted the issue.
> I thought the PSU was crapping out at first but nope, it appears to be a software issue.


Haven't had that, but these drivers are very buggy for me. I am having issues ranging from proper EDID recognition and allowed resolutions, to Eyefinity being almost unusable, to WattMan giving weird numbers and being unable to change/save the fan curve until after a reboot.

I have a feeling these drivers were rushed, so I am hoping the next ones, or the RX Vega release, get them way better and bring performance up as well.

EDIT: I am having some Eyefinity bugs that I had exactly with Fiji when it first released. Strange.


----------



## Behemoth777

Quote:


> Originally Posted by *brucethemoose*
> 
> I'd be interested in seeing some benchmarks at 1050Mhz (Stock Fury X clocks), just for reference.


Check this out:


----------



## dagget3450

So I was running the Cinebench R15 OpenGL test and I am confused. My results are way too high compared to others I see online. Am I doing it wrong? Is there a setting I am missing?


----------



## hyp36rmax

Somehow I have a feeling we'll see more performance soon. I'm on the fence with picking up two for crossfire....


----------



## AlphaC

Quote:


> Originally Posted by *dagget3450*
> 
> So i was running Cinebench r15 opengl test and i am confused. My results are way too high compared to others i see online. Am i doing it wrong, is there a setting i am missing?
> http://www.overclock.net/content/type/61/id/3078643/width/1500/height/1000


Maybe it's just the Pro drivers. The Pro drivers are respectable relative to Quadros, let alone GeForce cards.

I would have expected something along those lines, since the RX 480-based Radeon Pro WX 7100 gets 180 FPS and the Radeon Pro WX 5100 (RX 470D-based) gets ~160 FPS.
Quote:


> Originally Posted by *Y0shi*
> 
> I've made some benches, look here: http://www.planet3dnow.de/vbulletin/threads/428323-Usertest-AMD-Radeon-Vega-Frontier-Edition-Bilder-Benchmarks-Energiemessungen
> 
> 4.8 -> Collatz, Einstein, Milkyway (Collatz had optimized Fury Settings in xml, the other two were stock)


It would be interesting to test FP16 performance.

If you're running two WUs at a time on Einstein then the result is semi-respectable (looks to be ~1.5M PPD, which would beat 2 x RX 480 putting out ~1.3M PPD).

Milkyway likely isn't worth it, since Vega doesn't have impressive FP64 performance.

The gaming variant has a high bar of performance to meet (greater than 2 x RX 470 or 2 x RX 480).
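Back-of-the-envelope, that PPD comparison comes from extrapolating one timed work unit out to a full day. A minimal sketch (the point value and runtime below are hypothetical placeholders, not measured Vega numbers):

```python
def estimate_ppd(points_per_wu, seconds_per_wu, concurrent_wus=1):
    """Extrapolate BOINC points-per-day from a single timed work unit."""
    wus_per_day = 86400 / seconds_per_wu  # seconds in a day / WU runtime
    return points_per_wu * wus_per_day * concurrent_wus

# Hypothetical numbers: a 3,450-point Einstein WU finishing in 400 s,
# with two WUs running concurrently on the card
ppd = estimate_ppd(3450, 400, concurrent_wus=2)  # ~1.49M PPD
```

Plug in actual points and completion times from the client log to get a real estimate.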


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> Somehow I have a feeling we'll see more performance soon. I'm on the fence with picking up two for crossfire....


Don't shoot me, but I have two Vega FEs here in my rig. However, these drivers are terrible, and performance is lacking in gaming/benches. I can't get Eyefinity working properly in many configurations. RX Vega is around the corner, so if you're even considering Vega it's best to wait and see what happens.

Quote:


> Originally Posted by *AlphaC*
> 
> Maybe it's just the Pro drivers. The pro drivers are respectable relative to Quadros, let alone Geforce cards.
> 
> I would have expected something along those lines since the RX 480 based Radeon Pro WX 7100 is getting 180FPS ; Radeon Pro WX 5100 (RX 470D based) gets ~ 160FPS.
> It would be interesting to test FP16 performance.
> 
> If you're running 2 WUs at a time on Einstein then the result is semi-respectable (looks to be 1.5Million a day? which would be better than 2 x RX 480 putting out ~ 1.3M/day).
> 
> Milkyway likely isn't worth it since VEGA doesn't have impressive FP64 performance.
> 
> The gaming variant has a high bar of performance to meet (greater than 2 x RX 470 or 2 x RX480).


I was comparing directly with PCPer:
https://www.pcper.com/image/view/83382?return=node%2F68037

My bone-stock GPU results: I get anywhere from 242-251 FPS.

Is it down to CPU clocks, maybe? I think his 5960X was stock; mine is at 4.5GHz.


----------



## czin125

https://forums.overclockers.co.uk/threads/the-fury-x-fiji-owners-thread.18678073/page-349#post-29045102
HBM1 only supports 500MHz / 545MHz / 600MHz / 666MHz;
500MHz = 512GB/s on Fury X.

What are the HBM2 ratios?

Could you try increasing the HBM2 clock until bandwidth reaches ~660 GB/s, keeping the core at stock, to see if there are any performance gains? Stock Vega HBM2 is 484 GB/s.
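For reference, the bandwidth arithmetic behind those figures; a sketch assuming Vega's 2048-bit bus and double-data-rate HBM (clocks here are base clocks):

```python
def hbm_bandwidth_gbs(base_clock_mhz, bus_width_bits=2048):
    """GB/s for double-data-rate HBM: 2 transfers per clock across the bus."""
    return base_clock_mhz * 1e6 * 2 * (bus_width_bits / 8) / 1e9

stock = hbm_bandwidth_gbs(945)   # Vega FE stock: 483.84 GB/s
needed = 660 / stock * 945       # base clock required to reach ~660 GB/s
```

By that math you would need roughly a 1289MHz base clock to hit 660 GB/s, well above anything HBM2 shipped at in 2017.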


----------



## Caldeio

https://hardforum.com/threads/amd-vega-frontier-edition-ppd-600k.1939697/

Trying to get confirmation on this PPD rate. Can anyone please test Folding@home when they get the chance?


----------



## Evil Penguin

Quote:


> Originally Posted by *Caldeio*
> 
> https://hardforum.com/threads/amd-vega-frontier-edition-ppd-600k.1939697/
> 
> Trying to get confirmation on this PPD rate. Can anyone please test Folding@home when they get the chance?


Sure, here you go.



I'm still waiting to get other WUs.


----------



## Removed1

Vega watercooled edition semi-teardown.

GN link!


----------



## Caldeio

Quote:


> Originally Posted by *Evil Penguin*
> 
> Sure, here you go.
> 
> 
> 
> 
> I'm still waiting to get other WUs.


This looks alright, I think! Are you running stock?

For comparison, an overclocked 1070 or 1080 gets between 650-700k PPD on this project 10496 that you ran.
An overclocked 1080 Ti is about 1M PPD.


----------



## Evil Penguin

Quote:


> Originally Posted by *Caldeio*
> 
> This looks allright I think! Are you running stock?
> 
> So the 1070 and 1080, overclock get between 650-700k PPD on this project 10496 that you did.
> 1080Ti overclocked is about 1mill PPD


Yup, stock.


----------



## Removed1

GN undervolting Vega.


----------



## Wuest3nFuchs

He's sometimes funny to listen to, but it's hard for me to follow when he speeds up his speech; it turns into a bit of a blur sometimes (is he from Texas?)... sorry!!


----------



## Removed1

Another review of the FE with some pro task bench, here.


----------



## dagget3450

Quote:


> Originally Posted by *Wimpzilla*
> 
> Another review of the FE with some pro task bench, here.


Thanks for the updates; I'll update the OP and reformat it soon!


----------



## AlphaC

Quote:


> Originally Posted by *Wimpzilla*
> 
> Another review of the FE with some pro task bench, here.


The Solidworks 2015 result looks really good vs the $5000 Quadro P6000.



Creo could use some improvements


Autocad needs work


...

AMD has a lot of work to do. Right now the card just exists; it isn't competitive in gaming or in much besides Solidworks 2015 (and 2017 is out now).


----------



## Removed1

Quote:


> Originally Posted by *AlphaC*
> 
> AMD has a lot of work to do. Right now it just exists, but isn't competitive in gaming and in other things besides Solidworks 2015 (2017 is out now).


Who cares about games? This is not an RX card; it is an FE card meant to show the potential of future pro versions.
The pro results are astonishing for the air-cooled card's price. It is highly competitive in all professional tasks except gaming.

I corrected your sentence for you; it should be better now! Please don't make that mistake again, or bring something that shows it is not competitive for compute! Without sandbagging, obviously!

Some hard voltmod by buildz 



Finally an AMD advert that targets the users of this card. A bit late, I would say, unfortunately.

I love the song, and the Unreal logo! ^^


----------



## wolf9466

Quote:


> Originally Posted by *Wimpzilla*
> 
> Quote:
> 
> 
> 
> Originally Posted by *AlphaC*
> 
> AMD has a lot of work to do. Right now it just exists, but isn't competitive in gaming and in other things besides Solidworks 2015 (2017 is out now).
> 
> 
> 
> Who cares about games, this is not a RX card, it is an FE card designated to show the potential of future pro versions.
> The pro result are astonishing for the aircooled one price. It is highly competitive on all the professionals task but gaming.
> 
> I corrected your sentence for you, it should be better now! Please do not mistake again, or bring something that show it is not competitive for computing!
> Without sandbagging obviously!
> 
> Some hard voltmod by buildz
> 
> 
> 
> Finally an amd advert that target the user of that card. A bit late i would say unfortunately,
> 
> 
> 
> ! I love the song, and the unreal logo! ^^
Click to expand...

Ouch, hard voltmod. I prefer soft ones.


----------



## dagget3450

I am RMA'ing one of my Vega FE cards. It keeps crashing to a black screen under load. I suspect it could be a temperature issue, but I am not going to break the warranty sticker to find out. I do plan to water-cool these with AIOs in the near future; right now I just need a card that isn't borked at stock settings so I don't play Russian roulette later...

Damn shame.

Also meant to mention: during a simple 4K Unigine Heaven benchmark I am hitting over 1,000 watts at the wall with a stock Ryzen 7 1700!!! These things are definitely power hungry.

So Vega FE x2 with a +50% power limit and a higher fan curve pulled almost 1,100 watts at the wall.


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> I am RMA'ing one of my Vega FE cards. It keeps crashing to a black screen when under load. I suspect it could be a temp issue however i am not going to break the warranty sticker to find out why. I do plan to water cool these with AIO's possibly in the near future. Right now i just need a card thats not borked on stock settings so i dont play Russian roulette later....
> 
> Damn shame
> 
> Also meant to mention, during a simple 4k unigine heaven benchmark i am hitting over 1,000 watts at the wall with a stock Ryzen7 1700 !!! These things are def power hungry..
> 
> So Vega FE x2 with 50% PL and higher fan curve pulled almost 1100watts at the wall..


Does your system end up shutting off when the card is under load?


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> Does your system end up shutting off when the card is under load?


Well, I am going back to recheck this. When I built my new Ryzen rig I moved over everything I could from my Intel build except the PSU. I was using an 1100-watt PSU on the new Ryzen build, and now I have put in my 1600-watt PSU to test again. I don't recall having any issues on the Intel setup, but the card in question was the secondary there.

I did have one shutdown on the 1100-watt PSU after taking out the second card; but before doing that, I moved the second card to primary and it was just going to a black screen under load. To make sure it's the GPU before I send it off, I am testing both cards as primary on my 1600-watt PSU and will see whether I get a black screen or even a power-off. I will report back.


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> Well i am going back to recheck this. When i built my new ryzen build i moved everything over i could from my intel except PSU. I was using an 1100 watt psu on the new ryzen build and now i put in my 1600watt psu to test again. I dont recall having any issues on the intel setup but the card in question was secondary.
> 
> I did have one shutdown one the 1100watt psu after taking out second card. but before doing that i moved second card to primary and it was just going to black screen under load. To make sure its gpu before i send off im am testing both cards on my 1600watt in primary and will see if i get black screen or even power off. I will report back.


If I modify any WattMan settings (post reboot) the system will shut off after a few seconds of light/heavy GPU load.

I have to reset WattMan and reboot in order for the system to go back to normal.

I haven't the faintest clue as to why this happens but it's probably just a driver issue.

I wonder if you're running into some form of the same issue.

Hopefully we'll have access to a new driver when RX Vega launches.


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> If I modify any WattMan settings (post reboot) the system will shut off after a few seconds of light/heavy GPU load.
> I have to reset WattMan and reboot in order for the system to go back to normal.
> I haven't the faintest clue as to why this happens but it's probably just a driver issue.
> I wonder if in some form you're running into the issue.
> Hopefully we'll have access to a new driver when RX Vega launches.


It sounds a little similar, but for me it seemed to start once I was on this 1100-watt PSU; it's not a very good one. My 1600-watt is rather decent, and I have a 1200-watt, but it only has three 8-pin PCIe modular connectors, so I couldn't use it without an adapter.

I just got home, so I'll give this a whirl. I was testing Doom 4K Vulkan FPS on the Ryzen when I got the black screens; I was still able to hear audio in the background and attempt a shutdown. One time I did get a power-off during the same test. In my last testing on the 1600-watt PSU I didn't have any issues. I also just overclocked the CPU/RAM, so if I have issues I'll put it all back to stock and test again; then, if both work fine, I'll put both cards in at the same time and try a different benchmark.


----------



## dagget3450

Okay, so I am able to confirm the black-screen issue on that suspect Vega card. I cannot repeat it with my other Vega card no matter how hard I try, so I am shipping it out tonight. I suspect yours might have an issue too; maybe you need a way to test it. When I get my RMA back I'll test that one as well.

The suspect GPU seems to black-screen under load. First I tested Doom and found it would crash to a black screen within roughly five minutes at most. Then, after the PSU swap, I added it as the secondary GPU in CF and it again crashed to a black screen during the Valley benchmark warmup. I know it's the GPU because its load LEDs go blank as if it's off/idle, while the good Vega card's load LEDs stayed lit. So it's definitely crapping out.


----------



## AlphaC

I saw an article today that featured Vega, arguing its use for FP16 (i.e. deep learning):
Quote:


> One other perspective is to compare it with a single *Nvidia Tesla P100* (just the GPU card) with 16GB and capable of 18.7 teraflops (fp16) but costs *$12,599* from Dell. If you are thinking of side-stepping 'professional grade' to a consumer grade Nvidia Titan X or GTX 1080 ti (like the DIGITS Devbox) then it is worth knowing that half-precision (fp16) is only at 0.17 teraflops for a Titan X and comparable for a GTX 1080 ti. That's because fp16 as well as fp64 (double precision) is unavailable for Nvidia consumer cards:


https://medium.com/intuitionmachine/building-a-50-teraflops-amd-vega-deep-learning-box-for-under-3k-ebdd60d4a93c


----------



## dagget3450

Well, I gained more FPS in Doom 4K with the Ryzen at 4GHz/3200/CL14 over my 5960X, so yay. It's even closer to the 1080 Ti, but that's probably a best-case scenario for now. I'm just waiting to see if anything changes with new drivers soon.



vs






So: Vega FE stock 77/78 FPS vs 84/85 FPS on the 1080 Ti.

I will make a small video clip like I did before.

5960X @ 4.5GHz with 3200MHz DDR4 vs Ryzen R7 [email protected]/4ghz/3200cl14: I am getting roughly 1-5 FPS more on average at 4K Vulkan, which seems interesting, because I would think at 4K I'd be GPU-capped.

If I didn't have to send one Vega back I could set both up side by side... dangit.


----------



## dagget3450

So it occurred to me something is weird about AMD drivers for Vega FE...

http://support.amd.com/en-us/download/frontier?os=Windows%2010%20-%2064
The link says 17.6.

When you actually try to install, it says 17.1.1.



I noticed this when trying to load a newer Vulkan API.

There was a lot of debate about the Vega FE benchmarks being run on an old driver, so I wonder if that's still true.

17.1.1 would be from January, and 17.6 should be from June?

If I go to Radeon Pro mode, it's 17.6:



So is Vega FE using Radeon gaming drivers from January?


----------



## Y0shi

Wait for SIGGRAPH ;-)


----------



## dagget3450

Quote:


> Originally Posted by *Y0shi*
> 
> Wait for SIGGRAPH ;-)


Updated the OP. It's not pretty, but it will do for now.

Also, when is AMD's Vega event scheduled at SIGGRAPH? I know it's today, but I am in EST (USA) and can't seem to find the time anywhere.


----------



## rv8000

Quote:


> Originally Posted by *dagget3450*
> 
> Updated OP, its not pretty but for now.
> 
> Also when is the Vega event scheduled for AMD @ siggraph? I know its today but i am in EST USA cant seem to find it anywhere.


10:30 est / 7:30 pt; no live stream.


----------



## dagget3450

RX Vega is targeting the GTX 1080, and it looks like minimum FPS is what they are aiming to push. This isn't going to fly well if Vega is at GTX 1080 performance, because the 1080 Ti is so much faster... I dunno, these prices are kind of weird too. Time for bed; I'm sure there will be plenty of info by tomorrow.


----------



## Y0shi

Quote:


> Originally Posted by *Y0shi*
> 
> Wait for SIGGRAPH ;-)


To fulfill my "promise": https://semiaccurate.com/2017/07/31/amd-releases-radeon-pro-software-17-8-driver/







Just the download links are still missing.


----------



## Evil Penguin

Quote:


> Originally Posted by *Y0shi*
> 
> To fulfill my "promise": https://semiaccurate.com/2017/07/31/amd-releases-radeon-pro-software-17-8-driver/
> 
> 
> 
> 
> 
> 
> 
> Just the download links are still missing.


Unfortunately that driver doesn't apply to Vega FE.


----------



## Y0shi

I'm sure these drivers will work with the Vega FE; anything else wouldn't make sense. They explicitly compare against 17.6, the release driver for Vega FE.

€dit: https://pro.radeon.com/en-us/announcing-radeon-pro-software-crimson-relive-edition-vega-based-radeon-professional-graphics/
Quote:


> This feature builds on the "Gaming Mode" introduced in the Radeon™ Pro Software Crimson ReLive Edition 17.6 driver for the Radeon™ Vega Frontier Edition card, and the chart below shows the differences between "Gaming Mode" and the "Driver Options" feature.


€dit 2: If you take a look at the official slides from AMD here, you can read
Quote:


> Compatible with Radeon Vega Frontier Edition and Radeon Pro workstation products based on the "Vega" gpu architecture.


----------



## dagget3450

Quote:


> Originally Posted by *Y0shi*
> 
> I'm sure these drivers will work with Vega FE, everything else wouldn't make any sense. They explicitly compare it with 17.6, the release for Vega FE.
> 
> €dit: https://pro.radeon.com/en-us/announcing-radeon-pro-software-crimson-relive-edition-vega-based-radeon-professional-graphics/
> €dit 2: If you take a look at the official slides from AMD here, you can read


When/where are these drivers?


----------



## AlphaC

Quote:


> Originally Posted by *Y0shi*
> 
> I'm sure these drivers will work with Vega FE, everything else wouldn't make any sense. They explicitly compare it with 17.6, the release for Vega FE.
> 
> €dit: https://pro.radeon.com/en-us/announcing-radeon-pro-software-crimson-relive-edition-vega-based-radeon-professional-graphics/
> €dit 2: If you take a look at the official slides from AMD here, you can read


The problem is that there is no pro variant of Vega 56 right now, which is going to be ~200W (165W for the GPU core) instead of a massive 300-375W (220W for the GPU core).

Radeon Pro WX 9100: https://pro.radeon.com/en-us/product/wx-series/radeon-pro-wx-9100/
Quote:


> Radeon™ Pro WX 9100 workstation graphics cards supports unique power monitoring and management technologies, and has a maximum power consumption of 250 watts TDP board power.
> ...
> Peak Engine Clock (MHz) 1500
> Peak Half Precision Compute Performance 24.6TFLOPS
> Peak Single Precision Compute Performance 12.29TFLOPS
> Peak Double Precision Compute Performance 768GFLOPS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Memory Data Rate 1.89Gbps
> Memory Speed 945MHz
> Memory Size 16GB
> Memory Type HBM2
> Memory Interface 2048-bit
> Memory Bandwidth 483.84GB/s


https://pro.radeon.com/en-us/product/pro-series/radeon-pro-ssg/
Quote:


> Up to 12.29 TFLOPS of peak single-precision floating-point performance
> 64 Next-Generation Compute Units (nCUs, 4096 Stream Processors) with support for double-rate 16-bit math
> 2TB Onboard Solid State Graphics (SSG) Memory


I am waiting for a Radeon Pro WX 8100 to come out
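Those WX 9100 peak numbers fall straight out of shader count and clock. A sketch of the standard peak-FLOPS arithmetic, assuming Vega 10's double-rate FP16 and 1/16-rate FP64 ratios:

```python
def peak_tflops(stream_processors, clock_mhz, ops_per_clock=2):
    """Peak TFLOPS: each SP retires one FMA (2 FLOPs) per clock."""
    return stream_processors * clock_mhz * 1e6 * ops_per_clock / 1e12

fp32 = peak_tflops(4096, 1500)   # 12.288 TFLOPS, the quoted 12.29
fp16 = fp32 * 2                  # packed math doubles it: ~24.6 TFLOPS
fp64 = fp32 / 16                 # 1/16-rate FP64: 768 GFLOPS
```

The 12.29 TFLOPS, 24.6 TFLOPS, and 768 GFLOPS figures in the quote all line up with that arithmetic.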


----------



## Y0shi

Quote:


> Originally Posted by *dagget3450*
> 
> When/where are these drivers?


This is the most important question (and I can't answer it). It SHOULD already have been released...


----------



## dagget3450

Quote:


> Originally Posted by *Y0shi*
> 
> This is the most important question (and I can't answer it). It SHOULD already been released...


Waiting around is nothing new when dealing with AMD









Looks like mid-August just to get info on RX Vega...


----------



## hellm

In these benchmarks the RX Vega comes in between 21 and 34 percent ahead of the Fury X, depending on resolution.
Vega has roughly 60 percent more raw compute, which shows again that AMD has a problem transforming that power into FPS.
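A sketch of where a figure like that comes from, assuming the liquid-cooled RX Vega 64 boost clock of 1677MHz against the Fury X's 1050MHz (both nominal boost figures, so treat this as approximate):

```python
def fp32_tflops(sps, clock_mhz):
    """Peak FP32 TFLOPS; an FMA counts as 2 FLOPs per SP per clock."""
    return sps * clock_mhz * 1e6 * 2 / 1e12

fury_x = fp32_tflops(4096, 1050)   # ~8.6 TFLOPS
vega   = fp32_tflops(4096, 1677)   # ~13.7 TFLOPS (liquid-cooled boost)
uplift = vega / fury_x - 1         # ~0.60
```

That lands right at ~60%; with the air-cooled 1546MHz boost it would be closer to 47%, since both chips have the same 4096 shaders and the gap is purely clocks.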


----------



## dagget3450

Water blocks incoming!

http://www.overclock.net/t/1635624/ek-is-releasing-full-cover-water-blocks-for-amd-radeon-rx-vega-based-graphics-cards


----------



## dagget3450

Quote:


> Update (7/31/17): The original version of this article incorrectly stated that AMD had released this software update. Radeon Pro Software Crimson ReLive Edition for Vega-based Radeon Professional Graphics is not available for download today, and will be published when Vega-based professional products are released. At this time, this includes the $2199 MSRP Radeon Pro WX 9100 and $6999 MSRP Radeon Pro SSG (Vega), both of which are slated for September 13.


source
http://www.anandtech.com/show/11682/amd-releases-radeon-pro-software-crimson-relive-edition-vega


----------



## dagget3450

Weird to still see leaks/rumors on Vega but alas, they come.

http://www.overclock.net/t/1635686/oc3d-70-100mh-s-hash-rate-for-rx-vega
Not sure it applies to Vega FE, but considering Vega FE is the same PCB/chip (minus some HBM) as the RX Vega reference, I don't see why it wouldn't apply to both.

http://www.overclock.net/t/1635635/tt-amd-radeon-rx-vega-56-leaked-benchmarks-gtx-1070-killer
take with a grain of salty sea salt.

Who really knows right? *crickets chirping*


----------



## Y0shi

70-100 MH/s? Never! If it ever reaches 50 MH/s someday, that would already be a whole lot.
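That skepticism holds up if you assume Ethash is memory-bandwidth-bound, with each hash touching ~64 random 128-byte DAG lines (~8 KiB of traffic per hash; a commonly cited figure, taken here as an assumption):

```python
def ethash_ceiling_mhs(bandwidth_gbs, bytes_per_hash=64 * 128):
    """Upper bound on Ethash rate if every byte of bandwidth fed DAG reads."""
    return bandwidth_gbs * 1e9 / bytes_per_hash / 1e6

ceiling = ethash_ceiling_mhs(483.84)   # Vega's stock ~484 GB/s
```

Even the theoretical ceiling lands around 59 MH/s at stock bandwidth, so 70-100 MH/s would require memory bandwidth Vega simply doesn't have.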


----------



## sterlingpickens

Interesting thermal videos/images in the Tom's Hardware review: http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-11.html
Apparently the back of the board is within 5°C of the GPU temperature, and the HBM runs hotter than the GPU.


----------



## dagget3450

Trying to decide whether to put mine under EK water blocks (I already have the rest of the water-cooling hardware) or save the money for now and buy a Threadripper CPU/board, lol.


----------



## sterlingpickens

Judging by the temps on the back of the card, maybe EKWB should have made their backplate out of copper?


----------



## dagget3450

I don't think it will matter with water cooling; I don't see it being an issue.


----------



## alucardis666

So what's the ETA for water-cooled/hybrid RX Vega cards? I'm looking to dump my 1080 Tis due to the black-screen loss-of-signal issue with HDMI and 4K HDR TVs.


----------



## dagget3450

Quote:


> Originally Posted by *alucardis666*
> 
> So what's the ETA for Watercooled/Hybrid RX Vega cards? I'm looking to dump my 1080 Ti's due to the Black screen loss of signal issue with HDMI and 4K HDR TVs


I have not heard of that issue. Performance-wise the 1080 Ti is looking to be faster. Are you sure it's not a result of an unstable OC on the GPU?


----------



## alucardis666

Quote:


> Originally Posted by *dagget3450*
> 
> I have not heard of that issue? Performance wise 1080ti is looking to be faster. You sure its not a result of unstable oc on gpu?


Considering I've tested extensively at OC and stock for months... yes, I'm sure. Do some googling; it's not just me.


----------



## LionS7

Quote:


> Originally Posted by *dagget3450*
> 
> Well i gained more fps in doom 4k with ryzen 4ghz/3200/cl14 over my 5960x so yay. Its even closer to 1080ti but thats probably best case scenario for now. Im just waiting to see if anything changes with new drivers soon.
> 
> 
> 
> vs
> 
> 
> 
> 
> 
> 
> so vega fe stock 77/78fps vs 84/85 fps 1080ti
> 
> I will make a small video clip like i did before
> 
> 5960x @4.5ghz 3200mhz ddr4 vs ryzen r7 [email protected]/4ghz/3200cl14 i am getting roughly 1-5fps avg more @ 4k vulkan, which seems interesting because i would think @ 4k i'd be gpu capped.
> 
> If i didnt have to send one vega back i could setup both side by side... dangit


I like your videos and have watched everything on the Vega Frontier Edition, but can you try to stabilize the core @ 1600MHz, and maybe try to OC the HBM2? You have only one test, Battlefield 1 DX12, in which the core is stable at 1600MHz.


----------



## dagget3450

Quote:


> Originally Posted by *LionS7*
> 
> I like your videos and have watched everything on the Vega Frontier Edition, but can you try to stabilize the core @ 1600MHz, and maybe try to OC the HBM2? You have only one test, Battlefield 1 DX12, in which the core is stable at 1600MHz.


The videos you're watching are DudeRandom84's on YouTube; those are not my videos. I only used them to compare my Vega against his. A 1600MHz core can be achieved by adjusting the power limit and fan curve.


----------



## rdr09

Quote:


> Originally Posted by *dagget3450*
> 
> Well i gained more fps in doom 4k with ryzen 4ghz/3200/cl14 over my 5960x so yay. Its even closer to 1080ti but thats probably best case scenario for now. Im just waiting to see if anything changes with new drivers soon.
> 
> 
> 
> vs
> 
> 
> 
> 
> 
> 
> so vega fe stock 77/78fps vs 84/85 fps 1080ti
> 
> I will make a small video clip like i did before
> 
> 5960x @4.5ghz 3200mhz ddr4 vs ryzen r7 [email protected]/4ghz/3200cl14 i am getting roughly 1-5fps avg more @ 4k vulkan, which seems interesting because i would think @ 4k i'd be gpu capped.
> 
> If i didnt have to send one vega back i could setup both side by side... dangit


Not so impressed with Vega. But the 1700 . . .


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> If I modify any WattMan settings (post reboot) the system will shut off after a few seconds of light/heavy GPU load.
> I have to reset WattMan and reboot in order for the system to go back to normal.
> I haven't the faintest clue as to why this happens but it's probably just a driver issue.
> I wonder if in some form you're running into the issue.
> Hopefully we'll have access to a new driver when RX Vega launches.


UPDATE:
I got my RMA Vega FE back and did some testing. So far I've had zero problems with crashing under load. I tested in games and played for a while with no issues. I will test more, but I feel confident the other Vega FE was defective in some manner: I have tested with power limit +50 and custom fan curves, whereas the old one was black-screening within five minutes in Doom 4K. I haven't had any shutdowns either.


----------



## rdr09

Quote:


> Originally Posted by *dagget3450*
> 
> UPDATE:
> I got my RMA vega FE back and did some testing. So far i've had 0 problems with crashing under load. My testing was in games and played some for a while and no issues. So i will test more but i feel confident the other vega fe was defective in some manner. I have tested with powerlevel +50 and fan curves... so the old one was black screening within 5 minutes in doom 4k. Also haven't had any shutdowns either.


Hi dagget3450,

Does this at least match CrossFire Hawaii? It will be used for gaming and work.

Thanks.


----------



## JackCY

So what prices do you expect Vega to sell for? So far it seems even worse than previous launches, with AMD pulling another round of reference cards unobtainable at MSRP, as with Polaris. I bet their stock is so low they don't really care about the price anymore; they will sell it all sooner or later anyway. It's an oligopoly and the GPU market is dead. AMD is selling whatever they make to mining farms and doesn't really care anymore about common folk being able to purchase a reasonable GPU.


----------



## rdr09

Quote:


> Originally Posted by *JackCY*
> 
> So what prices do you expect Vega to sell for? So far it seems even worse than previous launches and AMD pulling another reference MSRP cards unobtainable as was with Polaris. I bet their stock is so low they don't really care about the price anymore as they will sell it anyway sooner or later. Oligopoly and the GPU market is dead. AMD selling what ever they make to farms, don't really care at all anymore about the common folk being able to purchase reasonable GPUs.


I think you are right. That's exactly what will happen. None on the shelves.


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> UPDATE:
> I got my RMA vega FE back and did some testing. So far i've had 0 problems with crashing under load. My testing was in games and played some for a while and no issues. So i will test more but i feel confident the other vega fe was defective in some manner. I have tested with powerlevel +50 and fan curves... so the old one was black screening within 5 minutes in doom 4k. Also haven't had any shutdowns either.


I'm glad it's working correctly for you now. Did you receive a new card back or was it used-looking?

I still have my fingers crossed for the new drivers we'll likely receive on Monday.


----------



## dagget3450

Quote:


> Originally Posted by *rdr09*
> 
> Hi dagget3450,
> 
> Does this at least match crossfire Hawaii? It will be used for gaming and work.
> 
> Thanks.


Well, it does decently, but I was running 4 FuryXs before these, so when CF works the 4 FuryXs were a decent bit faster. That said, I am also waiting to see what improvements new drivers bring, because they literally threw the Vega FE out the door just to say it launched. The drivers aren't very good right now, especially if they have things not enabled yet.
Quote:


> Originally Posted by *Evil Penguin*
> 
> I'm glad it's working correctly for you now. Did you receive a new card back or was it used-looking?
> I still have my fingers crossed for the new drivers we'll likely receive on Monday.


I thought it was a new card, except there were some scuff marks around the video ports, so I'm inclined to say that if it's new it was at least used for a demo or something. Everywhere else it looks brand new.

One thing for sure: I ran the Metro LL Redux benchmark and saw 1300 watts at the wall. So 2x Vega FE, R7 [email protected], 4 SSDs, 3 120mm fans, and 1300 watts... insane if you ask me.

Edit: Also, on the mining fiasco: this club may be small for a while, with Vega not being as good as hoped for gaming but huge for mining....


----------



## rdr09

Quote:


> Originally Posted by *dagget3450*
> 
> Well it does decent, but i was running 4 furyx's before these. So when Cf works the 4 furyx were a decent bit faster. That said i am also waiting to see what improvements new drivers bring because they literally threw the vega fe out the door just to say it launched. The drivers aren't very good right now esp if they have things not enabled yet.
> .


Thanks for the input. Some things haven't changed with AMD. Wow, you had quad damage. Nice.


----------



## hyp36rmax

Curious whether you're able to load the regular Crimson beta drivers, and if so, whether they'll work?

I must be fooling myself, but I just have a feeling these will be beasts with optimized drivers. My concern is that AMD will segment the pro drivers and the gaming drivers.


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> Curious if you're able to load the regular crimson beta drivers if it'll work?
> 
> I must be fooling myself but I just have a feeling these will be beast with optimized drivers. My concern is AMD will segment the pro drivers and the gaming drivers.


The only driver I've been able to use is 17.6. I've been looking, but I haven't seen any other Vega driver outside that. I would love to get my hands on the driver shipping with Vega 56/64, though, and see if it works (the press-release driver as well).


----------



## hyp36rmax

Quote:


> Originally Posted by *dagget3450*
> 
> The only driver I've been able to use is 17.6. I've been looking, but I haven't seen any other Vega driver outside that. I would love to get my hands on the driver shipping with Vega 56/64, though, and see if it works (the press-release driver as well).


Thanks for the confirmation. It would be a bust if Vega FE owners weren't able to load the equivalent gaming drivers. I'm really hoping AMD has consistent driver releases for both. I'm on the fence about two Vega FE (I like throwing money away).


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> Thanks for the confirmation. It would be a bust if VEGA FE owners weren't able to load the equivalent gaming drivers. I'm really hoping AMD has consistent driver releases for both. I'm on the fence for two Vega FE (i like throwing money away)


The drivers are what's holding me back at the moment, in terms of what I plan to do. I will go full water cooling IF drivers improve both power and performance decently. Right now I am in a holding pattern before tearing apart these expensive cards. I am also going to watch the water-cooled Vega units and newer drivers closely....


----------



## Evil Penguin

Could someone please tell me what their default minimum fan speed is within WattMan?

I think I'm on to something with my fan issue.


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> Could someone please tell me what their default minimum fan speed is within WattMan?
> I think I'm on to something with my fan issue.


does this help?


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> does this help?


Yes, thank you. Is that with WattMan reset?


----------



## wolf9466

To all the people (like me) who are going to mod the VBIOS on their Vega - Linux or go home. Why?

Vega is checking sigs (apparently in HW) early in boot; if you mod the VBIOS, at the very least you get no video from it. So... how do you do it on Linux?

Simple: patch your kernel to load an unsigned VBIOS file (if present) from a specific location... Vega's internal HW can't check what the DRIVER loads, only what's on its own ROM.
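Not a mod guide, but once you've dumped a ROM image on Linux (e.g. via the card's `rom` file under `/sys/bus/pci/devices/`), you can at least sanity-check it in a few lines: a PCI expansion ROM starts with the signature bytes 0x55 0xAA, and the byte at offset 2 gives the image size in 512-byte units. A minimal sketch; the synthetic image below is made up for illustration:

```python
def check_option_rom(data: bytes) -> int:
    """Validate a PCI option ROM header; return the image size in bytes.

    A PCI expansion ROM image begins with the signature 0x55 0xAA,
    and the byte at offset 2 gives its length in 512-byte blocks.
    """
    if len(data) < 3 or data[0] != 0x55 or data[1] != 0xAA:
        raise ValueError("missing 0x55AA option ROM signature")
    return data[2] * 512

# Synthetic 64 KiB image (128 blocks of 512 bytes), for illustration only:
rom = bytes([0x55, 0xAA, 128]) + bytes(512 * 128 - 3)
print(check_option_rom(rom))  # 65536
```

This only checks the generic option ROM header, not any vendor signature, so it tells you a dump isn't truncated or garbage, nothing more.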


----------



## Evil Penguin

Here are the fan's default min/max values for my card:



Your idle RPM is reported a lot lower than mine (~700 RPM).

Also, mine shows a target of 2000 RPM (grayed out) instead of 4900 RPM.

If I enable the min/max, apply it, and restart my computer, the reported fan RPM spikes to over 3000 RPM even though the actual speed is much lower than that.

It fails to up the RPM while under load and forces the system to shut down due to overheating.

If however I set the minimum fan speed to 700, the system works as normal.

Something is probably wrong with the on-board fan controller but there's a simple enough workaround (raise minimum fan speed).


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> Here's what the fan's default min/max for my card:
> 
> 
> Your idle RPM reported a lot lower than mine ~700 RPM.
> Also, mine shows a target of 2000 RPM (grayed out) instead of 4900 RPM.
> If I enable the min/max and apply it, restart my computer. The reported fan RPM spikes to over 3000 RPM even though it's much lower than that.
> It fails to up the RPM while under load and forces the system to shut down due to overheating.
> If however I set the minimum fan speed to 700, the system works as normal.
> Something is probably wrong with the on-board fan controller but there's a simple enough workaround (raise minimum fan speed).


I did reset WattMan before I took the screenshot; however, I think it's showing what I adjusted before, but greyed out since it's on auto now. According to the active fan speed shown, I'm at 344 RPM... which is weird...

In MSI AB both are at 340ish RPM... hah...

I will say I've been gaming Witcher 3 at 4K and leaving the fans on auto, and so far no overheating or shutdowns. I suspect my "faulty" Vega was probably a thermal issue with paste or contact or something. I just wasn't going to take it apart to check just yet; not sure what the warranty would be, but probably voided. I will cross that bridge if/when I get full-cover EK blocks.


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> I did reset wattman before i took the screenshot, however i think its showing what i adjusted before but greyed out since its on auto now. however according to the active fan speed shown im 344rpm.... which is wierd...
> 
> in MSI AB both are at 340ish rpms... hah...
> 
> I will say ive been gaming witcher3 4k and leaving the fans on auto so far no issue overheating or shutdowns... i suspect my "faulty" vega i had was probably a thermal issue with paste or contact or something... i just was not gonna take it apart to see just yet... not sure what the warranty would be but probably voided... which i will cross that bridge if/when i get full cover ek blocks.


I've been messing around with the min RPM and now it's working properly with 400 RPM set.

This is so weird...

Maybe it's some sort of calibration issue?

I'm just glad it's finally working as it should *knock on wood*.


----------



## dagget3450

Well, now that I rebooted to verify, my defaults are showing properly now, lol.



I think the WattMan profile is buggy and doesn't truly reset until a reboot...


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> Well, now that I rebooted to verify, my defaults are showing properly now, lol.
> 
> 
> 
> I think the WattMan profile is buggy and doesn't truly reset until a reboot...


But now your idle RPM is higher?


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> But now your idle RPM is higher?


According to the second screenshot, yes; however, I have no way to manually verify the fan RPMs. I really, really, really wish we had newer drivers to work with here... I was really hoping the RX Vega drivers would be out tomorrow, but I guess not, since they are not launching then, only performance reviews?


----------



## outofmyheadyo

First review? http://www.hwbattle.com/bbs/board.php?bo_table=hottopic&wr_id=7333&ckattempt=1


----------



## Y0shi

Seems legit, somebody forgot to look at the exact time on the NDA. ;-)


----------



## criminal

http://promotions.newegg.com/NEemail/Guerrilla/LP/17-AUG/index-landing_cat54ft7uj_14.html

Bundle:
https://www.amazon.com/dp/B074PKVSH9 (XFX Radeon RX Vega 64 8GB + AMD Ryzen 7 1700X + ASUS ROG Crosshair VI Hero AM4 motherboard bundle)


----------



## Newbie2009

Add me, Vega 64 air, soon to be water.


----------



## Evil Penguin

Welp, here's the RX Vega driver.

I wonder if it works with Vega FE (makes no mention of it in the release notes).


----------



## os2wiz

It was, in all honesty, a paper launch. Newegg did not have any of the liquid-cooled RX Vega 64 models in stock. Before 10am the few cards they did have were sold out. Amazon is only offering the Vega 64 in a bundle, not separately. Micro Center had fewer than half a dozen in each store at 9am EST. This launch is a sick joke on AMD enthusiasts. I am seriously considering buying the 1080 Ti for a few dollars more than the liquid-cooled RX Vega 64. Fewer hassles and better overall performance.


----------



## Newbie2009

Quote:


> Originally Posted by *os2wiz*
> 
> It was in all honesty a paper launch. Newegg did not have any of the liquid cooled RX Vega 64 models in stock. Before 10am the few cards they did have were sold out. Amazon is only offering the Vega 64 in a bundle , not separately. Micro Center only had less than a half dozen in each store at 9am EST. This launch is a sick joke on AMD enthusiasts. I am seriously considering buying the 1080 Ti for a few dollars more than the liquid cooled RX Vega 64. Less hassles and better overall performance.


Don't blame you. But the power saving mode is 5% slower than turbo and uses 40-50% less power, something to keep in mind.

Looks like this will be a lonely owners club with all the hate.


----------



## Evil Penguin

Quote:


> Originally Posted by *Newbie2009*
> 
> Don't blame you. But the power saving mode is 5% slower than turbo and uses 40-50% less power, something to keep in mind.
> 
> Looks like this will be a lonely owners club with all the hate.


Well, it's been a pretty lonely Vega FE club so far...


----------



## alanthecelt

Bought a couple of these puppies (Vega 64) to replace the 2 1080s that are mining away... looks like a bad move in that respect...

Launch (in the UK): well, there was some stock at a major supplier (Overclockers) for an hour,
and Scan appeared to have stock at last check...
The release price was £549 (wow, loads of money, £100 more than I paid for my 1080s), but Overclockers had a promo for £100 off first orders.


----------



## Newbie2009

Quote:


> Originally Posted by *Evil Penguin*
> 
> Well, it's been a pretty lonely Vega FE club so far...


You only need more than 1 for a club







.


----------



## The EX1

Picked up a black 64. Really wanted one of the silver ones though.


----------



## Newbie2009

I'm gonna bet the Vega 56 will be flashable to a 64. Hence the wait on the 56: sell as many for full price as possible before people catch on.


----------



## criminal

Quote:


> Originally Posted by *os2wiz*
> 
> It was in all honesty a paper launch. Newegg did not have any of the liquid cooled RX Vega 64 models in stock. Before 10am the few cards they did have were sold out. Amazon is only offering the Vega 64 in a bundle , not separately. Micro Center only had less than a half dozen in each store at 9am EST. This launch is a sick joke on AMD enthusiasts. I am seriously considering buying the 1080 Ti for a few dollars more than the liquid cooled RX Vega 64. Less hassles and better overall performance.


Amazon Vega 64: https://www.amazon.com/gp/product/B074DK6NHQ/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

It was only up for 5 minutes. I bought one @ $499, but it doesn't show a ship date at all yet.


----------



## rancor

Quote:


> Originally Posted by *criminal*
> 
> Amazon Vega 64: https://www.amazon.com/gp/product/B074DK6NHQ/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> 
> Was only up for 5 minutes. I bought one @ $499, but doesn't show a ship date at all yet.


Someone is already reselling for $1300







. I also got on the backorder; hopefully we actually get cards in a timely manner.


----------



## criminal

Quote:


> Originally Posted by *rancor*
> 
> Someone is already reselling for $1300
> 
> 
> 
> 
> 
> 
> 
> . I also got on the back order hopefully we actually get cards in a timely manner.


Yeah I saw that... lol

Maybe so. I got it to play with for a bit. If I like it I will keep it, if not I will probably resell here on OCN as long as I can get my money back. Don't really care about making money off of it.


----------



## 113802

Can a Frontier Edition user confirm if the new driver works?


----------



## Evil Penguin

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Can a Frontier Edition user confirm if the new driver works?


Seems like it: https://www.reddit.com/r/6tmnis/im_sure_im_not_the_only_one_to_test_this_but_rx/ but probably not worth losing WattMan.


----------



## gupsterg

Quote:


> Originally Posted by *Newbie2009*
> 
> I'm gonna bet, the VEGA 56 will be flashable to a 64. Hence the wait on 56 , sell as many for full price as possible first before people catch on.


BIOS modding, even on RX Vega, is not possible yet; it uses the same security feature as the FE. See the video, ~2 min in; the info used was from a thread here on OCN.


----------



## The EX1

Has anyone picked up an MSi card? I am curious if the rumors are true that MSi cards were not delivered to retailers in time for launch.


----------



## Energylite

Hop! An RX Vega 64 with a waterblock in a custom loop is coming soon, on the 19th of August.


----------



## gupsterg

I'm contemplating getting a Vega 56 if OCUK do it at £349 on 28th Aug. Otherwise Vega is not on the cards for me currently; it's simply not the right price-to-performance for me at current prices







.


----------



## 113802

I was able to order a Gigabyte RX Vega 64 Liquid Cooled version







Newegg status is currently at "packaging." I wanted either a Gigabyte or MSI since they both have 3-year warranties, unlike XFX, Sapphire, and PowerColor, which only have a 2-year warranty.


----------



## Newbie2009

Winter is coming, buy RX vega to heat the house.


----------



## madbrayniak

I'm considering Vega since I want to go to Ryzen and the package deal looks appealing.

Does anyone know if the GPU compute performance would help with running VMs? Thinking of doing an R7 1700 with three machines:

Main gaming rig running Win 10
Media server
SteamOS to do the heavy lifting for my wife's gaming needs and stream to her laptop or Steam Link.


----------



## rv8000

Quote:


> Originally Posted by *criminal*
> 
> Amazon Vega 64: https://www.amazon.com/gp/product/B074DK6NHQ/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> 
> Was only up for 5 minutes. I bought one @ $499, but doesn't show a ship date at all yet.


Pretty much the same deal here. Logged into the mobile app for 1-click ordering at 9:00 am EST on the dot; still not shipped.

I wonder if they have actually received a shipment of cards from XFX yet. At the very least, if the order gets fulfilled in the next two weeks I will have avoided the mining tax plus third-party seller inflation.


----------



## Nutty Pumpkin

I bought one. Should be here in 1-2 days









Found a retailer who had it priced very well. Cheaper than a 1080 FE that is.


----------



## tonyjones

Yeah, but you'll be paying extra on your electric bill, lol. What's up with AMD not making it more power efficient?


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *tonyjones*
> 
> Yeah but you'll be paying extra on your electric bill lol what's up with AMD not making it more power efficient.


Yeah, not the best, hey?

I'm sure if they were able to engineer the card to perform better in that department, they would. It seems they are pushing GCN to its limit, and power usage isn't scaling well at all.


----------



## tonyjones

Yeah, I assume if they had more time they could but Nvidia was dominating the market too hard and they needed to release it. It's all about ROI.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *tonyjones*
> 
> Yeah, I assume if they had more time they could but Nvidia was dominating the market too hard and they needed to release it. It's all about ROI.


Problem with that is they had plenty of time.


----------



## bluej511

Ordered one in France; the retailer's site shows it in stock on the 21st. Yesterday the retail price was 649€ for the 64 and 749€ for the 64 LC; a couple of hours later, all three 64 editions were 749€







Not sure why, but it's the only retailer I checked. Then I paid attention and noticed that with the coupon code Vega the price is 508€. I jumped on that just in case it was a mistake; my payment and everything was validated, so we shall wait and see. It's about 80€ cheaper than the CHEAPEST 1080.

I'll need to wait a week or so (for my next check) and will then order an EKWB waterblock; it's the only reason I went for a reference board this time around. It's a shame that EKWB doesn't like making waterblocks for AIB AMD cards, but it is what it is; plus, for the price, I can't fault it. If you compare it to US prices and take off 15% or so in taxes, it comes out just above the $499 MSRP, around $510 I believe.


----------



## bluej511

Quote:


> Originally Posted by *tonyjones*
> 
> Yeah but you'll be paying extra on your electric bill lol what's up with AMD not making it more power efficient.


For people always complaining about this: try this site, put in the power difference that you'd actually see while gaming (not the peak 400W usage or whatever; not sure about you, but I have FreeSync on my ultrawide so I cap my FPS anyway) and see how minimal the difference in price is (unless you live in Germany or the Netherlands, where it's expensive af). Here in France it's anywhere from 0.09€ to 0.14€ per kWh; nuclear ftw.
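To put rough numbers on that, the math is just watts times hours times rate; a quick sketch, where the 100 W delta, 3 h/day of gaming, and the second rate are all assumed figures:

```python
def yearly_cost_eur(extra_watts: float, hours_per_day: float, eur_per_kwh: float) -> float:
    """Extra electricity cost per year for a given power-draw delta."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * eur_per_kwh

# Assume ~100 W more than a GTX 1080, 3 hours of gaming a day:
print(yearly_cost_eur(100, 3, 0.09))  # cheap French rate: roughly 10 EUR/year
print(yearly_cost_eur(100, 3, 0.30))  # pricier German-style rate: roughly 33 EUR/year
```

Even at the expensive rate, the delta is a few euros a month, which is the poster's point.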


----------



## Irev

Just ordered a VEGA 64 from XFX .... got it at an amazing price


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *Irev*
> 
> Just ordered a VEGA 64 from XFX .... got it at an amazing price


PCCG? If you don't mind me asking, what did you pick yours up for?

I got an XFX from PCCG and I am so goddamn keen to push this 4K screen!


----------



## Irev

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> PCCG? If you don't mind me asking what did you pick yours up for?
> 
> I got an XFX from PCCG and I am so goddam keen to push this 4K screen!


Yeah mate, PCCG, $729.

It's crazy how MSI, Gigabyte etc. are asking $899 for the SAME THING!

PLE were selling a HIS Vega 64 for $699 with free shipping... now that is truly competition for the GTX 1080.

I'm so keen to drive this 1440p 144Hz FreeSync monitor


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *Irev*
> 
> yeah mate PCCG $729
> 
> its crazy how MSI gigabyte etc are asking $899 for the SAME THING!
> 
> PLE were selling a HIS VEGA 64 for $699 free shipping... now that is truly competition for the gtx1080
> 
> im so keen to drive this 1440p 144hz freesync monitor


Yeah! Hard to justify that much of a premium for two games, one of which isn't even out yet...
Just need it to ship now.


----------



## Newbie2009

I should have my Vega tomorrow; won't have the block until the 18th at the earliest. Ugh, the thought of taking apart my loop.

Anyone want to buy a vega 64? €999


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> I should have my vega tomorrow, won't have block until 18th earliest. Ugh the thought of taking apart my loop.
> 
> Anyone want to buy a vega 64? €999


If it comes with the block, maybe, haha.

For me it's easy to take apart my loop since I have a cube case and the mobo is horizontal (I don't even need to drain it; I can just remove the GPU or CPU block, then re-bleed and done), but I may take it completely apart this time, just to make sure I get no water in the PCIe slot and so I can completely clean my rads of dust; I think I've had it going a year so far without cleaning. Same water with copper sulphate in it; I use Alphacool distilled water.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> If it comes with the block maybe haha.
> 
> For me its easy to take apart my loop since i have a cube case and the mobo is horizontal (dont even need to drain it i can just remove the gpu or cpu block then redbleed and done) but i may take it completely apart this time just to make sure i get no water in the pcie slot and i can completely clean my rads of dust, i think ive had it a year so far without cleaning it. Same water with copper sulphate in it, i use alphacool distilled water.


Sounds pretty handy. Ah, I should be able to do a ninja job over the weekend; I guess I'll have to test that the card isn't DOA on air anyway before the blocks arrive.

I've a feeling Vega will be beastly under proper cooling.


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Sounds pretty handy. Ah I should be able to do a ninja job over the weekend, guess will have to test card isn't DOA on air anyway before blocks arrive.
> 
> I've a feeling vega will be beastly under proper cooling.


Yeah, it usually doesn't leak much; taking my block apart, all I had to do was refill the block and re-bleed, so not too bad. I will test it on air just to make sure it works. I'd still have to take my block apart, but I may just run a straight tube from the CPU block to the other rad while I wait for my block and stuff; not sure yet. It's still a preorder, I believe, so I'm not even sure I'll get one, haha, but for the price I paid I honestly don't care.


----------



## PontiacGTX

Could anyone with an RX Vega 56 check whether the power draw drops while undervolting? The Gamers Nexus review showed it had lower power draw compared to stock.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> Yea usually doesn't leak much, taking my block apart all i had to do was refill the block and rebleed so not too bad. I will test it on air just to make sure it works, id still have to take apart my block but may just run a straight tube from the cpu block to the other rad and wait for my block and stuff but not sure yet. Its still a preorder i believe so not even sure if ill get one haha, but for the price i paid i honestly don't care.


I'm looking forward to messing around with it: undervolt/overvolt, OC, maybe eventually play a game.

Last purchase was the R9 290 in its launch month. So bored.


----------



## rancor

Quote:


> Originally Posted by *PontiacGTX*
> 
> Anyone with RX Vega 56 could compare if the power draw drops while undervolting gamer nexus review shown it had lower power draw while being at stock


Vega 56 hasn't been released yet; it will be available August 28th.


----------



## PontiacGTX

Quote:


> Originally Posted by *rancor*
> 
> Vega 56 hasnt been released yet it be available August 28th.


Yes, I mean whenever people get it; I forgot to say.


----------



## Evil Penguin

Grrr! I knew I should have waited for newer Vega FE drivers:

http://support.amd.com/en-us/download/frontier?os=Windows%2010%20-%2064

Beta tab.


----------



## SlushPuppy007

Hi, can anyone tell me why the RX Vega 64 Liquid's TDP is so much higher than the 64 air model's? Can that little difference in clock speed eat up 50 extra watts?


----------



## bluej511

Quote:


> Originally Posted by *SlushPuppy007*
> 
> Hi, can anyone tell me why the rx vega 64 liquid TDP is so much higher than the 64 air model? Can that little diffirence in clock speed eat up 50 extra watts?


Yeah, probably; my guess is higher voltage, higher TDP. Most reviews show that using turbo mode you'll get an extra 50-75W of power consumption for little gain.

People need to realize that TDP is thermal design power, so it's how much heat the cooler is designed to dissipate, and being watercooled it's going to dissipate more with that massive cooler. My R9 390 went down 10-15W or so going from air to a custom loop; I'm assuming the Vega 64 will be the same and drop actual power consumption once on water.


----------



## SlushPuppy007

So regarding power draw of the vega 64, will it draw over 375watt when overclocked to maximum under custom water loop?


----------



## bluej511

Quote:


> Originally Posted by *SlushPuppy007*
> 
> So its not the liquid cooler thats pulling extra power via the pcb? The cooler prob has external power?


Yeah, that's there as well. I think the pump and fan pull power from the PCB; I'm not sure if it's from the PCIe slot or the 2x 8-pin.


----------



## SlushPuppy007

I'm trying to figure out whether, if I watercool the Vega 64 and overclock it to the max, it would consume more than 375 watts of power. More than that can be problematic, since 2x 8-pin plus the PCIe slot can only supply 375 watts per spec.
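For context, the 375 W figure is just the PCIe spec budget added up: 75 W from the x16 slot plus 150 W per 8-pin connector. A trivial sketch of that arithmetic (spec ratings are conservative, so real hardware can deliver more, as discussed further down the thread):

```python
# PCIe spec power budget for a card with auxiliary 8-pin connectors.
PCIE_SLOT_W = 75    # x16 slot, per the PCIe CEM spec
EIGHT_PIN_W = 150   # per 8-pin PCIe power connector

def spec_budget(n_eight_pin: int) -> int:
    """Total spec-rated power for the slot plus n 8-pin connectors."""
    return PCIE_SLOT_W + n_eight_pin * EIGHT_PIN_W

print(spec_budget(2))  # 375, the reference Vega 64 configuration
```

So exceeding 375 W on a 2x 8-pin card means exceeding spec somewhere, even if the connectors and VRM can physically handle it.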


----------



## Newbie2009

Quote:


> Originally Posted by *SlushPuppy007*
> 
> Im trying to figure out if I watercool the vega 64 and overclock it to max if it would consume more than 375 watt of power? More than that can be problematic since 2x8 pin plus pcie slot can only supply 375watt.


It's a bit of an out-there question; it will depend on the chip. The hotter a card gets, the more power it needs; the more power it gets, the hotter it gets.
Some chips will need less voltage for stock clocks but will hit a clock wall before a leakier chip, which takes more juice but can clock higher. If you go water, do custom, not the crappy AMD water-cooled one.


----------



## Newbie2009

It will also have more to do with the VRM than with the 8-pin's rated wattage.


----------



## dagget3450

Quote:


> Originally Posted by *Evil Penguin*
> 
> Grrr! I knew I should have waited for newer Vega FE drivers:
> http://support.amd.com/en-us/download/frontier?os=Windows%2010%20-%2064
> Beta tab.


Did you try this beta yet? I am still unable to get Radeon pro ui, and thus no wattman either.


----------



## gupsterg

Quote:


> Originally Posted by *SlushPuppy007*
> 
> Im trying to figure out if I watercool the vega 64 and overclock it to max if it would consume more than 375 watt of power? More than that can be problematic since 2x8 pin plus pcie slot can only supply 375watt.


The PCI-SIG figures are lower than what the hardware can actually deliver.



Pretty sure the Vega AIO does not use the PCI-E slot to power the AIO, as the power usage from the slot is low, like it was on Hawaii/Fiji.
Quote:


> In our testing, the power through the motherboard slot only goes up to 30 watts with the Vega FE


Source link.

So glad the reference PCB on Vega is not a borked setup like Polaris was.


----------



## 113802

I'm a proud RX Vega 64 owner


----------



## Evil Penguin

Quote:


> Originally Posted by *dagget3450*
> 
> Did you try this beta yet? I am still unable to get Radeon pro ui, and thus no wattman either.


Just tried it, I'm running into the same thing (no gamer mode).


----------



## dagget3450

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I'm a proud RX Vega 64 owner


Congrats!

Quote:


> Originally Posted by *Evil Penguin*
> 
> Just tried it, I'm running into the same thing (no gamer mode).


Yeah: no CrossFire, no WattMan. I am crying as I reload 17.6.


----------



## bluej511

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I'm a proud RX Vega 64 owner


Yea congrats man, how the hell did you get one so quickly?


----------



## 113802

Quote:


> Originally Posted by *bluej511*
> 
> Yea congrats man, how the hell did you get one so quickly?


I live in San Diego which is 2 hours away from Newegg's warehouse.


----------



## bluej511

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I live in San Diego which is 2 hours away from Newegg's warehouse.


Damn, not bad. I got mine for dirt cheap in Europe: 508€, compared to 649€ and now 749€, so I don't mind waiting for mine. I need to wait and order an EKWB GPU block for it anyway, so no rush for me.


----------



## steadly2004

Count me in the club soon.


----------



## 113802

My RX Vega 64 is running at 1750Mhz core? Everything I launch to check core speed shows it's at 1750Mhz stock.

http://www.3dmark.com/fs/13368332


----------



## bluej511

Quote:


> Originally Posted by *WannaBeOCer*
> 
> My RX Vega 64 is running at 1750Mhz core? Everything I launch to check core speed shows it's at 1750Mhz stock.
> 
> http://www.3dmark.com/fs/13368332


Is that what it's supposed to be at? I'm not familiar with the LC version. If it's at 1750MHz at all times, that's gonna run quite hot.


----------



## 113802

Quote:


> Originally Posted by *bluej511*
> 
> Is that what its supposed to be at? Not familiar with the LC version of it. If its at 1750mhz at all times thats gonna run quite hot.


It's supposed to be 1677MHz, and a ton of reviewers struggled with 1700MHz. Mine is stock, everything is reporting 1750MHz, and it caps at 52C at max load.


----------



## bluej511

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It's suppose to be 1677Mhz and a ton of reviewers struggled with 1700Mhz. Mine is stock and everything is reporting 1750Mhz and caps at 52C at max load.


Well, that's good then; they might have poor case airflow, or the pump or fan might not be running at a good speed. I'd be happy at 1750; it means I could probably get 1750 on my Vega 64 air with my waterblock.


----------



## PontiacGTX

Quote:


> Originally Posted by *WannaBeOCer*
> 
> My RX Vega 64 is running at 1750Mhz core? Everything I launch to check core speed shows it's at 1750Mhz stock.
> 
> http://www.3dmark.com/fs/13368332


have you compared using VEGA Frontier Edition drivers on it?


----------



## gupsterg

Quote:


> Originally Posted by *bluej511*
> 
> Is that what its supposed to be at?


Nope. 1677MHz max boost.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> My RX Vega 64 is running at 1750Mhz core? Everything I launch to check core speed shows it's at 1750Mhz stock.
> 
> http://www.3dmark.com/fs/13368332
> 
> 
> 
> Spoiler: Warning: Spoiler!


If you wouldn't mind, grab AtiFlash from the OP of this thread, have no monitoring tools open in the OS or running in the background, save the VBIOS and attach it to a post as a zip; I can see if the VBIOS has the clocks as you are seeing in the OS.


----------



## 113802

Quote:


> Originally Posted by *gupsterg*
> 
> Nope. 1677MHz max boost.
> If you wouldn't mind, grab AtiFlash in OP of this thread, have no monitoring tools open in OS or running in background and save VBIOS and attach to post as a zip, can see if VBIOS has clocks as you are seeing in OS.


Here you go

687F.zip 36k .zip file


----------



## rv8000

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It's supposed to be 1677MHz and a ton of reviewers struggled with 1700MHz. Mine is stock, everything is reporting 1750MHz, and it caps at 52C at max load.


Steve from GN mentioned there were a lot of issues with software readout values; it's quite possible it could be a buggy readout.

However, when Steve ran his Vega FE Hybrid tests, he did score around 24.9k @ 1700/1100. I would also try OC'ing your HBM2 and leave the core clocks alone just to see what happens.


----------



## rdr09

Quote:


> Originally Posted by *WannaBeOCer*
> 
> My RX Vega 64 is running at 1750Mhz core? Everything I launch to check core speed shows it's at 1750Mhz stock.
> 
> http://www.3dmark.com/fs/13368332


2 290s get 22K. That should still consume less power than these. Hope I find one in Egypt.


----------



## bluej511

Quote:


> Originally Posted by *rdr09*
> 
> 2 290s get 22K. That should still consume less power than these. Hope I find one in Egypt.


Well a single R9 390 (pretty much a 290) consumes 215W on water for me, so no, two of 'em would not consume less than a single Vega 64. You'd be at 430W on water and probably closer to 500 on air, and that's at stock speeds.


----------



## rdr09

Quote:


> Originally Posted by *bluej511*
> 
> Well a single R9 390 (pretty much a 290) consumes 215W on water for me, so no, two of 'em would not consume less than a single Vega 64. You'd be at 430W on water and probably closer to 500 on air, and that's at stock speeds.


My 850w should still be good. I want to pair this with a TR.


----------



## 113802

Quote:


> Originally Posted by *rv8000*
> 
> Steve from GN mentioned there were a lot of issues with software readout values, its quite possible it could be a buggy readout.
> 
> However, when Steve ran his Vega FE Hybrid tests, he did score around 24.9k @ 1700/1100. I would also try OC'ing your HBM2 and leave the core clocks alone just to see what happens.


I beat his score - the only thing I overclocked is the HBM to 1100; it still shows 1750Mhz stock

http://www.3dmark.com/3dm/21591557?


----------



## gupsterg

It is 1750MHz.

Zip has PP extracted and the tables list; ref my thread OP for PowerPlay header info, etc.

VEGA64_Liquid.zip 810k .zip file


+rep for the share @WannaBeOCer, enjoy.


----------



## bluej511

Quote:


> Originally Posted by *gupsterg*
> 
> It is 1750MHz.
> 
> Zip has PP extracted and the tables list; ref my thread OP for PowerPlay header info, etc.
> 
> VEGA64_Liquid.zip 810k .zip file


Damn that's nuts; so technically I should be able to hit that with my Vega 64 and EKWB waterblock, or I wonder if the BIOSes are different between water and air. Would be a hell of a nice overclock though. Maybe the press driver wasn't boosting the Vega 64 LC as much as it should have.


----------



## gupsterg

I had to double check I had not lost the plot....

Lower cards are Power Limited, I have not kept up fully with reviews so don't know if PL can be adjusted. But as Buildzoid was aware of the OCN VEGA bios thread I think GN used Helm's SoftPowerPlay mod on VEGA FE, but they did not know how to do for RX VEGA which differs, Helm did an update today.

Need to check PowerTune table in Liquid, Buildzoid uploaded VEGA 56 VBIOS today, gotta view that as well.

So have VEGA FE, VEGA 56 and VEGA Liquid VBIOS, if a member has VEGA 64 Air please share VBIOS.


----------



## 113802

When I was running at stock it would run at 1750Mhz 100% of the time. Overclocked the HBM and now it's mostly running at 1668Mhz. The giant frequency drop is because I alt-tabbed. HBM is stable at 1100Mhz with stock volts/auto settings. Temperatures do creep to 65C with auto fan settings. Had to increase fan speed to keep it below 65C.



Edit: My brother just let me know RX Vega 64 XTX max boost is 1750Mhz


----------



## bluej511

Quote:


> Originally Posted by *gupsterg*
> 
> I had to double check I had not lost the plot....
> 
> Lower cards are Power Limited, I have not kept up fully with reviews so don't know if PL can be adjusted. But as Buildzoid was aware of the OCN VEGA bios thread I think GN used Helm's SoftPowerPlay mod on VEGA FE, but they did not know how to do for RX VEGA which differs, Helm did an update today.
> 
> Need to check PowerTune table in Liquid, Buildzoid uploaded VEGA 56 VBIOS today, gotta view that as well.
> 
> So have VEGA FE, VEGA 56 and VEGA Liquid VBIOS, if a member has VEGA 64 Air please share VBIOS.


Not sure when I'll have mine, I ordered it yesterday so we'll see. I'll upload it before I even put it on water, to make sure it runs and whatnot.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> When I was running at stock it would run at 1750Mhz 100% of the time. Overclocked it HBM and now it's mostly running at 1668Mhz. The giant frequency drop is because I alt tabbed. HBM is stable at 1100Mhz with stock volts/auto settings. Temperatures do creep to 65C with auto fan settings. Had to increase fan speed to keep it below 65C.
> 
> 
> 
> Edit: My brother just let me know RX Vega 64 XTX max boost is 1750Mhz


Saw that in a couple reviews as well. 65°C is quite hot for a water cooled card, but it is a SINGLE 120mm rad; mine is a 240 and a 360, so pretty much 5 120mm rads. Should run significantly cooler.


----------



## steadly2004

Quote:


> Originally Posted by *WannaBeOCer*
> 
> When I was running at stock it would run at 1750Mhz 100% of the time. Overclocked it HBM and now it's mostly running at 1668Mhz. The giant frequency drop is because I alt tabbed. HBM is stable at 1100Mhz with stock volts/auto settings. Temperatures do creep to 65C with auto fan settings. Had to increase fan speed to keep it below 65C.
> 
> 
> 
> Edit: My brother just let me know RX Vega 64 XTX max boost is 1750Mhz


Isn't someone somewhere supposed to eat his shoe?.... Jk


----------



## rv8000

Quote:


> Originally Posted by *bluej511*
> 
> Not sure when ill have mine, i ordered it yesterday so we''ll see. Ill upload it before i even put it on water to make sure it runs and what not.
> Saw that in a couple reviews as well, 65°C is quite a bit hot for a water cooled card but it is a SINGLE 120mm rad, mine is a 240 and a 360 so pretty much 5 120mm rads. Should run significantly cooler.


All things considered that's actually fairly cool for a clc on a 350-400+ watt card.


----------



## 113802

Currently stable at 1800Mhz/1100Mhz and got it to sustain 1800Mhz when gaming.


----------



## rv8000

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Currently stable at 1800Mhz/1100Mhz and got it to sustain 1800Mhz when gaming.


Any luck getting Fire Strike to run at those clocks? Looking good for some early results though.


----------



## Arizonian

/subbed

Congrats to the new owners who were lucky to pick em up. Personally won't have 6 bills until February. Waiting for a Sapphire Nitro or might try MSI's gaming version this next time.

Looking forward to members' overclocking and gaming results.


----------



## 113802

Quote:


> Originally Posted by *rv8000*
> 
> Any luck getting fire strike to run at those clocks. Lookin good for some early results though


1750/1100Mhz scored the same as 1802/1100Mhz

I ran a FireStrike at 1847Mhz and made sure it sustained the 1847Mhz the entire run. Only 300 or so more than stock?

http://www.3dmark.com/3dm/21592487?



First Game: Tomb Raider with stock memory
average of three runs

Stock FPS: MIN FPS: 42 MAX FPS: 78 AVERAGE FPS: 60.6
1750Mhz Sustained: MIN FPS: 44.6 MAX FPS: 84 AVERAGE FPS: 62.1
1857Mhz Sustained: MIN FPS: 44 MAX FPS: 86 AVERAGE FPS: 62.6

1750Mhz Sustained w/ 1100Mhz HBM: MIN FPS: 46 MAX FPS: 88 AVERAGE FPS: 64.5


----------



## dagget3450

Work is in full swing for me, working my arse off this week. I will update the thread OP and list this weekend. Was thinking about doing some sort of benchmark list too, to try to compile more data on Vega. Maybe to compare water against air etc...


----------



## Y0shi

@dagget3450

You should change the owners list in the OP. Profile link shows the wrong Y0shi









http://www.overclock.net/u/531359/y0shi <-- the real one


----------



## SAMiN

To those who had their hands on Vega, any issues with coil whine?


----------



## Newbie2009

yey my vega is out for delivery. No sign of the waterblock yet though.

A couple of odd things about vega :


A Fury X clocked at the same clocks seems to score the same/similar to Vega.
Fury has 8.9B transistors, Vega has 12.5B.
Same amount of shaders, so where are the improvements?
Why did AMD launch a product which IMO is not ready yet, just days after probably their most significant CPU launch in a decade?
Why would AMD price so high in comparison to Nvidia when they need market share?
I think there are two likely answers:
1- VEGA is broken beyond repair; they realized there is something fundamentally wrong with the chip, so ship it out and sell whatever you can.
2- VEGA isn't ready, but they wanted to ride the hype from the CPU division so launched early.

Strangest launch of an AMD card in recent memory.


----------



## ilmazzo

Quote:


> Originally Posted by *Newbie2009*
> 
> yey my vega is out for delivery. No sign of the waterblock yet though.
> 
> A couple of odd things about vega :
> 
> 
> A fury X clocked at same clocks seems to score same/similar to vega.
> Fury has 8.9B transistors, vega has 12.5B.
> Same amount of shaders, where are the improvements?
> Why did AMD launch a product which IMO is not ready yet and just days after probably their most significant cpu launch in a decade.
> Why would AMD price so high in comparison to Nvidia, they need market share.
> I think there are two likely answers:
> 1- VEGA is broken beyond repair and they realized there is something fundamentally wrong with the chip, ship it out and sell whatever you can.
> 2- VEGA isn't ready but wanted to use the hype from the CPU division so launched early.
> 
> Strangest launch of an AMD card I can remember from recent memory.


My ignorance is great in this field, but AMD claimed that some transistors were used to permit the huge bump in frequencies, and the big change to the L2 architecture might require more space for the interconnections too... just my $0.01


----------



## ilmazzo

sorry double post


----------



## LionS7

Quote:


> Originally Posted by *dagget3450*
> 
> Work is in full swing for me, working my arse off this week. I will update thread op and list this weekend . Was thinking about doing some sort of benchmark lists also to try and compile more data on vega. Maybe to compare water against air etc...


Did you try Crimson 17.8.1 on Vega Frontier Edition ?


----------



## Newbie2009

Quote:


> Originally Posted by *ilmazzo*
> 
> sorry double post


I would have thought the die shrink would have facilitated the clock bump.

Right now Vega is looking like a die-shrunk Fury X with higher clocks and TDP, with no increase in per-core performance.

Looking at the user benches also, overclocking seems to yield very little.

Also, what about the implementation of the new tiled render engine? That was supposed to give a nice boost for free; how can Fury X match Vega at the same clocks if it is enabled?


----------



## Y0shi

Quote:


> Originally Posted by *LionS7*
> 
> Did you try Crimson 17.8.1 on Vega Frontier Edition ?


I can't recommend that. You'll lose all Crimson features; only Radeon Pro will be available. So no WattMan et al.


----------



## LionS7

Quote:


> Originally Posted by *Y0shi*
> 
> I can't recommend that. You'll lose all Crimson features; only Radeon Pro will be available. So no WattMan et al.


There is 17.8.2 for Frontier Edition from the 14th, what about that? Does it have the RX Vega optimisations for gaming...?


----------



## Blameless

Quote:


> Originally Posted by *SlushPuppy007*
> 
> Hi, can anyone tell me why the rx vega 64 liquid TDP is so much higher than the 64 air model? Can that little diffirence in clock speed eat up 50 extra watts?


Yes, small increases in clock speed can result in large increases in power at the high-end of what an architecture is capable of.

However, the TDP value is a limiter on board power. The liquid cooled version can dissipate much more heat, so is allowed to generate more heat before being throttled. It has more clock headroom, but it also boosts more reliably to those limits.

Even if the base/boost clocks were the same, the increased power limit would be used and would result in more performance.
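Blameless's point about disproportionate power cost can be sketched with simple arithmetic. This is a generic dynamic-power rule of thumb (P roughly proportional to f·V²), not AMD's actual power model, and the voltage numbers here are hypothetical:

```python
# Rule-of-thumb dynamic power scaling: P ~ f * V^2, so a clock bump that
# also needs a voltage bump costs far more power than the clock gain alone.

def relative_power(f_new, f_old, v_new, v_old):
    """Ratio of dynamic power between two frequency/voltage operating points."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Hypothetical numbers: 1677 -> 1750 MHz (~4% clocks) with 1.20 -> 1.25 V
ratio = relative_power(1750, 1677, 1.25, 1.20)
print(f"{(ratio - 1) * 100:.1f}% more dynamic power")  # roughly 13% for ~4% clocks
```

That is why the Liquid card's higher TDP buys relatively little extra frequency at the top of the curve.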
Quote:


> Originally Posted by *SlushPuppy007*
> 
> Im trying to figure out if I watercool the vega 64 and overclock it to max if it would consume more than 375 watt of power?


Yes, if you max out the power limit slider.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1750/1100Mhz scored the same as 1802/1100Mhz
> 
> I ran a FireStrike at 1847Mhz and made sure it sustained the 1847Mhz the entire run. Only 300 or so more than stock?


Hitting the power limit and being throttled, or getting too warm and having the cache/memory throw EDC errors.


----------



## SlushPuppy007

Quote:


> Originally Posted by *Blameless*
> 
> Yes, small increases in clock speed can result in large increases in power at the high-end of what an architecture is capable of.
> 
> However, the TDP value is a limiter on board power. The liquid cooled version can dissipate much more heat, so is allowed to generate more heat before being throttled. It has more clock headroom, but it also boosts more reliably to those limits.
> 
> Even if the base/boost clocks were the same, the increased power limit would be used and would result in more performance.
> Yes, if you max out the power limit slider.
> Hitting the power limit and being throttled, or getting too warm and having the cache/memory throw EDC errors.


Thank you,

Even though the RX Vega 64 is a power hog, I'm still very much interested in the card + a Samsung 32" CHG70 Gaming Monitor.

I'm very curious as to what type of performance I can expect when putting a top-notch Custom PCB Card (like an Asus ROG STRIX model) under a EKWB Waterblock in Custom Liquid Loop (2 x 360 Rads).

What bothers me is that if you pass 375 Watts of power draw for this card, how will that affect your motherboard pcie slot and psu pcie cables?

I have never run a GFX Card past 350 Watt of power draw (for the card alone).

PSU: Coolermaster Silent Pro Gold 1200W
Mobo: ASRock X370 Taichi


----------



## Newbie2009

Ok so add me please


----------



## gupsterg

@SlushPuppy007

Post 136 had some info. Ref VEGA PCI-E slot draw is well below the PCI-SIG spec: the slot allows 75W, and it drew 30W in the PCPer measurements. This is similar to how Hawaii/Fiji were; the card only uses the PCI-E slot for minor power-draw elements, while the GPU/cooling solution runs off the PCI-E plugs.

If you are using separate cables to each PCI-E plug you will be perfectly safe for any OC exploits.



Above is showing current draw on the slot from RX VEGA 64 AIR and VEGA FE AIR; PCI-E slot draw is a non-issue. The VRM on the ref PCB is excessively over-built as well; you are more likely to run into cooling/silicon limits than the VRM crapping out during OC exploits, IMO.
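As a back-of-envelope check on the numbers above (spec limits per the PCIe CEM spec; the ~30W slot figure is the PCPer measurement quoted, not something I measured), here is a quick sketch of the reference board's in-spec power budget:

```python
# Rough power-budget arithmetic for the reference RX Vega 64 board.
# Spec limits: 75 W from the slot, 150 W per 8-pin PCIe plug (PCIe CEM).
SLOT_SPEC_W = 75
EIGHT_PIN_SPEC_W = 150
MEASURED_SLOT_W = 30  # approx. slot draw reported in the PCPer measurements

# Total "in-spec" ceiling for a slot + two 8-pin card
spec_total = SLOT_SPEC_W + 2 * EIGHT_PIN_SPEC_W

# If the slot only carries ~30 W, the plugs shoulder the rest
plug_budget = spec_total - MEASURED_SLOT_W

print(spec_total, plug_budget)  # 375 345
```

So even a maxed power-limit card stays well inside the slot's 75W allowance, which matches gupsterg's point that the plugs, not the slot, do the heavy lifting.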

@WannaBeOCer

Any chance of the 2nd BIOS switch position VBIOS? Cheers.

From the monitored GPU MHz it doesn't look like you have hit the PL limit for the card to throttle, but the polling rate in WattMan was not that fast when I last used it. Could well be that errors have crept in, as stated by Blameless.

The 5% clock gain clearly just did not scale, link. Buildzoid tested VEGA FE on LN2, link. Hopefully he'll do some more runs.

PL in RX VEGA 64 Liquid is higher than VEGA FE AIR, but the same as VEGA FE AIO. Temp limits have been lowered in Liquid/AIO vs AIR.



Spoiler: VEGA FE AIR PowerTune



Code:

typedef struct _ATOM_Vega10_PowerTune_Table_V2
{
07      UCHAR  ucRevId;
DC 00 (220W)    USHORT usSocketPowerLimit;
DC 00 (220W)    USHORT usBatteryPowerLimit;
DC 00 (220W)    USHORT usSmallPowerLimit;
2C 01 (300A)    USHORT usTdcLimit;
00 00           USHORT usEdcLimit;
59 00 (89°C)    USHORT usSoftwareShutdownTemp;
69 00 (105°C)   USHORT usTemperatureLimitHotSpot;
49 00 (73°C)    USHORT usTemperatureLimitLiquid1;
49 00 (73°C)    USHORT usTemperatureLimitLiquid2;
5F 00 (95°C)    USHORT usTemperatureLimitHBM;
73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
73 00 (115°C)   USHORT usTemperatureLimitVrMem;
64 00 (100°C)   USHORT usTemperatureLimitPlx;
40 00 (64O??)   USHORT usLoadLineResistance;
90      UCHAR ucLiquid1_I2C_address;
92      UCHAR ucLiquid2_I2C_address;
97      UCHAR ucLiquid_I2C_Line;
60      UCHAR ucVr_I2C_address;
96      UCHAR ucVr_I2C_Line;
00      UCHAR ucPlx_I2C_address;
90      UCHAR ucPlx_I2C_Line;
55 00 (85°C)    USHORT usTemperatureLimitTedge;
} ATOM_Vega10_PowerTune_Table_V2;







Spoiler: RX VEGA Liquid



Code:


typedef struct _ATOM_Vega10_PowerTune_Table_V2
{
07      UCHAR  ucRevId;
08 01 (264W)    USHORT usSocketPowerLimit;
08 01 (264W)    USHORT usBatteryPowerLimit;
08 01 (264W)    USHORT usSmallPowerLimit;
2C 01 (300A)    USHORT usTdcLimit;
00 00           USHORT usEdcLimit;
4A 00 (74°C)    USHORT usSoftwareShutdownTemp;
69 00 (105°C)   USHORT usTemperatureLimitHotSpot;
4A 00 (74°C)    USHORT usTemperatureLimitLiquid1;
4A 00 (74°C)    USHORT usTemperatureLimitLiquid2;
5F 00 (95°C)    USHORT usTemperatureLimitHBM;
73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
73 00 (115°C)   USHORT usTemperatureLimitVrMem;
64 00 (100°C)   USHORT usTemperatureLimitPlx;
40 00 (64O??)   USHORT usLoadLineResistance;
90      UCHAR ucLiquid1_I2C_address;
92      UCHAR ucLiquid2_I2C_address;
97      UCHAR ucLiquid_I2C_Line;
60      UCHAR ucVr_I2C_address;
96      UCHAR ucVr_I2C_Line;
00      UCHAR ucPlx_I2C_address;
90      UCHAR ucPlx_I2C_Line;
46 00 (70°C)    USHORT usTemperatureLimitTedge;
} ATOM_Vega10_PowerTune_Table_V2;
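For anyone wanting to sanity-check the annotations, the hex pairs in the PowerTune dumps above are little-endian USHORTs. A minimal sketch (my own, not a thread tool) to decode them:

```python
import struct

# The annotated values above are little-endian unsigned shorts:
# e.g. "DC 00" -> 0x00DC = 220, "08 01" -> 0x0108 = 264.
def le_ushort(hexpair: str) -> int:
    """Decode a 'XX YY' hex byte pair as a little-endian USHORT."""
    return struct.unpack("<H", bytes.fromhex(hexpair.replace(" ", "")))[0]

print(le_ushort("DC 00"))  # 220  (FE Air usSocketPowerLimit, W)
print(le_ushort("08 01"))  # 264  (Liquid usSocketPowerLimit, W)
print(le_ushort("2C 01"))  # 300  (usTdcLimit, A)
print(le_ushort("4A 00"))  # 74   (Liquid usSoftwareShutdownTemp, degrees C)
```

This matches the 220W vs 264W socket power limits and the lowered Liquid temperature limits discussed above.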


----------



## Newbie2009

Ok so literally nothing bar the card arrived in the box, no cables etc


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Ok so literally nothing bar the card arrived in the box, no cables etc


Mmm, a bit odd but makes sense. No driver CD or anything? Usually you get at least one HDMI cable (but to be honest that's totally pointless; pretty sure most of us use DisplayPort, I only use HDMI if I need to use my TV). Maybe the cards are too expensive, so no $2 HDMI cable along with it haha.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> Mmm, a bit odd but makes sense. No driver CD or anything? Usually you get at least one HDMI cable (but to be honest that's totally pointless; pretty sure most of us use DisplayPort, I only use HDMI if I need to use my TV). Maybe the cards are too expensive, so no $2 HDMI cable along with it haha.


nothing.


----------



## biZuil

Does the HBM clock higher than 1100mhz? The Vega core seems like it's bandwidth-starved.


----------



## ilmazzo

Excuse me, but why ask if he is using 17.8.1? As far as I know, those are the only drivers that support RX Vega, or am I missing something?


----------



## ilmazzo

Quote:


> Originally Posted by *biZuil*
> 
> Does the HBM clock higher than 1100mhz? Vega core seems like its bandwidth starved.


In a scenario where primitive discard does not work, I would agree with you about bandwidth.


----------



## Newbie2009

Didn't get a chance to play around with it, but the clock readings are all over the place.

Think I will leave after burner off until proper support.


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Didn't get a chance to play around with it, but the clock readings are all over the place.
> 
> Think I will leave after burner off until proper support.


Isn't it supposed to be all over the place depending on load? My R9 390 does that in some games, bouncing around from 900-1040MHz. Try to see if there's a beta of Afterburner and/or HWiNFO and RivaTuner. HWiNFO64's developer is on the forum; I'm sure he's already got a beta version working with Vega to get better readings. I think whatever you guys are seeing in WattMan is probably the most accurate.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> Isn't it supposed to be all over the place depending on load? My r9 390 does that in some games bounce around from 900-1040 mhz, try to see if theres a beta of afterburner and/or hwinfo and rivatuner. Hwinfo64s designer/ceo/writer is on the forum, I'm sure hes already got a beta version of it to work with Vega to get better readings. I think whatever you guys are seeing in wattman is probably the most accurate.


It was going 0% to 99% GPU load every 5 seconds. Anyway, in my experience AB is funky with AMD cards at the best of times.


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> nothing.


Can you run a 3DMark Fire Strike test to compare with the other guy's RX VEGA?


----------



## Newbie2009

Quote:


> Originally Posted by *PontiacGTX*
> 
> Can you run a 3DMark Fire Strike test to compare with the other guy's RX VEGA?


Not at home, only had time to install the card, will play with this evening and post.


----------



## ontariotl

I originally pre-ordered the liquid edition as I really didn't want to put on another aftermarket waterblock; I thought I would go the easy route this time. However, stock didn't arrive as planned (really no surprise) and they had no idea when it would. I decided to hang on to hope until late last night, when I checked my local store for real-time stock and they had the black air-cooled in stock (4 of them). This morning I got up and headed over to stand in front of the door before opening, and well......they now have 3 in stock, and the liquid order is cancelled. I will just get an EK waterblock when it comes out.

I guess you can add me to the owner list now. I'm a proud owner of a year-delayed card.










I am salty enough about the pricing where I live that I seriously thought about going green with a 1080 Ti and a G-Sync monitor. However, already owning a FreeSync ultrawide monitor, I'm in with the red ecosystem; replacing it with a G-Sync ultrawide is probably more cost-prohibitive than it looks on paper. Sure, I can get more performance with a Ti, but besides the extra cost of another monitor for G-Sync, another reason that compelled me to stick with AMD is below.

I took a chance on a pair of R9 290 cards when they first came out, even though everyone *****ed about how power and heat were through the roof and kept telling me I should have bought a 780 Ti instead. I just used waterblocks to ease that pain, and for almost 4 years they have served me well, especially since I was able to flash them to 290Xs easily. I certainly hope Vega can repeat at least 3 of those years. Thinking back, if I had bought a 780 Ti, I probably would have upgraded at least 3 more times in that period. Sure, Nvidia still has the performance crown, but being happy with a card that lasted almost 4 years while my wallet fattened up makes up for it.

I'm going to do some ultrawide 1440p benchmarks with my old 290(x)'s xfired if possible along with a killawatt meter and then compare it to the Vega card shortly. I'm really curious to see what I get.


----------



## Tgrove

I'm in the same boat. I always ran SLI Nvidia GPUs and Surround or 1440p, then I got this monitor and Fury X CrossFire 2 years ago. Great experience.

I can't go back to a non-sync display; the difference is overwhelming to me.

With that said..... IMO, AMD GPU + FreeSync > Nvidia + no G-Sync. Granted, I could get any G-Sync monitor, but they offer no variety.


----------



## bluej511

Quote:


> Originally Posted by *Tgrove*
> 
> Im in the same boat. I always ran sli nvidia gpus and surround or 1440p, then i got this monitor.
> 
> I cant go back to a non sync display, the difference is overwhelming to me
> 
> With that said.....IMO amd gpu + freesync > nvidia + no g sync


Yea, I agree. I've had FreeSync for over a year now, and when it doesn't work or glitches I can tell, and it drives me nuts.


----------



## Sicness

Quote:


> Originally Posted by *gupsterg*
> 
> So have VEGA FE, VEGA 56 and VEGA Liquid VBIOS, if a member has VEGA 64 Air please share VBIOS.


Gup,

here are the two BIOS files of the Sapphire Vega 64 Air Black I received today. Hope this helps











SapphireVega64AirBlack.zip 271k .zip file


----------



## gupsterg

+rep.

Thanks. As you got the full ROM, I'm assuming you used the command line to dump the VBIOS?

Switch 2 (towards PCI-E plugs) is lower PL position, 200W. Where as position 1 (towards display IO) is 220W.

When I first saw the VBIOS and started the thread on it, I was scratching my head over why the PL was low compared with the TBP/TDP figures seen on the web. It makes sense now after seeing members' driver panel shots: they have PL at its lowest amount, right for, say, Power Saver; it's then boosted by the other presets, or you set PL as required when doing a manual setup.

Been reading some reviews today; so strange that they list a boost clock spec differing from what is in the VBIOS.


----------



## hellm

thx, i made a SoftPowerPlay regkey for the 64 as well. i repost it here in this thread; makes maybe more sense than in the preliminary FE thread..









you will have to check [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\, if the card is installed under 0000. if not, edit the file or repair with DDU..

MorePowerVega56.zip 1k .zip file


max powerlimit

32 00 -> 0x32 -> +50%


A5 00 -> 0xA5 -> 165W Socket PowerLimit
A5 00 -> 0xA5 -> 165W Battery PowerLimit
A5 00 -> 0xA5 -> 165W Small PowerLimit
2C 01 -> 0x12C -> 300A Tdc Limit

MorePowerVega64.zip 1k .zip file


max powerlimit

32 00 -> 0x32 -> +50%


DC 00 -> 0xDC -> 220W Socket PowerLimit
DC 00 -> 0xDC -> 220W Battery PowerLimit
DC 00 -> 0xDC -> 220W Small PowerLimit
2C 01 -> 0x12C -> 300A Tdc Limit
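Going the other way, if you want the byte pair for a different power limit to patch into a SoftPowerPlay table, the encoding is the same little-endian USHORT. A small sketch (illustration only; edit registry values at your own risk):

```python
import struct

def watts_to_bytes(value: int) -> str:
    """Encode an integer (watts or amps) as the 'XX YY' little-endian
    byte pair used in the SoftPowerPlay hex listings above."""
    return " ".join(f"{b:02X}" for b in struct.pack("<H", value))

print(watts_to_bytes(165))  # A5 00  (Vega 56 power limit listed above)
print(watts_to_bytes(220))  # DC 00  (Vega 64 power limit listed above)
print(watts_to_bytes(300))  # 2C 01  (TDC limit; that one is amps, not watts)
```

This reproduces hellm's `A5 00` / `DC 00` / `2C 01` values exactly, so you can compute the pair for any other target before hex-editing the key.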


----------



## Sicness

Quote:


> Originally Posted by *gupsterg*
> 
> +rep
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Thanks, as you got full ROM I'm assuming you used command line to dump VBIOS?


You're most welcome! No, actually I used the GUI of ATIFlash 2.77. I must've missed the part saying I needed to run it via CLI. Assuming you see full BIOS ROMs here, a new dump via CLI isn't necessary?

Quote:


> Originally Posted by *hellm*
> 
> thx, i made a SoftPowerPlay regkey for the 64 as well. i repost it here in this thread, makes maybe more sense as in the preliminary FE thread..


Thank you!


----------



## gupsterg

Quote:


> Originally Posted by *Sicness*
> 
> You're most welcome! No, actually I used the GUI of ATIFlash 2.77. I must've missed the part saying I needed to run it via CLI. Assuming you see full BIOS ROMs here, a new dump via CLI isn't necessary?


Nope, all is fine. Dunno why @WannaBeOCer's earlier posted dump of the RX VEGA Liquid was not the full VBIOS but just the Legacy section.

At least it is confirmed that either route works to gain a full VBIOS dump. Buildzoid supplied a VEGA 56 AIR dump using the CLI of AtiFlash v2.77 in the other thread.


----------



## criminal

Quote:


> Originally Posted by *rv8000*
> 
> Pretty much the same deal here. Logged into the mobile app for 1-click ordering at 9:00 am est on the dot, still not shipped.
> 
> I wonder if they actually got a shipment of cards from XFX yet. At the very least if the order gets fulfilled in the next two weeks I will have avoided the mining tax +3rd party seller inflation.


So did you get ship date yet?

I ended up canceling my order.


----------



## tpi2007

I have a question / request, can members here with a Vega FE and an RX Vega post pictures from the side of the naked chip?

It seems like some of RX Vega's HBM2 dies sit lower than the GPU chip. I wonder if this will affect HBM2 temps and if the vapour chamber is the same.

It's probably because they didn't compensate for the lower height of the 4 GB dies compared to Vega FE's 8 GB dies.

And I wouldn't rule out them using partially working 8 GB dies in the mix too, given how expensive the dies are to make; if they're salvageable, why not? This is probably the case for RX Vega 64 cards.

http://www.guru3d.com/news-story/amd-radeon-rx-vega-10-chips-differ-physically-and-quite-significantly.html


----------



## Newbie2009

My Vega just blew up my 1200w PSU, and maybe mobo at stock volts no less


----------



## ontariotl

What brand of power supply? I know my 3 R9 290Xs took out my Seasonic X1250 not long after I bought it. It was more of a defect. Maybe your power supply has a defect that only the Vega brought out.


----------



## Evil Penguin

Quote:


> Originally Posted by *tpi2007*
> 
> I have a question / request, can members here with a Vega FE and an RX Vega post pictures from the side of the naked chip?
> 
> It seems like some of RX Vega's HBM2 dies sit lower than the GPU chip. I wonder if this will affect HBM2 temps and if the vapour chamber is the same.
> 
> It's probably because they didn't compensate for the lower height of the 4 GB dies compared to Vega FE's 8 GB dies.
> 
> And I wouldn't discard cases where they are using partially working 8 GB dies in the mix too, given how expensive they are to make; if they're salvageable, why not? This is probably the case for RX Vega 64 cards.
> 
> http://www.guru3d.com/news-story/amd-radeon-rx-vega-10-chips-differ-physically-and-quite-significantly.html


From my understanding each DRAM die is very, very thin. I don't think going from 4 stacks to 8 would require a height differential on the heatsink.


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> My Vega just blew up my 1200w PSU, and maybe mobo at stock volts no less


How in the hell did that happen?


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> What brand of power supply. I know my 3 R9 290x's took out my Seasonic X1250 not long ago after I bought it. It was more of a defect. Maybe your power supply has a defect that only now the Vega brought up.


OCZ, managed crossfire 290x for years


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> OCZ, managed crossfire 290x for years


Maybe it was close to biting the dust and installing Vega was a coincidence. OCZ power supplies didn't really have the best longevity in my experience with them.


----------



## Newbie2009

I added 50 MHz on the core from stock when benchmarking and it tripped a breaker in the house and psu died.


----------



## Newbie2009

Thankfully mobo seems to still be alive


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Thankfully mobo seems to still be alive


I'd be pissed if Vega took out my new Ryzen build lol. Glad the mobo still works; what about the card?


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> I added 50 MHz on the core from stock when benchmarking and it tripped a breaker in the house and psu died.


Well, they weren't kidding when they said a minimum 1 kW power supply, and I'm sure it needs to be a very efficient one. If it tripped a breaker, that tells you two things: yes, Vega is power hungry, but your power supply was also on the way out.
Quote:


> Originally Posted by *bluej511*
> 
> Id be pissed if vega took out my new ryzen build lol. Glad the mobo still works, about the card?


Glad the mobo works too. I'm sure the card is fine. The power supply and breaker more than likely took the brunt of it.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> Id be pissed if vega took out my new ryzen build lol. Glad the mobo still works, about the card?


No idea, I've yet to strip it down. I got the mobo into standby mode with a spare PSU but I don't think I've enough cables to hook everything up. Recommend me a PSU? 800W?

On Vega I undervolted 1630MHz from 1.2V to 0.8V and it passed the benchmark OK. The problem happened when I reset the volts to stock and bumped the clock. Anyway, these things happen.

I'm pretty sure clocks aren't being reported correctly; all apps reported the same 1630MHz base clock.


----------



## b0uncyfr0

Still nothing on an OC'd 1070 vs a max OC'd 56? This is where the battle is, and not many people are talking about it. Why?


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> No idea. Ive to strip down. I got the mobo into standby mode with spare psu but don't think Ive enough cables for it to hook up everything. Recommend me a PSU? 800w?
> 
> On Vega I undervolted 1630mhz from 1.2 to 0.8 and passed benchmark ok. Problem happened when I reset volts to stock and bumped the clock. Anyway these things happen.
> 
> I'm pretty sure clocks aren't being reported correctly, and all apps reported same 1630 base clock.


Even though mine had a defect, I would recommend a high efficiency Seasonic model. I've used Corsair and Enermax in the past and they are fine as well.


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> No idea. Ive to strip down. I got the mobo into standby mode with spare psu but don't think Ive enough cables for it to hook up everything. Recommend me a PSU? 800w?
> 
> On Vega I undervolted 1630mhz from 1.2 to 0.8 and passed benchmark ok. Problem happened when I reset volts to stock and bumped the clock. Anyway these things happen.
> 
> I'm pretty sure clocks aren't being reported correctly, and all apps reported same 1630 base clock.


I wonder how stable 0.8V was, that's insane; it would consume nearly nothing then.


----------



## ontariotl

Quote:


> Originally Posted by *b0uncyfr0*
> 
> Still nothing on an OC'd 1070 vs a max OC'd 56? This is where the battle is, and not many people are talking about it. Why?


Probably because no one has the 56 in their possession besides reviewers. With the deadline to deliver a review and garner some revenue for their hard work, many were limited in doing any extra testing. It will probably happen closer to the release, but you may have to rely on the community to deliver the answer you seek once someone gets hold of one.


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> Well, they weren't kidding when they said a minimum 1 kW power supply, and I'm sure it needs to be a very efficient one. If it tripped a breaker, that tells you two things: yes, Vega is power hungry, but your power supply was also on the way out.
> Glad the mobo works too. I'm sure the card is fine. The power supply and breaker more than likely took the brunt of it.


That's what i'm hoping.

Quote:


> Originally Posted by *b0uncyfr0*
> 
> Still nothing on an OC'd 1070 vs a max OC'd 56? This is where the battle is, and not many people are talking about it. Why?


The 56 isn't out until the end of the month.
Quote:


> Originally Posted by *ontariotl*
> 
> Even though mine had a defect, I would recommend a high efficiency Seasonic model. I've used Corsair and Enermax in the past and they are fine as well.


Perfect thanks, will check them out.

Quote:


> Originally Posted by *bluej511*
> 
> I wonder how stable .8v was thats insane, would consume nothing then.


Ran a couple of benchmarks, about a 20k GPU score? I'm sure the clocks are wrong though. Wattage at the wall did drop from the high 500s to the high 400s, though it was a bit all over the place, probably the PSU dumping its pants.


----------



## Newbie2009

Picked up a Corsair AX860 80 Plus Platinum; Seasonic was a similar price but this had a discount on it. Should have it Friday, so Vega will be benched Saturday.

It will do 2GHz on 800mV!!!!!


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> Picked up a Corsair AX 860W 80 plus platinum platinum, seasonics similar price but this had a discount on it, should have on Friday so Vega will be benched Saturday.
> 
> It will do 2ghz on 800mv!!!!!




__
https://www.reddit.com/r/6t7lzn/radeon_rx_vega_3dmark_entry_spotted_with_almost/dlijs40/
It's probably running at a different frequency or throttling.


----------



## Newbie2009

Quote:


> Originally Posted by *PontiacGTX*
> 
> 
> __
> https://www.reddit.com/r/6t7lzn/radeon_rx_vega_3dmark_entry_spotted_with_almost/dlijs40/
> it probably is using a different frequency or throttling


Yup, agree, it's a bit of a mess really


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Yup, agree, it's a bit of a mess really


It's why I don't pay attention to MHz when OCing, but to gaming and synthetic performance. If there's NO improvement then it's not working and you need to dial the OC back a bit.


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> Yup, agree, it's a bit of a mess really


Why doesn't someone analyze the GPU's BIOS and see if there is any limit keeping it at a certain performance level due to power consumption?


----------



## skullbringer

Vega 64 air here. Did the powerplay table reg mod to allow for +100% PL, which triggered the OCP on my HX850i when loading 3DMark Timespy, for example. Had to swap out the PSU for the old RM1000; seeing max sustained power draw at the PSU of about 550W (OC'd Ryzen 7 system).

Now I can run any benchmark at "1980MHz core, 1100MHz memory" with slightly more than stock perf. The quotes are because this is the clock every application reports, so it is very likely a driver issue.

Does anyone know how to actually set a real, "non-fake" clock?


----------



## Newbie2009

Quote:


> Originally Posted by *skullbringer*
> 
> Vega 64 air here, did powerplay table reg mod to allow for +100% PL, which triggered the OCP on my HX850i when loading 3DMark Timespy for example. Had to swap out the psu for the old RM1000i, seing max sustained power draw at the psu of about 550W (oc'd Ryzen 7 system).
> 
> Now I can run any benchmark at "1980MHz core, 1100MHz memory" with slightly more than stock perf. "" because this is the clock every application reports, so it is very likely a driver issue.
> 
> Does anyone know how to actually set a real, "non-fake" clock?


You a holy man? haha


----------



## PontiacGTX

Quote:


> Originally Posted by *skullbringer*
> 
> Vega 64 air here, did powerplay table reg mod to allow for +100% PL, which triggered the OCP on my HX850i when loading 3DMark Timespy for example. Had to swap out the psu for the old RM1000i, seing max sustained power draw at the psu of about 550W (oc'd Ryzen 7 system).
> 
> Now I can run any benchmark at "1980MHz core, 1100MHz memory" with slightly more than stock perf. "" because this is the clock every application reports, so it is very likely a driver issue.
> 
> Does anyone know how to actually set a real, "non-fake" clock?


Probably if someone like @The Stilt or @gupsterg or @buildzoid checked the BIOS to override the limitation


----------



## bluej511

Quote:


> Originally Posted by *skullbringer*
> 
> Vega 64 air here, did powerplay table reg mod to allow for +100% PL, which triggered the OCP on my HX850i when loading 3DMark Timespy for example. Had to swap out the psu for the old RM1000i, seing max sustained power draw at the psu of about 550W (oc'd Ryzen 7 system).
> 
> Now I can run any benchmark at "1980MHz core, 1100MHz memory" with slightly more than stock perf. "" because this is the clock every application reports, so it is very likely a driver issue.
> 
> Does anyone know how to actually set a real, "non-fake" clock?


Kinda glad I got an RM1000 to begin with (and yes, I know it's garbage and I plan on getting the i version eventually).


----------



## rancor

Quote:


> Originally Posted by *skullbringer*
> 
> Vega 64 air here, did powerplay table reg mod to allow for +100% PL, which triggered the OCP on my HX850i when loading 3DMark Timespy for example. Had to swap out the psu for the old RM1000i, seing max sustained power draw at the psu of about 550W (oc'd Ryzen 7 system).
> 
> Now I can run any benchmark at "1980MHz core, 1100MHz memory" with slightly more than stock perf. "" because this is the clock every application reports, so it is very likely a driver issue.
> 
> Does anyone know how to actually set a real, "non-fake" clock?


Wait, what? So you are seeing 550W sustained power draw, but the peaks from Vega were enough to hit the 70A (850W) 12V current limit on the HX850i? Vega would need to be drawing 450+ amps (at 1.2V), maybe more?


----------



## skullbringer

Quote:


> Originally Posted by *rancor*
> 
> Wait what? So you are seeing 550W sustained power draw but the peaks from Vega where enough to hit the 70A(850W) 12V current limit on the HX850i?


Yes, apparently so. Without the powerplay table reg mod I had no issues at +50% power limit, but with the mod at 400W total and +100% power limit in Wattman, the OCP got triggered reproducibly. Examples are the start of graphics test 1 of 3DMark Timespy, or scene 26 of Unigine Heaven 4.0.

At first glance it looks like the VRM actually is not that overkill, and that the lower power limit on the "gaming" products is there for a good reason.

I have not yet tested any further on the HX850i, as I was desperately trying to get any performance out of core clock changes. Without success though: 1980MHz performs exactly as well as 1630MHz. I tried Wattman, WattTool, Afterburner, GPU-Z, and HWiNFO; all report the same core clock, but none have any effect on performance.


----------



## TOMPPIX

I don't think your overclock is working, then.


----------



## rancor

Quote:


> Originally Posted by *skullbringer*
> 
> Yes, apparently so. Without the power play table reg mod, I had no issues at +50% power limit, but with the mod at 400W total and +100% power limit in Wattman the OCP got triggered reproducably. Examples are the start of graphics test 1 of 3DMark Timespy or scene 26 of Unigine Heaven 4.0.
> 
> At first glance it looks like the VRM actually is not that overkill and that the lower power limit on the "gaming" products is there for a good reason.
> 
> I have not yet tested any further on the HX850i, as I was desperately trying to get any performance out of core clock changes. Though without success, 1980MHz performs as good as 1630MHz, tried Wattman, WattTool, Afterburner, GPU-Z, HWInfo, all report the same core clock, but none have any effect on performance.


I thought Hawaii was bad when overvolted, but my power supply is going to like that I only got one Vega.

The clocks almost seem like a power issue, or maybe temps. It won't actually let the card get much past 1630 due to power/heat, but it still reports the max boost frequency you set. Are you brave enough to try increasing the current limit with the soft power mod to 350A or 400A, assuming you can cool the beast?


----------



## buildzoid

Here's a 1685/1100 run on my FE with a 6950X: http://www.3dmark.com/3dm/21604833

I think the GPU score would be a tad higher on a 7700K. I initially ran on an R7 1700 at 4GHz and it was about 1K slower on graphics, because Ryzen is just not fast enough for Firestrike. So even the 6950X might be a mild CPU bottleneck for the card.

I've been hearing about the RX cards having trouble with reporting clocks properly, but I can't really do much about it without a card in hand. I can kinda replicate it with my card if I try to set clocks greater than 1990MHz; at those settings my card insta-crashes going under load, while I've run 1800MHz on LN2 just fine. So I think the FE, or something with my setup, basically makes clock reporting work just fine between 852 and 1980MHz, but above that the clock readings glitch out for me.


----------



## rv8000

Quote:


> Originally Posted by *criminal*
> 
> So did you get ship date yet?
> 
> I ended up canceling my order.


No, the customer service rep was no help either. It doesn't seem like Amazon had actually gotten any cards from XFX, unless they all went to bundle deals.

As it stands I'm going to wait it out since I got my order in at $499; I'm in no rush atm and would rather save the money for my block. Rumor has it they may be getting some stock Thursday.


----------



## criminal

Quote:


> Originally Posted by *rv8000*
> 
> No, the customer service rep was no help either. It doesn't seem like amazon had actually gotten any cards from xfx, unless they all went to bundle deals.
> 
> As it stands I'm going to wait it out as I got my order in at $499, I'm in no rush atm and would rather save the money for my block. Rumor has it they may be getting some stock Thursday.


That sucks. I almost kept my order, but a 1080 Ti came my way at a good deal, so I couldn't chance getting Vega too. I guess I could have gotten it and resold it, but I felt better just canceling. Good luck on Thursday stock.


----------



## bluej511

Quote:


> Originally Posted by *rv8000*
> 
> No, the customer service rep was no help either. It doesn't seem like amazon had actually gotten any cards from xfx, unless they all went to bundle deals.
> 
> As it stands I'm going to wait it out as I got my order in at $499, I'm in no rush atm and would rather save the money for my block. Rumor has it they may be getting some stock Thursday.


Amazon is HORRID with pre-orders. I preordered my Ryzen from them, never had any news, and customer service was no help, so I canceled and bought from somewhere else.


----------



## skullbringer

Quote:


> Originally Posted by *buildzoid*
> 
> Here's a 1685/1100 run on my FE with a 6950X: http://www.3dmark.com/3dm/21604833
> 
> I think the GPU score would be a tad higher on a 7700K. I initially ran on an R7 1700 at 4G and it was about 1K slower on graphics because Ryzen is just not fast enough for Firestrike. So even the 6950X might be a mild CPU bottleneck for the card.
> 
> I've been hearing about the RX cards having trouble with reporting clocks properly but I can't really do much about it without a card in hand. I can kinda replicate it with my card if I try set clocks greater than 1990MHz. However bellow 1990MHz my card insta crashes going under load. I've run 1800MHz on LN2 just fine. So I think for the FE or something with my setup basically makes clock reporting work just fine between 852 and 1980MHz but above that the clock readings glitch out for me.


What OS, driver, and OC tool are you using? I'd like to sanity check before going 400W+ on the reg mod or flashing an FE BIOS.


----------



## buildzoid

Win 10 64-bit, 17.30.1051 RX Vega driver, WattTool for volts, Afterburner for clocks. That run was done with a +114% power limit and a 350A current limit.


----------



## PontiacGTX

Quote:


> Originally Posted by *buildzoid*
> 
> Win 10 64bit, 17.30.1051 RX VEGA driver, Wattool for volts, Afterburner for clocks. That run was done with a +114% power limit and 350A current limit.


You can't control voltage with MSI AB Beta 16?


----------



## buildzoid

Quote:


> Originally Posted by *PontiacGTX*
> 
> you cant control voltage with MSI AB Beta16?


I'm not exactly keeping up with OC utility releases. I had both WattTool and an old version of AB, so I just used those.


----------



## PontiacGTX

Quote:


> Originally Posted by *buildzoid*
> 
> I'm not exactly keeping up with OC utility releases. I had both Wattool and an old version of AB so I just used those.


If you can, try Beta 16 and check whether or not you can control the voltage:
http://forums.guru3d.com/showpost.php?p=5459908&postcount=637


----------



## buildzoid

Quote:


> Originally Posted by *PontiacGTX*
> 
> if you can, try BETA16 and check whether or not you can control the voltage?
> http://forums.guru3d.com/showpost.php?p=5459908&postcount=637


The voltage slider is greyed out. So that would be a no.


----------



## Ne01 OnnA

Quote:


> Originally Posted by *buildzoid*
> 
> The voltage slider is greyed out. So that would be a no.


Hi Bratan' -> check this tool
http://forums.guru3d.com/showthread.php?t=416116

It's mainly for Polaris but it should work for any GCN









Keep up the Great work out there


----------



## Tgrove

I feel like they locked the BIOS so miners can't mod it to make the cards better at mining.


----------



## hellm

Quote:


> Originally Posted by *Ne01 OnnA*
> 
> Hi Bratan' -> check this tool
> http://forums.guru3d.com/showthread.php?t=416116
> 
> It's mainly for Polaris but it should work for any GCN
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Keep up the Great work out there


Since this tool uses the AMD ADL API, I don't believe there is any chance you can overvolt with it. WattTool was the same, and since 17.4.1 you can't overvolt with WattTool or even the PowerPlay table.

We would have to edit ASIC_ProfilingInfo/ASIC_VDDCI_Info, but that is not an option; no mods possible, at least not on Windows.

i2c could probably work. According to The Stilt's mod on the FE VOI table, these are the "i2c-killing" registers:
71 00 00 00
72 00 00 00
24 00 80 00
None of them are found in the Vega 56 VOI table... so, maybe.


----------



## Nutty Pumpkin

Mine will be delivered today.
XFX Vega 64 running on a TX550. I'll let you know how I go. Won't be doing any overclocking until I have a different power supply.

I'll post some photos and benchmarks.


----------



## prom

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Mine will be delivered today.
> XFX Vega 64 running on an TX550. I'll let you know how I go. Won't be doing any overclocking until I have a different power supply.
> 
> I'll post some photos and benchmarks.


Undervolt that beast!
I've seen tons of coverage about undervolting the RX Vega 56, but next to nothing regarding the 64 variants.

GamersNexus did a great stream on the process and concluded that, rather than crashing out entirely, Vega will throttle back the clock if the voltage is too low.
IIRC he still achieved over 100mV down with +50% power. That resulted in better temps and better (and solid) clocks, while only drawing ~20-30W more power than bone stock.

Undervolt the core, overclock the HBM, and if you dare, up the power limit a few percent.
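A rough sketch of why the undervolt-then-reclock advice above works: dynamic power scales approximately with frequency times voltage squared, so dropping core voltage at the same clock frees real power headroom. All figures below (290W baseline, 1536MHz sustained clock) are illustrative assumptions, not measurements from any card.

```python
# Rough illustration of why undervolting Vega helps sustained clocks.
# Dynamic power scales roughly with frequency * voltage^2; the baseline
# numbers here are illustrative assumptions, not measurements.

def dynamic_power(base_power_w, base_v, new_v, base_mhz, new_mhz):
    """Scale a baseline board power by the f*V^2 rule of thumb."""
    return base_power_w * (new_mhz / base_mhz) * (new_v / base_v) ** 2

# Assume ~290 W GPU power at 1.2 V / 1536 MHz sustained (hypothetical).
stock_w = 290.0
undervolted_w = dynamic_power(stock_w, 1.2, 1.1, 1536, 1536)
print(f"undervolted: {undervolted_w:.0f} W")  # same clock, ~16% less power

# Alternatively, the saved headroom lets the card hold a higher clock
# within the same power budget:
same_budget_mhz = 1536 * (1.2 / 1.1) ** 2
print(f"clock at same power budget: {same_budget_mhz:.0f} MHz")
```

This is also why the measured cost can be only ~20-30W despite a +50% power limit: the limit is a ceiling, and the undervolt keeps actual draw well under it.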


----------



## rv8000

Quote:


> Originally Posted by *criminal*
> 
> That sucks. I almost kept my order, but a 1080 Ti came my way at a good deal, so I couldn't chance getting Vega too. I guess I could have gotten it and resold it, but I felt better just canceling. Good luck on Thursday stock.


Just got a shipping date, August 30th. Not too bad, but could be better; just glad it's tangible. Plenty of things to do in the meantime.









The 1080 Ti is a beast of a card; the days of spending over $500 for a GPU are definitely over for me after my 1080 excursions at release.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *prom*
> 
> Undervolt that beast!
> I've seen tons of coverage about undervolting the RXV56, but next to nothing in regard to any 64 variants.
> 
> GamersNexus did a great stream on the process and concluded that rather than crash out entirely, Vega will throttle back the clock if the voltage is too low.
> IIRC he still achieved over 100mv down, with +50% power. Resulted in better temps, better (and solid) clocks, while only being ~20-30w more power draw from bone stock.
> 
> Undervolt the core, overclock the hbm, and if you dare, up the power limit a few percent.


Thanks for that I'll check it out tonight and report back!

I'll try with my CPU overclock, but I'll put it back to stock to eke out some headroom if I have to.


----------



## AlphaC

To VEGA 64 owners:

EK-FC Radeon Vega

109.95€ / $ 153.09
https://www.ekwb.com/shop/ek-fc-radeon-vega
https://www.ekwb.com/shop/ek-fc-radeon-vega-acetal
https://www.ekwb.com/shop/ek-fc-radeon-vega-acetal-nickel

Swiftech KOMODO RX-ECO VEGA


http://www.swiftech.com/komodo-rx-eco-vega.aspx
Waterblock ($119.95)

Alphacool Eiswolf 120 GPX Pro ATI RX Vega M01 - Black
https://www.alphacool.com/detail/index/sArticle/22291
€ 159.95 *

$167.85 * http://www.aquatuning.us/detail/index/sArticle/22291

Alphacool NexXxos GPX - ATI RX Vega M01
https://www.alphacool.com/detail/index/sArticle/22292

€ 104.95 *
$110.08 * http://www.aquatuning.us/detail/index/sArticle/22292

http://www.performance-pcs.com/water-blocks-gpu/shopby/vga-series--amdr-radeonr-vega/?
EK-FC Radeon Vega - Copper Water Block with Plexi Top for multiple AMD® Radeon® Vega based graphics cards
$125.50

Watercool Heatkiller (none at this time)
http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Categories/Wasserk%C3%BChler/GPU_Kuehler

Others:
Aquacomputer - none at this time https://shop.aquacomputer.de/index.php?cPath=7_11_149
Bitspower - none listed under AMD VGA https://www.bitspower.com/html/product/product02.php?kind=269&kind2=269
Koolance - none listed http://koolance.com/index.php?route=product/category&path=29_148_46
Phanteks - none listed http://phanteks.com/Glacier-GPU.html
XSPC - none listed at this time http://www.xs-pc.com/waterblocks-gpu/

edit August 22, Heatkiller block is projected http://watercool.de/de/news , http://www.overclock.net/t/1636837/watercool-heatkiller-for-amd-radeon-rx-vega-in-progress#post_26298713 , http://www.overclock.net/t/528648/official-heatkiller-club/1600#post_26286150


----------



## rancor

Quote:


> Originally Posted by *rv8000*
> 
> Just got a shipping date, 30th of august. Not too bad but could be better, just glad it's tangible. Plenty of things to do in the mean time
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 1080ti is a beast of a card , the days of spending over $500 for a gpu are definitely over after my 1080 excursions on release.


I just did also. It's longer than I would have hoped for, but considering I got it for $499 I can't complain too much. Time to actually buy the water block.


----------



## Blameless

Quote:


> Originally Posted by *SlushPuppy007*
> 
> What bothers me is that if you pass 375 Watts of power draw for this card, how will that affect your motherboard pcie slot and psu pcie cables?


It appears that the PCI-E slot only powers the fan and the HBM2, neither of which will come anywhere near the current limits of the +12v (fan) or +3.3v (memory) rails involved.

The rest of the power comes via the 8-pin PCI-E power connectors, which should be able to handle about 8A per power pin. Since there are six power pins total (three per connector), you are looking at 48A, or upwards of 550W. Even if you limit yourself to a more practically conservative 6A per pin, that's still more than 400W. The card's VRM itself can handle far more than this, so the weak point will be those 8-pin connectors.

Realistically, unless the card's total board power goes significantly past 450W for extended periods, the hardware should be perfectly fine, assuming core voltage and temperatures are kept in check, of course. You'll also need a PSU that can handle the peak currents.
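The connector budget described above can be written out explicitly. The 8A and 6A per-pin figures are the post's own working assumptions, not a spec citation:

```python
# Worked version of the 8-pin PCIe connector budget described above.
# Per-pin current ratings (8 A optimistic, 6 A conservative) are the
# post's assumptions, not figures from the PCIe spec.

PINS_PER_8PIN = 3   # three +12 V power pins per 8-pin PCIe connector
CONNECTORS = 2      # reference Vega cards carry two 8-pin connectors
RAIL_V = 12.0

def connector_budget_w(amps_per_pin):
    """Total wattage the 8-pin connectors can deliver at a given per-pin current."""
    return amps_per_pin * PINS_PER_8PIN * CONNECTORS * RAIL_V

print(connector_budget_w(8.0))  # 8 A/pin -> 576.0 W ("upwards of 550 W")
print(connector_budget_w(6.0))  # 6 A/pin -> 432.0 W ("more than 400 W")
```

Either way, the connector budget comfortably exceeds the ~450W sustained figure quoted above, which is why the VRM, not the connectors, rarely ends up being the practical limit at stock.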
Quote:


> Originally Posted by *Newbie2009*
> 
> My Vega just blew up my 1200w PSU, and maybe mobo at stock volts no less


How old was the PSU?


----------



## os2wiz

Quote:


> Originally Posted by *ontariotl*
> 
> Well they didn't kid when they said a minimum of 1 K/w power supply and I'm sure it needs to be a very power efficient one. If it tripped a breaker that tells you something that yes Vega is power hungry, but your power supply was on the way out.
> Glad the mobo works too. I'm sure the card is fine. The power supply and breaker more than likely took the brunt of it.


That is utter rubbish about requiring a 1000W PSU for Vega 64. I have an 860W Seasonic Platinum and it will be MORE than adequate with my Ryzen 1800X build.


----------



## kundica

I don't think I noticed this with my 470, but ReLive Instant Replay is keeping my Vega 64 Air pegged at 100% with max clock. I shut it off for now.


----------



## DrZine

Hi all!

Brand spanking new to these forums. I picked up my Vega 64 air cooled Monday morning and wanted to add to the thread in some way, so I started up an account!

Before I start I need to get this out of the way: my machine is not enough machine to handle this card right now. Hopefully that will change later in the year. I'm running an FX-8350 @ 4.7GHz on a 1st-gen 990FX Sabertooth, and to add insult to injury my PSU is an EVGA 650 P2. No, I'm not doing any serious overclocking anytime soon. With that out of the way:

Undervolting and overclocking! I have been able to get the GPU clock to 1702MHz in Wattman. Anything over that doesn't seem to do anything; sure, I can set it higher, but benchmarks don't change. This is undervolted to 1.1V. I will have to see if I can lower it some more; I haven't played around with undervolting too much yet. Memory is a different story. I can achieve 1100MHz, sorta. I feel like it's limited by power: Wattman only lets me set up to 1.2V, but I bet it would be fully stable with a bit more. What I mean by that is, I get improved scores up to 1.1GHz, but with graphical glitches. Sometimes the screen would go black for a second, and on one run I had a crash. Dropping the HBM back down to 1080MHz would get it back to perfect. Maybe I just need a stronger PSU? Or it's power limited? Or it's just driver issues? I am using the newest Vega driver, FYI.

Some other fun facts about Wattman I have discovered: setting my clocks has to be done in a certain order. Steve @ GN talked a lot about it. I have to set power limit > mem & GPU voltage > mem clock > GPU clock, saving in that order, otherwise the memory clock defaults to 500MHz.









Fans I have to set manually. Kinda sad that when it's set to auto, the card would rather try to melt itself and throttle instead of turning the fan up past 2500 RPM. I think the driver defaults to keeping the noise down no matter what. Setting it manually to a 4000 RPM max keeps the card at 75 to 80°C when benchmarking. I can't wait to build a new system with a full water loop.


----------



## ontariotl

Quote:


> Originally Posted by *os2wiz*
> 
> That is utter rubbish about requiring a 1000 watt psu for Vega 64. I have an 860 watt Seasonic Platinum and it will be MORE than adequate with my Ryzen 1800X build.


Oh, I agree it's rubbish, but like I did say, it needs something highly power efficient. I'm sure AMD mentioned it *only* for the liquid version (higher clocks and higher wattage) so as not to piss off buyers who have inadequate, inefficient power supplies under 1kW. They are playing it safe. Sure, you can go lower, but you probably need at least an 80 Plus Gold rated power supply for those current peaks off the two 8-pin PCI-E power connectors. AdoredTV used his Corsair 750 RMx PSU to test the liquid version on a 7700K CPU.

Case in point: one member already blew his power supply while testing. Since he said it was an OCZ, I immediately remembered my experience with the OCZ power supply brand having marginal-to-poor efficiency.


----------



## flopper

Quote:


> Originally Posted by *os2wiz*
> 
> That is utter rubbish about requiring a 1000 watt psu for Vega 64. I have an 860 watt Seasonic Platinum and it will be MORE than adequate with my Ryzen 1800X build.


They have to say that because many people run cheap Chinese PSUs or whatever, and those won't function properly with today's power requirements.


----------



## Newbie2009

Quote:


> Originally Posted by *Blameless*
> 
> It appears that the PCI-E slot only powers the fan and the HBM2, neither of which will come anywhere near the current limits of the +12v (fan) or +3.3v (memory) rails involved.
> 
> The rest of the power is via the 8-pin PCI-E power connectors, which should be able to handle about 8A per power pin. Since there are six power pins total (three per connector) you are looking at 48A or upwards of 550w. Even if you limit yourself to a more practically conservative 6A per pin, thats still more than 400w. The card's VRM itself can handle way more than this, so the weak point will be those 8-pin connectors.
> 
> Realistically, unless the total board power of the card is going significantly past 450w for extended periods of time, the hardware should be perfectly fine, assuming core voltage and temperatures are kept in check, of course. You'll also need a PSU that can handle peak currents.
> How old was the PSU?


Ah it was pretty old, 6 years maybe. Lasted well.

https://www.techpowerup.com/reviews/OCZ/ZX_1250W/8.html

But it was when I bumped Vega by 50MHz; it died within a second of that.


----------



## skullbringer

Did some more testing regarding the OCP trigger on my HX850i. The issue started occurring after I modded the powerplay tables to allow for a +100% power target, 300W socket power limit, and 400A TDC limit.
The important thing here seems to be the 400A figure.

Notice that on many high-end Corsair PSUs (and probably other manufacturers'), two 6+2 pin PCIe connectors are daisy-chained together and connected to the PSU with one 8-pin connector. I did some testing with daisy-chained and separate PCIe power cables, and the issue only occurred when using the daisy-chained setup with one 8-pin connector handling all the current.

Did some research and found that this connector, which looks to me like a 12V EPS, has four 12V and four ground wires, therefore four 12V circuits with terminals rated for 7A. 7A x 4 = 28A total current draw over the connector.

However, the Vega 64 with a 400A current limit gives 400A x 1.2V = 480W peak power draw, and 480W / 12V = 40A going through the single PCIe power cable. So the issue seems not to be caused by total system current draw, but by what is being handled by a single connector. Not sure how other PSUs monitor this kind of behavior, but I assume not all PSUs monitor individual connectors/terminals like that.

Doing the same comparison with the stock current limit: 300A x 1.2V = 360W power draw, and 360W / 12V = 30A over the PCIe power cable. Still over spec, but within tolerance, I assume.

When using separate, non-daisy-chained connectors the issue is alleviated, since the 40A load is being split over two 28A-rated connections.

This looks to be the 295x2 pcie power cable discussion all over again, except now people actually buy the product. Seems like the stock current limit is pretty reasonable after all.

EDIT: Currently testing if I can reproduce the connector overload on the RM1000, too...

EDIT: RM1000 ****, just runs without a problem. Tried monitoring the connector temperature by shoving a temp probe between the 8 wires right next to the PSU connector. During a single run of Timespy, temps went up from 35°C to 51°C; after running Heaven 4.0, 49°C. When using separate cables, the max temp I saw with any of those benchmarks was 44°C (all at 26°C ambient). Take this all with a grain of salt: I don't know if the HX850i is sensing current or temperature, and this is also just the temperature at the start of the cable on the outside of the wires' insulation, not the connector or terminal itself. Also, my DMM is crappy and has a tolerance of about 5°C.

BTW, this testing was all done with stock clocks and the power limit in Wattman at 0%; just the registry mod with the 300W socket power limit and 400A TDC applied.

tl;dr: When overclocking Vega, use separate PCIe power cables.


----------



## bluej511

Quote:


> Originally Posted by *skullbringer*
> 
> Did some more testing regarding the OCP trigger on my HX850i. The issue started occurring after I modded the power play tables to allow for +100% power target, 300W socket power limit and 400A tdc limit.
> The important thing here seems to be the 400A figure.
> 
> Notice on all high end Corsair psus (and probably other manufacturers), they daisy chain 2 6+2 pin pcie connectors together and connect them to the psu with one 8 pin connector. I did some testing with daisy chained and separate pcie power cables and the issue only occurred when using the daisy chained setup with 1 8pin connector to handle all the current.
> 
> Did some research and found that this connector which looks to me like a 12V EPS has 4 12V and 4 ground wires, therefore 4 12V circuits with terminals rated for 7A. 7A x 4 = 28A total current draw over the connector.
> 
> However, the Vega 64 with a 400A current limit works out to 400A x 1.2V = 480W peak power draw. 480W / 12V = 40A going through the single PCIe power cable. So the issue seems to be caused not by total system current draw, but by what a single connector has to handle. Not sure how other PSUs monitor this kind of behavior, but I assume not all PSUs monitor individual connectors/terminals like that.
> 
> Doing some comparison with the stock current limit of 300A x 1.2V = 360W power draw. 360W / 12V = 30A over the pcie power cable. Still over spec, but within tolerance I assume.
> 
> When using separate, non-daisy-chained connectors the issue is alleviated, since the 40A load is being split over 2 28A rated connections.
> 
> This looks to be the 295x2 pcie power cable discussion all over again, except now people actually buy the product. Seems like the stock current limit is pretty reasonable after all.
> 
> EDIT: Currently testing if I can reproduce the connector overload on the RM1000, too...


Ah well, that makes total sense. Even on my R9 390 I'm using 2 PCIe cables instead of one; I don't like the daisy-chain idea one bit. It seems odd to run lots of amps through one set of cables when your PSU comes with tons of them, so why not use 2? I know it doesn't look as clean, but I'm all about function over form.


----------



## gupsterg

@skullbringer

+rep for info share








I've been using 2 separate cables for PCIe since Hawaii and plan to do the same if I get Vega. My CM V850 (Seasonic platform) doesn't have OCP, but has OPP; the TPU review had info, and I also consulted the JonnyGuru review prior to purchase.


----------



## Newbie2009

Not to be rude, but any idiot could have seen that coming with daisy-chaining.

Got my PC up and running on the backup PSU; all good, on the Vega right now.


----------



## Irev

So I just got my Vega 64... noticed that running in power save mode keeps the card under 75 degrees and performs around 5-10% slower than balanced mode. In balanced mode the card hits around 84 degrees.

I find the fan profile much nicer at this setting..... who else is having fun with VEGA?

I just wish the clocks would stay put and not jump between 1530 and 1630 so often.


----------



## asdkj1740

Whether a single 12V wire can take 7A or more depends on the PSU cable's AWG.
For a high-end PSU like Corsair's HXi series, 7A seems underrated.

http://forum.kingpincooling.com/showthread.php?t=3972








"This means that using common 18AWG cable from PSU, 6-pin connector as result specified for 17A of current (3 contacts for +12V power, 2 contacts for GND return, one contact for presence detect). Bigger 8-pin have 25.5A current specification (3 contacts for +12V power, 3 contacts for GND return and 2 contacts for detection). High end PSU usually have 16AWG wires for graphics power cable which translates into 240W or 360W power specification for 6 and 8-pin accordingly. This is given a connector temperature raise of 30 °C with all power pins used. With active airflow and decent cable quality, safe current limits are even higher."

It is said that the RX 480 reference design has a single 6-pin connector with 3x 12V and 3x ground, making it as capable as a typical 8-pin connector.

Besides, there are a few types of connectors with different specs too.
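Putting the quoted ampere ratings into wattage terms. The 17A/25.5A figures are the quote's 18AWG numbers, not an official spec:

```python
# Rough conversion of the quoted per-connector ratings into wattage.
# The ampere figures come from the quoted kingpincooling post (18AWG wire);
# the quote's higher 240W/360W numbers assume 16AWG wire instead.

RAIL_V = 12.0

# 12V-side connector ratings per the quote, in amps
connectors = {
    "6-pin (3x 12V, 18AWG)": 17.0,
    "8-pin (3x 12V, 18AWG)": 25.5,
}

ratings_w = {name: amps * RAIL_V for name, amps in connectors.items()}
# 6-pin: 204 W and 8-pin: 306 W with 18AWG wire; the quote's 16AWG
# figures work out to 240 W and 360 W respectively.
for name, watts in ratings_w.items():
    print(f"{name}: {watts:.0f} W")
```

Either way, both figures sit comfortably above the official 75W/150W PCIe connector specs, which is why well-built cables tolerate being run over spec.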


----------



## Nutty Pumpkin

I am also having fun with Vega!


----------



## Newbie2009

Spoke with EK customer support today, they will start shipping VEGA blocks tomorrow, 18th, as originally scheduled. No delay.


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> Spoke with EK customer support today, they will start shipping VEGA blocks tomorrow, 18th, as originally scheduled. No delay.


Yeah, that's good news. I need to wait till my card arrives before even ordering one. They are cheaper than usual though, which is surprising; I'll be going copper of course. I wonder how long they've had their hands on Vega, considering they've had the block on the site for a week now.


----------



## Newbie2009

Quote:


> Originally Posted by *bluej511*
> 
> yea that's good news, i need to wait till my card arrives before even ordering one. They are cheaper than usual though which is surprising, ill be going copper of course. I wonder how long they've had their hands on vega considering they've had the block on the site for a week now.


They probably bought a FE. LOL


----------



## bluej511

Quote:


> Originally Posted by *Newbie2009*
> 
> They probably bought a FE. LOL


Oh I did as well; it's the only reason I ordered a 64 air. EKWB just does not make blocks for anything other than reference boards. I have an Alphacool on my R9 390, and with an average ambient of 21°C my card barely hits 40°C; now that it's 30°C ambient it's a bit hotter than that, haha. Problem is they are very restrictive, and anything below 50% pump speed and everything toasts. I'll be glad to have an EKWB GPU block for once.


----------



## aylan1196

Good luck every one


----------



## Nutty Pumpkin

Burning in right now...
Furmark crashed though. Just started it up again. Not sure why that was.

Seems my TX550 will do for now. I won't be overclocking until I get a new unit. It is a single 8-pin to 2x8-pin cable.


----------



## bluej511

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Burning in right now...
> Furmark crashed though. Just started it up again. Not sure why that was.
> 
> Seems my TX550 will do for now. I won't be overclocking until I get a new unit. It is a single 8-pin to 2x8-pin cable.


That is so risky, lol. Even a member with an AX850 daisy-chained tripped his PSU. And please, please, please don't use Furmark; it's a total joke of a tool nowadays. Leaving a game running on its menu with vsync and the frame cap off will do the same thing, provided it runs at full clock speed. Usually I just game with it right away and pin the GPU at all times.


----------



## JackCY

Can someone please test tile vs immediate mode rasterization/rendering?


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *bluej511*
> 
> That is so risky lol. Even a member with an ax850 daisy chained tripped his PSU, and please please please dont use furmark. Its a total joke of a tool now a days. Leave it running on a game menu with vsync and frame cap off will do the same thing provided it runs full clock speed. Usually i just game with it right away and pin the gpu at all times.


I did the math. I am pushing the limit, but theoretically it should be enough for now. Theoretically, that is...

I use Furmark out of habit and because it's cross-platform. Out of curiosity, why do you recommend against it? It did crash on me, mind you...


----------



## bluej511

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> I did the math. I am pushing the limit but theoretically it should be enough for now. Theoretically that is...
> 
> I use Furmark out of habit and the fact it is cross platform. Out of curiosity why do you recommend against it? It did crash on me, mind you...


It tends to overvolt the GPU core/memory voltage; that's why most people don't use it anymore. And in recent benchmarks you can see that it doesn't even stress the GPU all the way: you'll use more power gaming than you will in Furmark. So they've either fixed it or it's just totally useless, haha.

First thing I do when I get a GPU is run Fire Strike a few times, doing GPU-only runs to get the card nice and toasty.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *JackCY*
> 
> Can someone please test tile vs immediate mode rasterization/rendering?


Vega 64:


AMD 6670: (From video)


----------



## rancor

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> I did the math. I am pushing the limit but theoretically it should be enough for now. Theoretically that is...
> 
> I use Furmark out of habit and the fact it is cross platform. Out of curiosity why do you recommend against it? It did crash on me, mind you...


You are pushing the limit. Furmark is not recommended any more because cards throttle so hard under it that it doesn't always give you worst-case power draw. Before power limits on GPUs, it could toast VRMs. If you have 3DMark Fire Strike, it can be a good power test.
Quote:


> Originally Posted by *bluej511*
> 
> It tends to overvolt the gpu core/memory voltage its why most people don't use it anymore, and in recent benchmarks you can see that it doesnt even stress the gpu all the way, youll use more power gaming then you will in furmark, so they've either fixed it or its just totally useless haha.
> 
> First thing i do when i get a gpu is run firestrike a few times but doing gpu only runs to get the card nice and toasty.


Just no, you are only half right: it doesn't necessarily stress the GPU as much as it could, but it doesn't overvolt anything. It's just a power virus, and modern cards throttle. Before power limits you could kill cards and VRMs by exceeding VRM current and power capabilities. It has nothing to do with voltage.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *bluej511*
> 
> It tends to overvolt the gpu core/memory voltage its why most people don't use it anymore, and in recent benchmarks you can see that it doesnt even stress the gpu all the way, youll use more power gaming then you will in furmark, so they've either fixed it or its just totally useless haha.
> 
> First thing i do when i get a gpu is run firestrike a few times but doing gpu only runs to get the card nice and toasty.


Quote:


> Originally Posted by *rancor*
> 
> You are pushing the limit. Furmark is not recommended any more because cards throttle so hard under it it doesn't always give you the worst case power draw. Before power limits on GPUs it could tost VRMs. If you have 3Dmark firestrike it can be a good power test.


Interesting. Good to know, cheers guys!

I'll do some Firestrike runs now.

EDIT: To add both posts!


----------



## bluej511

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Interesting. Good to know, cheers.
> 
> I'll do some Firestrike runs now.


You can even let Heaven or Superposition run in a loop (not sure the latter has a loop, I haven't run it in months).


----------



## 113802

Who else has obnoxious coil whine above 60 FPS?


----------



## madbrayniak

I am hopeful that as drivers and games are optimized for this, we will start seeing better performance.

Looks like the VRM is really up to par for this, as it looks to be taken directly from the Frontier Edition.

Hoping to also upgrade my PSU to an 850W Seasonic Prime Titanium or Platinum, so I should have more than enough power to drive one or two of these bad boys if I go that route.


----------



## rancor

Quote:


> Originally Posted by *bluej511*
> 
> You can even let heaven or superposition run in a loop (not sure the latter has a loop i havent ran it in months)


If you go into game mode you can loop it without buying it.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Who else has obnoxious coil whine above 60 FPS?


Haven't noticed any sign of coil whine just yet, but I'm only two hours into Vega. I'll let you know if I do.

EDIT: What are you running to get it? I'll try to replicate.


----------



## bluej511

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Haven't noticed any sign of coil whine just yet, but I'm two hours into Vega. Ill let you know if I do.
> 
> EDIT: What are you running to get it, Ill try replicate


I've got to be honest, I've never had coil whine on any card I've owned in the past 3 years, and I have very sensitive hearing (I can hear one of my rad fans humming once in a while, and it's only running at 1100rpm deep in my case, lol). I'm hoping Vega doesn't change that.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *bluej511*
> 
> I've got to be honest, I've never had coil whine on any card I've owned in the past 3 years, and since I have very sensitive hearing (I can hear one of my rad fans once in a while humming and it's only running at 1100rpm deep in my case lol), I'm hoping Vega doesn't change that.


I am also in the same boat.
Here's to Vega not making obnoxious sounds.


----------



## 113802

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Haven't noticed any sign of coil whine just yet, but I'm two hours into Vega. Ill let you know if I do.
> 
> EDIT: What are you running to get it, Ill try replicate


Anything that uses the GPU: 3DMark, Overwatch, Rise of the Tomb Raider, Battlefield 1, and Guild Wars. Every single one of my previous cards has had coil whine.

GTX 780, GTX 780 Ti KingPin, GTX 980, GTX 980 Ti, but those weren't annoying. Those cards ran on an AX850 and an AX1200i.

The EVGA GTX 1070 FTW I had and this RX Vega are annoying. These two have one thing in common: the Seasonic 850W power supply.


----------



## skullbringer

I don't get it.

Timespy:
stock with fan: *7588*


Spoiler: Warning: Spoiler!







stock clocks with fan and +100% power: *7633*


Spoiler: Warning: Spoiler!







fan, +100% power and +22% core clock: *7635*


Spoiler: Warning: Spoiler!







fan, +100% power, +22% core clock, 1105 MHz memory clock: *7833*


Spoiler: Warning: Spoiler!






(all on Ryzen 7 1800X system)

The PowerPlay table reg mod has no effect on performance. But hey, at least I can trip OCP on daisy-chained PCIe cables now, without getting any performance benefit.








not

Tried both BIOSes, reinstalling drivers, Windows 7 and 10, Afterburner, WattTool, Wattman.

I don't get it. Ideas?


----------



## 113802

Quote:


> Originally Posted by *skullbringer*
> 
> I dont get it.
> 
> Timespy:
> stock with fan: *7588*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> stock clocks with fan and +100% power: *7633*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power and +22% core clock: *7635*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power, +22% core clock, 1105 MHz memory clock: *7833*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Powerplay table reg mod has no effect on performance. But hey, at least I can trip ocp on daisy-chained pcie cables now, without getting any benefit in performance.
> 
> 
> 
> 
> 
> 
> 
> not
> 
> Tried both bioses, reinstalling drivers, Windows 7 and 10, Afterburner, Watttool, Wattman.
> 
> I dont get it, ideas?


We have to wait on functional drivers.


----------



## bluej511

Quote:


> Originally Posted by *skullbringer*
> 
> I dont get it.
> 
> Timespy:
> stock with fan: *7588*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> stock clocks with fan and +100% power: *7633*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power and +22% core clock: *7635*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power, +22% core clock, 1105 MHz memory clock: *7833*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Powerplay table reg mod has no effect on performance. But hey, at least I can trip ocp on daisy-chained pcie cables now, without getting any benefit in performance.
> 
> 
> 
> 
> 
> 
> 
> not
> 
> Tried both bioses, reinstalling drivers, Windows 7 and 10, Afterburner, Watttool, Wattman.
> 
> I dont get it, ideas?


Could it be that whatever you're setting for core clock isn't actually registering, and is therefore giving you no gain? It's a possibility. It does seem that overclocking the memory has the best effect, so I may just OC the memory and undervolt on water; I'd get some sick temps lol.


----------



## JackCY

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Vega 64:
> 
> 
> AMD 6670: (From video)


Seems it's the same as the FE driver: still immediate mode with some hackery in it, not tiled yet.









Pascal, tiled:


Quote:


> Originally Posted by *skullbringer*
> 
> I dont get it.
> 
> Timespy:
> stock with fan: *7588*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> stock clocks with fan and +100% power: *7633*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power and +22% core clock: *7635*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> fan, +100% power, +22% core clock, 1105 MHz memory clock: *7833*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Powerplay table reg mod has no effect on performance. But hey, at least I can trip ocp on daisy-chained pcie cables now, without getting any benefit in performance.
> 
> 
> 
> 
> 
> 
> 
> not
> 
> Tried both bioses, reinstalling drivers, Windows 7 and 10, Afterburner, Watttool, Wattman.
> 
> I dont get it, ideas?


What don't you get? It has been said many times that OCing is still broken on Vega, and there is a bug that lets you push clocks to around 2GHz+ while the performance you get will not reflect it. Set it to 1800MHz and you might just crash, but if you set 1900MHz+ you will not crash and the whole thing bugs out. Monitoring doesn't work, GPU-Z least of all.

Power target won't help you, and tripping OCP is easy if you don't have a single-12V-rail PSU and you're loading most of its capacity, especially using some 1x 8-pin to 2x 8-pin conversion because your PSU is so low-wattage it doesn't even come with dual 8-pin PCIe. Or maybe it was someone else using that conversion on a 300W+ GPU.

One way to verify your crazy OC actually works is that you score over 8000 at the very least in Time Spy, at best over 8300 or 8500. With this glitch you're running pretty much stock clocks while monitoring falsely reports 2GHz etc.

And then there's also the OC artifacting and not rendering the scene properly, giving you a very high score. That's a second issue with Vega OC.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Anything that uses the GPU. 3DMark, Overwatch, Rise of the Tomb Raider, BattleField 1, and Guild Wars. Every single one of my previous cards have had Coil Whine.
> 
> GTX 780, GTX 780 Ti KingPin, GTX 980, GTX 980 Ti but those weren't annoying. - These cards ran using AX850 and a AX1200i
> 
> The EVGA GTX 1070 FTW I had and this RX Vega are annoying. These two have 1 thing in common the Seasonic 850w Power Supply.


I took my side panel off and muted the PC.

Yes, I can make it out. It's fairly intermittent and not very loud at all. That was Overwatch. I'll try Fire Strike.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *JackCY*
> 
> Seems it's same as FE driver, still immediate with some hackery in it and not tiled yet
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pascal, tiled:
> SNIP


That's with the latest beta driver too, 17.30.1051.

Anyone tried Vega on Linux? I'll be trying this weekend.


----------



## 113802

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> I took my side panel off and turned PC on mute.
> 
> Yes. I can make it out. Its fairly intermittent and not very loud at all. That was Overwatch. I'll try Firestrike.


Seems like multiple reviews mention coil whine. Gamers Nexus mentioned it in their live overclocking stream. I run my PC fans at 400 RPM and the GPU radiator fan at 1800 RPM, so the coil whine is the only noise I hear.

Tom's Hardware with their Vega FE

http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-11.html

PC World

http://www.pcworld.com/article/3215123/components-graphics/amd-radeon-rx-vega-review-vega-56-vega-64-and-liquid-cooled-vega-64-tested.html?page=11


----------



## skullbringer

Quote:


> Originally Posted by *JackCY*
> 
> Seems it's same as FE driver, still immediate with some hackery in it and not tiled yet
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pascal, tiled:
> 
> What don't you get? It has been said many times that OCing is broken on Vega still and there is a bug that lets you push clocks to around 2GHz+ but the performance you get will not reflect it. Set it to 1800MHz and you might just crash but if you set 1900MHz+ you will not crash and the whole thing bugs out. Monitoring doesn't work, GPUz least of all.
> 
> Power target won't help you and tripping OCP is easy on a PSU if you don't have a single 12V rail PSU and you're loading most of it's capacity especially using some 1x8pin to 2x8pin conversion because your PSU is so low wattage it doesn't even come with dual 8pin PCIe. Or maybe it was someone else using this stupid conversion on a 300W+ GPU.
> 
> One way to verify your crazy OC actually works is that you get over 8000 at the least on TimeSpy score, at best over 8300 or 8500. With this glitch you're running stock clocks pretty much while monitoring falsely reports 2GHz etc.
> 
> And then also comes the OC artifacting and not rendering the scene properly giving you very high score. That's a second issue with Vega OC.


Corsair PSUs these days, like the RM and HXi, don't even come with separate PCIe cables anymore; they are all daisy-chained. So unless you want 2 additional connectors dangling around in your rig, because you care somewhat about aesthetics, you are screwed anyway. This also has nothing to do with the internals of the PSU and the 12V rail, since the PSU's internals are capable of handling the load; just the connector is not.

Granted, someone wrote about splitting one 8-pin PCIe connector into 2 at the GPU end with a 550W PSU, which is borderline.

To clarify on clocks: it does not matter whether I set +1%, +2%, +5%, +10% or +22%, the reported clock will change, the performance will not, and my card has NEVER crashed.
Calling it a "glitch" implies deviation from normal behavior under certain circumstances. However, my card has always behaved this way, no matter the clock, power target or current limit, so excuse me for not comprehending this situation when all previous explanations fail to fit.


----------



## bluej511

Quote:


> Originally Posted by *skullbringer*
> 
> Corsair psus theses days like the RM and HXi dont even come with separate pcie cables anymore, they are all daisy-chained. So unless you want to have 2 additional connectors dangling around in your rig, because you care somewhat about esthetics, you are screwed anyways. This has also nothing todo with the internals of the psu and the 12V rail, since the psu's internals are capable of handling the load, just the connector is not.
> 
> Granted, someone wrote about splitting 1 8 pin PCIE connector into 2 at the gpu end with a 550W psu which is borderline.
> 
> To clarify on clocks, it does not matter if I set +1%, +2%, +5%, +10% or +22%, the reported clock will change, the performance will not change and my card has NEVER crashed.
> Calling it a "glitch" implies differing from the normal behavior under certain circumstances. However my card has always behaved this way, no matter the clock, power target or current limit, so please excuse my incapability of comprehending this situation when all previous explanations fail to comply.


My RM1000 came with at least a couple of separate PCIe 8-pin cables, if not more. The RM1000i does as well, and it has the caps soldered on.


----------



## Newbie2009

Yeah, looks like the functionality is just not there. They probably just made it look that way so people wouldn't complain on day one.

My clock says 1630 - don't believe it.
Benched 1630 @ 800mV, didn't crash.
Increased power limit +50%, no difference in performance.

It's all fake.


----------



## skullbringer

Quote:


> Originally Posted by *Newbie2009*
> 
> Yeah, looks like the functionality is just not there. They just made it look that way so people wouldn't complain day one probably.
> 
> My clock 1630 - don't believe it.
> benched 1630 @ 800mv, didn't crash
> Increased power level &50, no difference in performance.
> 
> Its all fake.


In reviews from GN and the like, they talk about how they could OC Vega 64 and 56 a few percent until it crashed.

So what driver did reviewers get, and where can we get it?


----------



## Newbie2009

Quote:


> Originally Posted by *skullbringer*
> 
> In reviews from GN and the like they talk about how they could oc vega 64 and 56 a few percent until it crashed.
> 
> So what driver did reviewers get then and where can we get it?


Good question; no driver CD with my card. Seems they released a stock, can't-change-anything driver which is stable. At least the review drivers had PowerTune functional.


----------



## gupsterg

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> Burning in right now...
> Furmark crashed though. Just started it up again. Not sure why that was.
> 
> Seems my TX550 will do for now. I won't be overclocking until I get a new unit. It is a single 8-pin to 2x8-pin cable.


Nice to see HBM temps are exposed. Matches my earlier hypothesis that they'd likely be reported near core temp, when in truth they're hotter.








Yeah, ditch Furmark. I have never used it or Kombustor/OCCT. Best to whack on Heaven/Valley/3DMark loops. I also found Folding@home handy for checking GPUs: look out for bad states in the log, dumped work units, etc.


----------



## bluej511

Quote:


> Originally Posted by *gupsterg*
> 
> Nice to see HBM temps are exposed. Matches my hypothesis before that likely to be at core temp, but in truth their hotter
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Yeah ditch Furmark. I have never used it or Kombustor/OCCT. Best to whack on Heaven/Valley/3DM loops. I also found Folding@home is handy to check GPUs, look out for bad states in log and dumping of units, etc.


Didn't even notice that. Should be much lower on water though, so always a plus, haha. That's quite a lot of core voltage as well though, damn; def gonna undervolt mine and see if I can keep the stock clocks.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Seems like multiple reviews have coil whine. Gamer Nexus mentioned it in his live overclocking. I run my PC fans at 400 RPM and the GPU radiator fan at 1800 RPM so the coil whine is the only noise I hear.
> 
> Tom's hardware with his Vega FE
> 
> http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-11.html
> 
> PC World
> 
> http://www.pcworld.com/article/3215123/components-graphics/amd-radeon-rx-vega-review-vega-56-vega-64-and-liquid-cooled-vega-64-tested.html?page=11


In Fire Strike the whine was much more prevalent until the fan speed kicked up, then I couldn't hear it. That's with the stock R5 cooler and an undervolted Silverstone 180mm fan, but it's still noisy. Guess my fans are just blocking it out...


----------



## skullbringer

Quote:


> Originally Posted by *Newbie2009*
> 
> Good question, no driver CD with my card. Seems they released a stock cannot change anything driver which is stable. At least the review drivers had the power tune functional.


This, in combination with pricing 100 USD over what everyone was told, is borderline deceptive marketing. Things are tilting in a very bad way for AMD's Radeon Technologies Group imho... smh


----------



## Newbie2009

If they block me from flashing to a water-cooled BIOS, that will be the final straw.


----------



## gupsterg

Quote:


> Originally Posted by *bluej511*
> 
> Didn't even notice that, should be much lower on water though so always a plus haha. Thats quiet a lot of core voltage as well though damn, def gonna undervolt mine and see if i can keep the stock clocks.


HBM1 was also supposed to have a temp sensor, but it was not exposed. The JEDEC PDF for HBM1 states that HBM will 'throttle performance' if temps are excessive.

In Fiji's PowerPlay there was no temp protection for the HBM itself, just for the GPU/HBM VRMs separately.

VEGA FE AIR has:-

Code:


5F 00 (95°C)    USHORT usTemperatureLimitHBM;
73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
73 00 (115°C)   USHORT usTemperatureLimitVrMem;

As well as GPU being:-

Code:


59 00 (89°C)    USHORT usSoftwareShutdownTemp;
55 00 (85°C)    USHORT usTemperatureLimitTedge;

Nutty Pumpkin was close, if it has the same values.



Will have to check RX VEGA AIR VBIOS.
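Those PowerPlay fields are little-endian USHORTs, so the hex bytes above decode like this (byte values exactly as listed; this is just a sketch, not a full PowerPlay table parser):

```python
# Decode the little-endian USHORT temperature limits quoted above from the
# Vega FE PowerPlay table. Field order follows the post; this is only a
# sketch of the byte layout, not a full PowerPlay table parser.
import struct

thermal_raw = bytes.fromhex("5F00" "7300" "7300")   # HBM, VR SoC, VR Mem
hbm, vr_soc, vr_mem = struct.unpack("<3H", thermal_raw)
print(hbm, vr_soc, vr_mem)        # 95 115 115 (degrees C)

gpu_raw = bytes.fromhex("5900" "5500")              # shutdown, Tedge
shutdown, tedge = struct.unpack("<2H", gpu_raw)
print(shutdown, tedge)            # 89 85 (degrees C)
```

The `<` in the format string is what makes `5F 00` read as 0x005F = 95 rather than 0x5F00.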


----------



## ontariotl

Quote:


> Originally Posted by *skullbringer*
> 
> This in combination with the 100 USD increased pricing over what eveyone was told is borderline deceptive marketing. Things are tilting in a very bad way towards AMD Radeon Technologies Group imho... smh


It's not like that hasn't happened in the past with Nvidia either; many discussions about it. It's not AMD setting the price, it's the retailers knowing they can gouge due to short supply, the mining craze, and hype. It still looks bad, but there's nothing AMD can do, unless they copy Nvidia and open their own online store to buy from directly.
Quote:


> Originally Posted by *Newbie2009*
> 
> If they block me flashing to a water cooled bios will be the final straw


I don't think you will be blocked from flashing to a liquid BIOS, just not a modded one. Although when I was reading through reviews for the air and liquid cards, I noticed the GPU-Z screenshots showed different revisions between them: C0 for liquid and C1 for air.


----------



## skullbringer

Quote:


> Originally Posted by *Newbie2009*
> 
> If they block me flashing to a water cooled bios will be the final straw


Nope, just tried it. You can't flash a Vega XTX BIOS onto a Vega XT; the BIOS and hardware device ID have to match, otherwise secure boot keeps the system from POSTing.

Also tried the FE BIOS; it does not work either.
Quote:


> Originally Posted by *ontariotl*
> 
> It's not like that hasn't happened in the past with Nvidia either. Many discussions about it. It's not AMD setting the price, it's the retailers knowing they can gouge due to short supply, mining craze, and hype. It still looks bad, but nothing AMD can do, unless they copy Nvidia and have their own online store to buy directly from.
> I don't think you will be blocked from flashing to a liquid bios, just not a modded one. Although when I was reading through reviews for the Air and liquid, I noticed gpu-z screenshots had the revision different between them. C0 for liquid and C1 for air.


The price difference is most likely not due to high demand from miners, but because AMD gave a different MSRP compared to the previously communicated "launch offer price". I call BS on AMD's side.


----------



## The EX1

Quote:


> Originally Posted by *skullbringer*
> 
> This in combination with the 100 USD increased pricing over what eveyone was told is borderline deceptive marketing. Things are tilting in a very bad way towards AMD Radeon Technologies Group imho... smh


The $599 cards you are talking about were selling at MSRP. Those were Black Packs that came with the two games. Only a very few cards came outside of these bundles at $499, but they were there.


----------



## ontariotl

Quote:


> Originally Posted by *The EX1*
> 
> The $599 cards you are talking about were selling for MSRP. Those were Black Packs that came with the two games. Only a very few cards came outside of these bundles for $499, but they were there.


Agreed. I forgot about that too. Newegg offered packs (CPU, MB and Monitor) from the get go, but stores I've seen in my backyard offered only the ones that had the free games and the coupons for discount on Ryzen cpu and motherboard combo. They never had the Ultrawide monitor option here though. So in essence I paid for the pack deal card, but I could just eliminate the coupons instead of being forced to buy the other components in the pack.

Now as for reports from two retailers in the UK about the limited amount they could sell at the original AMD msrp then having to jack it up after those were sold is right now just that. Two retailers that could say anything they want to make excuses. AMD hasn't even commented yet.

Whether it's true or not about the price hike, the prices are still way too damn high regardless.


----------



## PontiacGTX

Quote:


> Originally Posted by *gupsterg*
> 
> HBM1 was also supposed to have a temp sensor, but it's not exposed. Now in the JEDEC PDF for HBM1 it is stated HBM will 'throttle performance' if temps are excessive.
> 
> In PowerPlay of Fiji there was no temp protection for HBM, but just the GPU/HBM VRM separately.
> 
> VEGA FE AIR has:-
> 
> Code:
> 
> 5F 00 (95°C) USHORT usTemperatureLimitHBM;
> 73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
> 73 00 (115°C)   USHORT usTemperatureLimitVrMem;
> 
> As well as GPU being:-
> 
> Code:
> 
> 59 00 (89°C) USHORT usSoftwareShutdownTemp;
> 55 00 (85°C)    USHORT usTemperatureLimitTedge;
> 
> Nutty Pumpkin was close if it has same values.
> 
> 
> 
> Will have to check RX VEGA AIR VBIOS.


The HBM modules don't have a way to report if they are underperforming due to heat?
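As a side note on the quoted PowerPlay dump: those fields are little-endian USHORTs, so the hex pairs decode directly to the listed temperatures. A minimal Python sketch of that decoding (field names are taken from the dump above; nothing here reads a real VBIOS):

```python
import struct

# Raw little-endian USHORT fields as quoted from the Vega FE AIR
# PowerPlay dump (names come from the dump, values as hex byte pairs).
raw = {
    "usTemperatureLimitHBM":   "5F 00",
    "usTemperatureLimitVrSoc": "73 00",
    "usTemperatureLimitVrMem": "73 00",
    "usSoftwareShutdownTemp":  "59 00",
    "usTemperatureLimitTedge": "55 00",
}

# "<H" = little-endian unsigned 16-bit, matching the byte order in the dump
limits_c = {
    name: struct.unpack("<H", bytes.fromhex(pair.replace(" ", "")))[0]
    for name, pair in raw.items()
}

for name, temp in limits_c.items():
    print(f"{name}: {temp} C")
```

This just confirms the °C values written next to the hex in the dump (0x5F = 95, 0x73 = 115, and so on).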


----------



## wolf9466

Quote:


> Originally Posted by *PontiacGTX*
> 
> Quote:
> 
> 
> 
> Originally Posted by *gupsterg*
> 
> HBM1 was also supposed to have a temp sensor, but it's not exposed. Now in the JEDEC PDF for HBM1 it is stated HBM will 'throttle performance' if temps are excessive.
> 
> In PowerPlay of Fiji there was no temp protection for HBM, but just the GPU/HBM VRM separately.
> 
> VEGA FE AIR has:-
> 
> Code:
> 
> 5F 00 (95°C) USHORT usTemperatureLimitHBM;
> 73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
> 73 00 (115°C)   USHORT usTemperatureLimitVrMem;
> 
> As well as GPU being:-
> 
> Code:
> 
> 59 00 (89°C) USHORT usSoftwareShutdownTemp;
> 55 00 (85°C)    USHORT usTemperatureLimitTedge;
> 
> Nutty Pumpkin was close if it has same values.
> 
> 
> 
> Will have to check RX VEGA AIR VBIOS.
> 
> 
> 
> The HBM modules don't have a way to report if they are underperforming due to heat?
Click to expand...

They do - HBM has temp sensors on the chips.


----------



## skullbringer

So if you are hoping for BIOS flashing to bring any improvements, don't.

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-frontier-edition-bios/180_30#post_26290232


----------



## ontariotl

Quote:


> Originally Posted by *skullbringer*
> 
> So if you are hoping for bios flashing to bring any improvements, dont.
> 
> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-frontier-edition-bios/180_30#post_26290232


Great job on your efforts even though the results are bad news. Maybe AMD finally got tired of owners trying to turn their GPUs into a higher model by unlocking disabled units. My guess is Fury (oops, forgot about the initial 480 with the extra 4GB disabled) was the last card where you could do that. They also probably feared that some would try to take a Vega 64 and get an FE without the extra 8GB for a workstation card.


----------



## wolf9466

Quote:


> Originally Posted by *skullbringer*
> 
> So if you are hoping for bios flashing to bring any improvements, dont.
> 
> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-frontier-edition-bios/180_30#post_26290232


You can't load an unsigned VBIOS? *You* can't? Watch me.

http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread/70#post_26281235


----------



## PontiacGTX

Quote:


> Originally Posted by *wolf9466*
> 
> They do - HBM has temp sensors on the chips.


I don't see those sensors in HWiNFO


----------



## wolf9466

Quote:


> Originally Posted by *PontiacGTX*
> 
> Quote:
> 
> 
> 
> Originally Posted by *wolf9466*
> 
> They do - HBM has temp sensors on the chips.
> 
> 
> 
> I don't see those sensors in HWiNFO
Click to expand...

Does not mean they are not there. Under Linux, I could check the HBM chip temps - I see no reason Vega would differ.


----------



## PontiacGTX

Quote:


> Originally Posted by *wolf9466*
> 
> Does not mean they are not there. Under Linux, I could check the HBM chip temps - I see no reason Vega would differ.


Where do you find them? And can you tell me what driver I need, specifically?


----------



## Irev

Does anyone know if running the secondary BIOS on the Vega 64 will lower fan noise and temperatures?


----------



## Newbie2009

Quote:


> Originally Posted by *skullbringer*
> 
> Nope, just tried it. You can't flash a Vega XTX bios onto a Vega XT; bios and hardware device id have to match, otherwise secure boot keeps the system from posting.
> 
> Also tried FE bios, also does not work as expected.
> The price difference is most likely not due to high demand of miners, but because AMD gave a different msrp compared to the previously communicated "launch offer price". I call bs on AMD's side.


F U AMD, sincerely last card I buy from u ****s


----------



## wolf9466

Quote:


> Originally Posted by *PontiacGTX*
> 
> Quote:
> 
> 
> 
> Originally Posted by *wolf9466*
> 
> Does not mean they are not there. Under Linux, I could check the HBM chip temps - I see no reason Vega would differ.
> 
> 
> 
> where do you find them? and can you tell me what driver do i need specifically
Click to expand...

It doesn't depend on the driver, but uses direct access to the card. I can't tell you where to find them, because I don't have the source to the tool I was using.


----------



## Papa Emeritus

Add me to the club, got my VEGA 64 AIR today







Going to order a block from EK, but I'm not sure which one I should get: Acetal or Plexi? Copper/nickel doesn't really matter.


----------



## PontiacGTX

Quote:


> Originally Posted by *wolf9466*
> 
> It doesn't depend on the driver, but uses direct access to the card. I can't tell you where to find them, because I don't have the source to the tool I was using.


well if you find a way to check HBM temp on Fury I might be interested


----------



## gupsterg

Quote:


> Originally Posted by *PontiacGTX*
> 
> The HBM modules don't have a way to report if they are underperforming due to heat?


Quote:


> Temperature Compensated Refresh Reporting
> 
> The HBM DRAM provides temperature compensated refresh related information to the controller via an encoding on the TEMP[2:0] pins. The Gray-coded encoding defines the proper refresh rate expected by the DRAM to maintain data integrity. Absolute temperature values for each encoding are vendor specific and not defined in this specification. The encoding on the TEMP[2:0] pins is expected to reflect the required refresh rate for the hottest device in the stack and will be updated when the temperature exceeds the vendor specific trip-point levels appropriate for each refresh rate.


As temperature reaches a 'trip point' refresh rate will be lowered, so performance will be impacted.
Quote:


> Catastrophic Temperature Sensor
> 
> The CATTRIP sensor logic detects if the junction temperature of any die in the HBM stack exceeds the catastrophic trip threshold value CATTEMP. CATTEMP value is programmed by the manufacturer to a value below the temperature point that permanent damage would occur to the HBM stack. If the junction temperature anywhere in the stack exceeds the CATTEMP of the device, the HBM stack will drive the
> external CATTRIP pin to "1". This indicates that catastrophic damage may occur unless power is reduced. The CATTRIP output is sticky in that to clear a CATTRIP, power-off of the device is required to return the CATTRIP output to "0". Sufficient time should be allowed for the device to cool after a CATTRIP event.
> 
> If CATTEMP is higher than maximum operating junction temperature, CATTRIP circuit will operate correctly regardless of whether the external or internal clocks have stopped. Functionality testing of CATTRIP can be verified by writing a "1" to MR7 OP[7] to force a CATTRIP and "0" to clear.
> 
> CATTRIP is a mandatory feature for the HBM device with 4 Gb/channel or higher for Legacy mode and 2 Gb/channel or higher for Pseudo channel mode operation.


When the critical temp is reached the HBM is 'cut out', so powering the card down and up is needed.

Above is for HBM1, very very likely HBM2 has similar implementation.
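Side note: the TEMP[2:0] encoding quoted above is Gray-coded. A small sketch of the standard Gray-to-binary conversion; the mapping from code to actual refresh rate or temperature trip point is vendor specific, so the 0-7 values below are illustrative only:

```python
def gray_to_binary(g: int) -> int:
    """Standard Gray-code to binary conversion (works for any bit width)."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

# A 3-bit Gray sequence stepping through adjacent codes: only one bit
# changes between neighbours, which is the point of Gray-coding a value
# sampled from pins like TEMP[2:0].
codes = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
decoded = [gray_to_binary(c) for c in codes]  # 0 through 7 in order
```

The one-bit-per-step property matters because the pins are sampled asynchronously; a transition can never be misread by more than one step.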
Quote:


> Originally Posted by *PontiacGTX*
> 
> where do you find them? and can you tell me what driver do i need specifically


You may need to reset the sensor order in HWiNFO for it to refresh and show up.


Quote:


> Originally Posted by *PontiacGTX*
> 
> well if you find a way to check HBM temp on Fury I might be interested


You can't on Fiji, not exposed via i2c or AMD ADL, see this post from 2016 by Mumak.
Quote:


> GPU HBM Temperature - doesn't seem to be supported by current GPUs/drivers


The situation is still the same; Mumak would have implemented it if possible. Besides being registered as a dev with AMD, he has Fiji cards.


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> F U AMD, sincerely last card I buy from u ****s


Well it's not like Nvidia will allow you to flash a 1070 to a 1080. Nvidia has made sure for a long time that flashing to something else isn't possible. AMD has finally caught on to owners trying to get something for free. It was awesome when the results were there, but looks like that time has come and gone.

I also feel we might have a better clocker once you get the card under water anyway. My feeling is AMD used the C0 version for their AIO because the silicon was just too much for air (well, at least with their crappy cooler), which is why wattage numbers are through the roof. They used C1 instead for their air coolers as it was a little better, to a point. Again, just my assumption.


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> Well it's not like Nvidia will allow you to flash a 1070 to a 1080. Nvidia has made sure for a long time that flashing to something else isn't possible. AMD has finally caught on to owners trying to get something for free. It was awesome when the results were there, but looks like that time has come and gone.
> 
> I also feel we might have a better clocker once you get the card under water anyway. My feeling is AMD used the C0 version for their AIO because the silicon was just too much for air (well, at least with their crappy cooler), which is why wattage numbers are through the roof. They used C1 instead for their air coolers as it was a little better, to a point. Again, just my assumption.


Sorry but if i wanted a card I couldn't flash, I'd have gone nvidia.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *Newbie2009*
> 
> Sorry but if i wanted a card I couldn't flash, I'd have gone nvidia.


So flashing was literally the main selling point for you?


----------



## Newbie2009

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> So flashing was literally the main selling point for you?


No, but I presumed the air wouldn't be BIOS locked vs the water. I presumed they would both be the exact same with a different cooler. Maybe different stock clocks.


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> No, but I presumed the air wouldn't be bios locked vs the water. I presumed they would both be exact same with different cooler. Maybe different stock clocks.


The main reason is probably miners; to block BIOS modding.


----------



## Newbie2009

Has anyone tried gaming with these drivers? I'm getting worse performance than a single 290x in Prey.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *Newbie2009*
> 
> Has anyone tried gaming with these drivers? I'm getting worse performance than a single 290x in Prey.


I'm running the latest drivers I THINK 17.30.1051.

Overwatch was 50-60fps at 4K and Epic. Played some Diablo III as well, no issues there. I don't have anything to compare it to; I've come from a 550 4GB.


----------



## Newbie2009

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> I'm running the latest drivers I THINK 17.30.1051.
> 
> Overwatch was 50-60fps at 4K and Epic. Played some Diablo III as well, no issues there. I don't have anything to compare it to; I've come from a 550 4GB.


Hmm, I get about a 20k graphics score in Fire Strike so it's in line with reviews; the +50 power tune doesn't work, so taking that into account it's OK.

Prey is borked though, 20-30 fps @ 1600p

I tried doom and reinstalled drivers. Runs fine. Looks like prey is broken with these drivers.


----------



## rv8000

So all of you who have Vega in hand already, did you order from Newegg? Go to a physical store to pick one up?

Seems like even worse stock than when I was going after a 1080 on release day, that or absolutely all stock is going to the Ryzen + Board combos.


----------



## punchmonster

Got mine from a local tiny store for MSRP in Europe.

Also not sure about other games but I went from 40~50 fps minimums in Rainbow Six: Siege on my Fury X to 80 fps minimums on my RX Vega 64 @ 1440p without hitting vram cap on the Fury X.
Max numbers are pretty similar though, it's just perfectly smooth now.
Quote:


> Originally Posted by *rv8000*
> 
> So all of you who have Vega in hand already, did you order from Newegg? Go to a physical store to pick one up?
> 
> Seems like even worse stock than when I was going after a 1080 on release day, that or absolutely all stock is going to the Ryzen + Board combos.


----------



## Nutty Pumpkin

Seems to be a heavy focus on the bundle to be honest...


----------



## pillowsack

I have a Vega 64 coming on the 23rd and a EKWB waterblock pre-ordered.

Is this card going to kill my 850W with the 6800K @ 4.2GHz, or is there really no way to tell since drivers don't really allow overclocking? I'm hoping to push the card pretty hard when it's under water.

I'm scared about how much power/heat it puts out because I had an 8800GTX at some point.


----------



## p4block

Depending on what the card checks, it could be bypassed. Given their recent claims on Microsoft's Secure Boot, I would guess they sign their VBIOSes with Microsoft's key, just like the current UEFI GOP VBIOSes.

You can check this yourself on non-Vega cards by enabling secure boot but disabling the stock keys and adding your own secure boot key infrastructure (instructions for this on e.g. the Arch wiki). You will be unable to boot basically anything that isn't signed by your key, including the UEFI GOP VBIOS on your card. Those then usually fall back to their legacy VBIOSes.
Doing this on Vega or rare pure-UEFI VBIOSes will probably just blackscreen your mobo completely; no reason to do it.

If the card is internally checking the VBIOS to have the correct Microsoft signature we are in luck as their private key was leaked a long time ago. It allowed people to run homebrew on strict-Secure-Boot-only devices such as the Surface RT. Grab the key, sign a modded VEGA VBIOS (in surely a weird specific way), go straight to the bank with it.

I don't have a Vega card to test with yet, so I'm leaving the idea out there in the meantime.
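Side note on the device-ID matching mentioned earlier: in a standard PCI option ROM the vendor/device IDs live in the PCIR data structure, so they can be read straight out of a ROM dump. A minimal sketch; the synthetic ROM and the 0x1002/0x687F IDs below are illustrative values, not a real dump:

```python
import struct

def pci_rom_ids(rom: bytes):
    """Return (vendor_id, device_id) from a PCI option ROM image.

    Per the PCI spec: the ROM begins with the 0x55AA signature, the
    16-bit little-endian word at offset 0x18 points to the PCIR data
    structure, and the vendor/device IDs sit at PCIR+4 and PCIR+6.
    """
    if rom[0:2] != b"\x55\xAA":
        raise ValueError("not a PCI option ROM (missing 55 AA signature)")
    pcir_off = struct.unpack_from("<H", rom, 0x18)[0]
    if rom[pcir_off:pcir_off + 4] != b"PCIR":
        raise ValueError("PCIR structure not found")
    vendor, device = struct.unpack_from("<HH", rom, pcir_off + 4)
    return vendor, device

# Tiny synthetic ROM for illustration: AMD vendor 0x1002, a Vega 10
# device ID (0x687F) placed in a PCIR structure at offset 0x20.
rom = bytearray(64)
rom[0:2] = b"\x55\xAA"
struct.pack_into("<H", rom, 0x18, 0x20)   # pointer to PCIR
rom[0x20:0x24] = b"PCIR"
struct.pack_into("<HH", rom, 0x24, 0x1002, 0x687F)
```

This only covers reading the IDs; it says nothing about the signature check itself, which is what actually blocks cross-flashing here.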


----------



## Kpjoslee

https://www.amazon.com/gp/product/B074L1Z2G1/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

If you are interested, grab em.


----------



## tpi2007

Quote:


> Originally Posted by *Kpjoslee*
> 
> https://www.amazon.com/gp/product/B074L1Z2G1/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> 
> If you are interested. Grab em.


Well, that was fast:
Quote:


> Gigabyte GV-RXVEGA64X W-8GD-B AMD Radeon RX VEGA 64 XTX Water-cooling 8G Graphic Cards


Quote:


> Currently unavailable.
> We don't know when or if this item will be back in stock.


What was the price btw?


----------



## Kpjoslee

Quote:


> Originally Posted by *tpi2007*
> 
> Well, that was fast:
> 
> What was the price btw?


It was $699. I managed to grab one before it was gone.


----------



## Papa Emeritus

Quote:


> Originally Posted by *rv8000*
> 
> So all of you who have Vega in hand already, did you order from Newegg? Go to a physical store to pick one up?
> 
> Seems like even worse stock than when I was going after a 1080 on release day, that or absolutely all stock is going to the Ryzen + Board combos.


Got mine from a retailer in Europe for MSRP on launch day


----------



## tpi2007

Quote:


> Originally Posted by *Kpjoslee*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tpi2007*
> 
> Well, that was fast:
> 
> What was the price btw?
> 
> 
> 
> It was $699. I managed to grab one before it was gone.
Click to expand...

Thanks. Let us know how it goes when you get it (performance, clocks, power draw, coil whine, etc).


----------



## Kpjoslee

Quote:


> Originally Posted by *tpi2007*
> 
> Thanks. Let us know how it goes when you get it (performance, clocks, power draw, coil whine, etc).


Surely I will. Hopefully not the coil whine though


----------



## tpi2007

Quote:


> Originally Posted by *Kpjoslee*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tpi2007*
> 
> Thanks. Let us know how it goes when you get it (performance, clocks, power draw, coil whine, etc).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Surely I will. Hopefully not the coil whine though
Click to expand...











Yeah, hopefully not! Coil whine shouldn't exist at this point in time, it should be a basic quality assurance thing.


----------



## Kpjoslee

https://www.amazon.com/dp/B074N1JN4F/ref=olp_product_details?_encoding=UTF8&me=ATVPDKIKX0DER

Gigabyte Vega 64 Air is still up for preorder. $599.


----------



## Nutty Pumpkin

Quote:


> Originally Posted by *pillowsack*
> 
> I have a Vega 64 coming on the 23rd and a EKWB waterblock pre-ordered.
> 
> Is this card going to kill my 850W with the 6800K @ 4.2GHz, or is there really no way to tell since drivers don't really allow overclocking? I'm hoping to push the card pretty hard when it's under water.
> 
> I'm scared about how much power/heat it does because I had a 8800GTX at some point.


If it's a good unit it should be fine. I can't see your rig though because I'm on my phone.


----------



## The EX1

Has anyone successfully undervolted their Vega 64 and verified it with power draw? I have tried the latest version of Wattman and the 4.4.16 beta version of Afterburner, and I cannot get any less power draw from the card when undervolting. My kilowatt meter numbers remain the same.


----------



## ontariotl

I finally installed the card tonight after some benchmarking of my old 290x's in crossfire to compare what I got myself into. Plus, I wanted to share my results with an Ultrawide as there isn't much posted results for us who own one. This should help who are curious what Vega 64 can do with a few titles at the moment. Sorry for the crude chart, but it should give an idea of the frame rates and power draw I got from the wall.



What have I learned from this? Well it looks like it's true that balanced mode is the best setting to use for the most part. And for the record, I did not fuss with power limits or anything like that. I kept it all at the default settings to show the performance without any tweaks. These aren't the best drivers, it's not called beta for nothing. I can see they concentrated on a few games and others really need work such as Tomb Raider.

This card is in dire need of a water block. Heat wasn't that bad really, reached a max temp of 82c, but I'd rather keep the clock speed at the ceiling for the setting I chose, not bounce all over the place. As expected the power used is not spectacular. AMD has never been great in this regard since it became part of the discussion for GPUs some time ago. It was nice when it was all about brute force and who gives a **** how much power it sucks. Well, since my 290x was another power hungry GPU, it's good to see at least Vega 64 not toppling the power draw of the 290x's when crossfire was enabled. Speaking of crossfire, damn I wish recent games would support the damn option. Looking at the graph I could have kept going with my almost 4 year old cards.

Now I did come across coil whine. I can happily say it only happened during the load intro screen for Doom as it was running at 5400 fps. Besides that, no other time did I experience coil whine. Not even close to what the 290x's had. Fan noise is a big improvement for me compared to the 290x OEM fan. It wasn't a distraction at all. Thank god I watercooled the 290x's soon after the install. Even though Vega's fan doesn't bother me, I still want it watercooled nonetheless.

So am I happy with the purchase? Yes and no. The plus is I can game with one card that has great avg frame rates and minimum frame rates that stay in the LG's Freesync range, and have everything cranked to boot. PUBG looks so good on high and it's now thankfully playable instead of all low settings. That's why I really wanted to upgrade: to stay within the monitor's Freesync range even after modding it to have a range of 35-75 instead of 55-75 without flicker. Tomb Raider has a deceivingly low minimum on record, as there is a one-second glitch that freezes during benchmarking, which of course is recorded. If I had recorded a different score from the 3 scenes of the benchmark it would have been better, but it seems like that scene is the norm for testing. With that said, oh boy did I miss Freesync during the testing. Watching the benchmarks in between watching the kill-a-watt meter, I couldn't stand the tearing and perceived stuttering. I can't live without Freesync. I've even debated upgrading from the 75Hz LG to one of those Korean 100Hz panels. Looks like Vega can utilize some of the frames above 75Hz, but not by much for some games. Hopefully that improves soon.

The no part of the equation is I wish crossfire was properly supported. With Freesync and the results of the benchmarks, I can clearly see two 290x's can beat a single Vega 64. That saddens me. It's great that I get better performance with a single Vega, but come on! I really thought Vega would beat out a crossfired 290x setup, especially when I paid the same for a pair of 290x's as for just one Vega. I guess that's a no for the moment with Ultrawide. The other is no DVI for my Korean 1440p secondary monitor. Sure, I don't overclock it to 120Hz anymore, but now it's dark on the wall on top of my ultrawide monitor. Guess I'll have to hunt down a dual-link DVI to HDMI cable.

It's a damn shame that more people can't own one with all the BS going on with price gouging and miners taking most of the limited stock again. Whether or not Vega 64 is mocked for not beating a 1080 Ti and going back and forth with a 1080 at higher power usage, we all should have the right to at least take a chance on buying one and discuss what we bought instead of availability and rising prices.

Now just waiting for a waterblock and better drivers.


----------



## Newbie2009

Can you test Witcher 3? Per the Guru3D review, I'm getting 50% of the performance I should: 50fps vs 110fps.
Prey is unplayable for me, 20-30 fps

Doom runs at over 100fps for me; CPU bottlenecked on that game I think.


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> Can you test Witcher 3? Per the Guru3D review, I'm getting 50% of the performance I should: 50fps vs 110fps.
> Prey is unplayable for me, 20-30 fps
> 
> Doom runs at over 100fps for me; CPU bottlenecked on that game I think.


Yeah I just need to install it. Give me a few


----------



## ontariotl

Ok, just finished testing Witcher 3 with Guru3D settings at 1080p with the default balanced power profile.

Average framerate : 113.9 FPS
Minimum framerate : 46.6 FPS
Maximum framerate : 139.6 FPS

Looks like it's working for me. Did you use DDU to clean up the older radeon drivers before installing the betas by chance? Something doesn't seem right for you.

Now testing it at 3440x1440 with HBAO+ and Hairworks set to high, I get avg 50 FPS. No matter, cause Freesync makes it look smooth.


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> Ok, just finished testing Witcher 3 with Guru 3d settings and at 1080p with a default balance power.
> 
> Average framerate : 113.9 FPS
> Minimum framerate : 46.6 FPS
> Maximum framerate : 139.6 FPS
> 
> Looks like it's working for me. Did you use DDU to clean up the older radeon drivers before installing the betas by chance? Something doesn't seem right for you.
> 
> Now testing it with 3440x1440p with HBAO+ and Hairworks on set to high I get avg 50 FPS. No matter cause freesync makes it look smooth.


You on windows 10 or 7?


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> You on windows 10 or 7?


Windows 10.


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> Windows 10.


I'm on 7, will have to check 10 out and compare.

Thanks btw


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> I'm on 7, will have to check 10 out and compare.
> 
> Thanks btw


No problem. I don't think it has anything to do with the OS as Witcher 3 is a DX11 title. I have a feeling the driver install was botched, or the past drivers were not cleaned out properly. I don't care if AMD has a clean-past-drivers step as part of their new driver installer; I'd still rather use DDU to properly clean out everything.


----------



## Irev

I've found setting a custom power profile (power limit = -25%, temp target = 84 degrees, max fan RPM = 2000) makes for a 1400MHz-1530MHz boost while being whisper quiet.

You can also set the fan speed as low as 1600 and it works well; can't hear it.

It's great for the air Vega 64


----------



## punchmonster

To undervolt your Vega 64 use WattTool. It's important to only set P-states 6 and 7, and to set them to identical values, or everything bugs out.

I have confirmed this reduces power draw and also gives me stable clocks. Don't use WattTool for memory OC.
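Side note: WattTool is Windows-only. Under Linux, the amdgpu driver exposes a roughly equivalent knob through the pp_od_clk_voltage sysfs file (it needs the OverDrive bit set in amdgpu.ppfeaturemask, and root to write). A sketch that only builds the command strings; the clock/voltage numbers are placeholders, not recommendations:

```python
def od_commands(pstate_mhz_mv):
    """Build amdgpu OverDrive command strings for pp_od_clk_voltage.

    The sysfs file accepts lines of the form "s <state> <sclk MHz> <mV>"
    for core states, followed by "c" to commit the new table.
    pstate_mhz_mv: {pstate_index: (core_mhz, millivolts)}.
    Mirrors the advice above: set states 6 and 7 to identical values.
    """
    cmds = [f"s {i} {mhz} {mv}" for i, (mhz, mv) in sorted(pstate_mhz_mv.items())]
    cmds.append("c")  # commit
    return cmds

# e.g. undervolt the top two states to 1600 MHz @ 1050 mV (illustrative)
cmds = od_commands({6: (1600, 1050), 7: (1600, 1050)})
# each line would then be echoed, as root, into
# /sys/class/drm/card0/device/pp_od_clk_voltage
```

This only formats the strings; whether a given voltage is stable is, as always, per-card silicon lottery.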
Quote:


> Originally Posted by *The EX1*
> 
> Has anyone successfully undervolted there Vega 64 and verified it with power draw? I have tried the latest version of Wattman and the 4.4.16 beta version of Afterburner and I cannot get any less power draw from the card when undervolting. My kilowatt meter numbers remain the same.


----------



## The EX1

Thanks +rep


----------



## Ragsters

Ok so where do I get one?


----------



## Newbie2009

So a 64 benchmark, Fire Strike Ultra @ stock

http://www.3dmark.com/3dm/21634673?


----------



## Newbie2009

Quote:


> Originally Posted by *ontariotl*
> 
> No problem. I don't think it has anything to do with O/S as Witcher 3 is a Dx11 title. I have a feeling the driver install was botched, or the past drivers were not cleaned out properly. I don't care if AMD has a clean past drivers as part of their new driver installer, I still rather use DDU to properly clean out everything.


I reseated the card and cleaned the connector for dust, which made a big difference in games, but Witcher 3 still isn't performing as it should; must be a Windows 7 driver issue.

Prey is improved but ropey; again I'd put it down to the W7 driver.

Card is performing as expected in other games and benchmarks.


----------



## pillowsack

Dunno about you guys but I got a notification and my EKWB waterblock will be here the 21st (then I get the card on the 23rd).

I'm hoping this Fine Wine Technology will help out for the next week before it arrives.


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> I reseated the card and cleaned the connector for dust, made a big difference to games, but witcher 3 still isn't performing as it should, must be windows 7 driver issue.
> 
> Prey is improved buy ropey, again id put it down to w7 driver.
> 
> Card performing as expected in other games and benchmarking.


It must be just the Witcher 3 install then. Like someone else mentioned, clear out the config files so it will write new ones for the new GPU.

Quote:


> Originally Posted by *pillowsack*
> 
> Dunno about you guys but I got a notification and my EKWB waterblock will be here the 21st (then I get the card on the 23rd).
> 
> I'm hoping this Fine Wine Technology will help out for the next week before it arrives.


Nice! I wish I had one coming on the 21st. I was going to order, but delivery from DHL is a no-no where I come from, with extortion fees tacked on besides duty and taxes. I'll wait until a watercooling place here has them in stock.


----------



## os2wiz

Quote:


> Originally Posted by *Irev*
> 
> 
> 
> So just got my VEGA 64... noticed that running in power save mode keeps the card under 75 degrees and performs around 5-10% slower than balanced mode... in balanced mode the card hits around 84 degrees
> 
> I find the fan profile much nicer at this setting..... who else is having fun with VEGA?
> 
> I just wish the clocks would stay put not jump from 1530-1630 often


The power and performance issues with Vega 64 are very simple: the Hynix memory is slower than what AMD wanted for Vega. But Hynix screwed up production and could only get high enough yields at a slower memory speed. Samsung will be ramping up production of faster HBM2 memory in the first quarter of 2018. That is why Vega performance sucks. To compensate for this, AMD boosted the core clock speed of Vega 64, which led to doubled power requirements for maybe 10% improved performance. A refresh of Vega is likely to happen in early 2018 with the faster HBM2. Then Vega will be much closer to 1080 Ti performance. I would not buy it until this issue is resolved.


----------



## 113802

Quote:


> Originally Posted by *os2wiz*
> 
> The power and performance issues with Vega 64 are very simple: the Hynix memory is slower than what AMD wanted for Vega. But Hynix screwed up production and could only get high enough yields at a slower memory speed. Samsung will be ramping up production of faster HBM2 memory in the first quarter of 2018. That is why Vega performance sucks. To compensate for this, AMD boosted the core clock speed of Vega 64, which led to doubled power requirements for maybe 10% improved performance. A refresh of Vega is likely to happen in early 2018 with the faster HBM2. Then Vega will be much closer to 1080 Ti performance. I would not buy it until this issue is resolved.


Are reference cards using Hynix memory? Mine reports Micron memory.
Quote:


> Originally Posted by *tpi2007*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah, hopefully not! Coil whine shouldn't exist at this point in time, it should be a basic quality assurance thing.


Coil whine exists on every single Vega card according to reviews and personal use.


----------



## criminal

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Coil whine exists on every single Vega card according to reviews and personal use.


That's a shame. I haven't had coil whine in so long I don't know if I could accept it.


----------



## kundica

Quote:


> Originally Posted by *pillowsack*
> 
> Dunno about you guys but I got a notification and my EKWB waterblock will be here the 21st (then I get the card on the 23rd).
> 
> I'm hoping this Fine Wine Technology will help out for the next week before it arrives.


I went to EK's website today to possibly order one since they were supposed to release on the 18th. Out of stock. Guess I should've pre-ordered.

Looking forward to hearing your thoughts and seeing some benches once you get yours and everything tested.


----------



## ontariotl

Quote:


> Originally Posted by *criminal*
> 
> That's a shame. I haven't had coil whine in so long I don't know if I could accept it.


I only experience whine during certain loading screens that want to have 3k to 5k FPS. Guess I could eliminate that with using vsync. Either way, during gaming I haven't heard anything like my old 290x's.


----------



## skullbringer

So has anyone found a driver for Vega 64 that actually applies core clock or core voltage changes?

Tried the Frontier Edition driver, but it did not install to the graphics adapter; all the Radeon icons are just blue now. smh


----------



## PontiacGTX

Quote:


> Originally Posted by *skullbringer*
> 
> So has anyone found a driver for Vega 64 that actually applies core clock or core voltage changes?
> 
> Tried the Frontier Edition driver, but did not install to the graphics adapter, just all Radeon icons are blue now. smh


MSI AB BETA16?


----------



## punchmonster

Quote:


> Originally Posted by *skullbringer*
> 
> So has anyone found a driver for Vega 64 that actually applies core clock or core voltage changes?
> 
> Tried the Frontier Edition driver, but did not install to the graphics adapter, just all Radeon icons are blue now. smh


Here you go, friend; this works on the beta mining driver:
Quote:


> Originally Posted by *punchmonster*
> 
> To undervolt your Vega 64, use WattTool. It's important to set only P-states 6 and 7, and to set them identically, or everything bugs out:
>
>
> I have confirmed this reduces power draw and also gives me stable clocks. Don't use WattTool for the memory OC.
> 
> Also make sure to set P-state 6 and 7 to identical values.
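
The P-state rule described above can be sketched in code. This is a toy model with made-up placeholder clocks and voltages (not WattTool's actual interface or real Vega P-state values):

```python
# Toy sketch of the rule above: leave P-states 0-5 alone and force
# P-states 6 and 7 to identical (MHz, mV) pairs. Placeholder numbers only.

def make_undervolt_table(stock_table, new_mhz, new_mv):
    """Return a copy of the P-state table with states 6 and 7 set
    to the same (clock, voltage) pair, as the post recommends."""
    table = dict(stock_table)
    for pstate in (6, 7):
        table[pstate] = (new_mhz, new_mv)
    return table

# Placeholder stock table: (MHz, mV) per P-state, not real Vega values.
stock = {p: (852 + p * 110, 800 + p * 50) for p in range(8)}
tuned = make_undervolt_table(stock, 1537, 1050)

assert tuned[6] == tuned[7] == (1537, 1050)  # otherwise "everything bugs out"
assert tuned[0] == stock[0]                  # lower states untouched
```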


----------



## drufause

Just installed my Vega 64 here.
Ran Fire Strike Extreme with the beta driver today and got 10093.
http://www.3dmark.com/3dm/21639183


----------



## PontiacGTX

Quote:


> Originally Posted by *drufause*
> 
> Just installed my Vega 64 here.
> Ran Fire Strike Extreme with the beta driver today and got 10093.
> http://www.3dmark.com/3dm/21639183


what B driver?


----------



## drufause

Quote:


> Originally Posted by *PontiacGTX*
> 
> what B driver?


win10-64bit-radeon-rx-vega-software-17.30.1051-b6-aug7


----------



## PontiacGTX

Quote:


> Originally Posted by *drufause*
> 
> win10-64bit-radeon-rx-vega-software-17.30.1051-b6-aug7


17.8.1 BETA then

http://www.guru3d.com/files-details/amd-radeon-vega-17-8-1-beta-6-driver-download.html
Quote:


> Originally Posted by *ontariotl*
> 
> I finally installed the card tonight after some benchmarking of my old 290x's in crossfire to compare what I got myself into. Plus, I wanted to share my results with an ultrawide, as there aren't many posted results for those of us who own one. This should help anyone curious what Vega 64 can do with a few titles at the moment. Sorry for the crude chart, but it should give an idea of the frame rates and power draw I got from the wall.
> 
> 
> 
> What have I learned from this? Well, it looks like it's true that balanced mode is the best setting to use for the most part. For the record, I did not fuss with power limits or anything like that; I kept everything at the default settings to show the performance without any tweaks. These aren't the best drivers; it's not called beta for nothing. I can see they concentrated on a few games, while others, such as Tomb Raider, really need work.
> 
> This card is in dire need of a water block. Heat wasn't that bad really, reaching a max temp of 82°C, but I'd rather keep the clock speed at the ceiling for the setting I chose, not bouncing all over the place. As expected, the power used is not spectacular; AMD has never been great in this regard since power became part of the GPU discussion some time ago. It was nice when it was all about brute force and who gives a **** how much power it sucks. Since my 290x was another power-hungry GPU, it's good to see Vega 64 at least not toppling the power draw of the crossfired 290x's. Speaking of crossfire, I damn wish recent games would support the damn option. Looking at the graph, I could have kept going with my almost 4-year-old cards.
> 
> Now I did come across coil whine. I can happily say it only happened during the intro loading screen for Doom, as it was running at 5400 fps. Besides that, at no other time did I experience coil whine; not even close to what the 290x's had. Fan noise is a big improvement for me compared to the 290x OEM fan; it wasn't a distraction at all. Thank god I watercooled the 290x's soon after installing them. Even though Vega's fan doesn't bother me, I still want it watercooled nonetheless.
> 
> So am I happy with the purchase? Yes and no. The plus is I can game with one card that has great average and minimum frame rates that stay in the LG's Freesync range, with everything cranked to boot. PUBG looks so good on high, and it's now thankfully playable instead of all low settings. That's why I really wanted to upgrade: to stay within the monitor's Freesync range, even after modding it to a range of 35-75 instead of 55-75 without flicker. Tomb Raider has a deceivingly low minimum on record, as there is a one-second glitch that freezes during benchmarking and of course gets recorded. If I had recorded separate scores from the 3 benchmark scenes it would have been better, but it seems like that scene is the norm for testing. With that said, oh boy did I miss Freesync during the testing; watching the benchmarks in between watching the kill-a-watt meter, I couldn't stand the tearing and perceived stuttering. I can't live without Freesync. I've even debated upgrading from the 75Hz LG to one of those Korean 100Hz panels. It looks like Vega can utilize some of the frames above 75Hz, but not by much in some games. Hopefully that improves soon.
> 
> The "no" part of the equation is that I wish crossfire were properly supported. With Freesync and the results of the benchmarks, I can clearly see two 290x's can beat a single Vega 64, and that saddens me. It's great that I get better performance with a single Vega, but come on! I really thought Vega would beat a crossfired 290x setup, especially when I paid the same for a pair of 290s as for just one Vega. I guess that's a no for the moment with ultrawide. The other is no DVI for my Korean 1440p secondary monitor; sure, I don't overclock it to 120Hz anymore, but now it sits dark on the wall on top of my ultrawide monitor. Guess I'll have to hunt down a dual-link DVI to HDMI cable.
> 
> It's a damn shame that more people can't own one, with all the BS going on with price gouging and miners taking most of the limited stock again. Whether Vega 64 is mocked for not beating a 1080 Ti or for trading blows with a 1080 at higher power usage, we should all at least have the chance to buy one and discuss what we bought instead of *****ing about availability and the rising prices.
> 
> Now just waiting for a waterblock and better drivers.


It is curious; my R9 Fury gets 46 FPS in DOOM at 4K. I wonder if I could do VSR 1440 UWD.


----------



## dieanotherday

so like, there's 10 RX vegas on OCN?


----------



## kundica

Quote:


> Originally Posted by *skullbringer*
> 
> So has anyone found a driver for Vega 64 that actually applies core clock or core voltage changes?
> 
> Tried the Frontier Edition driver, but did not install to the graphics adapter, just all Radeon icons are blue now. smh


You can't manually adjust the clock; you need to use the percentage. Also, you can only adjust voltage on the core, and both steps need to have the same value.


----------



## skullbringer

Quote:


> Originally Posted by *PontiacGTX*
> 
> MSI AB BETA16?


Do you have a download page you can link to, please? I can only find the 4.4.0 Beta 15 and 3.0.0 Beta 16, suppose you are referring to a 4.4.0 Beta 16 version.


----------



## pillowsack

Does anyone else here think that there will be hard mods done to give the HBM2 more voltage?


----------



## PontiacGTX

Quote:


> Originally Posted by *skullbringer*
> 
> Do you have a download page you can link to, please? I can only find the 4.4.0 Beta 15 and 3.0.0 Beta 16, suppose you are referring to a 4.4.0 Beta 16 version.


http://forums.guru3d.com/showpost.php?p=5459908&postcount=637


----------



## buildzoid

Quote:


> Originally Posted by *pillowsack*
> 
> Does anyone else here think that there will be hard mods done to give the HBM2 more voltage?


I can already hard mod Vcore. HBM isn't any harder; I just need to find the pin for it. Though I would recommend not pushing the HBM2 beyond 1.35V anyway, as HBM1 degraded fast at 1.42V. Samsung HBM2 might be a lot tougher than Hynix HBM1, though.


----------



## rancor

Quote:


> Originally Posted by *buildzoid*
> 
> I can already hard mod Vcore. HBM isn't any harder just need to find the pin for it. Though I would recommend not pushing the HBM2 beyond 1.35V anyway as HBM1 degraded fast on 1.42V. Though Samsung HBM2 might be a lot tougher than Hynix HBM1.


Can I ask how you're doing the vcore hard mod? From my quick look at the IR35211 data sheet it might require cutting traces or are you modding with digital control?


----------



## The Stilt

Isn't the VRM controller (IR35217) accessible on RX Vegas and only blocked on the Frontier Edition? Based on the quick look I had at the 56 BIOS, it should be. If that's the case, I fail to see the point in making physical modifications in order to increase the HBM2 voltage.


----------



## skullbringer

Quote:


> Originally Posted by *punchmonster*
> 
> here you go friend, this works on beta mining driver


The mining driver (17.30.1029): 3-4 MH/s more in most Dagger miners, broken in every other way.

The card even coughs when the driver initializes. Seriously, the fan spins up to max rpm for a tenth of a second.

Guess I am going back to the .1051 "launch" driver for now...

Notice how the Radeon Settings icon is still blue? I already did an uninstall with "AMDCleanUninstallationUtility" and DDU. Looks like the intern who integrated the blue CI into the Vega FE driver package did not think about uninstalling it. And no, I am not going to reinstall Windows.


----------



## buildzoid

Quote:


> Originally Posted by *The Stilt*
> 
> Isn't the VRM controller (IR35217) accessible on RX Vegas and only blocked on the Frontier Edition? Based on the quick look I had at the 56 BIOS it should be. If that's the case, I fail to see the point in making physical modifications in order to increase the HBM2 voltage.


I have an FE


----------



## The Stilt

Quote:


> Originally Posted by *buildzoid*
> 
> I have an FE


In that case you could try and see what happens if you replace the 0.845k resistor connected to pin 22 of the controller with a 1.780k (or 2.87k, 4.12k, 5.49k, 6.98k, 8.87k, or 11k) one. That should restore the I2C and PMBus comms, since the BIOS can no longer disable them.


----------



## PontiacGTX

Quote:


> Originally Posted by *skullbringer*
> 
> The mining driver (17.30.1029) - 3-4 MH/s more on most Dagger miners, broken in every other way:
> 
> The card even coughs when the driver initalizes. Seriously, the fan spins up to max rpm for a tenth of a second.
> 
> Guess I am going back to the .1051 "launch" driver for now...
> 
> Notice how the Radeon settings icon is still blue? Already did uninstall with "AMDCleanUninstallationUtility" and DDU. Looks like the intern who integrated the blue ci into the vega fe driver package did not think about uninstalling it. And no, I am not going to reinstall Windows.


3-4 vs what? also do you get worse graphics performance on that driver?


----------



## skullbringer

Quote:


> Originally Posted by *PontiacGTX*
> 
> 3-4 vs what? also do you get worse graphics performance on that driver?


3-4 MH/s more: where my card did 30-34 MH/s with the launch driver (.1051) depending on the miner, with the mining driver (.1029) it does 34-38 MH/s depending on the miner.

Performance was technically the same, though I noticed an anomaly: just after installing the driver and rebooting, the power limit was somehow bugged. When running a 3D load like Heaven 4.0, the system would only draw 350W max. Yet I did not see lower core clocks, as you would expect with lower power draw; it was still flapping between 1580 and 1630 MHz.

Maybe that is some kind of power tuning specifically for mining that AMD included out of the box with this driver. However, after opening Wattman and resetting all settings, power draw went back up to the normal 450-500W. And when manually adjusting the power limit to e.g. -50%, the core clock drops as expected.

I don't know if the out-of-the-box clocks were actually normal or if the driver was just reporting false values again; I can only safely say that it drew about 100W less than normal. Having done some mining and tuning, this kind of underclocking and power limiting makes sense for mining to increase the H/W ratio, though it would still be nice if the driver were transparent about what it is doing here.

Btw, after uninstalling the mining driver the Windows registry got corrupted somehow; I'm getting error 1603 when trying to install the launch driver again. Trying a DISM registry repair atm.

Tldr dont use the mining driver, unless you are actually mining. If you are hoping for more performance or oc, the .1029 driver will not help you with that.
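
For what it's worth, the hash-per-watt argument can be put into numbers using the rough figures above (hashrates and wall power are both approximate, and the wall power includes the rest of the system):

```python
# Rough efficiency comparison from the figures reported above.
# Hashrates in MH/s, wall power in W; both are ballpark numbers.

def kh_per_watt(mh_s, watts):
    return mh_s * 1000 / watts

launch = kh_per_watt(34, 450)  # .1051 launch driver, ~450 W at the wall
mining = kh_per_watt(38, 350)  # .1029 mining driver with the bugged power limit

print(round(launch, 1), round(mining, 1))  # 75.6 108.6
```

So even at the same hashrate, the lower power draw alone would make the mining driver the better choice for an actual mining rig.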


----------



## Newbie2009

Some benchmarks on firestrike Extreme:

Core : 1712 (*+5% on stock*) + 50 power offset, HBM Stock



http://www.3dmark.com/fs/13391766

Core : 1630 (*stock*) + 50 power offset, HBM Stock



http://www.3dmark.com/fs/13388060


----------



## Newbie2009

Some benchmarks on firestrike Ultra:

Core : 1712 (*+5% on stock*) + 50 power offset, HBM Stock



http://www.3dmark.com/fs/13391735

Core : 1630 (*stock*) + 50 power offset, HBM Stock



http://www.3dmark.com/fs/13388105


----------



## djsatane

Quote:


> Originally Posted by *os2wiz*
> 
> The power and performance issues with Vega 64 are simple: the Hynix memory is slower than what AMD wanted for Vega. Hynix screwed up production and could only get high enough yields at a slower memory speed. Samsung will be ramping up production of faster HBM2 memory in the first quarter of 2018. That is why Vega performance suffers. To compensate, AMD boosted the core clock speed of Vega 64, which roughly doubled power requirements for maybe 10% better performance. A refresh of Vega with the faster HBM2 is likely in early 2018; then Vega will be much closer to 1080 Ti performance. I would not buy one until this issue is resolved.


Ouch... so I guess it's time to wait again. Sigh. Can anyone here who owns the prebuilt liquid RX Vega tell me if they have loud pump/coil whine and how it compares to the Fury X (if they also owned one)? I have a Fury X and was wondering about the liquid Vega, but after reading about the troubles and the current HBM2 situation, I guess I will be waiting.


----------



## Energylite

Nice one Buildzoid


----------



## PontiacGTX

Quote:


> Originally Posted by *Energylite*
> 
> Nice one Buildzoid


So was the BIOS modded, or was the OC'ing software tricked?


----------



## kundica

Quote:


> Originally Posted by *PontiacGTX*
> 
> then bios was modded or tricked ocing software?


No. They used a registry hack.


----------



## Chaoz

Quote:


> Originally Posted by *pillowsack*
>
> Dunno about you guys but I got a notification and my EKWB waterblock will be here the 21st (then I get the card on the 23rd).
>
> I'm hoping this Fine Wine Technology will help out for the next week before it arrives.


Yup, got a notification as well, even got a tracking code, so most likely it will get here on Monday.
So I can slap it on my GPU right away. Just received my GPU today.


----------



## Arizonian

Quote:


> Originally Posted by *Chaoz*
> 
> Yup, got a notification as well, even got a tracking code. So most likely it will get here on Monday.
> So I can slap it on my GPU right away. Just received my GPU today.


Noice! Congrats.

Post back with thoughts / results after you've had some time with it.


----------



## Chaoz

Quote:


> Originally Posted by *Arizonian*
> 
> Noice! Congrats.
> 
> Post back with thoughts / results after you've had some time with it.


Had some fun with it today and I must say it's quite a nice card. It overclocks quite well, although Turbo mode during benchmarks gives a total system power draw of around 600W.

Ran a couple of benchmarks; the latest FS run is OC'ed at 1712/1000 +50:



Others are run at Turbo mode (1630/945):




Will try to tweak it more when my waterblock arrives, max temp it hits in FS is 85°C :s.


----------



## buildzoid

Can anyone check HBM voltage on a V64? My FE has no issue doing 1050 HBM OC and most of the time even does 1100.


----------



## skullbringer

It works, clockspeed affects performance!

default with unlocked power: *7199*

underclocked to 1200 MHz core with unlocked power: *6041*

overclocked to 1750 MHz core with unlocked power: *7335*

Now what did I do differently this time? I am not certain. After uninstalling the mining driver (.1029), the .1051 driver would not install anymore, so I had to uninstall all vcredist packages and delete some registry keys. After some fiddling, the installation of .1051 worked again.

Then I used Afterburner 4.4.0 Beta 16 (+rep to PontiacGTX) and WattTool to set power, fan, core clock and voltage, and performance scaled! Haven't touched Radeon Settings or Wattman since.


----------



## 113802

Quote:


> Originally Posted by *buildzoid*
> 
> Can anyone check HBM voltage on a V64? My FE has no issue doing 1050 HBM OC and most of the time even does 1100.


I can run 1050MHz on the HBM; 1100MHz will run fine for an hour, then suddenly my screen turns black.


----------



## Irev

How are you guys overclocking?

I can't seem to use MSI AB.

I tried +10% core in Wattman and actually got worse performance, even with a +50% power limit... what am I doing wrong?


----------



## 113802

Quote:


> Originally Posted by *Irev*
> 
> how are you guys overclocking?
> 
> I cant seem to use MSI AB
> 
> I tried +10% core on wattman and actually got worse performance even with +50% power limit... what am I doing wrong?


I am waiting until overclocking works properly. I am using Wattman: I increase the power limit so the card runs at 1750MHz, and overclocking the HBM works fine in Wattman.


----------



## Chaoz

Quote:


> Originally Posted by *Irev*
> 
> how are you guys overclocking?
> 
> I cant seem to use MSI AB
> 
> I tried +10% core on wattman and actually got worse performance even with +50% power limit... what am I doing wrong?


I used MSI AB perfectly fine; clocked it to 1720/1020 +50% without any issues. Using version 4.3.0 atm.


----------



## skullbringer

Quote:


> Originally Posted by *buildzoid*
> 
> Can anyone check HBM voltage on a V64? My FE has no issue doing 1050 HBM OC and most of the time even does 1100.


1350 mV at idle, 1370 mV under load, measured at the memory VRM.

Clocks up to 1105 MHz just fine 100% of the time; falls over at 1110 MHz.


----------



## skullbringer

Quote:


> Originally Posted by *Irev*
> 
> how are you guys overclocking?
> 
> I cant seem to use MSI AB
> 
> I tried +10% core on wattman and actually got worse performance even with +50% power limit... what am I doing wrong?


From my experience, when you set a core clock on Vega the driver always reports it as successfully applied: there is no crash, and under load GPU-Z will report this clock as a steady max clock. However, this is not necessarily representative of what the card actually runs at.

It sounds weird, but Vega seems to have some kind of "last known good" clock that it will run at, regardless of what the driver reports. So you can push upwards in small increments like 25 MHz and run benchmarks in between to see if you get any performance increase from your last change. At a certain point performance will no longer increase: the card sees the clock you tried to apply, the power management is intelligent enough to see it cannot run it, and it throttles down to its highest "last known good" clock.

If you push past that tipping point, performance actually degrades slightly with every increment. If I had to guess, it is because when the card fails to apply the too-high clock, it has to recover back down to its last known good clock, which takes a very small amount of time. The bigger the difference between the two clocks, the more performance is lost while trying to apply the clock and throttling back down.

But take all of this with a humongous grain of salt, since I am just a bozo staring at red "Radeon" LEDs.

Also I seemed to have more success with Watttool for core clock, core voltage, power and fan and AB for memory clock. Wattman is kinda meh...
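
The probing procedure described above (bump in small steps, benchmark, stop when scores flatten or regress) can be sketched with a toy model of the "last known good" behavior. Nothing here is a real driver API; `bench_score()` just mimics a card that silently caps at an arbitrary 1700 MHz and loses a little performance past the cap:

```python
# Toy model of the behavior described above: the card silently caps at
# EFFECTIVE_MAX, and performance dips slightly when you request more,
# because of the fail-and-recover cycles. All numbers are made up.

EFFECTIVE_MAX = 1700  # arbitrary toy limit, not a real Vega figure

def bench_score(requested_mhz):
    """Score tracks the clock up to the cap, then degrades slightly."""
    if requested_mhz <= EFFECTIVE_MAX:
        return requested_mhz * 4.0
    return EFFECTIVE_MAX * 4.0 - (requested_mhz - EFFECTIVE_MAX) * 0.5

def find_effective_max(start_mhz, step=25):
    """Bump the clock in small steps and keep the last one that helped."""
    best_clock, best_score = start_mhz, bench_score(start_mhz)
    while True:
        clock = best_clock + step
        score = bench_score(clock)   # the driver claims it applied either way
        if score <= best_score:      # flat or regressing: past the tipping point
            return best_clock
        best_clock, best_score = clock, score

print(find_effective_max(1600))  # 1700
```

The same loop works with a real benchmark in place of `bench_score()`; the point is that you trust the scores, not the clock the driver reports.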


----------



## dagget3450

Quote:


> Originally Posted by *Y0shi*
> 
> @dagget3450
> 
> You should change the owners list in the OP. The profile link shows the wrong Y0shi.
>
> http://www.overclock.net/u/531359/y0shi <-- the real one


I updated the list in the OP; it's not linked to user names now, I just manually typed it in for the time being. I will need to make a Google Doc most likely soon.

Quote:


> Originally Posted by *LionS7*
> 
> Did you try Crimson 17.8.1 on Vega Frontier Edition ?


Yes, it worked if I recall, but there was no Radeon Pro (gaming mode) option, so no Wattman or crossfire... so I went back to the original Vega FE 17.6 launch drivers.

Quote:


> Originally Posted by *AlphaC*
> 
> To VEGA 64 owners:
> 
> EK-FC Radeon Vega
> 
> 109.95€ / $ 153.09
> https://www.ekwb.com/shop/ek-fc-radeon-vega
> https://www.ekwb.com/shop/ek-fc-radeon-vega-acetal
> https://www.ekwb.com/shop/ek-fc-radeon-vega-acetal-nickel
> 
> Swiftech KOMODO RX-ECO VEGA
> 
> 
> http://www.swiftech.com/komodo-rx-eco-vega.aspx
> Waterblock ($119.95)
> 
> Alphacool Eiswolf 120 GPX Pro ATI RX Vega M01 - Black
> https://www.alphacool.com/detail/index/sArticle/22291
> € 159.95 *
> 
> $167.85 * http://www.aquatuning.us/detail/index/sArticle/22291
> 
> Alphacool NexXxos GPX - ATI RX Vega M01
> https://www.alphacool.com/detail/index/sArticle/22292
> 
> € 104.95 *
> $110.08 * http://www.aquatuning.us/detail/index/sArticle/22292
> 
> http://www.performance-pcs.com/water-blocks-gpu/shopby/vga-series--amdr-radeonr-vega/?
> EK-FC Radeon Vega - Copper Water Block with Plexi Top for multiple AMD® Radeon® Vega based graphics cards
> $125.50
> 
> Watercool Heatkiller (none at this time)
> http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Categories/Wasserk%C3%BChler/GPU_Kuehler
> 
> Others:
> Aquacomputer - none at this time https://shop.aquacomputer.de/index.php?cPath=7_11_149
> Bitspower - none listed under AMD VGA https://www.bitspower.com/html/product/product02.php?kind=269&kind2=269
> Koolance - none listed http://koolance.com/index.php?route=product/category&path=29_148_46
> Phanteks - none listed http://phanteks.com/Glacier-GPU.html
> XSPC - none listed at this time http://www.xs-pc.com/waterblocks-gpu/


Thank you for this post, I'll add it to the OP soon.
Quote:


> Originally Posted by *dieanotherday*
> 
> so like, there's 10 RX vegas on OCN?


According to this thread I counted 19-ish RX Vega 64 owners and only 4 Vega FE owners.

Added/updated the owners list; it's very basic atm. When I get more time, hopefully this weekend, I'll make it nicer and maybe use Google Docs. I honestly didn't expect this many owners.


----------



## rdr09

Quote:


> Originally Posted by *Chaoz*
> 
> I used MSI AB perfectly fine; clocked it to 1720/1020 +50% without any issues. Using version 4.3.0 atm.


Around 600W for an oc'ed Vega 64 and i7 5820. That's a 140W CPU. Not bad.


----------



## ontariotl

Quote:


> Originally Posted by *skullbringer*
> 
> From my experience when you set a core clock with vega, the driver always reports it as successfully applied, there is no crash and under load gpu-z will report this clock as a steady max clock. However this is not necessarily representative for what the card actually runs at.
> 
> It sounds weird, but vega seems to have some kind of "last known good" clock that it will run at, regardless of what the driver reports. So you can push upwards in small increments like 25 MHz and run benchmarks in between to see if you get any performance increase with your last change. At a certain point performance will no longer increase. The card sees the clock you tried to apply, the power management is intelligent enough to see it can not run it and then throttles down to its highest "last known good" clock.
> 
> If you push past that tipping point performance actually degrades slightly with every increment. If I had to guess it is because when it fails to apply the too high clock, it has to recover back down to its last known good clock, which takes a very small amount of time. The higher the difference between the two clocks, the more performance is lost during trying to apply the clock and throttling back down.
> 
> But take all of this with a humongous grain of salt since I am just a bozo staring at red "Radeon" leds.
> 
> Also I seemed to have more success with Watttool for core clock, core voltage, power and fan and AB for memory clock. Wattman is kinda meh...


Thanks for the interesting information. I'm using some of your tools and doing some new benchmarks for my chart update. Some interesting findings for sure just like you. I will be posting shortly.


----------



## Energylite

Quote:


> Originally Posted by *PontiacGTX*
> 
> then bios was modded or tricked ocing software?


GN said they couldn't modify the BIOS because of the security on the card, so it's just a software trick. And that security works with something on Microsoft's side.


----------



## kundica

So after ramping my fans way up to do some tests, I noticed that my card's performance doesn't increase in Time Spy past 1700 while using the +50% power limit. I tested all the way up to 1762 and my card yields the same scores from 1700 on. Temps maxed in the low 70s and the clock remains constant.

Also, my HBM isn't stable past 985 at all when using +50%. If I leave my power limit at 0 and don't OC the core, I can get up to about 1020, but I also notice a lack of increased performance (maybe a bottleneck?). I don't have a way to measure the voltage at the moment, so I'm kind of stuck there.


----------



## Irev

Has anyone had the issue of the card being under load for no reason?

For example, I've quit playing a game and for the last 15 minutes the card has had all the tacho lights on, and Wattman says it's running at 1630MHz, 1400rpm fan speed, 74 degrees.

Why is the card under load when I'm on the desktop doing nothing? :\

EDIT:

Found a fix: disable and then re-enable "ReLive".

For some reason, if I quit a game the card stays under load even on the desktop; if I disable and re-enable ReLive, the card drops back down to its idle state... hmm, I wonder if this is a bug?


----------



## aylan1196

Hi everyone, I am currently testing overclocks on my Vega 64 LC in gaming (I am not into benching).
So far I can hit 1800 on core and 1050 on HBM. The difference between stock and this overclock in Hellblade at 1440p max settings, which I think is fully stable, is an average FPS going from 64 to 78.
CPU: 1800X @ 4.0 GHz, RAM: 32GB G.Skill 3200MHz.
Sorry for the bad pics.


----------



## Nutty Pumpkin

In case anyone is wondering about the hash-cracking performance:



Going to try out Linux soon.


----------



## Newbie2009

Quote:


> Originally Posted by *Newbie2009*
> 
> Some benchmarks on firestrike Extreme:
> 
> Core : 1712 (*+5% on stock*) + 50 power offset, HBM Stock
> 
> 
> 
> http://www.3dmark.com/fs/13391766
> 
> Core : 1630 (*stock*) + 50 power offset, HBM Stock
> 
> 
> 
> http://www.3dmark.com/fs/13388060


Quote:


> Originally Posted by *Newbie2009*
> 
> Some benchmarks on firestrike Ultra:
> 
> Core : 1712 (*+5% on stock*) + 50 power offset, HBM Stock
> 
> 
> 
> http://www.3dmark.com/fs/13391735
> 
> Core : 1630 (*stock*) + 50 power offset, HBM Stock
> 
> 
> 
> http://www.3dmark.com/fs/13388105


So an update on the above: I find anything over +5% on the core doesn't yield any extra performance, so I started on the memory. I will leave the memory where it is and see if anything more is now yielded by a higher core clock, and then return to overclocking the memory.
Once I have everything done at stock volts, I will have a look at undervolting. Ignore the total score, as my CPU is pretty old; the graphics score is the number to look at.

*Fire strike extreme*

Core : 1712 (*+5% on stock*) + 50 power offset, HBM @ 1100mhz



http://www.3dmark.com/fs/13396744

*Fire Strike Ultra*

Core : 1712 (*+5% on stock*) + 50 power offset, HBM @ 1100mhz



http://www.3dmark.com/fs/13396789

EDIT: No performance gain from more than +5% on the core with HBM @ 1100mhz.

Hard crash with HBM @ 1125mhz, so I'm lowering back to 1095mhz (a 150mhz OC from stock) and will try undervolting now.

So at least I have my starting clocks on air for when my block arrives.
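
As a sanity check on the figures above, the quoted core clock matches Wattman's percentage offset: a +5% offset on the 1630 MHz stock boost lands on roughly 1712 MHz.

```python
# Wattman exposes the core OC as a percentage offset; the resulting
# clock is just the stock boost scaled by that offset.

stock_boost_mhz = 1630
offset_pct = 5

target = round(stock_boost_mhz * (1 + offset_pct / 100))
print(target)  # 1712
```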


----------



## Energylite

Hey, you can add me as an owner of an RX V64.

BTW I can't test it right now; I need to wait for my waterblock from EKWB because of the custom loop.
I'll try to keep that warranty sticker intact.
And I hope I can push this monster a little; if I can't, I'm gonna try to find a way to do some BIOS editing and wait for proper OC drivers.


----------



## rdr09

Quote:


> Originally Posted by *Energylite*
> 
> Hey, you can add me as an owner of an RX V64.
>
> BTW I can't test it right now; I need to wait for my waterblock from EKWB because of the custom loop.
> I'll try to keep that warranty sticker intact.
> And I hope I can push this monster a little; if I can't, I'm gonna try to find a way to do some BIOS editing and wait for proper OC drivers.


No test before installing block? What if . . .


----------



## Energylite

Quote:


> Originally Posted by *rdr09*
> 
> No test before installing block? What if . . .


how can i test with that case ? rofl


----------



## rdr09

Quote:


> Originally Posted by *Energylite*
> 
> how can i test with that case ? rofl


Eww. You need a backup rig or something. Ask a favor from a friend? I was gonna suggest QDCs but not gonna work in your case. lol

Nice rig btw.


----------



## Energylite

Quote:


> Originally Posted by *rdr09*
> 
> Eww. You need a backup rig or something. Ask a favor from a friend? I was gonna suggest QDCs but not gonna work in your case. lol


Ehh, I don't have a backup rig, lmao. I'm not that rich (student matters, you know).

Quote:


> Originally Posted by *rdr09*
> 
> Nice rig btw.


Ty, gonna update the pics when the V64 is inside.


----------



## Newbie2009

Quote:


> Originally Posted by *Energylite*
> 
> Ehh, i dont have a backup rig lmao, Im not rich that much (student matter you know)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ty, Gonna update the pics when V64'll be inside


I managed to squeeze mine into the 3rd PCI-E slot without taking the loop apart. You should really test it.


----------



## Energylite

Quote:


> Originally Posted by *Newbie2009*
> 
> I managed to squeeze mine into the 3rd PCI-E slot without taking the loop apart. You should really test it.


Oh, but did you leave your old GPU in the PCI-E slot? Without the power supply cables (2x 8-pin for my 295X2)?


----------



## Newbie2009

Quote:


> Originally Posted by *Energylite*
> 
> Oh, but did you let your old GPU on the PCI-E ? without power supply cables (2x8p for my 295x2) ?


yeah


----------



## Chaoz

I squeezed mine behind the 2 T-splitters, which serve as a placeholder. It barely fits; there is around 3mm left between the splitters and the GPU.



Quote:


> Originally Posted by *rdr09*
> 
> Around 600W for an oc'ed Vega 64 and i7 5820. That's a 140W CPU. Not bad.


My thought exactly. And the performance is really good.


----------



## Paxi

You can also add me as an owner.

The EK block should come next week as well.


----------



## alucardis666

So no word yet on Vega 128 that will stack up against the 1080 Ti eh?


----------



## Paxi

Anyone encountered issues while installing the AMD Radeon Vega 17.8.1 Beta 6 driver with Vega 64?

I was just about to install these, and while installing the display driver I get a bluescreen complaining about a faulty atikmdag.sys.

Win10 64 bit.


----------



## 113802

Quote:


> Originally Posted by *Paxi*
> 
> Anyone encountered issues while installing the AMD Radeon Vega 17.8.1 Beta 6 driver with Vega 64?
> 
> I was just about to install these, and while installing the display driver I get a bluescreen complaining about a faulty atikmdag.sys.
> 
> Win10 64 bit.


These are the drivers for Vega

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-for-Radeon-RX-Vega-Series-Release-Notes.aspx


----------



## Newbie2009

Quote:


> Originally Posted by *rdr09*
> 
> Around 600W for an oc'ed Vega 64 and i7 5820. That's a 140W CPU. Not bad.


Very good, I'm pulling that with a 3770k


----------



## Ragsters

I have an sf600w and a 6800k. Will my power supply be able to power a Vega 64 with no overclock on any components?


----------



## Blameless

Quote:


> Originally Posted by *bluej511*
> 
> please please please dont use furmark. Its a total joke of a tool now a days.


If it can't run FurMark, or even more demanding applications (I do testing with OCCT GPU 3.1.0 as well, which is about ~10% more current draw than FurMark on most GPUs), for at least short periods without risk, it's either a defective sample or a conceptually flawed design.
Quote:


> Originally Posted by *rancor*
> 
> Furmark is not recommended any more because cards throttle so hard under it it doesn't always give you the worst case power draw.


Testing it with any sort of power limiter enabled rather defeats the purpose.
Quote:


> Originally Posted by *gupsterg*
> 
> As temperature reaches a 'trip point' refresh rate will be lowered, so performance will be impacted.


Temperature sensor is nice, but of limited utility.

I'd really like to see an EDC error counter like Hawaii had, but I'm doubtful such a thing is possible on Vega.
Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> So flashing was literally the main selling point for you?


It may well be the deciding factor for me.

I have a week left to make up my mind about Vega 56. With the ability to use modded firmware, I'd order a pair of them for sure. But as launch approaches with no universal workaround forthcoming, I'm on the fence. I may only get one, or none at all, especially if I can't snag one at original MSRP.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> Are reference cards using Hynix memory? Mine reports Micron memory.


I've heard varying reports of Samsung and Hynix. Samsung seems most likely.

Micron doesn't even make HBM2, so whatever is reporting that is incorrect.
Quote:


> Originally Posted by *Ragsters*
> 
> I have an sf600w and a 6800k. Will my power supply be able to power a Vega 64 with no overclock on any components?


Yes.


----------



## Paxi

Quote:


> Originally Posted by *WannaBeOCer*
> 
> These are the drivers for Vega
> 
> http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-for-Radeon-RX-Vega-Series-Release-Notes.aspx


Thanks for pointing out the link, but those are the drivers I am using. I have already tried 4 or 5 times now, also using DDU for the uninstall. I even disabled FreeSync on my monitor.

But I'm still always getting a blue screen in atikmdag.sys about SYSTEM THREAD EXCEPTION.

Weird that no one else has encountered this issue so far


----------



## dagget3450

I cannot wait for water block results from you owners. I am debating EK blocks for my air-cooled Vega FEs but want to see the gains made by you guys first. Trying to decide between putting funds toward an Nvidia build alongside my Vega/Ryzen, or water cooling... I am half tempted to look at a universal GPU block, since Vega has HBM and doesn't really need full cover...


----------



## 113802

Quote:


> Originally Posted by *Paxi*
> 
> Thanks for pointing out the link, but those are the drivers I am using. I have already tried 4 or 5 times now, also using DDU for the uninstall. I even disabled FreeSync on my monitor.
> 
> But I'm still always getting a blue screen in atikmdag.sys about SYSTEM THREAD EXCEPTION.
> 
> Weird that no one else has encountered this issue so far


Sounds to me like it could be a bad card.

http://support.amd.com/en-us/kb-articles/Pages/737-27116RadeonSeries-ATIKMDAGhasstoppedrespondingerrormessages.aspx


----------



## Paxi

Well I hope not. I mean the card is brand new..

I will try with a clean install of OS tomorrow, maybe there will be a new driver as well.


----------



## dagget3450

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Sounds to me like it could be a bad card.
> 
> http://support.amd.com/en-us/kb-articles/Pages/737-27116RadeonSeries-ATIKMDAGhasstoppedrespondingerrormessages.aspx


Might be an unstable CPU OC, assuming he's overclocked too. I seem to recall a bluescreen issue installing my Vega FE when I first got it... I forget what I did, but I haven't had it since.


----------



## kundica

Quote:


> Originally Posted by *Irev*
> 
> has anyone had the issue with the card being under load for no reason?
> 
> for example Ive quit playing a game and for the last 15 minutes the card has all the tacho lights on and wattman says its running at 1630mhz 1400rpm fanspeed 74 degrees.
> 
> why is the card under load when im on the desktop doing nothing? :\
> 
> EDIT:
> 
> found a fix ...... disable and then re-enable "ReLive"
> 
> for some reason if I quit a game the card stays under load even on desktop if I disable and re-enable relive then the card drops back down to idle state...... hmm wonder if this is a bug?


Are you running instant replay? I noticed the card won't throttle down when running it.


----------



## ontariotl

Back with an update on my chart.

What I'm finding is that the Balanced setting with a +50% power limit is the best setting so far for keeping the clocks stable, although the ceiling wasn't 1536 but 1630 on the core and 1000 on HBM2. This will probably be my setting when it gets watercooled, as I'm not keeping the fans at 4000 RPM all the time (290X all over again).

Even overclocked with the Turbo setting at 1750 core, 1000 HBM2, it did give me higher frame rates, but only marginally better, and that came with a boost in wattage as well. With the new Balanced setting, I'm either matching or beating the custom Turbo setting at a smidge less wattage. I guess every little watt counts, since everyone freaks out or mocks that it's power hungry. PUBG seems to be an anomaly, as the higher I clocked, the lower the framerate I would get. It is really happy with Balanced mode.
I also had to chuck out a Tomb Raider result in Balanced +50% that bested the crossfire 290Xs, with 69.64 avg, 78.79 max at 646 peak watts. I just could not replicate that score again, so I might have screwed up a setting, ended up with a better score, and have no idea what I did. It did put a smile on my face when I saw it beat my crossfire average.

I could clock my HBM2 to 1100, but I backed off after witnessing some artifacts during the RotTR benchmark. It was the only one that showed artifacts.

Either way it will be nice to get a waterblock and retest along with being able to keep a stable core clock instead of bouncing around. For now, it's just time to enjoy playing games again.


----------



## Energylite

fck yeah, a special rig for 2 days lmao


btw thx my R9 295X2, you did well for 3.5yrs


So the plan: play games for at least 4hrs (TW3 probably) and try to OC after


----------



## 113802

I was bored this morning.

http://www.3dmark.com/3dm/21660190?


----------



## PontiacGTX

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I was bored this morning.
> 
> http://www.3dmark.com/3dm/21660190?


can you try same clocks on time spy?


----------



## 113802

Quote:


> Originally Posted by *PontiacGTX*
> 
> can you try same clocks on time spy?


http://www.3dmark.com/3dm/21660674?


----------



## PontiacGTX

Quote:


> Originally Posted by *WannaBeOCer*
> 
> http://www.3dmark.com/3dm/21660674?


Buildzoid's score with 1807MHz:
http://www.3dmark.com/3dm/21393617

8194 vs 8203 with a 50MHz difference


----------



## jugs

Quote:


> Originally Posted by *WannaBeOCer*
> 
> http://www.3dmark.com/3dm/21660674?


Hey @WannaBeOCer can you post a full AIO BIOS dump?


----------



## Ragsters

Does anyone know where I can get one for a reasonable price?


----------



## skullbringer

Quote:


> Originally Posted by *jugs*
> 
> Hey @WannaBeOCer can you post a full AIO BIOS dump?


If you are looking to flash an XTX BIOS onto an XT, it won't make it past the secure boot check.

Btw also did some runs: http://hwbot.org/hardware/videocard/radeon_rx_vega_64/

As the YT people would say, FIRST!!!


----------



## 113802

Quote:


> Originally Posted by *Ragsters*
> 
> Does anyone know where I can get one for a reasonable price?


Yeah, it's called a GTX 1080









https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20601194948%20601203901&IsNodeId=1&Description=gtx%201080&bop=And&Order=PRICE&PageSize=36


----------



## Ragsters

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Yeah, it's called a GTX 1080
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20601194948%20601203901&IsNodeId=1&Description=gtx%201080&bop=And&Order=PRICE&PageSize=36


How about I buy the one you linked and trade you for your Vega 64?


----------



## cooljaguar

It's going to take weeks, maybe even months, for the market to settle. We're not going to see reasonable prices on Vega for a while.


----------



## 113802

Quote:


> Originally Posted by *Ragsters*
> 
> How about I buy the one you linked and trade you for your Vega 64?


I purchased the card because it looks nice. They seriously need to put a warning on the aluminum cards. The shroud is hot to the touch unlike plastic cards where they can be removed a few minutes after turning off the pc.


----------



## gupsterg

Quote:


> Originally Posted by *Blameless*
> 
> I'd really like to see an EDC error counter like Hawaii had, but I'm doubtful if such a thing is possible on Vega.


I enquired with two people who would know regarding Fiji, and it was not possible, so I too doubt we'll ever see it on Vega







.


----------



## Ragsters

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I purchased the card because it looks nice. They seriously need to put a warning on the aluminum cards. The shroud is hot to the touch unlike plastic cards where they can be removed a few minutes after turning off the pc.


Yeah. That shroud would match my Ncase M1 perfectly.


----------



## dagget3450




----------



## Energylite

Waow, ok, so the card doesn't have any damage or anything, but it's freaking bad at overclocking. I can barely reach 1712 MHz on the core (+5 or 6%) and 955 MHz on HBM2. Really shocked









Edit: I'm doing a Heaven bench and it's so confusing. Look at the core clock and memory Afterburner reports versus what Heaven reports.

I don't know what to think


----------



## ontariotl

Quote:


> Originally Posted by *Energylite*
> 
> Waow, ok, so the card doesn't have any damage or anything, but it's freaking bad at overclocking. I can barely reach 1712 MHz on the core (+5 or 6%) and 955 MHz on HBM2. Really shocked
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Edit: I'm doing a Heaven bench and it's so confusing. Look at the core clock and memory Afterburner reports versus what Heaven reports.
> 
> I don't know what to think


What wattage is your power supply?


----------



## Energylite

Quote:


> Originally Posted by *ontariotl*
> 
> What wattage is your power supply?


1k, Corsair RM1000


----------



## 113802

Quote:


> Originally Posted by *Energylite*
> 
> Waow, ok, so the card doesn't have any damage or anything, but it's freaking bad at overclocking. I can barely reach 1712 MHz on the core (+5 or 6%) and 955 MHz on HBM2. Really shocked
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Edit: I'm doing a Heaven bench and it's so confusing. Look at the core clock and memory Afterburner reports versus what Heaven reports.
> 
> I don't know what to think


Looks normal to me, mine reports the same clock speeds in Heaven. Just wait until proper drivers are released. Remember we are still using beta drivers.


----------



## Energylite

Ok, Btw i dont know if it's the case for everyone but, I've a huge coil whine under load.


----------



## dagget3450

Quote:


> Originally Posted by *Energylite*
> 
> Ok, Btw i dont know if it's the case for everyone but, I've a huge coil whine under load.


You're under water now, right?


----------



## 113802

Quote:


> Originally Posted by *Energylite*
> 
> Ok, Btw i dont know if it's the case for everyone but, I've a huge coil whine under load.


Yes, it's the loudest thing in my computer since I run every fan in my computer at 400RPM except the GPU radiator fan which is at 1850 RPM. Every reviewer so far mentioned coil whine. People with the air version can't hear it since the fan is louder than the whine.


----------



## Newbie2009

I've no coil whine under any load for some reason


----------



## dagget3450

Anyone on water keeping the GPU under 60C while benching/overclocking? I thought one of the reviews said to keep it under 60C


----------



## 113802

Quote:


> Originally Posted by *dagget3450*
> 
> Anyone on water keeping the GPU under 60C while benching/overclocking? I thought one of the reviews said to keep it under 60C


When running the radiator fan at 3000 RPM the GPU never goes above 52C. You can see my benches from previous post.


----------



## Ne01 OnnA

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Looks normal to me, mine reports the same clock speeds in Heaven. Just wait until proper drivers are released. Remember we are still using beta drivers.


+1









Early adopters heaven


----------



## Energylite

Quote:


> Originally Posted by *dagget3450*
> 
> You're under water now, right?


No, not yet, I'm gonna receive my wb and fittings next week
Quote:


> Originally Posted by *dagget3450*
> 
> Anyone on water keeping the GPU under 60C while benching/overclocking? I thought one of the reviews said to keep it under 60C


Yeah, with fans at 4900 rpm








Quote:


> Originally Posted by *Ne01 OnnA*
> 
> +1
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Early adopters heaven


+1


----------



## kundica

Quote:


> Originally Posted by *Newbie2009*
> 
> So an update on the above. I find anything over 5% on core doesn't yield any more performance, so I started with the memory. I will leave the memory where it is and see if anything more is now yielded with a higher core clock, and then return to overclocking the memory.
> Once I have everything done at stock volts I will have a look at undervolting. Ignore total as my cpu is pretty old, the graphics score is the number to look for.


I noticed the same thing. Somewhere between 1700-1712 the card stops yielding increased performance. There are a lot of people posting 1800+ benches on their air cards, but I'm not convinced it's actually doing anything. With my card there is a hard wall at just under 5% OC on the core. I'll put together my scores in a post and show how it peaks.


----------



## GroupB

Quote:


> Originally Posted by *WannaBeOCer*
> 
> When running the radiator fan at 3000 RPM the GPU never goes above 52C. You can see my benches from previous post.


This makes me wonder: if the core (HBM) is so sensitive to temperature, should I go full cover and risk adding VRM heat to the loop, or use my old mc80w from my 6970 era?

My system is cooling an i7 6700K (not delidded) and 2 R9 290s OC'd @ 1.3V with full cover blocks, and both cards hover in the 55C core, 60C VRM range doing ETH at 1245/1350 (temps are the same if I game on one and mine on the other; I don't remember crossfire temps since new games don't work in crossfire anymore). I will delid the CPU before redoing my loop with Vega, so that's going to add some heat to the loop, BUT I will be cooling only one card vs 2, so I think it should be around the same temps or maybe less. My loop is an EX 420 and EX 280 with an MCP35X.

Is that 52C on a watercooled Vega, or using an aftermarket cooler on the die? If it's aftermarket, what rad size?


----------



## 113802

Quote:


> Originally Posted by *GroupB*
> 
> This make me wonder if the core ( hbm) is so sensitive to temperature if I should go full cover and risking to add vrm heat to that or my old mc80w from my 6970 era
> 
> My system is cooling a i7 6700k not delid and 2 r9 290 OC @ 1.3V full cover and both card hover in 55C core ,60c vrm range doing ETH at 1245/1350 ( temp are the same if I game on one and mine on the other, I dont remember crossfire temp since new game dont work on crossfire anymore). I will delid the cpu before redoing my loop with vega so that gonna add some heat to the loop BUT I will cool only one card vs 2, I think its should be around the same temps or maybe less. My loop is ex 420 and ex 280 , mcp35x
> 
> 52C its on a water cool vega or using a aftermarket cooler on die? If its aftermarket what rad size?


Using the water cooled version. Main reason I bought Vega was just for the shroud and I didn't want an annoying fan so I went with the XTX.

The 120mm fan isn't loud at 3000 RPM; it's just that my entire computer is silent since I de-lidded my 6700k. My 6700k idles at 22C and caps at 72C with AVX Prime95, so I run all my fans at 400RPM besides the GPU.


----------



## Newbie2009

Quote:


> Originally Posted by *kundica*
> 
> I noticed the same thing. Somewhere between 1700-1712 the card stops yielding increased performance. There are a lot of people posting 1800+ benches on their air cards, but I'm not convinced it's actually doing anything. With my card there is a hard wall at just under 5% OC on the core. I'll put together my scores in a post and show how it peaks.










Yeah my card can supposedly do 1800mhz also, but worse performance lol


----------



## Echoa

Quote:


> Originally Posted by *Newbie2009*
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah my card can supposedly do 1800mhz also, but worse performance lol


Isn't that a driver bug where beyond a certain point the frequency doesn't actually apply, and 1800MHz+ isn't actually happening?


----------



## GroupB

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Using the water cooled version. Main reason I bought Vega was just for the shroud and I didn't want an annoying fan so I went with the XTX.
> 
> The 120MM fan isn't loud at 3000 RPM it's just that my entire computer is silent since I de-lidded my 6700k. My 6700k idles at 22C and caps at 72C with AVX prime95. So I run all my fans at 400RPM beside the GPU.


My concern is not the noise, it's hitting 60C or less on my loop. But if you can do it on a 120 rad, I don't see any reason I can't do it on a 420+280, even with a 6700k on the same loop. After all, I'm already doing it with 2 R9 290 cards, and I don't think a delidded 6700k will add that much, especially if I switch from 2 GPUs to 1.


----------



## Newbie2009

Quote:


> Originally Posted by *Echoa*
> 
> Isn't that a driver bug where beyond a certain point the frequency doesn't actually apply, and 1800MHz+ isn't actually happening?


Probably, alpha drivers


----------



## 113802

Quote:


> Originally Posted by *kundica*
> 
> I noticed the same thing. Somewhere between 1700-1712 the card stops yielding increased performance. There are a lot of people posting 1800+ benches on their air cards, but I'm not convinced it's actually doing anything. With my card there is a hard wall at just under 5% OC on the core. I'll put together my scores in a post and show how it peaks.


I've been testing my card and can confirm increasing the power target does increase performance over stock. Stock I score 24000-24100; with the +50% power target I get 24700. I finally have time to stress test my HBM and can confirm I can run 1105MHz stable when keeping the card below 55C.










http://www.3dmark.com/3dm/21665850?

Asus:
Quote:


> "This is totally incorrect. In this generation, AMD made the overclocking a lot easier, where if you raise the "GPU power limit", it will do its job automatically and overclock the GPU clock automatically. If the power limit (TDP) is raised, there is no way a card can underperform another card of the same GPU provided it has not thermally throttled. By the way, AMD stock power limit should be 200W and 220W for VEGA 64, depending on the profile used.".


Read more: http://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html
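For anyone curious what that power-limit bump actually works out to, here is a quick back-of-the-envelope calc from the scores above. The wattage figures are assumptions for illustration only (typical reference Vega 64 numbers), not measurements from this card:

```python
# Rough perf-per-power-limit math from the scores posted above.
# The 220W / 300W board-power figures are assumed typical values,
# not measured on this card.

stock_score = (24000 + 24100) / 2   # midpoint of the stock runs
boosted_score = 24700               # the +50% power target run

gain_pct = (boosted_score / stock_score - 1) * 100
print(f"+50% power target -> {gain_pct:.1f}% more score")

# Perf per watt under the assumed wattages:
for label, score, watts in [("stock", stock_score, 220),
                            ("+50% PL", boosted_score, 300)]:
    print(f"{label}: {score / watts:.1f} points/W")
```

In other words, the score gain is real but small relative to the extra power budget.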


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I've been testing my card and can confirm increasing the power target does increase performance over stock. Stock I score 24000-24100, +50% power target gains are 24700. I finally have time to stress test my HBM and can confirm I can run 1105Mhz stable when keeping the card below 55C
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.3dmark.com/3dm/21665850?
> 
> Asus:
> Read more: http://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html


I should've been more clear. All my tests I'm referring to are at +50% power limit. It's worth noting however, I just ran some more tests all at +50% with various core OCs and there is little difference between 2% OC and 6% OC in the scores.

+50% 2% OC (1662) - 7579, GS 7400
+50% 6% OC (1727) - 7567, GS 7375
+50% 7% OC (1742) - 7579, GS 7374

My HBM is all over the place. Perhaps it's heat related but I can only OC my HBM over 1000 if I don't increase the power limit on the core. Once I increase power limit, my HBM is seriously gimped. I can't seem to get more than 885 if I put the power limit at +50%. I might try maxing the fans completely and see what happens.
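Plugging the three runs above into a quick script makes the wall obvious (scores are exactly as posted; nothing here is measured beyond them):

```python
# Graphics-score scaling vs reported core clock, using the three +50% PL
# runs posted above. Past ~2% core OC the extra clock buys nothing.

runs = [
    (1662, 7400),  # +2% core OC, graphics score
    (1727, 7375),  # +6%
    (1742, 7374),  # +7%
]

base_clock, base_score = runs[0]
for clock, score in runs[1:]:
    clock_gain = (clock / base_clock - 1) * 100
    score_gain = (score / base_score - 1) * 100
    print(f"{clock} MHz: +{clock_gain:.1f}% clock -> {score_gain:+.1f}% score")
```

The score actually drops slightly at the higher offsets, which fits the theory that the reported clock above the wall isn't really being applied.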

Regarding the post you linked from Tweaktown: the Asus response makes sense, but what the hell is Tweaktown talking about with the Strix card performing worse? All the numbers I've seen show the Strix outperforming the reference 64.


----------



## dagget3450

Well, I know next to no one will be running crossfire Vega, but I am, and it's not really working well for what I had hoped I could do.

CF Vega FE at 8K resolution is broken, not because of the resolution but something else. My Fury X didn't have this issue. So I hope they fix this at some point, but I have a feeling they won't for some time, if ever, given multi-GPU is in such a bad state nowadays. To add insult to injury, the new Vega FE drivers don't even have Radeon Pro (gaming mode) available, which means no CF or WattMan...


----------



## steadly2004

Quote:


> Originally Posted by *dagget3450*
> 
> Well, I know next to no one will be running crossfire Vega, but I am, and it's not really working well for what I had hoped I could do.
> 
> CF Vega FE at 8K resolution is broken, not because of the resolution but something else. My Fury X didn't have this issue. So I hope they fix this at some point, but I have a feeling they won't for some time, if ever, given multi-GPU is in such a bad state nowadays. To add insult to injury, the new Vega FE drivers don't even have Radeon Pro (gaming mode) available, which means no CF or WattMan...


I bought 2.... Impulse buy on the second. I sold my two Titan Xs for 1200 back at the end of 2016 and was planning on replacing them with crossfire Vega. Now I got a water block for the first, pre-ordered from EK. Not sure if I should order a second block or sell the second card. I might end up being able to use it for mining once someone figures out how to mine on it efficiently. I don't mine now. Or if somehow crossfire gets fixed. I don't know. I do want to use this 1600W EVGA power supply.....


----------



## IvantheDugtrio

Some news


----------



## dagget3450

Quote:


> Originally Posted by *steadly2004*
> 
> I bought 2.... Impulse buy on the second. I sold my two Titan Xs for 1200 back at the end of 2016 and was planning on replacing them with crossfire Vega. Now I got a water block for the first, pre-ordered from EK. Not sure if I should order a second block or sell the second card. I might end up being able to use it for mining once someone figures out how to mine on it efficiently. I don't mine now. Or if somehow crossfire gets fixed. I don't know.


Well mine seem to be working @ 4k in games that CF works. I do think though it could be better once drivers get more mature. I am used to waiting with AMD, but i am planning on getting nvidia 1080ti's soon. if i do, ill just sell the rest of my Fury's and 390x's to recoup costs.

doing an AMD build and intel/nvidia build to balance out


----------



## IvantheDugtrio

Did some undervolting on my Vega FE air. Lowest stable voltage I could run Firestrike on was 1150 mV. The benchmark would start at 1600 MHz but would quickly throttle down to 1269 MHz and 1348 MHz.

I plan on getting the EK block and loop soon. A 360mm rad + a 240mm rad should do.


----------



## dagget3450

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Did some undervolting on my Vega FE air. Lowest stable voltage I could run Firestrike on was 1150 mV. The benchmark would start at 1600 MHz but would quickly throttle down to 1269 MHz and 1348 MHz.
> 
> I plan on getting the EK block and loop soon. A 360mm rad + a 240mm rad should do.


I'm running the last 3 power states at 1442/1100mV, 1527/1100mV, 1602/1150mV with PL +50%. I think I'm going to adjust the fan curve from stock to keep it cooler. I haven't checked clock speed yet, but in Firestrike/Timespy I got decent performance improvements. So even if the clock speed is down some, it is gaining over stock settings.
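As a rough sanity check on why undervolting helps so much: dynamic power scales roughly with f * V^2, so shaving 50mV off the top state is worth a meaningful chunk of its power budget. The stock voltage below is an assumed typical reference value, not a measured one:

```python
# Rough dynamic-power estimate for the undervolted top P-state above.
# Dynamic power scales ~ f * V^2. The 1.200V stock figure is an
# assumption (typical reference Vega top-state voltage), not measured.

def rel_dynamic_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

stock = (1600, 1.200)        # assumed stock top state
undervolted = (1602, 1.150)  # the 1602/1150mV state quoted above

ratio = rel_dynamic_power(*undervolted) / rel_dynamic_power(*stock)
print(f"top state dynamic power vs stock: {ratio:.2%}")  # roughly 92%
```

That headroom is what lets the card hold its boost states longer before hitting the power limit.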


----------



## ontariotl

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Did some undervolting on my Vega FE air. Lowest stable voltage I could run Firestrike on was 1150 mV. The benchmark would start at 1600 MHz but would quickly throttle down to 1269 MHz and 1348 MHz.
> 
> I plan on getting the EK block and loop soon. A 360mm rad + a 240mm rad should do.


I'm ready for my EK waterblock too









http://s81.photobucket.com/user/OntarioTL/media/P1040131_zps73d2a424.jpg.html


----------



## Irev

Quote:


> Originally Posted by *kundica*
> 
> Are you running instant replay? I noticed the card won't throttle down when running it.


yes I am..

is there any fix?

So far disabling relive after playing a game and re-enabling tends to work most of the time

otherwise I have to reboot windows

the gpu sits 100% usage even when idle on desktop.... is it a driver issue?


----------



## Kelen

Quote:


> Originally Posted by *dagget3450*
> 
> Well mine seem to be working @ 4k in games that CF works. I do think though it could be better once drivers get more mature. I am used to waiting with AMD, but i am planning on getting nvidia 1080ti's soon. if i do, ill just sell the rest of my Fury's and 390x's to recoup costs.
> 
> doing an AMD build and intel/nvidia build to balance out


I bought two RX Vega 64 cards; I got lucky when they sold about a hundred cards at almost MSRP. You could only order one card per person, however they sold two different brands and allowed 1 card per brand. I don't really need Crossfire but I wanted to play around with it nonetheless.

How do you enable it? I installed both cards right away, uninstalled drivers, and installed the drivers AMD is offering for their Vega cards. Both cards are recognized without glitches; I get two gaming tabs and two Wattman tabs within the Radeon Control Panel. However, Crossfire is missing there.

The second card always shows a single LED on the GPU Tach, while the primary one blinks as it should during load. There was a Reddit user who also bought two RX Vegas and had the same problem: both cards appear in the system, but the Crossfire option isn't available.


----------



## Irev

Hey fellas this is my machine

Ryzen 1700 + VEGA 64
vs my friends machine

i7 6700k + GTX1080 Strix

Can anyone tell me what's going on here..... my combined score seems to be so low compared to his system

https://s30.postimg.org/h72dq854x/ujbyughuy.jpg


----------



## Energylite

Quote:


> Originally Posted by *Irev*
> 
> Hey fellas this is my machine
> 
> Ryzen 1700 + VEGA 64
> vs my friends machine
> 
> i7 6700k + GTX1080 Strix
> 
> Can anyone tell me what's going on here..... my combined score seems to be so low compared to his system
> 
> https://s30.postimg.org/h72dq854x/ujbyughuy.jpg


1. V64 is still using beta drivers, so optimization is not good sometimes
2. OC is crap cauz it's bugged
3. AMD says there is a problem with benchmark software
Look at known issues: http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-for-Radeon-RX-Vega-Series-Release-Notes.aspx
4. I'll let the OC community find number 4, if there is a number 4









So for now, don't trust anything like benchmarks and such; trust your gaming experience!


----------



## ontariotl

Quote:


> Originally Posted by *Kelen*
> 
> I bought two RX Vega 64 cards; I got lucky when they sold about a hundred cards at almost MSRP. You could only order one card per person, however they sold two different brands and allowed 1 card per brand. I don't really need Crossfire but I wanted to play around with it nonetheless.
> 
> How do you enable it? I installed both cards right away, uninstalled drivers, and installed the drivers AMD is offering for their Vega cards. Both cards are recognized without glitches; I get two gaming tabs and two Wattman tabs within the Radeon Control Panel. However, Crossfire is missing there.
> 
> The second card always shows a single LED on the GPU Tach, while the primary one blinks as it should during load. There was a Reddit user who also bought two RX Vegas and had the same problem: both cards appear in the system, but the Crossfire option isn't available.


Crossfire is not enabled in these initial drivers, and AMD refuses to comment on when they will release one that supports it.


----------



## dagget3450

Quote:


> Originally Posted by *ontariotl*
> 
> Crossfire is not enabled in these initial drivers, and AMD refuses to comment on when they will release one that supports it.


That's quite sad really. I mean, why does Vega FE have crossfire in its first launch drivers? I would expect RX Vega to be the one to get CF, since it's the "gaming" GPU...

AMD is really botching this launch hard. I think this shows they really did rush Vega out, which is really sad considering how long they took to release it.


----------



## ontariotl

Quote:


> Originally Posted by *dagget3450*
> 
> That's quite sad really. I mean, why does Vega FE have crossfire in its first launch drivers? I would expect RX Vega to be the one to get CF, since it's the "gaming" GPU...
> 
> AMD is really botching this launch hard. I think this shows they really did rush Vega out, which is really sad considering how long they took to release it.


AMD needs to stop with the HBM option. It's really biting them in the bum. Limited vram size on Fury because of HBM1, and now limited stock and issues with manufacturing along with broken promises of performance on HBM2. This is what's hurting their GPU's. I'll give Nvidia kudos for sticking with GDDR5x.

As for rushing Vega, they were in a catch 22. If they waited longer to release it, the ridicule they will get for releasing it way too late (not like that isn't already implied). Now they release it and they are ridiculed for no crossfire support and a driver that is beta at best.

I only bought Vega because I didn't want to support crossfire anymore. Maybe a few years ago it made sense. Unless you are into benchmarking 3dmark and alike, crossfire is mostly useless. Not because of the tech, but it's just not supported as much and it's a big gamble if that new game you really want to play supports it or not. For me, that was few and far between. So frustrating to see 2 gpu's in my PC and only one is used 98% of the time during gaming.

As I've mentioned before, yes it sucks that crossfire is not supported from the start, but imagine buying a Dual GPU (HD5970) and having an initial driver that doesn't support crossfire in 98% (becoming a pattern here) of the games out there. So you have a useless GPU sitting on that card.

The good news is that they have crossfire for FE cards, so it will just be a matter of time before a driver supports crossfire. That said, doesn't DX12 allow a multi-GPU option (if built into the game) without the need for crossfire support?


----------



## kundica

Quote:


> Originally Posted by *Irev*
> 
> yes I am..
> 
> is there any fix?
> 
> So far disabling relive after playing a game and re-enabling tends to work most of the time
> 
> otherwise I have to reboot windows
> 
> the gpu sits 100% usage even when idle on desktop.... is it a driver issue?


You don't need to disable ReLive; just bring up the ReLive overlay menu via hotkey and disable Instant Replay. You can use the same menu to re-enable it when you're gaming and would like to use it.


----------



## Paxi

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Sounds to me like it could be a bad card.
> 
> http://support.amd.com/en-us/kb-articles/Pages/737-27116RadeonSeries-ATIKMDAGhasstoppedrespondingerrormessages.aspx


Well, I ended up posting the issue on the official forums; worst case, I'll need to return the card...

https://community.amd.com/thread/219437

If you guys have any more suggestions, please let me know


----------



## 113802

Quote:


> Originally Posted by *Paxi*
> 
> Well i ended up posting the issue in the official forums and in the worst case need to return the card..
> 
> https://community.amd.com/thread/219437
> 
> If you guys have any more suggestions, please let me know


Please fill out your sig rig so we can see your parts. I would suggest flashing your bios. I saw in the Vega FE release notes they had issues with Z170.


----------



## Paxi

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Please fill out your sig rig so we can see your parts. I would suggest flashing your bios. I saw in the Vega FE release notes they had issues with Z170.


GA-Z68X-UD4-B3
2600K (no OC)
Corsair RMi750

(I think you need to register on the AMD site to view it)

Event Viewer reports:
"The computer has rebooted from a bugcheck. The bugcheck was: 0x0000007e (0xffffffffc0000005, 0xfffff80182ffb401, 0xffff9f811d6ba8f8, 0xffff9f811d6ba130). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: d1cd2e08-6c21-46bf-91cc-713e2d341246."

The bugcheck looks like the following:
==================================================
Dump File : 082017-4296-01.dmp
Crash Time : 20/08/2017 16:14:05
Bug Check String : SYSTEM_THREAD_EXCEPTION_NOT_HANDLED
Bug Check Code : 0x1000007e
Parameter 1 : ffffffff`c0000005
Parameter 2 : fffff801`82ffb401
Parameter 3 : ffff9f81`1d6ba8f8
Parameter 4 : ffff9f81`1d6ba130
Caused By Driver : atikmdag.sys
Caused By Address : atikmdag.sys+f43d4c
File Description :
Product Name :
Company :
File Version :
Processor : x64
Crash Address : atikmdag.sys+16b401
Stack Address 1 :
Stack Address 2 :
Stack Address 3 :
Computer Name :
Full Path : C:\Windows\Minidump\082017-4296-01.dmp
Processors Count : 8
Major Version : 15
Minor Version : 15063
Dump File Size : 788.980
Dump File Time : 20/08/2017 16:14:44
==================================================
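For what it's worth, on a 0x7E bugcheck the first parameter is the exception code that went unhandled. A quick sketch of decoding it (my own throwaway helper, not an official tool; the layout follows Microsoft's documented bug check 0x7E parameters):

```python
# Hypothetical helper to decode the first parameter of a
# SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (0x7E / 0x1000007E) bugcheck.
# Per Microsoft's bug check docs, param 1 is the NTSTATUS exception code
# (sign-extended to 64 bits in the dump, hence the mask).
NTSTATUS_NAMES = {
    0xC0000005: "STATUS_ACCESS_VIOLATION",
    0xC000001D: "STATUS_ILLEGAL_INSTRUCTION",
    0xC0000094: "STATUS_INTEGER_DIVIDE_BY_ZERO",
}

def decode_7e_param1(param1: int) -> str:
    code = param1 & 0xFFFFFFFF  # drop the sign extension
    return NTSTATUS_NAMES.get(code, hex(code))

print(decode_7e_param1(0xFFFFFFFFC0000005))  # -> STATUS_ACCESS_VIOLATION
```

So atikmdag.sys took an access violation, which fits a plain driver crash rather than a hardware fault.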


----------



## ontariotl

Quote:


> Originally Posted by *Paxi*
> 
> GA-Z68X-UD4-B3
> 2600K (no OC)
> Corsair RMi750
> 
> (I think you need to register on the amd site, to view it)
> 
> The Event Viewer tells:
> "The computer has rebooted from a bugcheck. The bugcheck was: 0x0000007e (0xffffffffc0000005, 0xfffff80182ffb401, 0xffff9f811d6ba8f8, 0xffff9f811d6ba130). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: d1cd2e08-6c21-46bf-91cc-713e2d341246."
> 
> The bugcheck looks like following:
> ==================================================
> Dump File : 082017-4296-01.dmp
> Crash Time : 20/08/2017 16:14:05
> Bug Check String : SYSTEM_THREAD_EXCEPTION_NOT_HANDLED
> Bug Check Code : 0x1000007e
> Parameter 1 : ffffffff`c0000005
> Parameter 2 : fffff801`82ffb401
> Parameter 3 : ffff9f81`1d6ba8f8
> Parameter 4 : ffff9f81`1d6ba130
> Caused By Driver : atikmdag.sys
> Caused By Address : atikmdag.sys+f43d4c
> File Description :
> Product Name :
> Company :
> File Version :
> Processor : x64
> Crash Address : atikmdag.sys+16b401
> Stack Address 1 :
> Stack Address 2 :
> Stack Address 3 :
> Computer Name :
> Full Path : C:\Windows\Minidump\082017-4296-01.dmp
> Processors Count : 8
> Major Version : 15
> Minor Version : 15063
> Dump File Size : 788.980
> Dump File Time : 20/08/2017 16:14:44
> ==================================================


Could it be that Vega, or at least the driver, does not like PCI-E 2.0? I mean, the driver is buggy enough as is.

If possible, I would find a friend who would let you install the card in their system and see if the issue replicates outside yours. If not, another thing I would try is wiping Windows 10 and reinstalling it, or better yet, trying a different OS if you happen to have Windows 8 or 7.


----------



## Paxi

Actually, an issue related to PCI-E 2.0 sounds reasonable. I already asked a friend with a Rampage V Extreme to try it out.
I already did a clean, fresh install of Win10. I also still have Win7 lying around, but I don't think there would be a difference.

Thanks for your suggestion.


----------



## kundica

Quote:


> Originally Posted by *skullbringer*
> 
> Yes, apparently so. Without the power play table reg mod, I had no issues at +50% power limit, but with the mod at 400W total and +100% power limit in Wattman the OCP got triggered reproducably. Examples are the start of graphics test 1 of 3DMark Timespy or scene 26 of Unigine Heaven 4.0.
> 
> At first glance it looks like the VRM actually is not that overkill and that the lower power limit on the "gaming" products is there for a good reason.
> 
> I have not yet tested any further on the HX850i, as I was desperately trying to get any performance out of core clock changes. Though without success, 1980MHz performs as good as 1630MHz, tried Wattman, WattTool, Afterburner, GPU-Z, HWInfo, all report the same core clock, but none have any effect on performance.


What happens exactly when you trigger over-current protection? Does the whole PSU shut off, or just the power to the card? I've been trying to push my card to its limits, but when OC'ing the memory past a certain point while running at +50%, my card exhibits what seems like a power cycle. The whole computer stays running, but 3DMark freezes for about a second, then both of my monitors go black, the card's fan stops for a split second, and the app crashes back to the desktop with the GPU drivers reset. It always happens at the same exact point at the beginning of Time Spy.

I'm currently running a Seasonic 1000W Prime Platinum PSU. I've stopped using a daisy-chained cable and now have 2 separate cables running from my PSU to my card. I also tried switching the cables around on the

When OC'ing memory on other cards I typically get artifacts or other anomalies before it crashes.


----------



## rv8000

Quote:


> Originally Posted by *ontariotl*
> 
> AMD needs to stop with the HBM option. It's really biting them in the bum. Limited vram size on Fury because of HBM1, and now limited stock and issues with manufacturing along with broken promises of performance on HBM2. This is what's hurting their GPU's. I'll give Nvidia kudos for sticking with GDDR5x.
> 
> As for rushing Vega, they were in a catch 22. If they waited longer to release it, the ridicule they will get for releasing it way too late (not like that isn't already implied). Now they release it and they are ridiculed for no crossfire support and a driver that is beta at best.
> 
> I only bought Vega because I didn't want to support crossfire anymore. Maybe a few years ago it made sense. Unless you are into benchmarking 3dmark and alike, crossfire is mostly useless. Not because of the tech, but it's just not supported as much and it's a big gamble if that new game you really want to play supports it or not. For me, that was few and far between. So frustrating to see 2 gpu's in my PC and only one is used 98% of the time during gaming.
> 
> As I've mentioned before, yes it sucks that crossfire is not supported from the start, but imagine buying a Dual GPU (HD5970) and having an initial driver that doesn't support crossfire in 98% (becoming a pattern here) of the games out there. So you have a useless GPU sitting on that card.
> 
> The good news is that they have crossfire for FE cards, it will be just a matter of time before one driver that supports crossfire. However, doesn't DX12 allow multiple GPU option (if built into the game) without the need for crossfire support?


So you wanted a card with less performance, lower clocks, a higher TDP, and one that is absolutely impossible to cool on a reference cooler?

HBM2 pulls 15-19W, even while overclocked. A more conventional memory setup would eat tons of die space and use 3x the power. Vega would have a much higher stock TDP with GDDR5/X on a 512/384-bit bus, or be severely underclocked in comparison; maybe 1000MHz with a 1200MHz boost.

We should all stop pretending to know the benefits and drawbacks of using different memory systems for Vega. If AMD had decided to go your route and released a card last year, the situation wouldn't be any different. The moral of this story is that GCN needs to go, or AMD needs to follow suit with Nvidia and strip unnecessary components for gaming consumers; again, unlikely to happen with huge budget restrictions for now and the foreseeable future.


----------



## skullbringer

Quote:


> Originally Posted by *kundica*
> 
> What happens exactly when you trigger over-current protection? Does the whole PSU shut off, or just the power to the card? I've been trying to push my card to its limits, but when OC'ing the memory past a certain point while running at +50%, my card exhibits what seems like a power cycle. The whole computer stays running, but 3DMark freezes for about a second, then both of my monitors go black, the card's fan stops for a split second, and the app crashes back to the desktop with the GPU drivers reset. It always happens at the same exact point at the beginning of Time Spy.
> 
> I'm currently running a Seasonic 1000W Prime Platinum PSU. I've stopped using a daisy-chained cable and now have 2 separate cables running from my PSU to my card. I also tried switching the cables around on the
> 
> When OC'ing memory on other cards I typically get artifacts or other anomalies before it crashes.


When the issue occurred on the HX850i, the whole system shut off. Using dedicated PCIe cables or the RM1000 alleviated the issue for me.

What you're describing sounds like an actual driver crash due to too high an OC. With Time Spy I saw that behavior usually in GT2, either at the beginning near the U-boat display case or at the end before the big Galax logo, as power draw and load seem to be highest there.

Mine started crashing around 1815 MHz core; the 3DMark workload gets interrupted, and the result screen shows black square artifacts.


----------



## Neutronman

Still waiting for my damn EK waterblock. Sapphire Vega 64 sitting on my bench..... Perhaps I can sell my Prey and Wolfenstein coupons to offset the price gouging bastards where I bought my Vega 64.... Pricing is getting stupid on these cards now. Newegg is listing the plastic air Vega64 for $699 now, not that there is any stock...
Looking forward to seeing how my Vega 64 performs on my Ryzen 1600X build...


----------



## kundica

Quote:


> Originally Posted by *skullbringer*
> 
> When the issue occurred on the HX850i, the whole system shut off. Using dedicated PCIe cables or the RM1000 alleviated the issue for me.
> 
> What your describing sounds like an actual driver crash due to too high oc. With timespy I saw that behavior usually in gt2 either in the beginning near the u boat display case or at the end before the big Galax logo, as power draw and load seems to be highest there.
> 
> Mine started crashing around 1815 MHz core, then the 3d mark workload gets interrupted, and the 3d mark result screen shows black square artifacts.


Thanks for the info. I figured the whole system would shutdown.

It happens when I don't OC the core at all, just up the power limit to +50% and OC the HBM. My initial thought was that the HBM was getting too hot. I've tried maxing the fans, but I can only get to 1020 fully maxed, and that's if I'm lucky, before the crash happens. I was thinking about putting this card under water, but man would I be pissed if I spent all the cash to go full loop and be stuck running the memory at like 985, which seems to be my most consistent stable OC.


----------



## Soggysilicon

Quote:


> Originally Posted by *Neutronman*
> 
> Still waiting for my damn EK waterblock. Sapphire Vega 64 sitting on my bench..... Perhaps I can sell my Prey and Wolfenstein coupons to offset the price gouging bastards where I bought my Vega 64.... Pricing is getting stupid on these cards now. Newegg is listing the plastic air Vega64 for $699 now, not that there is any stock...
> Looking forward to seeing how my Vega 64 performs on my Ryzen 1600X build...


Same story; my EK-FC-VEGA should show up tomorrow according to the UPS tracking. The stock air cooler gets an A for effort... but it just isn't up to the task of keeping this thing in check, and it's way too loud for comfy gaming.


----------



## ontariotl

Quote:


> Originally Posted by *rv8000*
> 
> So you wanted a card with less performance, lower clocks, higher tdp, and to be absolutely impossible to cool on a reference cooler?
> 
> HBM2 pulls 15-19w, even while overclocked. A more conventional memory setup would eat tons of die space and use 3x the power. Vega would have a much higher stock TDP with GDDR5/X on a 512/384-bit bus, or be severely under clocked in comparison; maybe 1000mhz with 1200mhz boost.
> 
> We should all stop pretending to know the benefits and drawbacks of using different memory systems for Vega. If AMD had decided to go your route and released a card last year, the situation wouldn't be any different. The moral of this story is that GCN needs to go, or AMD needs to follow suit with Nvidia and strip unnecessary components for gaming consumers; again, unlikely to happen with huge budget restrictions for now and the foreseeable future.


I'm not debating, nor do I pretend to know everything about HBM/HBM2 tech. I think it's great that AMD is going out and doing something different, for sure. It's just that AMD has been bitten twice now, from the looks of it, for using HBM.

I only suggest not using HBM because it seems cost-prohibitive, and this time it appears to have delayed releases because of issues with the supplier AMD went with.

As for GCN, it may have to go, but it still competes with the 1080, which shows it has some life left in it. I agree, hopefully Navi will be a different architecture, but in the meantime we are stuck with GCN.

Oh, I do know for sure it would be a different story if it had been released last year or even 6 months ago. Let's see, what have I read so far in reviews and comments? "We've had this performance in the 1080 14 months ago and with less power usage, blah blah blah." If it had been out sooner, the only argument would have been about the power draw instead, as usual, and that was going to be the case either way, HBM2 or GDDR5X. We should be used to that argument by now. AMD could have repeated history with an AIO-only flagship again, so fear of low clocks because of reference cooling is a moot point.

The same thing was said when the 290X came out. It could beat the Titan, but of course the argument was about the power draw and the heat. At least it arrived at a decent time; nothing about it being late to the game. Aftermarket cooling got rid of the heat argument pretty quickly, and the same would have happened with Vega if it had gone with GDDR.

Then the Fury X came along, and while it couldn't compete with the 980 Ti at the time of release (thankfully drivers have improved it), they couldn't complain about power; instead it was mocked by the community for only having 4GB of VRAM because of the limitations of HBM1 (the first bite from HBM).

It's funny how power draw has become a big thing in the last few generations of graphics cards. It used to be just about the raw power a card could muster, from the day I started with GPUs in the 3dfx days.


----------



## Neutronman

My plan for my Vega 64 is to aim for
Quote:


> Originally Posted by *Soggysilicon*
> 
> Same story, my EK-FC-VEGA should show up tomorrow according the the UPS shipping. The stock AC gets an A for effort... but it just isn't up to the task of keeping this thing in check, and its way too loud for comfy' gaming.


Let me know how your HSF -> EK waterblock fitting goes. What TIM are you using? I will have to use a non-conductive TIM to prevent possible contact between the core and the HBM modules.


----------



## ontariotl

Quote:


> Originally Posted by *Neutronman*
> 
> Still waiting for my damn EK waterblock. Sapphire Vega 64 sitting on my bench..... Perhaps I can sell my Prey and Wolfenstein coupons to offset the price gouging bastards where I bought my Vega 64.... Pricing is getting stupid on these cards now. Newegg is listing the plastic air Vega64 for $699 now, not that there is any stock...
> Looking forward to seeing how my Vega 64 performs on my Ryzen 1600X build...


You should throw it in your system to play around with at least, even just to make sure that it works. I would hate for you to put in the effort of installing the block and not have it working.

Besides, the air cooling isn't that bad. It's not like a GTX 480 or 290X jet fan. Mind you, if you set the fan to 4000 RPM like I did, it does become a Dyson. In my gaming and benchmarking, I'm happy to say that speed is never reached, only when I was forcing it to.


----------



## Neutronman

I'm not really bothered by the higher TDP on Vega compared to Pascal; the monthly cost is minimal.
I swapped my GTX 1080 for a Vega 64 (and invested in a nice new 2560x1440 144Hz Freesync IPS monitor)...
Worst case is an 85W difference under load; as I game around 10 hours a week these days, the additional cost is around $1 a month...
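That estimate holds up to a quick back-of-the-envelope check (my own sketch, using the 85W delta and 10 h/week from above plus an assumed $0.25/kWh electricity rate):

```python
# Rough monthly electricity cost of a GPU power-draw delta.
# delta_watts and hours_per_week come from the post above; the
# $/kWh rate is an assumption and varies a lot by region.
def monthly_cost(delta_watts: float, hours_per_week: float, rate_per_kwh: float) -> float:
    # average weeks per month = 52 / 12
    kwh_per_month = delta_watts / 1000 * hours_per_week * 52 / 12
    return kwh_per_month * rate_per_kwh

print(round(monthly_cost(85, 10, 0.25), 2))  # -> 0.92
```

At a cheaper rate like $0.12/kWh it's closer to $0.44, so "around $1" is the pessimistic end.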


----------



## Neutronman

Quote:


> Originally Posted by *ontariotl*
> 
> You should throw it in your system to play around with it at least, even just to make sure that it works. I would hate for you to put in the effort of the install of the block and not have it working.
> 
> Besides, the air cooling isn't that bad. It's not like a GTX480 or 290x Jet fan. Mind you if you set the fan to 4000 RPM like I did, it does become a Dyson. In my gaming and bench marking, I'm happy to say that speed is never reached. Only when I was forcing it to.


I had it in my system to play with for a couple of hours. It works fine, and I lost the stutters I had in BF4 with my old GTX 1080, so I'm happy. A little coil whine at high FPS, but nothing I can't live with. Right now I've taken it to pieces, as I was scheduled to receive my waterblock yesterday, but it has been delayed until Wednesday!!!

I'm far from a fanboy; over the years I have spent a fortune on Intel CPUs and Nvidia GPUs, but I just wanted a change, so now I have a Ryzen/Vega build...


----------



## ontariotl

Quote:


> Originally Posted by *Neutronman*
> 
> I had it in my system to play with for a couple of hours, works fine, lost the stutters I had in BF4 with my old GTX 1080, so happy. Little coil whine at high fps, but nothing I can't live with. Right now I took it to pieces as I was scheduled to receive my waterblock yesterday, but it has been delayed until Wednesday!!!
> 
> I'm far from a fanboy, over the years I have spent a fortune on Intel CPU's and Nvidia GPU's but I just wanted a change, so now I have Ryzen/Vega build....


That's good to hear. Freesync is a great thing for AMD cards when it's in use. I frame-cap just to stay in range, as frame rates outside the Freesync range don't look as appealing. Of course, if you had a G-Sync monitor, the same could be said about your old 1080.

At least you are lucky to be getting your block soon. I have to wait until my Canadian water cooling store gets them in stock. I wanted to order directly from EK, but shipping with DHL can be costly by the time it gets to your door here. And even if I did decide to take the plunge with EK, they are sold out now.


----------



## Soggysilicon

Quote:


> Originally Posted by *Neutronman*
> 
> My plan for my Vega 64 is to aim for
> Let me know how your HSF->EK waterblock fitting goes. What TIM are you using? I will have to use non-conductive TIM in order to prevent possible contact between core and HBM modules.


Funny you mention fitting... I was tempted to make a bracket and try to fit one of my old EK-Supreme HF universals, but considering how temperamental the card is (I was troubleshooting it right out of the box for no DP response on any channel), and that I have absolutely no faith that there are *ANY* replacements available if I screw up, I bit the bullet and went with the FC block. (Played hell just trying to get this one from an e-tailer without them canceling or looting my basket - Vega 64 AC.)

As far as TIM goes, I have been using Arctic Silver 5 for ages, and will likely use it again. I use a razor blade to spread a very thin coating over the whole die, then a very thin X over the 2 HBM stacks and the core. I'll use a two-part cleaning prep solution before putting it in place, and may do a test spread just to be sure I'm not squeezing the material out onto the resistors/caps.

Once it's on and the loop is up, I'll get some pics and do a mini review.

https://www.amazon.com/Arctic-Silver-Thermal-Compound-ArctiClean/dp/B002DILLMS/ref=sr_1_5?ie=UTF8&qid=1503254519&sr=8-5&keywords=arctic+silver+5


----------



## PontiacGTX

Quote:


> Originally Posted by *ontariotl*
> 
> The same thing was said when the 290x came out. It could beat the Titan but of course the argument was about the power draw and the heat. At least it arrived at a decent time. Nothing about it being late to the game. After market cooling got rid of that argument of heat pretty quick and the same would have happened with Vega if it went with GDDR.
> 
> Fury X comes along and while it couldn't compete with the 980Ti at the time of release (thankfully drivers have improved it). They couldn't complain about power, but instead it was mocked by the community for only having 4Gb of Vram because of the limitations of HBM1 (the first bite with HBM)
> 
> It's funny how power draw has become a big thing since a few generations back of graphic cards. It used to be just the raw power it can muster from the day I started with GPU's in the 3Dfx days.


Well, the R9 290X didn't have a significant difference in power consumption compared to the GTX Titan, just 30-50W or so, whereas the GTX 1080's difference with Vega is almost 100-120W.

As for the R9 Fury (X): well, if the main competition for the 980 was the R9 Nano, which also had 4GB, I don't see why people didn't also mock the 980 (4GB), 970 (3.5GB), and 960 (2GB) for VRAM.


----------



## Echoa

Quote:


> Originally Posted by *PontiacGTX*
> 
> As for the R9 Fury (X): well, if the main competition for the 980 was the R9 Nano, which also had 4GB, I don't see why people didn't also mock the 980 (4GB), 970 (3.5GB), and 960 (2GB) for VRAM.


Because Nvidia, you only mock AMD for RAM. XP

I've mocked Nvidia about VRAM since cards like the 8800 GTS with its weird 320MB of RAM and such.


----------



## ontariotl

Quote:


> Originally Posted by *PontiacGTX*
> 
> Well, the R9 290X didn't have a significant difference in power consumption compared to the GTX Titan, just 30-50W or so, whereas the GTX 1080's difference with Vega is almost 100-120W.
> 
> As for the R9 Fury (X): well, if the main competition for the 980 was the R9 Nano, which also had 4GB, I don't see why people didn't also mock the 980 (4GB), 970 (3.5GB), and 960 (2GB) for VRAM.


My point exactly. There is always one inherent flaw sought out and brought up with every AMD GPU release.

Even at a 30-50W difference, it was still talked about and shunned. Nvidia could poop in a box, ship it out, and it would be rejoiced over. Sure, the 970 VRAM fiasco was talked about, but people were still eating them up; no one said not to buy one. AMD was mocked because they went with a new, limited tech at the time.

I just miss the days when a cool new video card would come out and it was just about the gaming experience, not who is better or this fanboi crap going on.


----------



## Energylite

Quote:


> Originally Posted by *kundica*
> 
> I'm currently running a Seasonic 1000w Prime Platinum PSU. I've stopped using a daisy chained cable and now have a 2 separate cables running from my PSU to my card.


Oh, and did you notice any improvements on the card?


----------



## kundica

Quote:


> Originally Posted by *Energylite*
> 
> Oh, and did you notice any improvements on the card?


Unfortunately, no. My HBM OCs poorly no matter what I do. It might be my card, but people being able to hit 1100 on the air card vs my 985 seems like a drastic difference. I initially intended to buy the AIO card but couldn't find it in stock. Definitely regretting not having it.


----------



## Energylite

Quote:


> Originally Posted by *kundica*
> 
> Unfortunately, no. My HBM OCs poorly no matter what I do. Might be my card, but people being able to hit 1100 on the Air card vs my 985 seems like a drastic difference. I initially intended to buy the AIO card but couldn't find it in stock. Definitely regretting not having it.


Oh, it's the same with my HBM; I can barely hit 1050MHz before it begins to show some artifacts. The best score I've gotten on Heaven bench was with 1697 on core and 990MHz on memory. Only 990MHz, argh; it's so slow compared to 1100MHz.
I hope the waterblock, proper OC drivers, or a BIOS mod can push this HBM2 harder.


----------



## rv8000

Quote:


> Originally Posted by *kundica*
> 
> Unfortunately, no. My HBM OCs poorly no matter what I do. Might be my card, but people being able to hit 1100 on the Air card vs my 985 seems like a drastic difference. I initially intended to buy the AIO card but couldn't find it in stock. Definitely regretting not having it.


What are your temps like at stock? According to Buildzoid, HBM is very temperature sensitive.


----------



## PontiacGTX

Quote:


> Originally Posted by *Echoa*
> 
> Because Nvidia, you only mock AMD for RAM. XP
> 
> I've mocked Nvidia about Vram since cards like the 8800gts with weird 320mb of RAM and such


Well, before, AMD had cards with more VRAM; now, due to having a smaller footprint and lower power consumption, they have less VRAM than their competition.

Quote:


> Originally Posted by *ontariotl*
> 
> My point exaclty. There is always one inherent flaw sought after and brought up with every AMD gpu release.
> 
> Even if it was 30-50w difference, it was still talked about and shunned. Nvidia could poop in a box and ship it out and it would be rejoiced. Sure it was talked about with the 970 vram fiasco, but people were still eating them up. No one said not to buy one. AMD was mocked because they went with a new limited tech at the time.
> 
> I just miss the days of a cool new video card coming out and it just be about the gaming experience. Not who is better or this fanboi crap going on.


Well, the criticism of the 290X was ridiculous; despite the high voltage and bad temps on reference cooling, it was still a much better launch than Vega.

As for the 970's VRAM, people even said beforehand that 3GB was fine because the 780 Ti had 3GB. I think people usually just talk positively about Nvidia even when the AMD equivalent is just as good: 980 vs R9 Fury, 1070 vs RX Vega 56, 290X vs GTX 780.


----------



## Neutronman

Quote:


> Originally Posted by *Soggysilicon*
> 
> Funny you mention fitting... I was tempted to make a bracket and try to fit one of my old EK-Supreme HF universals, but considering how temperamental the card is (I was troubleshooting it right out of the box for no DP response on any channel), and that I have absolutely no faith that there are *ANY* replacements available if I screw up, I bit the bullet and went with the FC block. (Played hell just trying to get this one from an e-tailer without them canceling or looting my basket - Vega 64 AC.)
> 
> As far as TIM goes, I have been using Arctic Silver 5 for ages, and will likely use it again. I use a razor blade to spread a very thin coating over the whole die, then a very thin X over the 2 HBM stacks and the core. I'll use a two-part cleaning prep solution before putting it in place, and may do a test spread just to be sure I'm not squeezing the material out onto the resistors/caps.
> 
> Once it's on and the loop is up, I'll get some pics and do a mini review.
> 
> https://www.amazon.com/Arctic-Silver-Thermal-Compound-ArctiClean/dp/B002DILLMS/ref=sr_1_5?ie=UTF8&qid=1503254519&sr=8-5&keywords=arctic+silver+5


I have some concerns that using a TIM that is even slightly conductive might create shorting issues between the 2 HBM modules and the GPU. I plan on using Kryonaut myself.

EDIT: Actually, ignore this comment; after all, the cooling block that touches the GPU and HBM stacks is a lump of metal!!! Perhaps I'll try Conductonaut.


----------



## kundica

Quote:


> Originally Posted by *rv8000*
> 
> What are your temps like at stock? According to buildzoid HBM is very temperature sensitive


Mid 70s in balanced mode with no fan changes. Under prolonged use it'll reach high 70s/low 80s. If I run the fans high, then low 70s/high 60s.

I've definitely suspected it's a heat issue, but others should have similar temps and can run theirs so much higher.


----------



## rv8000

Quote:


> Originally Posted by *kundica*
> 
> Mid 70s in balanced mode with no fan changes. Under prolonged use it'll reach high 70s low 80s. If I run the fans high then low 70s high 60s.
> 
> I've definitely suspected it's a heat issue, but others should have similar temps and can run theirs so much higher.


Try maxing the fans, or 70-80%, and see how high you can push the HBM over a few loops of Fire Strike. If you can't get much higher than 1000, it could be a dud for overclocking, or it may be that cards are shipping with different stock voltages for the HBM2.


----------



## kundica

Quote:


> Originally Posted by *rv8000*
> 
> Try maxing the fans or 70-80% and see how high you can push the HBM on a few loops of fire strike. If you can't get much higher than 1000, could be a dud for overclocking or it may be possible cards are shipping with different stock voltages for the HBM2.


I did that and ran the fan at 4000. FS will go up to 1000 for multiple runs, but Time Spy can only do 985.


----------



## rv8000

Quote:


> Originally Posted by *kundica*
> 
> I did that, ran the fan at 4000. FS will go up to 1000 for multiple runs but Timespy can only do 985.


Steve's V56 sample peaked at ~980 on the HBM2 due to the lower default voltage; I think he measured about 1.3-1.31V, whereas most V64 and FE cards run at 1.35V. It'd be worth checking if you have a DMM and access to the readout point under load.


----------



## Echoa

Quote:


> Originally Posted by *PontiacGTX*
> 
> wlel before AMD had cards with more vram and now due to having a smaller footprint and power consumption they


I'm talking more about awkward amounts of VRAM than about having more or less, lol


----------



## punchmonster

I have a question: is any aftermarket GPU AIO or air cooler known to fit Vega cards? I'd like to slap something on there, but I have no idea if anything fits or if I need to make it myself.


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> I have a question: any GPU aftermarket AIO or aircooler known to fit with Vega cards? I'd like to slap something on there but have no idea if anything fits or if I need to make it myself


I think some Raijintek Morpheus II http://www.pcgameshardware.de/Vega-Codename-265481/News/Radeon-Vega-Frontier-Edition-Morpheus-II-Umbau-1233558/


----------



## Neutronman

Quote:


> Originally Posted by *punchmonster*
> 
> I have a question: any GPU aftermarket AIO or aircooler known to fit with Vega cards? I'd like to slap something on there but have no idea if anything fits or if I need to make it myself


Is that what you want?

https://www.alphacool.com/detail/index/sArticle/22291


----------



## Nutty Pumpkin

So I’ve been playing DOOM at 4K with Freesync and my god it is amazing.


----------



## PontiacGTX

Quote:


> Originally Posted by *Nutty Pumpkin*
> 
> So I've been playing DOOM at 4K with Freesync and my god it is amazing.


how many FPS do you get with TSSAA 8TX (Vulkan) at 1080p, 1440p and 4K? And what Vulkan version are you using?


----------



## Formula383

Has anyone seen info on how to increase the power limit on Vega 64?


----------



## punchmonster

Quote:


> Originally Posted by *PontiacGTX*
> 
> I think some Raijintek Morpheus II http://www.pcgameshardware.de/Vega-Codename-265481/News/Radeon-Vega-Frontier-Edition-Morpheus-II-Umbau-1233558/


hmm interesting

Quote:


> Originally Posted by *Neutronman*
> 
> Is that you want?
> 
> https://www.alphacool.com/detail/index/sArticle/22291


Yes but it's out of stock so.


----------



## Chaoz

Is it normal that my GPU usage isn't 99-100% while in-game? The game itself runs smooth af, tho. FPS is stable as well.


----------



## PontiacGTX

Quote:


> Originally Posted by *Chaoz*
> 
> Is it normal that my GPU usage isn't 99-100% while in-game? The game itself runs smooth af, tho. FPS is stable as well.


what graphics quality and resolution? why not disable vsync?


----------



## punchmonster

After a bit of testing, simple thermal throttling seems to be the biggest issue with HBM2. With stock fan settings I was only able to get 1030Mhz on the HBM2 before it started throttling down to 800Mhz under short spans of load. With fan set to 100% and some extra airflow added with external fan I can hit 1080Mhz+ stable on the memory OC. Haven't pushed it further than 1080Mhz so far.

This is why I was wondering about replacement coolers. Although having non-buggy drivers would also be a fine alternative, as then undervolting core consistently would probably make it easier for the heatsink to dissipate the HBM2 heat.

*UPDATE:* 1105Mhz stable up from 1030Mhz on the memory with extra cooling so far. This will be the biggest reason to buy custom AIB I suspect. And yes I checked if it's actually going up. I had a bench looping and saw consistent gains at each step. That's a whopping 75Mhz just from improving HBM2 cooling and I think there's some more headroom.


----------



## Chaoz

Quote:


> Originally Posted by *PontiacGTX*
> 
> what graphics quality and resolution? why not disable vsync?


2560x1080, Ultra quality. I did disable VSync, but for some reason it won't turn off.

With my previous GTX1070 it was constantly maxed out at 99-100% at the same settings.


----------



## PontiacGTX

Quote:


> Originally Posted by *Chaoz*
> 
> 2560x1080, Ultra quality. I did disable VSync, but for some reason it won't turn off.
> 
> With my previous GTX1070 it was constantly maxed out at 99-100% at the same settings.


try changing the GPU priority in windows scheduler and also disable vsync, also try to unpark cores(setting power setting to high performance) and see what setting can be changed in cfg file for battlefield

also try using high performance/turbo GPU mode and try to do a clean install with DDU


----------



## Chaoz

Quote:


> Originally Posted by *PontiacGTX*
> 
> try changing the GPU priority in windows scheduler and also disable vsync, also try to unpark cores(setting power setting to high performance) and see what setting can be changed in cfg file for battlefield
> 
> also try using high performance/turbo GPU mode and try to do a clean install with DDU


Already did a clean install with DDU. The weird thing is that in benchmarks it does go up to 100% utilisation, even in balanced mode.


----------



## Energylite

Oh boyz. I received that this morning


Now I just need to wait for my fittings rofl... I hope I can install the WB this weekend.


----------



## SlushPuppy007

Any custom water-cooled test results so far?


----------



## rv8000

Wahoo, amazon came through on the XFX pre-order at MSRP, card should be here Wednesday!


----------



## ontariotl

Quote:


> Originally Posted by *Energylite*
> 
> Oh boyz. I received that this morning
> 
> 
> Now I just need to wait for my fittings rofl... I hope I can install the WB this weekend.


Congrats!


----------



## Chaoz

Quote:


> Originally Posted by *Energylite*
> 
> Oh boyz. I received that this morning
> 
> 
> Now I just need to wait for my fittings rofl... I hope I can install the WB this weekend.


Oh damn. That's nice. Still waiting on mine. The status hasn't updated today. So hopefully I'll receive it this week. 75°C in-game is too hot for me.


----------



## punchmonster

For now 1105Mhz seems to be the actual limit, not simply the thermal limit. Now I have to figure out how to stay at that temperature without having a jet fan in my room.
Quote:


> Originally Posted by *punchmonster*
> 
> After a bit of testing, simple thermal throttling seems to be the biggest issue with HBM2. With stock fan settings I was only able to get 1030Mhz on the HBM2 before it started throttling down to 800Mhz under short spans of load. With fan set to 100% and some extra airflow added with external fan I can hit 1080Mhz+ stable on the memory OC. Haven't pushed it further than 1080Mhz so far.
> 
> This is why I was wondering about replacement coolers. Although having non-buggy drivers would also be a fine alternative, as then undervolting core consistently would probably make it easier for the heatsink to dissipate the HBM2 heat.
> 
> *UPDATE:* 1105Mhz stable up from 1030Mhz on the memory with extra cooling so far. This will be the biggest reason to buy custom AIB I suspect. And yes I checked if it's actually going up. I had a bench looping and saw consistent gains at each step. That's a whopping 75Mhz just from improving HBM2 cooling and I think there's some more headroom.


----------



## SlushPuppy007

Quote:


> Originally Posted by *punchmonster*
> 
> For now 1105Mhz seems to be the actual limit, not simply the thermal limit. Now I have to figure out how to stay at that temperature without having a jet fan in my room.


How much extra perf from 1105MHz on the HBM2 ?


----------



## pillowsack

My DHL guy went to the back door with a broken doorbell and needed a signature?

I guess air cooling is fine for a night???

This is very good news to hear that HBM likes lower temperatures, though. My water temp hits 40C max at the moment with a 6800K and 390X, so I can imagine 42C with this Vega 64


----------



## punchmonster

In mining it upped performance by ~25%. In gaming benches the 12% uplift is about ~9% extra performance.
Quote:


> Originally Posted by *SlushPuppy007*
> 
> How much extra perf from 1105MHz on the HBM2 ?


----------



## pillowsack

Quote:


> Originally Posted by *punchmonster*
> 
> In mining it upped performance by ~25%. In gaming benches the 11% uplift is about ~8% extra performance.


AHHHH I NEED MY WATERBLOCK


----------



## paulc010

It really does seem that IF AMD had managed to obtain production quantities of faster HBM2 for these cards, then they would have landed at about the right performance level. Sucks to be bleeding edge.


----------



## SlushPuppy007

Quote:


> Originally Posted by *punchmonster*
> 
> In mining it upped performance by ~25%. In gaming benches the 12% uplift is about ~9% extra performance.


Okay, how much extra performance can one realistically expect when custom-loop cooling RX Vega 64 and maxing out GPU Clock Speed and HBM2?

10% ?


----------



## punchmonster

I haven't pushed the core clock much since core clock control is very buggy. But at least the cooling is worth it for the HBM2, which I imagine will give you +5% real-world performance bare minimum just from that. 10% seems like a fair estimate.
Quote:


> Originally Posted by *SlushPuppy007*
> 
> Okay, how much extra performance can one realistically expect when custom-loop cooling RX Vega 64 and maxing out GPU Clock Speed and HBM2?
> 
> 10% ?


----------



## aliquis

I received my RX Vega 64 Liquid Cooled Edition today. I am so far (from what I have tested) satisfied with the card, but I swear, the fan noise on the radiator is driving me insane. It's not that it is loud, but it has a certain tone that is intolerable for me. Already ordered a 120mm Noctua fan to replace it with. (Anyone with the Liquid edition have the same complaint?)


----------



## pillowsack

Ahhhh!!! Truck driver redelivered.


----------



## kundica

Quote:


> Originally Posted by *pillowsack*
> 
> Ahhhh!!! Truck driver redelivered.
> 
> 
> Spoiler: Warning: Spoiler!


Nice! Will your setup be up and running today? Really looking forward to hearing results. I should've pre-ordered when they were up, now it just shows out of stock.


----------



## SlushPuppy007

Quote:


> Originally Posted by *punchmonster*
> 
> I haven't pushed the core clock much since core clock is very buggy. But at least the cooling is worth it for HBM2 which will give you I imagine +5% real world performance bare minimum just from that. 10% seems like a fair estimate.


So 10% from OC under water, and 10% from "fine wine", brings it to within 15-20% of the 1080 Ti?


----------



## rv8000

Quote:


> Originally Posted by *SlushPuppy007*
> 
> So 10% from OC under water, and 10% from "fine wine" brings it 15-20% from the 1080 Ti?


That would put V64 roughly 3-5% behind a stock FE 1080ti, all depending on the application of course (I used Firestrike for this estimate).


----------



## djsatane

Quote:


> Originally Posted by *aliquis*
> 
> I received my RX Vega 64 Liquid Cooled Edition today. I am so far (from what I have tested) satisfied with the card, but I swear, the fan noise on the radiator is driving me insane. It's not that it is loud, but it has a certain tone that is intolerable for me. Already ordered a 120mm Noctua fan to replace it with. (Anyone with the Liquid edition have the same complaint?)


Hmm, what about the pump itself on the card, does it have any noise or weird sound? That was the problem with many fury x liquid cards, wonder how that pump noise compares on vega rx liquid.


----------



## Newbie2009

My block is arriving tomorrow apparently, don't feel like I can push the card any harder though until less buggy drivers are out.


----------



## 113802

Quote:


> Originally Posted by *aliquis*
> 
> I received my RX Vega 64 Liquid Cooled Edition today. I am so far (from what I have tested) satisfied with the card, but I swear, the fan noise on the radiator is driving me insane. It's not that it is loud, but it has a certain tone that is intolerable for me. Already ordered a 120mm Noctua fan to replace it with. (Anyone with the Liquid edition have the same complaint?)


I just ordered a 2150RPM Gentle Typhoon to replace it. The shroud comes off easily. I am curious if the fan has a 3 pin connection that is connected to an extender or if it's just a long cable.



Here's a photo of the Frontier Edition - which is the same cooling solution. I can't see if the fan is connected via a 3 pin because of the water hose.


----------



## gupsterg

Any members with RX VEGA able to see if MSI AB gets an i2cdump, and/or do an AIDA64 SMBus dump? Does AIDA64 show the VID per DPM in the registers dump? To bring up the menu to select dumps, go to the View menu and enable the status bar, then right-click the status bar. Even an evaluation version of AIDA64 will get dumps if it supports VEGA.


----------



## pillowsack

Let's hope he doesn't take the remaining two and a half hours he has.


----------



## hellm

Quote:


> Originally Posted by *Newbie2009*
> 
> My block is arriving tomorrow apparently, don't feel like I can push the card any harder though until less buggy drivers are out.


17.8.1 is here.. and at least GPU-Z is working again


----------



## aliquis

Quote:


> Hmm, what about the pump itself on the card, does it have any noise or weird sound?


I can hear the pump clearly from outside; at least on my model it is not silent at all.
Although I have many fans running in my PC case, they are all very quiet, so the RX Vega Liquid is by far the part that makes the loudest noise.

From what I have read (I have not removed the shroud yet), I think there is some sort of mini 4-pin PWM connector on the card, and I have read that you need an adapter to plug in a normal 4-pin case fan.

Anyway, I plan to post a picture when I have exchanged the fan and share what parts I used and whether the modification was any good.

edit: should have tried this way earlier: you can use the custom profile in Wattman to lower the default RPM of the fan, which reduces the idle speed from ~1000rpm to about 500rpm and reduces the noise significantly.


----------



## Kokin

I wasn't lucky enough to get a Vega at launch and it's honestly disappointing at the moment. I do feel like AMD will eventually pull it together with the drivers, but given the expected low stock for the upcoming weeks/months (like all launches) combined with the price hike due to miners and retailer gouging, I have opted to get a 1080Ti FTW3 instead. It's going to be weird getting my first-ever Nvidia card, but I'm looking forward to Vega improving over the upcoming months and hopefully butting heads with the 1080Ti.








Quote:


> Originally Posted by *rv8000*
> 
> That would put V64 roughly 3-5% behind a stock FE 1080ti, all depending on the application of course (I used Firestrike for this estimate).


It's going to take at least 25-30% to reach the stock 1080Ti and 40% for an overclocked 1080Ti. Using Firestrike, stock Vega64 is ~22K graphics score, while stock 1080Ti is ~28K, an overclocked 1080Ti is ~30-31K.

If drivers can bring up stock Vega64 performance to 25K-26K, then I can realistically see it overclocking to stock/mild OC 1080Ti speeds.
Quote:


> Originally Posted by *djsatane*
> 
> Hmm, what about the pump itself on the card, does it have any noise or weird sound? That was the problem with many fury x liquid cards, wonder how that pump noise compares on vega rx liquid.


The pump noise issue was mostly on the first batches/months of Fury X. Mine is super quiet and maxes out at around 52C with a custom fan profile running at about 1400 RPM load, 800 RPM idle. I'm hoping this doesn't repeat for RX Vega Liquid or else everyone is going to assume ALL RX Vega Liquids have noisy pumps.









However, I do use a Case Labs Mercury S3, so the GPU is mounted vertically instead of horizontally, which might contribute to not having any noise issues.


----------



## rv8000

Quote:


> Originally Posted by *Kokin*
> 
> I wasn't lucky enough to get a Vega at launch and it's honestly disappointing at the moment. I do feel like AMD will eventually pull it together with the drivers, but given the expected low stock for the upcoming weeks/months (like all launches) combined with the price hike due to miners and retailer gouging, I have opted to get a 1080Ti FTW3 instead. It's going to be weird getting my first-ever Nvidia card, but I'm looking forward to Vega improving over the upcoming months and hopefully butting heads with the 1080Ti.
> 
> 
> 
> 
> 
> 
> 
> 
> It's going to take at least 25-30% to reach the stock 1080Ti and 40% for an overclocked 1080Ti. Using Firestrike, stock Vega64 is ~22K graphics score, while stock 1080Ti is ~28K, an overclocked 1080Ti is ~30-31K.
> 
> If drivers can bring up stock Vega64 performance to 25K-26K, then I can realistically see it overclocking to stock/mild OC 1080Ti speeds.
> The pump noise issue was mostly on the first batches/months of Fury X. Mine is super quiet and maxes out at around 52C with a custom fan profile running at about 1400 RPM load, 800 RPM idle. I'm hoping this doesn't repeat for RX Vega Liquid or else everyone is going to assume ALL RX Vega Liquids have noisy pumps.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However, I do use a Case Labs Mercury S3, so the GPU is mounted vertically instead of horizontally, which might contribute to not having any noise issues.


~1730/1050 already puts V64 at a GPU score of approximately 25000, 10% from drivers puts it just under 28k.

To those willing to push very high wattage under water they may come close to 29/30k, IF, IFFF there are 10% gains to be had from drivers.


----------



## pillowsack

Preeeeettty

Can you guys suggest method of overclocking and is furmark still fine?


----------



## Echoa

Quote:


> Originally Posted by *pillowsack*
> 
> 
> 
> 
> 
> 
> Preeeeettty
> 
> Can you guys suggest method of overclocking and is furmark still fine?


Furmark hasn't been fine for years. If you want to test, use Heaven, Valley, Fire Strike, etc.


----------



## pillowsack

http://www.3dmark.com/3dm/21704264?

Pretty good score?

I can't seem to get core over 1680, and I guess I can't raise voltage above 1.2? I had a max temp of 46C









What about the memory? I'm at 1100 with 1070 voltage. What's the limit on this voltage? What should I aim for, guys?


----------



## Echoa

Quote:


> Originally Posted by *pillowsack*
> 
> http://www.3dmark.com/3dm/21704264?
> 
> Pretty good score?
> 
> I can't seem to get core over 1680, and I guess I can't raise voltage above 1.2? I had a max temp of 46C
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What about the memory? Im at 1100 with 1070 voltage. What's the limit on this voltage? What should I aim for guys


Pushing hard isn't going to work out right now from what I hear. Drivers are buggy ATM, so I'd stick with that for now and wait till the drivers mature over the next couple of months before pushing harder.


----------



## pillowsack

Quote:


> Originally Posted by *Echoa*
> 
> Pushing hard isn't going to work out right now from what I hear. Drivers are buggy ATM, so I'd stick with that for now and wait till the drivers mature over the next couple of months before pushing harder.


You're right, I shouldn't complain. AMD finally put out a card that can max overwatch for my freesync monitor


----------



## kundica

Quote:


> Originally Posted by *pillowsack*
> 
> http://www.3dmark.com/3dm/21704264?
> 
> Pretty good score?
> 
> I can't seem to get core over 1680, and I guess I can't raise voltage above 1.2? I had a max temp of 46C
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What about the memory? Im at 1100 with 1070 voltage. What's the limit on this voltage? What should I aim for guys


Are you running with +50% power limit? Also, upping the voltages has no impact unless you did it through the recently shared registry hack power play table from this thread: http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003


----------



## Kokin

Quote:


> Originally Posted by *rv8000*
> 
> ~1730/1050 already puts V64 at a GPU score of approximately 25000, 10% from drivers puts it just under 28k.
> 
> To those willing to push very high wattage under water they may come close to 29/30k, IF, IFFF there are 10% gains to be had from drivers.


Was this posted somewhere? I'd like a link if you have one


----------



## punchmonster

Good news: 17.8.1 non-beta SEEMS to have working voltage control.

I didn't have a clamp around today sadly, but I tested it by setting voltage to 1630/1100, then adjusting the power limit to 1% above where it stopped throttling down the clock. Then, when the clock was stable, I upped the voltage to 1200 and instantly saw the throttling come back.

I then set the fan to 4k RPM and opened the power limit all the way. Then I dropped the voltage to see if the temperature would fall. Between 1200mV and 1070mV I only saw a 5°C drop, which seems a bit low? But it was consistent and reproducible.

I then set the power limit to 0 with P7 set to 1630/1070, which held clocks, whereas at stock it would power throttle. All together this indicates that it's probably working, although maybe not as intended.

If anyone can confirm it would be nice. This is simply in WattMan with latest AB beta for monitoring.
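For what it's worth, a first-order sanity check on that 5°C observation: dynamic power scales roughly with V²·f, so a 1200mV → 1070mV undervolt at fixed clocks should cut dynamic power by about a fifth. This is a generic rule of thumb, not a measurement of this card:

```python
# Rough first-order model: dynamic power is proportional to V^2 * f.
# The voltages are the ones from the post above; nothing here is measured.

def relative_power(v_new_mv: float, v_old_mv: float,
                   f_new: float = 1.0, f_old: float = 1.0) -> float:
    """New power as a fraction of old, assuming P ~ V^2 * f."""
    return (v_new_mv / v_old_mv) ** 2 * (f_new / f_old)

frac = relative_power(1070, 1200)  # same clock, just the undervolt
print(f"expected power at 1070mV: {frac:.0%} of the 1200mV figure")  # ~80%
```

Whether a ~20% power cut should yield more than a 5°C drop depends on how much headroom the cooler had at a fixed 4k RPM, so this neither confirms nor refutes the "maybe not as intended" suspicion.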


----------



## rv8000

Quote:


> Originally Posted by *Kokin*
> 
> Was this posted somewhere? I'd like a link if you have one


http://www.3dmark.com/3dm/21665850 From a user here.

There are a few more links in this thread to scores between 24500 to 25500.

Also if you go take a look at Steves (gamersnexus) FE hybrid mod results from a few weeks ago he pulled 24.9k gpu scores at 1700/1100 with fluctuating core clocks.

The majority of the performance is coming from the HBM clocks and from stabilizing the core clock rather than pushing it much higher. Bandwidth permitting, scaling is around 0.7.

*hard to go through and find all the links on my phone
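As a sketch of how that ~0.7 scaling factor would be applied (a rough model, not a guarantee for any particular card or workload):

```python
# Estimate the performance uplift from an HBM2 clock bump, assuming
# performance rises by ~0.7x the relative clock increase (the factor
# quoted above).

def expected_gain(clock_old_mhz: float, clock_new_mhz: float,
                  scaling: float = 0.7) -> float:
    """Fractional performance gain for a memory clock change."""
    return scaling * (clock_new_mhz / clock_old_mhz - 1)

# Stock 945MHz HBM2 pushed to 1100MHz:
gain = expected_gain(945, 1100)
print(f"expected uplift: ~{gain:.0%}")  # ~11%
```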


----------



## pillowsack

Well, I tried the registry mod, and although it showed 1250 vcore I don't think it actually did anything. I'm still happy with the 2% core overclock and 1100 mem because this card does not throttle with a full block at 50% power.


----------



## punchmonster

Card is now happily chugging along at sustained 1630Mhz/1100Mhz with 0% power target with core @ 1060mV. Truly amazing







these cards come out of the factory wayyyyy overvolted.


----------



## cooljaguar

Quote:


> Originally Posted by *punchmonster*
> 
> Card is now happily chugging along at sustained 1630Mhz/1100Mhz with 0% power target with core @ 1060mV. Truly amazing
> 
> 
> 
> 
> 
> 
> 
> these cards come out of the factory wayyyyy overvolted.


Overclocked, undervolted, _and_ 0% power target? Damn, that's impressive!


----------



## punchmonster

Yes it is. It feels like a universe of difference compared to the out of the box experience.
Quote:


> Originally Posted by *cooljaguar*
> 
> Overclocked, undervolted, _and_ 0% power target? Damn, that's impressive!


----------



## Kokin

Quote:


> Originally Posted by *rv8000*
> 
> http://www.3dmark.com/3dm/21665850 From a user here.
> 
> There are a few more links in this thread to scores between 24500 to 25500.
> 
> Also if you go take a look at Steves (gamersnexus) FE hybrid mod results from a few weeks ago he pulled 24.9k gpu scores at 1700/1100 with fluctuating core clocks.
> 
> The majority of the performance is coming from the HBM clocks and from stabilizing the core clock rather than pushing it much higher. Bandwidth permitting, scaling is around 0.7.
> 
> *hard to go through and find all the links on my phone


My mistake, I thought you actually had some links regarding the 10% improvement in drivers.

I've read/watched through all of Steve's articles and Youtube videos prior to the Vega launch. He was pulling an insane 600W from the wall to reach that FE hybrid score. The FE hybrid OC was at 400W on just the GPU PCI-E cables alone, which does not include the power coming from the PCI-E slot. At stock, Vega FE was pulling 440W at the wall, approximately 280W+ for the card, which would be pretty similar to an AIB 1080Ti.

Stock performance is still at 22K, but a theoretical 5-10% driver improvement would put that at 23K-24K, which is GTX 1080 OC numbers. Combined with what is considered a "max" OC of 1700/1100, we can add 3K to the score, giving around 26-27K, nipping at the toes of the stock 1080Ti. An overclocked 1080Ti still gets 30K-32K depending on max sustained OC, but that would be considered close enough for the price difference to matter.

It would be impressive for Vega 64 to reach stock 1080Ti speeds in gaming as well, not just synthetics. We'll have to see what the watercooled Vega 64 is capable of and how long the wait for better drivers will be.
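The gap estimates above boil down to simple ratios. Here is that arithmetic spelled out, using the approximate Firestrike graphics scores quoted in this thread (the OC'd 1080 Ti value is a midpoint of the 30-31K range):

```python
# Approximate Fire Strike graphics scores quoted in this thread.
vega64_stock = 22_000
gtx1080ti_stock = 28_000
gtx1080ti_oc = 30_500  # midpoint of the ~30-31K range

def pct_to_match(base: float, target: float) -> float:
    """Fractional improvement `base` needs to match `target`."""
    return target / base - 1

print(f"Vega 64 to stock 1080 Ti: +{pct_to_match(vega64_stock, gtx1080ti_stock):.0%}")  # +27%
print(f"Vega 64 to OC'd 1080 Ti:  +{pct_to_match(vega64_stock, gtx1080ti_oc):.0%}")    # +39%
```

Those ratios are where the "at least 25-30%, and ~40% for an overclocked card" figures come from.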


----------



## DrZine

Quote:


> Originally Posted by *punchmonster*
> 
> Card is now happily chugging along at sustained 1630Mhz/1100Mhz with 0% power target with core @ 1060mV. Truly amazing
> 
> 
> 
> 
> 
> 
> 
> these cards come out of the factory wayyyyy overvolted.


I can confirm. The new driver seems to be better. Wattman is accurately controlling voltage, at least in terms of adjustments. HWinfo shows 1.15v when Wattman says 1.2v; they basically only disagree by .05v regardless of what you set. Also, HWinfo is showing a lot less vcore fluctuation now. I am able to run stable at 1630/1100 with vcore at 1050. However, my power target is at +100%. I thought I was stable at 1v on the core, but Heaven crashed at the very end, and it was only once. Maybe a fluke? Temps are A LOT easier to control. Without all the throttling I am getting slightly better scores the lower the voltage as well. Getting a water block is looking better and better.

Edit: quoted the wrong post. I should also mention HWinfo is goofing on the power sensor readings. All the GPU and memory power related readings are backwards: everything it says for the GPU is actually the HBM.


----------



## pillowsack

Reading this makes me feel a lot better because I don't think I've throttled once.

Does anyone know if the registry edits really work for the 17.8.1 driver?


----------



## Energylite

Oh, a new driver? So what's going on?


----------



## punchmonster

Simply the non-beta 17.8.1
Quote:


> Originally Posted by *Energylite*
> 
> Oh, a new driver? So what's going on?


----------



## aliquis

I installed the new driver and did some undervolting tests. My system's main components are an i7 [email protected] and the RX Vega 64 Liquid. I made a custom profile with the max boost clock reduced from 1752MHz to 1650MHz @ 1050mV, -10% power target.

Timespy graphics score was ~7100, Firestrike graphics score 22234. The card was able to hold its max clock (1650MHz) the whole time without throttling. I have my whole rig plugged into a power meter at the wall; during the benches it was pulling on average ~350W under load (with my undervolted RX 480 my system pulled about 230-250W under load).

I'll do some more testing later.


----------



## Soggysilicon

Nickel version EK-FC with some white LEDs. Loop and block are more than sufficient to keep the card under 35C at load; nominal sits around 32-33, idle 22C. Easily hits and holds 1105 on the mem clock; unfortunately beyond that it's an instant crash more often than not. No artifacts.

Core clock OC is pointless on the driver I was using, no measurable effect. New drivers maybe will correct that bug.

Everything about the block is what you would expect from EK; my only real complaint is that the LED placement is far less than desirable, very difficult to populate the holes.

I did notice a consistent 1% or better improvement with HBCC on; I suspect mileage will vary considerably depending on your RAM clocks / stability.

More to come! Cheers!


----------



## Pleskac

Hello guys, can anyone answer this question? What is the memory voltage on the water cooled Vega, is it the same as the Vega 64 air?
Thanks

__
https://www.reddit.com/r/6v82sl/what_is_memory_voltage_on_liquid_cooled_vega/


----------



## Newbie2009

My block arrived today so I will post results. I noticed with the new driver my card will hold 1630 core with no powerplay % increase.

Also a pic of my vega chip with HBM as the different configs seem a hot topic at the moment.

I've just had time to strip it down, needs a clean.


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Nickel version EK-FC with some white LEDs. Loop and block are more than sufficient to keep the card under 35C at load; nominal sits around 32-33, idle 28-29. Easily hits and holds 1105 on the mem clock; unfortunately beyond that it's an instant crash more often than not. No artifacts.
> 
> Core clock OC is pointless on the driver I was using, no measurable effect. New drivers maybe will correct that bug.
> 
> Everything about the block is what you would expect from EK; my only real complaint is that the LED placement is far less than desirable, very difficult to populate the holes.
> 
> I did notice a consistent 1% or better improvement with HBCC on; I suspect mileage will vary considerably depending on your RAM clocks / stability.
> 
> More to come! Cheers!


Have you seen this regarding HBCC? https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/

The EK blocks look like a good addition to Vega. I've been debating a full loop but the high cost of entry has been holding me back. I caved and ordered the Liquid Vega 64 last night as a simpler solution.


----------



## Newbie2009

The HBCC option is apparently only in Windows 10; I don't see the option in Windows 7.


----------



## aliquis

Here are a few screenshots of Firestrike runs with different undervolting/overclocking combinations.
I don't know yet if they are 100% stable.
Power limit is always -10%; total system power draw measured with a wattmeter at the wall was always about 350W under load (I use an i7 [email protected] 4Ghz and an RX Vega 64 Liquid).

max boost [email protected] 1050mV


max boost [email protected]


max boost [email protected] , hbm2 overclock = 1000MHz


in all these configurations the card was able to hold its max boost clock all the time.


----------



## kundica

Quote:


> Originally Posted by *aliquis*
> 
> Here are a few screenshots of Firestrike runs with different undervolting/overclocking combinations.
> I don't know yet if they are 100% stable.
> Power limit is always -10%; total system power draw measured with a wattmeter at the wall was always about 350W under load (I use an i7 [email protected] 4Ghz and an RX Vega 64 Liquid).
> 
> max boost [email protected] 1050mV
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> max boost [email protected]
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> max boost [email protected] , hbm2 overclock = 1000MHz
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> in all these configurations the card was able to hold its max boost clock all the time.


What are your temps like on the Liquid card?


----------



## aliquis

max temp measured during benching was 63C. (most of the time they are lower of course)


----------



## Newbie2009

Probably get a better idea @ Extreme or Ultra; more stress on the cards, and the GPU is the total bottleneck. The normal score will depend a lot on the CPU, methinks.


----------



## sneida

Please post some MHz/voltage combinations that work for you.

I have a Sapphire Vega 64 Liquid; what seem to be the default settings in Wattman are (values when I disable auto):
* core 1752mhz, 1200mv
* memory 945mhz, 950mv

Results so far:
Crimson 17.8.1 (final), ALL stock with power limit +50%
Firestrike score (graphics score): 23703

Crimson 17.8.1 (final), core 1752mhz (stock) / 1100mv (undervolted from 1200mv), memory 1000mhz (overclocked from 945) at stock voltage 950mv, power limit +50%:
Firestrike score (graphics score): 23541

While gaming (e.g. BF1, [email protected], maxed out) it seems to easily keep 1752/945 (observed in Wattman) @ about 65°C.


----------



## aliquis

Hello, I have the same card as you; the default settings are also the same in Wattman.

From my observations so far it seems completely unreasonable to run the card with a high power limit and high clocks, because at least on my card the HBM2 wouldn't overclock / became unstable when I ran the card at high core clocks (1700MHz or higher); but from what I saw in some in-game and synthetic benchmarks, an HBM2 overclock seems to increase performance clearly.

So by running the card at lower clocks (and a lower power limit) you not only save a ton of power, but you can run a stable HBM2 overclock, which compensates for the lost core clock performance or yields even more performance in applications that are memory bottlenecked (that's what it looks like to me for now).


----------



## DampMonkey

Installed the EK block last night. Didn't mess around with any voltages; I don't remember exactly what the temps maxed out at, but I don't think it got past the low 40's. This card definitely loves being cool; after just a few cursory benchmarks I was seeing gains across the board. I'll have more details later, but I'm VERY happy so far.


----------



## sneida

interesting!

do you have any memory mhz/mv combinations that work for your card?


----------



## Newbie2009

Quote:


> Originally Posted by *aliquis*
> 
> Hello, i have the same card as you, the default settings are also the same in Wattman.
> 
> From my observation so far it seems completly unreasonable to run the card with a high power limit enabled and high clocks, because at least on my card the hbm2 wouldn't overclock/become unstable when overclocked when i ran the card at high core clocks (at 1700MHz or higher) but from what i saw in some ingame and synthetic benchmarks, a hbm2 overclock seems to increase the performancy clearly.
> 
> So by running the card at lower clocks (and lower power limit) you not only save a ton of power but it enables you to run a stable hbm2 overclock which compensates for the lost performance on core clock or yield even more performance in applications that are memory bottle necked(thats what it looks like to me for now )


HBM is very temperature sensitive apparently.


----------



## Newbie2009

Very nice. Can you put an led behind the Radeon logo?


----------



## aliquis

Quote:


> do you have any memory mhz/mv combinations that work for your card?


The best results so far (i am searching for a sweet spot between power consumption and performance): max boost 1652MHz @ 1050mV, HBM2 clock = 1000MHz, memory voltage = default, power limit = -10%

But i have not tested my card long enough to be sure the settings are 100% stable (and our samples may yield different results anyway)


----------



## Newbie2009

Quote:


> Originally Posted by *aliquis*
> 
> The best results so far (i am searching for a sweetspot between power consumption/performance) is: max boost 1652MHz @ 1050mV, HBM2 clock = 1000MHz, memory voltage =default, power limit = -10%
> 
> But i have not tested my card for long enough to be sure that the settings are 100% stable (and our samples may yield different results anyway)


I'd push HBM as far as you can, will have zero effect on power consumption.


----------



## punchmonster

Not true. It might not matter much but it does take extra power which could force you to move the power limit.
Quote:


> Originally Posted by *Newbie2009*
> 
> I'd push HBM as far as you can, will have zero effect on power consumption.


----------



## 113802

Core overclocking seems to be working for me with the new driver.

When going above 1812MHz I black screen; before, I could run at 1980MHz and it would score the same.

This time around I scored 600 points over stock

https://www.3dmark.com/fs/13421757


----------



## sneida

Quote:


> Originally Posted by *aliquis*
> 
> The best results so far (i am searching for a sweetspot between power consumption/performance) is: max boost 1652MHz @ 1050mV, HBM2 clock = 1000MHz, memory voltage =default, power limit = -10%
> 
> But i have not tested my card for long enough to be sure that the settings are 100% stable (and our samples may yield different results anyway)


i just tried very similar settings: 1667mhz/1000mhz (state 6/7 with 1050mv, memory 950mv, -10% power limit)

time spy graphics score is 7552.
-> this is about the same as i get with the balanced profile. with power limit +50% (otherwise stock) i get 7767.

update:

1667mhz/1050mhz (state 6/7 with 1050mv, memory 950mv, default power limit)
time spy graphics score is 7634.


----------



## Newbie2009

Quote:


> Originally Posted by *punchmonster*
> 
> Not true. It might not matter much but it does take extra power which could force you to move the power limit.


a watt or 2


----------



## aliquis

As far as HBM overclocking goes, it does indeed seem to yield good performance gains (much bigger than when i overclocked the memory on my rx 480, for comparison's sake), but i am worried about stability first and foremost, because i already have *a lot* of failed hbm2 overclocking tries and i feel the memory is very delicate when it comes to higher clock rates.


----------



## zdude

Can anybody with the Vega FE verify if SR-IOV is enabled on the card? It doesn't appear to be enabled on consumer Vega 64 but if it is on the FE that makes it well worth the $1000 to me....


----------



## 113802

Quote:


> Originally Posted by *zdude*
> 
> Can anybody with the Vega FE verify if SR-IOV is enabled on the card? It doesn't appear to be enabled on consumer Vega 64 but if it is on the FE that makes it well worth the $1000 to me....


SR-IOV is also disabled on the Frontier Edition. Seems like it's only enabled for FirePro/WX


----------



## zdude

That is really unfortunate, guess I will have to look into hard modding some cards then. Does anybody know how AMD is setting PCI-e IDs on these cards?


----------



## pillowsack

Quote:


> Originally Posted by *DampMonkey*
> 
> Installed the EK block last night. Didn't mess around with any voltages, I dont remember exactly what the temps maxed out at but I don't think it got past the mid 50's. This card definitely loves being cool, just a few cursory benchmarks and I was seeing gains across the board. I'll have more details later, but I'm VERY happy so far.


Black acetal + nickel ftw


----------



## sneida

*updated benchmarks:*

crimson 17.8.1 (final), core 1752mhz (stock) / 1100mv (undervolted from 1200mv), memory 1000mhz (overclocked from 945) stock voltage 950mv, power limit +50%:

firestrike score (graphics score): 23541

crimson 17.8.1 (final), core 1667mhz (lowered from 1752) / 1050mv (undervolted from 1200mv), memory 1050mhz (overclocked from 945) stock voltage 950mv, default power limit (+/- 0):

firestrike score (graphics score): 23991

crimson 17.8.1 (final), core 1667mhz (lowered from 1752) / 1050mv (undervolted from 1200mv), memory 1100mhz (overclocked from 945) stock voltage 950mv, default power limit (+/- 0):

firestrike score (graphics score): 24348

*comparison:*

crimson 17.8.1 (final), ALL stock with power limit +50%

firestrike score (graphics score): 23703
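For easier comparison, the relative gains of these runs over the stock +50% power limit baseline work out as follows (a quick sketch; the run labels are mine):

```python
# sneida's Firestrike graphics scores on crimson 17.8.1 (final)
stock_pl50 = 23703  # all stock, power limit +50%
runs = {
    "1752MHz/1100mV, HBM 1000MHz, PL +50%": 23541,
    "1667MHz/1050mV, HBM 1050MHz, PL 0%": 23991,
    "1667MHz/1050mV, HBM 1100MHz, PL 0%": 24348,
}
for label, score in runs.items():
    delta = 100 * (score / stock_pl50 - 1)
    print(f"{label}: {delta:+.1f}%")
```

So the undervolted 1667MHz run with 1100MHz HBM comes out roughly 2.7% ahead of the stock +50% PL baseline, despite the lower core clock and default power limit.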


----------



## punchmonster

Probed the card, and sadly the memory voltage control still does nothing on 17.8.1 final guys.
But core voltage is definitely working!

Really gonna need an aftermarket cooler just for the HBM2 though. Shame my country doesn't have the Morpheus 2 available.


----------



## 113802

Quote:


> Originally Posted by *sneida*
> 
> *updated benchmarks:*
> 
> crimson 17.8.1 (final), core 1752mhz (stock) / 1100mv (undervolted from 1200mv), memory 1000mhz (overclocked from 945) stock voltage 950mv, power limit +50%:
> 
> firestrike score (grahics score): 23541
> 
> crimson 17.8.1 (final), core 1667mhz (lowered from 1752) / 1050mv (undervolted from 1200mv), memory 1050mhz (overclocked from 945) stock voltage 950mv, default power limit (+/- 0):
> 
> firestrike score (graphics score): 23991
> 
> crimson 17.8.1 (final), core 1667mhz (lowered from 1752) / 1050mv (undervolted from 1200mv), memory 1100mhz (overclocked from 945) stock voltage 950mv, default power limit (+/- 0):
> 
> firestrike score (graphics score): 24348
> 
> *comparison:*
> 
> crimson 17.8.1 (final), ALL stock with power limit +50%
> 
> firestrike score (grahics score): 23703


Why are those graphics scores so low? With FreeSync disabled I score the same. Are you thermal throttling?

HBM Overclocked:

https://www.3dmark.com/fs/13403212

I get this at stock:

https://www.3dmark.com/fs/13368189


----------



## Worldwin

To clarify: the memory voltage found in wattman ONLY affects the *auxiliary voltage*. HBM is specced @ 1.2v and you guys are adjusting it at ~1V, so it becomes pretty apparent the memory voltage is not being changed. Memory voltage has not been adjustable since the Tahiti GPUs.


----------



## sneida

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Why are those graphic scores low? With FreeSync disabled I score the same. Are you thermal throttling?
> 
> HBM Overclocked:
> 
> https://www.3dmark.com/fs/13403212
> 
> I get this at stock:
> 
> https://www.3dmark.com/fs/13368189


you are running it at the full 1752 core speed, and with what power limit? that's probably one part of the difference...


----------



## aliquis

I have the rx vega 64 liquid too, if i try to overclock my hbm2 memory with a max core boost of 1752MHz my system becomes unstable and eventually crashes.


----------



## sneida

Quote:


> Originally Posted by *aliquis*
> 
> I have the rx vega 64 liquid too, if i try to overclock my hbm2 memory with a max core boost of 1752MHz my system becomes unstable and eventually crashes.


same here - by lowering the core clock to 1667 (@1050mv) i can get it up to 1100mhz (1150 crashes)


----------



## PontiacGTX

Quote:


> Originally Posted by *aliquis*
> 
> As far as HBM overclocking goes, it does indeed seem to yield good performance gains (much bigger then when i overclocked the memory on my rx 480 for comparisons sake) but i am worried about stability first and foremost, because i already have *a lot* of failed hbm2 overclocking tries and i feel that the memory is very delicate when it comes to higher clock rates..


even a worse memory bottleneck than 480?


----------



## aliquis

I only overclocked my hbm2 a bit (from 945MHz to 1000MHz) and even before running a synthetic benchmark i could see the performance increase in games. I usually run 2-3 ROTTR dx12 bench loops to test stability (because that always crashed my rx 480 when the undervolt was not stable), and with only this slight memory clock increase the average framerate increases by about 5 fps (i run it at 3440x1440 max settings)


----------



## PontiacGTX

Quote:


> Originally Posted by *aliquis*
> 
> I only overclocked my hbm2 by a bit (from 945Mhz to 1000Mhz ) and even before running a synthetic benchmark i could see the performance increase in games. I usually run 2-3 ROTTR dx12 bench loops to test stability (because that always crashed my rx 480 when the undervolt was not stable) and only with this slight memory clock increase the average framerate increases by about 5 fps on average (i run it at 3440 *1440p max settings)


That's probably a sign of a memory bandwidth bottleneck.

The R9 Fury didn't gain much performance from a 5-7% mem OC


----------



## dagget3450

I will update the list soon; if you haven't been added yet, don't worry. I will try to do it tonight after work.


----------



## Pleskac

Memory voltage can't be changed at all on the last few generations of GPUs? Even with a modded BIOS or the registry?!? If that is true it sucks; I was hoping we could change it from 1.2V to 1.35V for the liquid cooled version..


----------



## CaptainTom

Quote:


> Originally Posted by *cooljaguar*
> 
> Overclocked, undervolted, _and_ 0% power target? Damn, that's impressive!


Quote:


> Originally Posted by *punchmonster*
> 
> Yes it is. It feels like a universe of difference compared to the out of the box experience.


First time posting here (But I have used this forums guide's countless times!).

I can also attest that tweaking this card makes it perform entirely differently. I am still messing around a lot and unsure of _exactly_ which settings wattman applies correctly, but I can say that at my current stable clock of 1702/1085 (undervolted) it performs a solid 9-12% above stock settings while humming along quietly at 77c in BF1 MP. This is in a hot and humid house.

Just as a snippet of the performance:

-I normally game at [email protected] Ultra in BF1, but I ran resolution scale at 200% for fun (So 4K equivalent) and I was getting 80-100 FPS on Fao Fortress 64 player!!!
-I benchmarked Metro Last Light and Deus Ex:MD - I am within 5% of a 1080 Ti in those two games. So I would say in general this card is currently performing a lot like a 1080 Ti.
-In fact my PC started crashing and I couldn't figure out why - turns out my CPU was overheating! 6700K almost holds an overclocked Vega back (Just like 1080 Ti).


----------



## kundica

Quote:


> Originally Posted by *Pleskac*
> 
> Memory voltage cant be changed at all on last generations of gpu? even with modded bios or the registry?!? if that is true it sucks I was hoping we could change it from 1.2V to 1.35V for the liquid cooled version..


I believe the Liquid Cooled version is already 1.35v. Also, it can be changed through the registry hack posted in the other thread.


----------



## Pleskac

Quote:


> Originally Posted by *kundica*
> 
> I believe the Liquid Cooled version is already 1.35v. Also, it can be changed through the registry hack posted in the other thread.


It's possible, I don't really know; some say the liquid cooled version runs at 1.2V, some say 1.35V.

There are multiple sources of HBM2 memory and dies at different voltages, so it was all very confusing. It's good to know the voltage can be modified in some way.

This provides some info about that:
https://videocardz.com/72173/there-are-at-least-three-variants-of-vega-10-gpu-packages

The best HBM2 should be these; Samsung rates them at 256GB/s while AMD rates them at 242GB/s:
https://news.samsung.com/global/samsung-increases-production-of-industrys-fastest-dram-8gb-hbm2-to-address-rapidly-growing-market-demand

I have ordered the liquid cooled version, so I hope it can be pushed above 1100mhz at slightly higher voltages. My last hope is that it's the same HBM2 but runs at a lower temperature with liquid cooling and needs less voltage than the air version, or that it's a higher quality part, but that's just my dreams i guess.

I would love it if anyone who has the liquid card could check their BIOS and post back.
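The 256GB/s vs 242GB/s figures fall straight out of the HBM2 per-stack interface width and pin speed; a quick sanity check (the function name is mine):

```python
def hbm2_stack_bandwidth_gbs(mem_clock_mhz):
    """Per-stack HBM2 bandwidth in GB/s for a given memory clock."""
    pin_speed_gbps = 2 * mem_clock_mhz / 1000  # HBM2 is double data rate
    return pin_speed_gbps * 1024 / 8           # 1024-bit interface per stack

# Samsung's rated 2.0Gbps/pin (1000MHz clock) -> 256 GB/s per stack
print(hbm2_stack_bandwidth_gbs(1000))            # -> 256.0
# Vega's stock 945MHz -> ~242 GB/s per stack (double that across both stacks)
print(round(hbm2_stack_bandwidth_gbs(945), 2))   # -> 241.92
```

So AMD's 242GB/s number is just the same silicon clocked at 945MHz instead of Samsung's rated 1000MHz.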


----------



## Newbie2009

Vega sure does like water. Increased score across the board at same clocks. Core and HBM maxing out @ 34c and 33c


----------



## The EX1

Quote:


> Originally Posted by *Newbie2009*
> 
> Vega sure does like water. Increased score across the board at same clocks. Core and HBM maxing out @ 34c and 33c


From my testing, it appears that the HBM on Vega loosens its timings as it reaches certain temperature thresholds, which causes a drop in performance while still maintaining the same clock rate. Compute performance on my Vega 64 instantly drops around 9% when the HBM temp crosses the 80C mark, which is very common on air.

I need a waterblock


----------



## Pleskac

Quote:


> Originally Posted by *Newbie2009*
> 
> Vega sure does like water. Increased score across the board at same clocks. Core and HBM maxing out @ 34c and 33c


That's nice, I have the same cpu. Did you try to overclock just the memory to 1100mhz, or does it crash lower? I saw people hitting a little bit over 1000 just on air


----------



## Newbie2009

My card can do stock clocks 1630/1000 at 1000mv, down from the 1200mv stock.

My peak wattage from the wall went from just over 600w to about 460w.

EDIT: Tell a lie, it dipped for a moment in firestrike combined, have to tweak.


----------



## rv8000

So what's the deal with combined on firestrike? The score/test seems incredibly borked. Being 1080-equivalent and all; even my 1070 was scoring up to 8k in combined with the same base system (GPU score around 21k on my 1070). My 290 scores the same in combined


----------



## Newbie2009

Quote:


> Originally Posted by *rv8000*
> 
> So whats the deal with combined on firestrike, score/test seems incredibly borked. Being at 1080 equivalent an all, even my 1070 was scoring up to 8k in combined with the same base system (GPU score around 21k on my 1070). My 290 scores the same in combined


You probably ran Performance mode (1080p); that is Extreme (1440p)


----------



## rv8000

Quote:


> Originally Posted by *Newbie2009*
> 
> You probably ran performance mode (1080p), that is extreme, (1440p)


https://www.3dmark.com/fs/13428654

Nah, something seems off. GPU utilization is all over the place in combined, I wonder if this can be chalked up to Ryzen issues + AMD problems (lack of proper optimization) with 3DMark.


----------



## 113802

Quote:


> Originally Posted by *rv8000*
> 
> https://www.3dmark.com/fs/13428654
> 
> Nah, something seems off. GPU utilization is all over the place in combined, I wonder if this can be chalked up to Ryzen issues + AMD problems (lack of proper optimization) with 3DMark.


That combined score is horrible!

https://www.3dmark.com/fs/13400060


----------



## Newbie2009

Quote:


> Originally Posted by *rv8000*
> 
> https://www.3dmark.com/fs/13428654
> 
> Nah, something seems off. GPU utilization is all over the place in combined, I wonder if this can be chalked up to Ryzen issues + AMD problems (lack of proper optimization) with 3DMark.


yeah probably.


----------



## rv8000

I'm wondering if I can get away with the Aluminum kit from EK with 2x240 rads, really don't have $600+ to spend on a crazy loop right now.


----------



## pmc25

I'm more interested in actual game benchmarks.

It's a shame so few have them these days.

Still flabbergasted that DOOM doesn't have one, and even DIRT 4 doesn't (DiRT was always known for its built-in benches).


----------



## punchmonster

Does anyone have a Raijintek Morpheus II they could use on their Vega? Considering getting one but wouldn't want to get stuck with a cooler that isn't adequate.


----------



## Newbie2009

Quote:


> Originally Posted by *rv8000*
> 
> https://www.3dmark.com/fs/13428654
> 
> Nah, something seems off. GPU utilization is all over the place in combined, I wonder if this can be chalked up to Ryzen issues + AMD problems (lack of proper optimization) with 3DMark.


Yeah I just ran it there again. In performance mode my gpu clocks downclock; I thought I wasn't giving it enough volts.

But it seems that test is cpu bound, the cpu cannot feed the gpu.

Try the same test in 1440p mode and it should perform better. My 3770k @ 4.8ghz struggles to keep the gpu fed in that test.


----------



## rv8000

Speaking of which, forgot to show off the goods...




What are people's opinions on the aluminum kit from EK? Think 2x240 rads would be enough for a moderately clocked 1700 + lightly oc'd V64 (1650/1100)? Not trying for miracles, would just hope for around 60/65c on the gpu.
Quote:


> Originally Posted by *Newbie2009*
> 
> Yeah I just ran it there again. On performance mode my gpu clocks downlock, thought I wasn't giving it enough volts.
> 
> But it seems that test is cpu bound, cpu cannot feed the gpu.
> 
> Try the same test in 1440p mode and it should perform better. My 3770k @ 4.8ghz struggles to keep the gpu fed in that test.


Might be something up with AMD drivers then; my 1700 @ 3.8 has no issue feeding my 1070, which scores up to 8200 in combined @ 1080p if I remember correctly.


----------



## Newbie2009

The performance issues I was having in Prey and Witcher 3 seem to have vanished.


----------



## pmc25

Quote:


> Originally Posted by *rv8000*
> 
> What are peoples opinion on the aluminum kit from EK, think 2x240 rads would be enough for a moderately clocked 1700 + lightly oc'd V64 (1650/1100)? Not trying for miracles would just hope for around 60/65c on the gpu.


Aluminium isn't THAT much worse than copper. Should be way below 60C. 50C tops.

I think you'll do much better than 1650Mhz core once drivers are more stable.


----------



## kundica

Quote:


> Originally Posted by *rv8000*
> 
> What are peoples opinion on the aluminum kit from EK


You'll have to wait for it to become available. EK said they planned to do a Fluid Gaming block for Vega but didn't give a time frame.


----------



## DrZine

Quote:


> Originally Posted by *rv8000*
> 
> What are peoples opinion on the aluminum kit from EK, think 2x240 rads would be enough for a moderately clocked 1700 + lightly oc'd V64 (1650/1100)? Not trying for miracles would just hope for around 60/65c on the gpu.


Has EK even released the aluminum vega kit? I'm not seeing one on their site.


----------



## rv8000

Quote:


> Originally Posted by *DrZine*
> 
> Has EK even released the aluminum vega kit? I'm not seeing one on their site.


They stated ~ 2 months a few weeks ago in the thread they announced the other Vega blocks in. I'm expecting late September, early October, and I have no issues waiting.

Who knows, may even swap out to an 8600k if the price is right, so not the ideal time to build my loop anyways. Glad the aluminum kit should be adequate with 480+ rad space though.

*Forgot to add: Enhanced Sync actually worked very well in GW2 (can't use FreeSync in GW2 due to no LFC and lots of random fps drops in cities).


----------



## dagget3450

So 17.8.1 is WHQL? Vega FE never got a WHQL driver yet...


----------



## steadly2004

sorry about the dusty pic... just pulled the side off and stuck it in there. It'll get a good cleaning when I re-do my loop.

Got the 2nd VEGA card in today. Not sure what I'm going to do with it yet. Got 2 blocks on order now with EKWB. They let me drop the backplate from the first order and add the 2nd block. Thanks EK!

I tried to download niceHash and it closes Radeon settings and says it's not compatible. That sucks. I don't know much about mining.


----------



## dagget3450

Quote:


> Originally Posted by *steadly2004*
> 
> 
> sorry about the dusty pic... just pulled the side off and stuck it in there. It'll get a good cleaning when I re-do my loop.
> 
> Got the 2nd VEGA card in today. Not sure what I'm going to do with it yet. Got 2 blocks on order now with EKWB. They let me drop the backplate from the first order and add the 2nd block. Thanks EK!
> 
> I tried to download niceHash and it closes Radeon settings and says it's not compatible. That sucks. I don't know much about mining.


I don't see CF not being enabled soon for Vega. I am super stumped why it isn't already enabled.


----------



## AlphaC

https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html




Undervolted VEGA 56 is the star

Compare with out of the box from HardOCP:




https://www.hardocp.com/article/2017/08/22/amd_radeon_rx_vega_56_video_card_review/17


----------



## Tgrove

This launch has officially made me sick. They raised the price of the cards in the bundles to $800 (water); some are even $820. Every other site has followed suit with an even greater price hike. Looking at my options for a G-Sync monitor at this point (they don't offer much)


----------



## steadly2004

When applying TIM for the waterblock, are people putting a dab on each HBM stack and a dab on the GPU? Or... something else?


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> Have you seen this regarding HBCC? https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
> 
> The EK blocks look like a good addition to Vega. I've been debating a full loop but the high cost of entry has been holding me back. I caved and ordered the Liquid Vega 64 last night as a simpler solution.


I had a similar experience to this article. All my benchies were done with 4x AA at 3440x1440 where possible, and I noticed "some" minor trends towards the positive. There are some oddities though, such as first passes with really low minimum frame rates and then a second run with 5x the minimums... thinking maybe it has something to do with the shader cache. Who knows.

In the grand scheme I doubt it does all that much; like the article stated, it's within testing variance.

Time will tell; for the time being it's a button to piddle with!


----------



## Soggysilicon

Quote:


> Originally Posted by *Newbie2009*
> 
> HBM is very temperature sensitive apparently.


Very temperature sensitive, but this isn't particularly new with Radeons; typically you know you're on the edge of your thermal envelope when you're getting artifacts.


----------



## Soggysilicon

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Why are those graphic scores low? With FreeSync disabled I score the same. Are you thermal throttling?
> 
> HBM Overclocked:
> 
> https://www.3dmark.com/fs/13403212
> 
> I get this at stock:
> 
> https://www.3dmark.com/fs/13368189


Since this morning, with the freq. OC'ing now working...

1712 core / 1105 mem, Firestrike was 24965; there may still be a little more in the tank...

Auto +50%, Enhanced FreeSync "on", if that makes a difference?

Still seeing where others are at.


----------



## PontiacGTX

Quote:


> Originally Posted by *rv8000*
> 
> So whats the deal with combined on firestrike, score/test seems incredibly borked. Being at 1080 equivalent an all, even my 1070 was scoring up to 8k in combined with the same base system (GPU score around 21k on my 1070). My 290 scores the same in combined


update motherboard BIOS and increase RAM clock speed / make tighter timings, update Windows, check if cores are unparked


----------



## Soggysilicon

Quote:


> Originally Posted by *Tgrove*
> 
> This launch has officially made me sick. They raised the price of the cards in the bundles to $800 (water), some are even $820. Every other site has followed suit with an even greater price hike. Looking at my options for a g sync monitor at this point (they dont offer much)


The AGON 35" is a solid G-Sync monitor.

https://www.amazon.com/AOC-AG352UCG-Curved-Gaming-Monitor/dp/B06X9CBRTP


----------



## Soggysilicon

Quote:


> Originally Posted by *steadly2004*
> 
> When applying TIM for the waterblock. Are people putting a dab on each HBM stack and a dab on the GPU? Or.... something else?


Used the Arctic 2-part cleaner and a ton of swabs to clean out all the crap from the factory install.

Spread a very, very thin uniform coating of Arctic Silver 5 over the GPU IHS and the 2 HBM stacks, then a very thin X across each die. Lower the block onto the board after you have prepped it with the heat pads for the drivers/FETs. I prefer as little material as possible on the dies. The EK block was very well machined, and my IHSs were uniform in height, so no need to gloop the stuff all over the place... your mileage may vary. I went to the spread-then-X method a couple years back and it's treated me well.


----------



## pmc25

Quote:


> Originally Posted by *Soggysilicon*
> 
> Agon 35" is a solid G'sync monitor.
> 
> https://www.amazon.com/AOC-AG352UCG-Curved-Gaming-Monitor/dp/B06X9CBRTP


I wouldn't touch those AUO-manufactured VA panels. They're riddled with motion, response, banding, colour shift and viewing angle issues.

I've tried most of them, and also the Sharp-made panel in the old Eizo Foris 120hz (240hz with blank frame insertion).

I found all of them positively awful. The motion was so bad it made me feel cross-eyed.

I now have 3x 144Hz Samsung VA panel monitors: 2 x 24" 1920x1080 in portrait, 1 x 32" 2560x1440 in landscape in the centre. None of the monitors are perfect, but the panels themselves are without question the best compromise between responsiveness and image quality on the market, until OLED finally arrives.

Besides, the G-Sync 'tax' on that monitor is obscene. You can get the same panel without G-Sync for nearly $400 less.


----------



## CaptainTom

Quote:


> Originally Posted by *PontiacGTX*
> 
> even a worse memory bottleneck than 480?


I can confirm this as well. Now keep in mind this example isn't gaming, but at least in Ethereum mining Vega's memory clock IS ALL that matters (even more so than with Polaris). I have the core clock all the way down to 900MHz, and yet hashrate scales linearly with memory clock all the way up to 1105MHz and beyond!

Additionally I can confirm that BF1 runs great at 144Hz 1080p Ultra at 902/1105 clocks (which is insane). I am gonna do a lot more testing tomorrow because some of these results are blowing my mind.
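If the workload really is purely memory-bound, hashrate should track memory clock linearly; a tiny sketch of that rule of thumb (the function and the 34 MH/s baseline are made-up illustrations, not measured numbers):

```python
def scaled_hashrate(base_mhs, base_mem_mhz, new_mem_mhz):
    # in a purely memory-bound workload, hashrate tracks memory clock linearly
    return base_mhs * new_mem_mhz / base_mem_mhz

# e.g. a hypothetical 34 MH/s at the stock 945MHz HBM clock
print(round(scaled_hashrate(34, 945, 1105), 1))  # -> 39.8
```

In other words, a ~17% memory overclock would predict a ~17% hashrate gain, with core clock barely mattering.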


----------



## Irev

Quote:


> Originally Posted by *rv8000*
> 
> https://www.3dmark.com/fs/13428654
> 
> Nah, something seems off. GPU utilization is all over the place in combined, I wonder if this can be chalked up to Ryzen issues + AMD problems (lack of proper optimization) with 3DMark.


It's a problem with high-core-count CPUs; the combined test in firestrike is borked for ryzen cpus... futuremark need to fix it. You'll find your score will improve if you drop down to 6 cores or set the windows power plan to power saver.

http://www.overclock.net/t/1627430/the-tale-of-ryzen-and-firestrike-problems-ahead/0_30

http://cdn.overclock.net/e/e0/e0b8b92c_knewit.jpeg


----------



## diggiddi

Quote:


> Originally Posted by *steadly2004*
> 
> 
> sorry about the dusty pic... just pulled the side off and stuck it in there. It'll get a good cleaning when I re-do my loop.
> 
> Got the 2nd VEGA card in today. Not sure what I'm going to do with it yet. Got 2 blocks on order now with EKWB. They let me drop the backplate from the first order and add the 2nd block. Thanks EK!
> 
> I tried to download niceHash and it closes Radeon settings and says it's not compatible. That sucks. I don't know much about mining.


There is a beta(?) mining driver, try it and see; otherwise 17.7.1 might work


----------



## Newbie2009

My 64 undervolts great: 1000mv, 460w peak system draw vs 620w at stock volts
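Those numbers line up with the usual rule of thumb that dynamic power scales roughly with voltage squared at a fixed clock; a back-of-envelope sketch (the 520W/100W split between GPU draw and the rest of the system is a made-up illustration, not a measurement):

```python
def undervolted_gpu_power(p_gpu_w, v_old_mv, v_new_mv):
    # dynamic power scales roughly with V^2 at a fixed clock
    return p_gpu_w * (v_new_mv / v_old_mv) ** 2

gpu_w, rest_w = 520, 100  # hypothetical split of the 620W wall draw
print(round(rest_w + undervolted_gpu_power(gpu_w, 1200, 1000)))  # -> 461
```

A 1200mV to 1000mV undervolt cuts the GPU's dynamic power by about 30%, which is in the right ballpark for a 620W to 460W drop at the wall.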


----------



## beatfried

anybody else getting really annoying noise from the card at high fps?

atm i'm happy I haven't mounted the waterblock yet, so the fan can mask the noise a little.... :/


----------



## ilmazzo

Coming from the vga or the psu? Hot glue on the inductors would help, I think (every manufacturer should apply it but... seems not)


----------



## beatfried

that's a really good question which I can't answer atm (i'm not at my computer);
didn't really check, as the gpu and the psu are right next to each other.

but I had the same noise from my RX480 while scrolling (and only scrolling... somehow), and I also don't know if that was from the psu or the gpu.

the psu is an EVGA SuperNova G2 850 Watt.


----------



## Irev

Quote:


> Originally Posted by *beatfried*
> 
> anybody else got really annoying noise from the card at high fps?
> 
> atm i'm happy I haven't mounted the waterblock already, so the fan can mask the noise a little.... :/


Yes, I get coil whine at high fps. It does go away after a little while, and then once the fans ramp up I can't hear it at all... it's not that bad, but it's noticeable before the fans kick in


----------



## dagget3450

Quote:


> Originally Posted by *Irev*
> 
> It's a problem with high core cpus combined test in firestrike is borked for ryzen cpus... futuremark need to fix it. you'll find your score will improve if you drop down to 6 cores or set windows power plan to power saver.
> 
> http://www.overclock.net/t/1627430/the-tale-of-ryzen-and-firestrike-problems-ahead/0_30
> 
> http://cdn.overclock.net/e/e0/e0b8b92c_knewit.jpeg


Expect a fix in the future year of never. Of all the AMD bugs I recall hitting in Futuremark, none were ever fixed. Even when they released Timespy knowing of a multi-GPU bug for AMD, it never got fixed, causing way lower gpu scores than it should have. It's really funny how much stock people put into Futuremark benchmarks, using them as such an end-all metric. Things like score leaks for upcoming gpus... ah well..


----------



## Newbie2009

Quote:


> Originally Posted by *dagget3450*
> 
> Expect a fix in the future year of never. Of all the AMD bugs i recall having in futuremark none were ever fixed. Even when they just released Timespy knowing of a multigpu bug for AMD never got fixed, thus causing way lower gpu scores than it should have. Its really funny how much stock people put into futuremark benchmarks using it as such a end all metric. Things like score leaks for upcoming gpus...ah well..


Gameplay ain't great in it either.


----------



## CaptainTom

I don't remember who first mentioned it here, but I can now also confirm that getting the temperature below 75°C lowers the timings on the HBM.

I tried it on 2 systems:


Desktop (1105MHz): ETH went from 38.9 to 40 MH/s
Mining Rig (1020MHz): ETH went from 36.8 to 38.9 MH/s!
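For scale, those two hashrate changes work out to roughly 2.8% and 5.7% gains. A quick check (hashrates taken from the post above; the helper function is just illustrative):

```python
# Rough % gain from keeping the HBM cool, using the ETH hashrates
# (MH/s) reported above for each machine.
def pct_gain(before_mhs, after_mhs):
    """Percentage improvement, rounded to one decimal place."""
    return round((after_mhs - before_mhs) / before_mhs * 100, 1)

desktop_gain = pct_gain(38.9, 40.0)  # desktop @ 1105 MHz core
rig_gain = pct_gain(36.8, 38.9)      # mining rig @ 1020 MHz core
print(desktop_gain, rig_gain)        # prints: 2.8 5.7
```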


----------



## pmc25

As I posted on OCUK, I now have my card:

Wattman / drivers are in a total state.

Did a clean install of latest drivers.

No HBCC on/off toggle (I assume they removed it with the latest driver update?).
HBM overclocking is just a jumbled mess of numbers in Wattman (-52900MHz, and it always reverts to this as soon as you press Enter).
The power limit I can set between a brilliant -1%, 0% and +1%.
Enabling GPU voltage control automatically makes the HBM voltage manual too, and sets it to the same value as the GPU. Just total facepalm.
Unfortunately you can't adjust ANYTHING on power saver / balanced / turbo, not even the fan profile. Since you can't adjust the other values properly in custom mode currently, it's a mess.

Latest beta of Afterburner can't do anything yet. Not even fan profiles.

As of now, you need to use external tools.

Two positives.

1) My card seems to be a decent bin, as it's happy at 1040mV running the AotS built-in benchmark on the Crazy preset. But I need to increase the power limit (can't in Wattman) to stop it throttling. As there was zero instability and it was still momentarily peaking at 1600MHz, I suspect it might be happy as low as 1000mV. Even at 1040mV the HBM is still fine, which is what it gets forced to if I set that on the GPU.

2) The default fan settings on Balanced are heavily geared towards silence. On all-stock settings it thermal throttles as it gets above 80°C. At max fan (which is very loud, but unlike most cards has almost zero motor/bearing whine) and undervolted, it stays below 60°C but still throttles because it needs a higher power limit. 75% fan, which you certainly won't hear over in-game audio if wearing headphones, gets to 70-72°C. Turbo mode heavily thermal throttles, sitting at 85°C because, again, the fan profile stays below 50%.

P.S. Absolutely ZERO buzz / hiss / whine from the card, even with your ear right next to it, at full load, with fans low. Probably the first graphics card I've had where the board itself is completely silent.

P.P.S. Individual game profile settings do absolutely nothing, in Radeon Settings.


----------



## Chaoz

Installed my waterblock today. Hoping for great temps; 85°C on a 100% fan profile is too damn high.




Seems I got a molded package.


----------



## punchmonster

Everything you mention should be working fine, and no one else has reported the issues you describe. How did you do a "clean" install? DDU doesn't work for these; use a regular uninstall plus the official AMD clean utility.

Also HBM voltage does nothing.
Quote:


> Originally Posted by *pmc25*
> 
> As I posted on OCUK, I now have my card:
> 
> Wattman / drivers are in a total state.
> 
> Did a clean install of latest drivers.
> 
> No HBCC on/off toggle (assume they removed it with latest driver update?).
> HBM overclocking is just a jumbled mess of numbers in Wattman (-52900Mhz and always reverts to this as soon as you press Enter).
> Power limit I can set between a brilliant -1%, 0% and +1%.
> Enabling GPU voltage control automatically makes HBM voltage manual too, and sets it to the same value as what you set for GPU. Just total facepalm.
> Unfortunately you can't adjust ANYTHING on power saver / balanced / turbo - not even fan profile. Since you can't adjust other values properly in custom mode currently, it's a mess.
> 
> Latest beta of Afterburner can't do anything yet. Not even fan profiles.
> 
> As of now, you need to use external tools.
> 
> Two positives.
> 
> 1) My card seems to be a decent bin, as it's happy on 1040mV running AoTS built in benchmark on Crazy presets. But I need to increase the power limit (can't in Wattman) to stop it throttling. As there was zero instability and it was still momentarily peaking at 1600Mhz, I suspect it might be happy as low as 1000mV. Even at 1040mV on the HBM it's still fine - which it forces if I put it on the GPU.
> 
> 2) The default fan settings on Balanced are heavily geared towards silence. On all stock settings it thermal throttles as it gets above 80C. At max fan (which is very loud but unlike most cards has almost zero motor / bearing whine) and undervolted it remains below 60C, but is throttling because it needs higher power limit. 75% fan which if wearing headphones you certainly won't hear over in-game audio gets to 70-72C. Turbo mode heavily thermal throttles as it stays at 85C because, again, fan profile remains below 50%.
> 
> P.S. Absolutely ZERO buzz / hiss / whine from the card, even with your ear right next to it, at full load, with fans low. Probably the first graphics card I've had where the board itself is completely silent..
> 
> P.P.S. Individual game profile settings do absolutely nothing, in Radeon Settings.


----------



## pmc25

Quote:


> Originally Posted by *punchmonster*
> 
> Everything you mentioned should be working fine and no one else has reported any of the issues you mention. How did you do a "clean" install? DDU doesn't work for these. Use regular uninstall + official AMD clean utility.
> 
> Also HBM voltage does nothing.


I did the latter, then the former when it didn't work. Done it 3-4 times now. Same behaviour every time.

W7 x64, coming from a Fury Nano. I know someone with exactly the same issues. The only differences are that he's on W10 x64 and came from a Fury X ... also, his Vega is an 'LE', but that only means his shroud is different from mine.


----------



## punchmonster

Which driver are you using? And beta or final? Have you tried a different driver? Also are you specifically using the Vega package? Or just generic Radeon driver?
Quote:


> Originally Posted by *pmc25*
> 
> I did the latter, then the former when it didn't work. Done it 3-4 times now. Same behaviour every time.
> 
> W7 x64, coming from a Fury Nano. I know someone with exactly the same issues. Only difference is he's on W10 x64 and came from a Fury X ... also, his Vega is an 'LE' ... but that only means his shroud is different to mine.


----------



## pmc25

Quote:


> Originally Posted by *punchmonster*
> 
> Which driver are you using? And beta or final? Have you tried a different driver? Also are you specifically using the Vega package? Or just generic Radeon driver?


17.8.1 WHQL from AMD's site, after filtering to Vega / RX Vega / W7 64. I haven't tried the launch drivers as I want voltage control etc.

Friend said he was using 17.8.1 WHQL. Dunno where from.


----------



## pmc25

I used CCleaner registry cleaner after removing the installation, then reinstalled.

Radeon Settings / Wattman were then behaving a bit better... still no HBCC toggle, still no game settings being applied, still generally buggy and crap.

*However, if you have Afterburner installed, opening it even ONCE will reduce Wattman to a pool of excrement from which it can't be resurrected* (factory reset, restart, even uninstalling Afterburner: none of it works). You then have to remove the drivers again, clean again, then reinstall.

*PSA: Uninstall Afterburner if you have a Vega. Until AMD get a handle on their drivers and Afterburner is updated, don't touch it.*


----------



## SystemTech

Can you please add me to the list of owners:


----------



## pmc25

Early opinion, but I think the better Vega 64 bins may at some point massacre 1080 Tis in some games, *IF* the memory bandwidth issues are driver/firmware related to at least some degree.

The reason for such optimism?

I'm still dropping it, but so far I'm down to 890mV GPU core voltage @ 1630MHz with only a few very minor fluctuations, and a max temp of 66°C on 100% fans (which can obviously be reduced) at +15% power limit (again, could be dropped, as I've only been testing +15%). 1000MHz HBM2 (haven't tested anything else) at 1.1V (dunno if voltage settings do anything for memory).

Testing on Crazy presets @1920x1080 on Ashes: Escalation (Vulkan) in game benchmark. Zero instability whatsoever thus far.

Looks like I may get it to 850mV or below, unless a sudden wall is hit.

This bodes EXCEEDINGLY well as far as overclocking potential is concerned, especially as I have an EK block waiting to go on.

Ambient air temperature is ~25C.


----------



## pmc25

Scrub that, I think Wattman is still doing silly things.

HWInfo is reporting the GPU core voltage as locked at 1.356V, and the HBM2 voltage at load (it drops at idle) as around 1.05V. I'm guessing they've mislabeled them, and HBM2 voltage is actually core voltage. My guess is that Wattman doesn't apply voltages lower than 1050mV or 1000mV. Difficult to tell, as I think modern graphics cards have LLC.

Anyway, at load, HWInfo is showing around 210-220W power consumption.

Wattman will (pretend to) apply any core voltage as low as 800mV (and it's stable there); anything below that and it will scrub it back to the last valid value. If they aren't applying anything lower than 1000-1050mV, then why allow setting 800mV?! So silly.

Does anyone know if WattTool is able to apply lower core voltage, or is this driver / firmware level gimping?


----------



## pmc25

Update, WattMan is a total joke.

AMD really need to pull themselves together.

It appears that core voltage and HBM voltage are the wrong way round in WattMan!

It makes sense. HBM voltage supposedly does nothing yet, and the GPU core reads as 1.356V permanently in HWInfo. However, when I change the HBM voltage in Wattman, GPU Memory Voltage goes up/down by the same amount!

So, folks, if you want to use Wattman (don't!), you need to remember to use HBM voltage to undervolt your GPU core.

They shouldn't have released this latest driver if it was this dysfunctional. Better to just not allow undervolting.


----------



## The EX1

Quote:


> Originally Posted by *pmc25*
> 
> It makes sense. HBM voltage is supposed not to do anything yet. GPU core reads as 1.356V permanently in HWInfo. However when I change HBM voltage in Wattman, GPU Memory Voltage goes up / down the same amount!


You mean the GPU core voltage goes up/down, right?


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> Scrub that, I think Wattman is still doing silly things.
> 
> HWInfo is reporting GPU core voltage as locked at 1.356V and HBM2 voltage at load (it drops at idle) of around 1.05V. I'm guessing they've mislabeled them, and HBM2 Voltage is actually core voltage. My guess is that WattMan either doesn't apply voltages lower than 1050mV or 1000mV. Difficult to tell as I think modern graphics cards have LLC.
> 
> Anyway, at load, HWInfo is showing around 210-220W power consumption.
> 
> Wattman will (pretend) to apply any core voltage as low as 800mV (and it's stable there), anything below and it will scrub it back to the last valid value. If they aren't applying lower than 1000mV - 1050mV, then why have it set at 800mV??!?!? So silly.
> 
> Does anyone know if WattTool is able to apply lower core voltage, or is this driver / firmware level gimping?


Dude. You're ranting.

HWInfo is not reading the info correctly. 1.35V is actually the HBM voltage, not the core.

Wattman is working fine for most of us, within limits.


----------



## pmc25

Quote:


> Originally Posted by *The EX1*
> 
> You mean the GPU core voltage goes up/down, right?


Quote:


> Originally Posted by *kundica*
> 
> Dude. You're ranting.
> 
> HWInfo is not reading info correctly. 1.35 is actually the HBM voltage not core.
> 
> Wattman is working fine for most of us, within limits.


Read what I actually wrote.

Wattman has GPU core voltage and HBM voltage the wrong way round.

I have to change 'HBM Voltage' (actually GPU core voltage) to change the GPU core voltage; it's labelled the wrong way round in HWInfo too.

They have mislabeled their sensors ... it's total facepalm.


----------



## pillowsack

I got curious and did a complete driver wipe. I was running the Valley benchmark and lowered the GPU core voltage to the absolute lowest it would go. It didn't crash... The coil whine sounded the same, too.

Maybe it will be a while until we can actually do something with these cards...


----------



## theBee2112

Thank you to everyone in the last 67 pages for killing a whole 8 hour work day. I read every post and every link that I thought was relevant.

I've had a XFX Vega 64 for 3 days now, and this is what I've determined...

The drivers are garbage. I still can't even install the new 17.8.1 package; the installer fails with a vague error message. No blue screen or anything.
Core and HBM clocks are not stable out of the box. Probably thermals.
I'm using the beta mining drivers (17.30 or something). They're the only ones I could get working with Vega.
OC controls are all there in Wattman and AB (beta), but they don't do anything when applied.
The card is as loud and hot as a hairdryer when mining.
Hashrates drop from 37, to 34, to 30, almost like it's in steps. Probably thermals on the HBM.
Will update with a more specific error once I get home and start tweaking more, but any suggestions on the driver install?

EDIT: Forgot to mention that I've tried 430W, 500W, and 900W PSUs. Contrary to what many here have said about PSUs, I've noticed no change switching between the three.

Edited for spelling and PSU choice.


----------



## pmc25

@pillowsack

Hmmmmmm. Sounds like you have the same 'feature' as me.

Install the latest HWInfo64. If it reads your GPU core as ~1.35V and the HBM voltage as lower, then you have the same problem with Wattman. The sensors are mislabeled, and you need to change the HBM voltage to undervolt the GPU core. It will allow you to set down to 800mV, but it won't apply anything lower than ~1000mV.

Try that.


----------



## pillowsack

Thank you, I'll give it a go right now.

EDIT:



So this is what I normally run my Vega at. Judging by that and what I've explained, is that similar to your situation? Max temp has been 44°C before my pump spooled up.


----------



## Elmy

Just got my EK waterblock on Monday. Idles at 22°C. Runs pretty damn good.


----------



## GAN77

Quote:


> Originally Posted by *Elmy*
> 
> Just got my EK waterblock on Monday. Idle 22c. Runs pretty damn good.


Load temp in °C?


----------



## pmc25

I've sort of figured out what's going on with Wattman.

It doesn't reduce the GPU core voltage at all unless you reduce the HBM voltage value. It won't reduce it past around 0.98-1V though, unless you then reduce the P6 and P7 GPU core voltages. At the moment I've got all three values set to 850mV. At these values, GPU memory voltage (actually GPU core voltage) in HWInfo reads mostly between 890mV and 930mV, with 900-910 being the most common value under load.

Under full load, with the stock cooler at 100%, and locked to 1630Mhz / 1100Mhz, it is hitting max temperatures in Ashes Escalation 1920x1080 Crazy presets of 51C.

Wattman is a total joke, and the drivers can charitably be described as alpha, but it seems that with a few tweaks this is actually the coolest, most efficient stock-performance card ever released, in contrast to the hottest and most power-hungry that initial reviews made it out to be.

FYI, peak additive GPU core power and GPU memory power values are ~192W under full load. I'm told that HWInfo is fairly accurate for peak figures, even if the readings jump around all over the place.

It's not applying values below 900-950mV ... either it's 950mV and Vdroop is pushing it towards 900mV, or it's applying ~900mV and LLC is causing overshoot.
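For what it's worth, the Vdroop/LLC behaviour described here is just the usual VRM load-line relation, sketched below under the assumption that Vega's VRM follows a conventional load line (this is my gloss, not from the post; $R_{\text{LL}}$ is the load-line resistance):

```latex
V_{\text{core}} \approx V_{\text{set}} - I_{\text{load}} \cdot R_{\text{LL}}
```

A 950mV setpoint drooping to ~900mV at ~100A would correspond to an effective $R_{\text{LL}}$ of roughly 0.5m$\Omega$; a stronger LLC setting shrinks $R_{\text{LL}}$, which is also what can push the voltage above the setpoint momentarily when the load releases.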


----------



## punchmonster

Your information has zero bearing on any of us, as our adjustments are working fine, albeit with a voltage floor for the GPU.
Quote:


> Originally Posted by *pmc25*
> 
> I've sort of figured out what's going on with Wattman.
> 
> It doesn't reduce GPU core voltage at all unless you reduce the HBM voltage value. It won't reduce it past around 0.98-1V though, unless you then reduce the P6 and P7 GPU core voltages. At the moment I've got all 3 values set to 850mV. At these values, GPU memory voltage (actually GPU core voltage) in HWInfo is reading mostly between 890mV and 930mV, with 900-910 being the value most of the time under load.
> 
> Under full load, with the stock cooler at 100%, and locked to 1630Mhz / 1100Mhz, it is hitting max temperatures in Ashes Escalation 1920x1080 Crazy presets of 51C.
> 
> Wattman is a total joke, and the drivers can kindly be described as alpha, but it seems that this is actually the coolest, most efficient stock performance card ever released, with a few tweaks. In contrast to being the hottest, most power hungry as initial reviews seemed to say.
> 
> FYI, peak additive GPU core power and GPU memory power values are ~192W under full load. I'm told that HWInfo is fairly accurate for peak figures, even if the readings jump around all over the place.
> 
> It's not applying values below 900-950mV ... either it's 950mV and Vdroop is pushing it towards 900mV, or it's applying ~900mV and LLC is causing overshoot.


----------



## rv8000

Quote:


> Originally Posted by *pmc25*
> 
> I've sort of figured out what's going on with Wattman.
> 
> It doesn't reduce GPU core voltage at all unless you reduce the HBM voltage value. It won't reduce it past around 0.98-1V though, unless you then reduce the P6 and P7 GPU core voltages. At the moment I've got all 3 values set to 850mV. At these values, GPU memory voltage (actually GPU core voltage) in HWInfo is reading mostly between 890mV and 930mV, with 900-910 being the value most of the time under load.
> 
> Under full load, with the stock cooler at 100%, and locked to 1630Mhz / 1100Mhz, it is hitting max temperatures in Ashes Escalation 1920x1080 Crazy presets of 51C.
> 
> Wattman is a total joke, and the drivers can kindly be described as alpha, but it seems that this is actually the coolest, most efficient stock performance card ever released, with a few tweaks. In contrast to being the hottest, most power hungry as initial reviews seemed to say.
> 
> FYI, peak additive GPU core power and GPU memory power values are ~192W under full load. I'm told that HWInfo is fairly accurate for peak figures, even if the readings jump around all over the place.
> 
> It's not applying values below 900-950mV ... either it's 950mV and Vdroop is pushing it towards 900mV, or it's applying ~900mV and LLC is causing overshoot.


AFAIK WattTool 0.92 should function with Vega.

Honestly, I haven't had any severe issues with Wattman: adjusting the power limit works, setting DPM6/7 clocks works, and I've gotten my card to sustain 1630 with fan tweaks for benching purposes. The only thing I haven't double-checked is whether actual voltage adjustments are being applied properly.


----------



## pillowsack

Quote:


> Originally Posted by *pmc25*
> 
> I've sort of figured out what's going on with Wattman.
> 
> It doesn't reduce GPU core voltage at all unless you reduce the HBM voltage value. It won't reduce it past around 0.98-1V though, unless you then reduce the P6 and P7 GPU core voltages. At the moment I've got all 3 values set to 850mV. At these values, GPU memory voltage (actually GPU core voltage) in HWInfo is reading mostly between 890mV and 930mV, with 900-910 being the value most of the time under load.
> 
> Under full load, with the stock cooler at 100%, and locked to 1630Mhz / 1100Mhz, it is hitting max temperatures in Ashes Escalation 1920x1080 Crazy presets of 51C.
> 
> Wattman is a total joke, and the drivers can kindly be described as alpha, but it seems that this is actually the coolest, most efficient stock performance card ever released, with a few tweaks. In contrast to being the hottest, most power hungry as initial reviews seemed to say.
> 
> FYI, peak additive GPU core power and GPU memory power values are ~192W under full load. I'm told that HWInfo is fairly accurate for peak figures, even if the readings jump around all over the place.
> 
> It's not applying values below 900-950mV ... either it's 950mV and Vdroop is pushing it towards 900mV, or it's applying ~900mV and LLC is causing overshoot.


I'm wondering if undervolting would provide any less coil whine, or if overclocking past 1650/1100 would help. I'm OK with this card consuming 500W.

Could you post a photo of your Wattman?


----------



## rv8000

Quote:


> Originally Posted by *pillowsack*
> 
> I'm wondering if undervolting would provide any less coil whine. Or if overclocking past 1650/1100 would help. I'm ok with this card consuming 500W


It's possible. Coil whine occurs because the electrical signal passing through the chokes oscillates at or around the chokes' resonant frequency. The more you change voltage and total power usage, the closer to or further from that resonant frequency you'll move. This is also why adding mass to the chokes can lessen coil whine; it's normally done by putting some kind of glue (I think; I don't remember the mod exactly) on the chokes.
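For background (my own gloss, not from the post above): for an idealized LC filter stage, a choke of inductance $L$ with filter capacitance $C$ has an electrical resonance at

```latex
f_0 = \frac{1}{2\pi\sqrt{LC}}
```

The whine itself is the windings physically vibrating, and it becomes audible when load transients excite the chokes at frequencies inside the human hearing band (roughly 20 Hz to 20 kHz). That is why changing voltage and power draw shifts the whine in and out of audibility, and why adding damping mass to the windings helps.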


----------



## Newbie2009

Quote:


> Originally Posted by *pillowsack*
> 
> I'm wondering if undervolting would provide any less coil whine. Or if overclocking past 1650/1100 would help. I'm ok with this card consuming 500W
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Could you post a photo of your wattman?


FYI, I've been using WattTool to set GPU volts. I had HBM volts & clock in Wattman set to manual and adjusted the HBM frequency (speed) without touching the wrongly reported HBM voltage figure in Wattman.

And it worked fine.


----------



## pmc25

Quote:


> Originally Posted by *punchmonster*
> 
> Your information have 0 bearing on any of us as our adjustments are working fine albeit there being a voltage floor for GPU.


Complete nonsense. Two people contacted me via the OCUK forums to ask about what I was doing, as they had similar problems, and pillowsack here seems to have similar issues.

You think one person is having these issues when Wattman is this horrendously buggy? You're delusional.

@pillowsack, I suggest you do the following.

P6, P7, HBM voltage all at 950mV.

Power limit +20%

HBM clock 1095

Fans at 4900 minimum and 4900 target (you don't want temperature throttling when testing)

Core clock at 0% (i.e. it will boost to 1630).

If you get a crash in game benchmarks, first increase P6, P7 and HBM voltage to 975mV. If that doesn't work, 1000mV. Still not? Then try HBM speed at 1075MHz, then 1050, then 1000MHz. Find what's stable.

N.B. You will almost certainly have to increase the core voltage if you want to overclock it significantly, though. I think I will probably have to go to around 1015mV at 1700MHz.
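The stepping procedure above can be sketched as a simple search. This is a minimal illustration only: `run_benchmark` is a hypothetical stand-in for applying the settings (via Wattman or WattTool) and running a game benchmark; nothing here talks to the driver.

```python
# Stability search as suggested above: hold the full 1095 MHz HBM clock
# and raise the P6/P7/HBM voltage first, then back the HBM clock off
# if no voltage step is stable.

VOLTAGE_STEPS_MV = (950, 975, 1000)             # P6 / P7 / HBM voltage
HBM_CLOCK_STEPS_MHZ = (1095, 1075, 1050, 1000)  # HBM clock fallbacks

def find_stable(run_benchmark):
    """Return the first stable (voltage_mv, hbm_mhz) pair, or None.

    run_benchmark(voltage_mv, hbm_mhz) -> bool is assumed to apply the
    settings and report whether the benchmark finished without a crash.
    """
    # Step 1: raise voltage at the full HBM clock.
    for mv in VOLTAGE_STEPS_MV:
        if run_benchmark(mv, HBM_CLOCK_STEPS_MHZ[0]):
            return mv, HBM_CLOCK_STEPS_MHZ[0]
    # Step 2: at the highest voltage tried, lower the HBM clock.
    for mhz in HBM_CLOCK_STEPS_MHZ[1:]:
        if run_benchmark(VOLTAGE_STEPS_MV[-1], mhz):
            return VOLTAGE_STEPS_MV[-1], mhz
    return None  # nothing stable; needs more voltage or lower clocks
```

The ordering encodes the advice in the post: voltage bumps are cheap to try first, and only then is HBM speed sacrificed.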


----------



## Dolk

Anyone have a LG 29UM67 monitor with Vega 64?

I just updated to 17.8.1 and now my Vega card cannot see the FreeSync option with my monitor. The driver says my monitor is not supported. Anyone else have this issue with the latest driver?


----------



## pmc25

Have you installed the monitor's driver, or running generic windows one?


----------



## Dolk

Good suggestion, but W10 overrides any attempt to install the LG monitor drivers. It says I have the best drivers installed.


----------



## Echoa

Quote:


> Originally Posted by *Dolk*
> 
> Good suggestion but W10 overrides any attempt of me installing the LG monitor drivers. It says that I have the best installed.


Disable automatic driver installation and remove the preexisting drivers, then install the ones you want. The instructions might be a little off as I haven't done it in a while, but that's roughly how I installed my Dell monitor drivers.


----------



## Peter Nixeus

Could I be added to the list too? Thank-you!



The picture is the same one as in my own twitter post:


__ https://twitter.com/i/web/status/899795905058021376


----------



## Dolk

Quote:


> Originally Posted by *Echoa*
> 
> Disable Auto driver installation and remove the preexisting drivers. Then install the ones you want. The instructions might be a little off as I haven't done it in a while but that's right how I installed my Dell monitor drivers


No luck, just says the same thing. I doubt it has to do with monitor drivers. I had FreeSync in the beta drivers, but the WHQL dropped support. I know there was a bunch of fixes around FreeSync with WHQL so I wonder if they broke something along the way.


----------



## Elmy

37°C is the highest I've seen so far today.


----------



## Kokin

Quote:


> Originally Posted by *Elmy*
> 
> 37c is the highest Ive seen so far today.


That's a pretty low load temp.


----------



## Chaoz

Mine's at 38-39°C at 100% load. The damn card dropped 45°C with the waterblock compared to the air cooler.


----------



## dagget3450

New owners added to list:

Energylite RX Vega 64 (AC) (Now waterblocked)
Paxi RX Vega 64 (AC) (soon to be watercooled)
Kelen RX Vega 64 x2
Neutronman RX Vega 64 (AC) (soon to be watercooled)
Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
aliquis RX Vega 64 (WC)
sneida RX Vega 64 (WC)
DampMonkey RX Vega 64 (AC) (now watercooled)
CaptainTom RX Vega 64
Pleskac RX Vega 64 (FE)
rv8000 RX Vega 64 (AC)
beatfried RX Vega 64
SystemTech RX Vega 64 (AC)
theBee2112 RX Vega 64 (AC)
Elmy RX Vega 64 (AC) (watercooled now)
pmc25 RX Vega 64
Dolk RX Vega 64
Peter Nixeus RX Vega 64 (AC)

Welcome to the club! Maybe if I can get a little more time I'll make the OP nicer, with more info, especially for RX Vega. I added some people based on their posts, so the info may not all be valid. Please post in here or PM me if your information is incorrect and I'll update it as needed. Maybe we can get a little banner to add to signatures for the club soon.


----------



## Peter Nixeus

Quote:


> Originally Posted by *Dolk*
> 
> No luck, just says the same thing. I doubt it has to do with monitor drivers. I had FreeSync in the beta drivers, but the WHQL dropped support. I know there was a bunch of fixes around FreeSync with WHQL so I wonder if they broke something along the way.


1) Use DDU to uninstall the drivers; in DDU there is an option to stop Windows from auto-installing or auto-updating to the AMD WHQL drivers.

2) Check if VSR is ON; make sure it is OFF for your monitor, as it may affect FreeSync ON/OFF.

3) If you first hot-plugged your monitor, FreeSync may not be detected. Try turning OFF your monitor and PC, unplugging the power supply from your monitor and disconnecting the monitor from the GPU, then waiting a few seconds. Connect everything back and turn it all back ON. It should detect FreeSync if it is a FreeSync certified monitor.


----------



## dagget3450

Also forgot to add

IvantheDugtrio Vega FE (AC)

to the Vega FE club. Done.


----------



## Peter Nixeus

Quote:


> Originally Posted by *dagget3450*
> 
> New owners added to list:
> 
> Energylite RX Vega 64 (AC) (Now waterblocked)
> Paxi RX Vega 64 (AC) (soon to be watercooled)
> Kelen RX Vega 64 x2
> Neutronman RX Vega 64 (AC) (soon to be watercooled)
> Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
> aliquis RX Vega 64 (WC)
> sneida RX Vega 64 (WC)
> DampMonkey RX Vega 64 (AC) (now watercooled)
> CaptainTom RX Vega 64
> Pleskac RX Vega 64 (FE)
> rv8000 RX Vega 64 (AC)
> beatfried RX Vega 64
> SysemTech RX Vega 64 (AC)
> theBee2112 RX Vega 64 (AC)
> Elmy RX Vega 64 (AC) (watercooled now)
> pmc25 RX Vega 64
> Dolk RX Vega 64
> Peter Nixeus RX Vega 64 (AC)
> 
> Welcome to the club! Maybe if i can get a little more time ill make the OP nicer and more info esp for RX Vega. I added some people based on their posts and it may not have all valid info. Please post in here or PM me if your information is incorrect and ill update it as needed. Maybe we can get a little banner also to add to signatures for club soon.


Thank-you!


----------



## CaptainTom

Quote:


> Originally Posted by *dagget3450*
> 
> New owners added to list:
> 
> Energylite RX Vega 64 (AC) (Now waterblocked)
> Paxi RX Vega 64 (AC) (soon to be watercooled)
> Kelen RX Vega 64 x2
> Neutronman RX Vega 64 (AC) (soon to be watercooled)
> Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
> aliquis RX Vega 64 (WC)
> sneida RX Vega 64 (WC)
> DampMonkey RX Vega 64 (AC) (now watercooled)
> CaptainTom RX Vega 64
> Pleskac RX Vega 64 (FE)
> rv8000 RX Vega 64 (AC)
> beatfried RX Vega 64
> SysemTech RX Vega 64 (AC)
> theBee2112 RX Vega 64 (AC)
> Elmy RX Vega 64 (AC) (watercooled now)
> pmc25 RX Vega 64
> Dolk RX Vega 64
> Peter Nixeus RX Vega 64 (AC)
> 
> Welcome to the club! Maybe if i can get a little more time ill make the OP nicer and more info esp for RX Vega. I added some people based on their posts and it may not have all valid info. Please post in here or PM me if your information is incorrect and ill update it as needed. Maybe we can get a little banner also to add to signatures for club soon.


Idk if it matters, but I actually have 2x Vega 64s. One is XFX (in my mining rig), one is Sapphire (gaming desktop). The Sapphire definitely seems to overclock better, but I know these are all reference cards.


----------



## punchmonster

new beta mining driver:
https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-Beta-for-Blockchain-Compute-Release-Notes.aspx


----------



## Chaoz

Quote:


> Originally Posted by *dagget3450*
> 
> New owners added to list:
> 
> Energylite RX Vega 64 (AC) (Now waterblocked)
> Paxi RX Vega 64 (AC) (soon to be watercooled)
> Kelen RX Vega 64 x2
> Neutronman RX Vega 64 (AC) (soon to be watercooled)
> Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
> aliquis RX Vega 64 (WC)
> sneida RX Vega 64 (WC)
> DampMonkey RX Vega 64 (AC) (now watercooled)
> CaptainTom RX Vega 64
> Pleskac RX Vega 64 (FE)
> rv8000 RX Vega 64 (AC)
> beatfried RX Vega 64
> SysemTech RX Vega 64 (AC)
> theBee2112 RX Vega 64 (AC)
> Elmy RX Vega 64 (AC) (watercooled now)
> pmc25 RX Vega 64
> Dolk RX Vega 64
> Peter Nixeus RX Vega 64 (AC)
> 
> Welcome to the club! Maybe if i can get a little more time ill make the OP nicer and more info esp for RX Vega. I added some people based on their posts and it may not have all valid info. Please post in here or PM me if your information is incorrect and ill update it as needed. Maybe we can get a little banner also to add to signatures for club soon.


Could you update the info for mine? I added a waterblock to mine today.


----------



## steadly2004

Quote:


> Originally Posted by *punchmonster*
> 
> new beta mining driver:
> https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-Beta-for-Blockchain-Compute-Release-Notes.aspx


Does the mining driver break gaming?


----------



## DrZine

Quote:


> Originally Posted by *punchmonster*
> 
> new beta mining driver:
> https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-Beta-for-Blockchain-Compute-Release-Notes.aspx


Do these blockchain drivers do anything that the PowerPlay mod can't? Basically, does the driver change more than just clocks and volts?


----------



## Nuke33

Undervolting below 1050mV on Vega 64 needs PowerPlay mods (which hellm provided). No need to fiddle with the HBM voltage.
I posted my findings in this thread --> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26301292

If anyone needs my modified ones I can post them here.


----------



## pmc25

Quote:


> Originally Posted by *Nuke33*
> 
> Undervolting below 1050mv on Vega64 needs powerplay mods (which hellm provided). No need to fiddle with HBM voltage.
> I posted my finding in this thread --> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26301292
> 
> If anyone needs my modified ones I can post them here.


Simpler just to set 950mV on P6, P7 and HBM Voltage and it will target 950mV to the GPU core.

If you want to go below 950mV then yours is better (if it goes lower).

Still feel like the driver / Wattman team must have been as high as kites when they were doing this update.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> I wouldn't touch those AUO manufactured VA panels. They're ridden with motion, response, banding, colour shift and viewing angle issues.
> 
> I've tried most of them, and the Sharp made panel in the old Eizo Foris 120hz (240hz with blank frame insertion).
> 
> I found all of them positively awful. The motion was so bad it made me feel cross eyed.
> 
> I now have 3x 144Hz Samsung VA panel monitors. None of the monitors are perfect, but the panels themselves are without question the best compromise between responsiveness and image quality on the market, until OLED finally arrives. 2 x 24" 1920x1080 in portrait, 1 x 32" 2560x1440 in landscape in the centre
> 
> Besides, the GSYNC 'tax' on that monitor is obscene. You can get the same panel without GSYNC for nearly $400 less.


The G'sync tax on the Agon is real for sure. I've had the Sammy CF791 for a couple of months now and have had very little trouble out of it... no dead pixels, and the contrast-shift complaints are a little overblown... but as your original post would imply... until Vega dropped I didn't have a video card.







And for most, "Vega" still hasn't dropped. So again, AMD tossing dollars chasing pennies...


----------



## Nuke33

Quote:


> Originally Posted by *DrZine*
> 
> Do these blockchain drivers do anything that the powerplay mod can't do? Basically does the driver change more that just clocks and volts?


I think the last mining driver fixed a bug that the RX 4xx/5xx series also suffered from, which was holding back mining performance.


----------



## Nuke33

Quote:


> Originally Posted by *pmc25*
> 
> Simpler just to set 950mV on P6, P7 and HBM Voltage and it will target 950mV to the GPU core.
> 
> If you want to go below 950mV then yours is better (if it goes lower).
> 
> Still feel like the driver / Wattman team must have been as high as kites when they were doing this update.


Yeah, okay, you can do that, but it's kind of quick 'n' dirty; I like clean









It does go lower; I checked consumption on the PCIe rails with my Corsair AX760i, which has built-in monitoring.
Unfortunately it does mess up the HBM clocks, which get stuck at 800MHz.
Until someone figures out how to circumvent that issue, it is not really useful to go below 950mV in my opinion.

Yeah AMDs Driver Team seems to really be under a lot of pressure. If I were them I would want to get high too


----------



## Soggysilicon

Quote:


> Originally Posted by *beatfried*
> 
> anybody else got really annoying noise from the card at high fps?
> 
> atm i'm happy I haven't mounted the waterblock already, so the fan can mask the noise a little.... :/


Good ole' "Coil Whine"

Happens when you're on a static screen that is being rendered in 3D (typical), with the inductors oscillating in the audible frequency range, 20Hz-22kHz. It's pretty obvious on my Vega as well, but I've also experienced it on heavily OC'd cards and other devices, so it's not something I can say is a Vega-exclusive problem.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> As I posted on OCUK, I now have my card:
> 
> Wattman / drivers are in a total state.
> 
> Did a clean install of latest drivers.
> 
> No HBCC on/off toggle (assume they removed it with latest driver update?).




Dunno about that, updated my drivers just the other morning to fix the freq. bug and HBCC is still kickin' chicken. Windows 10 Pro C, if that makes any difference?
Quote:


> HBM overclocking is just a jumbled mess of numbers in Wattman (-52900Mhz and always reverts to this as soon as you press Enter).
> Power limit I can set between a brilliant -1%, 0% and +1%.
> Enabling GPU voltage control automatically makes HBM voltage manual too, and sets it to the same value as what you set for GPU. Just total facepalm.
> Unfortunately you can't adjust ANYTHING on power saver / balanced / turbo - not even fan profile. Since you can't adjust other values properly in custom mode currently, it's a mess.


Sounds all broken. I would definitely remove all the drivers in safe mode and reinstall... a prayer to the silicon gods may help too!
Quote:


> Latest beta of Afterburner can't do anything yet. Not even fan profiles.


I gave up on good ole Afterburner with Vega for the moment. I liked it for RTSS output to my G15 LCD, but that's out the window for now... RTSS has been sorta meh with Ryzen and these latest Crimson drivers, as has Wallpaper Engine...
Quote:


> As of now, you need to use external tools.
> 
> Two positives.
> 
> 1) My card seems to be a decent bin, as it's happy on 1040mV running AoTS built in benchmark on Crazy presets. But I need to increase the power limit (can't in Wattman) to stop it throttling. As there was zero instability and it was still momentarily peaking at 1600Mhz, I suspect it might be happy as low as 1000mV. Even at 1040mV on the HBM it's still fine - which it forces if I put it on the GPU.
> 
> 2) The default fan settings on Balanced are heavily geared towards silence. On all stock settings it thermal throttles as it gets above 80C. At max fan (which is very loud but unlike most cards has almost zero motor / bearing whine) and undervolted it remains below 60C, but is throttling because it needs higher power limit. 75% fan which if wearing headphones you certainly won't hear over in-game audio gets to 70-72C. Turbo mode heavily thermal throttles as it stays at 85C because, again, fan profile remains below 50%.
> 
> P.S. Absolutely ZERO buzz / hiss / whine from the card, even with your ear right next to it, at full load, with fans low. Probably the first graphics card I've had where the board itself is completely silent..
> 
> P.P.S. Individual game profile settings do absolutely nothing, in Radeon Settings.


Per-game settings in Radeon Settings have been garbage for years... every other driver rev breaks them... at least P-states seem to have been mostly fixed; the 280X was screwed on that for many years.

The stock air cooler does the job, no real complaints; as far as coil whine goes, you must be the lucky one, 'cause mine whines on static 3D-rendered screens as much as any card I have ever owned.

My display ports on this card are also very, very meh. When I first installed the card, all I would get was a brief green streak across the screen, and nothing. Eventually I had to use an HDMI cable, get into Windows, scrub the drivers, reinstall, reboot, and then... finally the DP came on. Sometimes rebooting drops the DP, so now I keep both DP and HDMI connected in case I need to get into the BIOS... JANKY... other than that, it's a great card on my custom loop... as a standalone card on air, ehhh, I dunno.


----------



## theBee2112

@dagget3450
Thanks! If you do see this post, I will be ordering an EK-Nickel Waterblock for my Vega 64. Seems like a no brainer at this point lol. If you get around to updating the OP you could add that.

Some updates:
- Installed new drivers with DDU.
- Merged mining ETH 39.1 MH/s and DCR 1160 MH/s on air
- I can only adjust GPU MHz and voltage in WattTool 9.2; nothing else works in it. (Using 1401MHz and 1000mV)
- I am using Wattman to adjust power limit, fan speed, and memory. (+35% power, 80% fan, 1100MHz mem)
- Power draw is around 300 watts

Not favorable for those looking for a quiet system. If you care about noise and power draw, though, I've managed 32-34 MH/s @ 150 watts, 50% fan, with a core clock of 852MHz @ 800mV and a -50% power limit.
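For anyone weighing the two profiles, here's a rough MH-per-watt comparison in Python. The 33 is just the midpoint of my 32-34 MH/s range, and these are ballpark wall numbers from my own rig, not precise measurements:

```python
# Ballpark mining-efficiency comparison of the two profiles above.
configs = {
    "1401 MHz @ 1000 mV": (39.1, 300),  # (ETH MH/s, watts)
    "852 MHz @ 800 mV":   (33.0, 150),  # midpoint of the 32-34 MH/s range
}

for name, (mhs, watts) in configs.items():
    print(f"{name}: {mhs / watts:.3f} MH/s per watt")
```

So the low-power profile is roughly 70% more efficient per watt, before you even count the fan noise.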

If anyone has updates regarding Linux drivers, or how to mod the BIOS, please let me know. I know there's been some discussion about it needing to be signed, and that it self-checks on boot. If there's a way around that, I'd love to hear it!


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Update, WattMan is a total joke.
> 
> AMD really need to pull themselves together.
> 
> It appears that core voltage and HBM voltage are the wrong way round in WattMan!
> 
> It makes sense. HBM voltage is supposed not to do anything yet. GPU core reads as 1.356V permanently in HWInfo. However when I change HBM voltage in Wattman, GPU Memory Voltage goes up / down the same amount!
> 
> So, folks, if you want to use Wattman (don't!), you need to remember to use HBM voltage to undervolt your GPU core.
> 
> They shouldn't have released this latest driver if it was this dysfunctional. Better to just not allow undervolting.


For whatever reason... on this topic, Wattman loses my custom settings at every boot/restart, along with the W10 notification on startup...


----------



## kundica

My LC version arrived today. Cost a pretty penny, but it saved me from dropping $500+ to custom-loop my system for the air version I initially bought. Not sure how to deal with the hose, and I had to remove my fancy PCIe cable extensions because they were messing with power to the card.

Ran a few benches: +50% power limit, HBM @ 1075 for both. Temps maxed at 58. No coil whine.

Time Spy - https://www.3dmark.com/3dm/21741838
FS Extreme - https://www.3dmark.com/3dm/21741925

Vanity shots:


Spoiler: Warning: Spoiler!









This is what my system looked like when I built it:


Spoiler: Warning: Spoiler!


----------



## Soggysilicon

Quote:


> Originally Posted by *Dolk*
> 
> Good suggestion but W10 overrides any attempt of me installing the LG monitor drivers. It says that I have the best installed.


I had to refresh my monitor driver on my Sammy... winblows still says that garbage too, but I assure you it's referencing the installed driver.


----------



## Soggysilicon

Quote:


> Originally Posted by *dagget3450*
> 
> New owners added to list:
> 
> Energylite RX Vega 64 (AC) (Now waterblocked)
> Paxi RX Vega 64 (AC) (soon to be watercooled)
> Kelen RX Vega 64 x2
> Neutronman RX Vega 64 (AC) (soon to be watercooled)
> Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
> aliquis RX Vega 64 (WC)
> sneida RX Vega 64 (WC)
> DampMonkey RX Vega 64 (AC) (now watercooled)
> CaptainTom RX Vega 64
> Pleskac RX Vega 64 (FE)
> rv8000 RX Vega 64 (AC)
> beatfried RX Vega 64
> SysemTech RX Vega 64 (AC)
> theBee2112 RX Vega 64 (AC)
> Elmy RX Vega 64 (AC) (watercooled now)
> pmc25 RX Vega 64
> Dolk RX Vega 64
> Peter Nixeus RX Vega 64 (AC)
> 
> Welcome to the club! Maybe if i can get a little more time ill make the OP nicer and more info esp for RX Vega. I added some people based on their posts and it may not have all valid info. Please post in here or PM me if your information is incorrect and ill update it as needed. Maybe we can get a little banner also to add to signatures for club soon.


Blocked up as of Monday night!

EK-FC-VEGA Nickel



EK should get a medal for this metal, the card should come like this out of the box.


----------



## Soggysilicon

Quote:


> Originally Posted by *CaptainTom*
> 
> Idk if it matters, but I actually have 2 x Vega 64's. One is XFX (in my mining rig), one is SAPPHIRE (gaming desktop). SAPPHIRE definitely seems to overclock better, but ik these are all reference cards.


Better wafer for the batch they were supplied? Mine has OC'd at least as well as any I have seen people report, without a hitch... mind you, going further is "insta crash" in the benchies... especially Heaven. But yeah... the Sapphire card... in a bag, in a box, with a really meager manual... no frills, no cube, no sticker... no disc... it's the most no-frills un-boxing of the year


----------



## theBee2112

Quote:


> Originally Posted by *Soggysilicon*
> 
> But yeah... the Sapphire card... in a bag, in a box, with a really meager manual... no frills, no cube, no sticker... no disc... its the most no frills un-boxing of the year


I agree that the un-boxing was terrible with no frills, but I received a driver disc and 2 PCIe power adapters in my XFX box. No manual though. Sadly enough, I was really hoping for a sticker.


----------



## CaptainTom

Quote:


> Originally Posted by *theBee2112*
> 
> I agree that the un-boxing was terrible with no frills, but i received a driver disk and 2 PCIE power adapters in my XFX box. No manual though. Sadly enough, I was really hoping for a sticker.


Haha another +1 to that! XFX definitely had nicer packaging and accessories (And they were still somewhat subpar).


----------



## CaptainTom

Quote:


> Originally Posted by *Soggysilicon*
> 
> The stock air cooler does the job, no real complaints; as far as coil whine you must be the lucky one cause mine whines on static 3d rendered screens as much as any card I have ever owned.
> 
> My display ports on this card are also very very meh, when I first installed the card all I would get is a brief green streak across the screen, and nothing. Eventually had to use an HDMI cable, get it into windows, scrub drivers, reinstall, boot, and then... finally the DP came on. Sometimes rebooting drops the DP so now I have both DP and HDMI in case I need into the BIOS... JANKY... other than that, its a great card on my custom loop... as a stand alone card on air, ehhh I dunno.


1. Yeah this is by far the nicest blower cooler I have ever handled. It works very well.

2. I have definitely been having some issues unplugging and plugging monitors back in. Weird flashing, and occasionally a blue screen.


----------



## Soggysilicon

Quote:


> Originally Posted by *Soggysilicon*
> 
> I had to refresh my monitor driver on my sammy... winblows still says that garbage too, but I assure you its referencing the installed driver.


...and ignore this garbage; after rummaging around, it turns out my driver was replaced with a generic PnP after installing Vega... the Sammy website is worthless, so I needed to find the disc that came with the monitor to get the correct .inf file... neat... worth looking into.


----------



## Soggysilicon

Quote:


> Originally Posted by *theBee2112*
> 
> I agree that the un-boxing was terrible with no frills, but i received a driver disk and 2 PCIE power adapters in my XFX box. No manual though. Sadly enough, I was really hoping for a sticker.


Considering all the issues I have had with the janky display ports on Vega, I thought for sure Sapphire had shipped me a $600 deuce in a box with a little note saying "get wreckt".


----------



## steadly2004

Quote:


> Originally Posted by *Soggysilicon*
> 
> Considering all the issues I have had to the display ports on Vega being janky, I thought for sure Sapphire had shipped me a $600 deuce in a box with a little note saying "get wreckt".


Hahaha. No issues with DP here. Worked immediately, no problems at all.


----------



## Soggysilicon

Quote:


> Originally Posted by *steadly2004*
> 
> Hahaha. No issues with DP here. Worked immediately and no problems here.


Well, I did manage to get the proper driver installed again, and for "whatever reason" that seems to have stopped my Wattman custom settings from reverting??? So maybe, just maybe, the DP issue has gone away... only one way to find out... weee... troubleshooting.


----------



## The EX1

Hopefully this will help someone else. I had issues with core voltage applying in Wattman: the actual draw from the card wasn't dropping as much as it should have been. I figured the card was still cycling through the other power states we can't adjust (P1-P5). To fix this, I left-clicked where it says Power State 6 in Wattman and then set it to Min value. That really helped to stabilize the power draw and clock. It also dropped the power from stock to about 245 watts consistently. Big improvement in temps and stability.


----------



## Dolk

Quote:


> Originally Posted by *Peter Nixeus*
> 
> 1) Use DDU to uninstall the drivers, in DDU there is an option to disable Windows OS from auto installing or auto updating to the AMD WHQL Drivers.
> 
> 2) Check if VSR is ON, make sure it is OFF for your monitor as it may affect FreeSync ON/OFF.
> 
> 3) If you first hot plugged your monitor, it may cause FreeSync not being able to detect. Try turning OFF your Monitor and PC. Unplug the Power Supply from your monitor and disconnect the monitor from the GPU- wait a few seconds. Connect everything back and turn everything back ON. Should detect FreeSync if it is a FreeSync Certified monitor.


Just doing number (3) fixed my issue. Both beta and WHQL driver now works and finds my FreeSync.

Thanks


----------



## theBee2112

Quote:


> Originally Posted by *punchmonster*
> 
> new beta mining driver:
> https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-Beta-for-Blockchain-Compute-Release-Notes.aspx



Now we're getting somewhere. But 350 watts is a bit much. Should be better than 43 MH/s on water, and more support is coming! But it broke WattTool; nothing works in it now.

@Soggysilicon - sucks that you're having DP issues. I had that happen on a RX 580, and never got it as it should be. If you ever resolve that, please post your findings.


----------



## Soggysilicon

Quote:


> Originally Posted by *theBee2112*
> 
> 
> Now we're getting somewhere. But 350 watts is a bit much. Should be better than 43MH/s on water and more support coming! But it broke WattTool, nothing works in it now.
> 
> @Soggysilicon - sucks that you're having DP issues. I had that happen on a RX 580, and never got it as it should be. If you ever resolve that, please post your findings.


Some posts back a user was discussing winblows 10 not accepting the driver for their monitor. So I looked at mine and lo 'n' behold, my driver install had not taken either. (Changing the vidya card clearly changed the device ID assignment, causing Windows to roll the generic PnP.)

Scrounging up the disc from the manufacturer, I found the .inf, installed it manually, then updated. This seems to work much better (for whatever reason), and I shed the HDMI cable (this solution had the negative trade-off of Windows reporting 2 display devices). The last couple of boots DP has been functional in POST, albeit slower than I would like before the monitor gets going. Mind you, the card Vega is replacing (280X) did not have FreeSync, never mind enhanced FS for 48-100Hz, so there could be a communication/protocol latency there.

My issue may not be Vega specifically, but it could be related, as this Asus Crosshair VI Hero has had issues out of the box with the first PCIe slot and video in general. So is it the CF791 monitor, is it Vega, or is it the Asus mobo at 3200?

In the early days of Ryzen I would sometimes have to pull the video card, power up the mobo, power down, reseat, and try again... I know some folks bricked this mobo when power states went low on the Asus C6.

At the end of the day, it "works"; just cross your fingers and hope for some back-end driver support going forward.


----------



## CaptainTom

Some new discoveries and questions from tests I have run:

1) *1800/1105 clocks were totally stable at stock voltages*. However, the performance uplift wasn't that big, so I have to once again say that Vega's performance is very much tied to temperature, because undervolted stock clocks bring almost the same uplift.

2) Speaking of temperature - *Do NOT forget that the reported temperature is only the CORE temp!* The latest version of HW64 reports HBM temp correctly (At least with 17.8.1 drivers), and I had to set the temp target to 68c for the HBM to _reliably_ stay below 75c. Using mining drivers I get 42 MH/s ETH + 1400 SC with 1537/1105 clocks.

3) *Anyone getting an odd (and random) mega performance drop?* I have been getting this while playing BF1 almost once a day. My framerate goes from 144fps to ~27fps and the only fix is a reboot. Temperatures go down and the clocks are reported as the same, but ALL performance is terrible. Even my Ethereum mining performance dropped, to an insane 7 MH/s!

4) *Anyone else notice a performance drop from using multiple monitors?* (Obviously while only using 1 for gaming) I haven't actually tried gaming yet, but I have noticed my Ethereum hashrate increases ~0.3 MH/s per monitor I disable (I have 4 monitors total).


----------



## rdr09

I love and hate miners at same time.


----------



## pmc25

Quote:


> Originally Posted by *CaptainTom*
> 
> Some new discoveries and questions from tests I have run:
> 
> 1) *1800/1105 clocks were totally stable at stock voltages*. However the performance uplift wasn't that big, and thus I have to once again say that Vega's performance is very much so tied to temperature because undervolted stock clocks almost bring the same uplift.
> 
> 2) Speaking of temperature - *Do NOT forget that the reported temperature is only the CORE temp!* The latest version of HW64 reports HBM temp correctly (At least with 17.8.1 drivers), and I had to set the temp target to 68c for the HBM to _reliably_ stay below 75c. Using mining drivers I get 42 MH/s ETH + 1400 SC with 1537/1105 clocks.


1) I don't think it's worth OC'ing the core at the moment. Even with 10MHz increments past 1630, I have to step up the voltage quite quickly. I suspect that with driver and firmware revisions it will get a bit more efficient past stock clocks. As of now, when it will do a totally solid 1630MHz at 950mV, it's not worth the gains, as the cards are so memory-bandwidth limited in most games/applications.

Once the drivers / firmware / WattMan / everything get better, and hopefully they manage to reduce the memory bottleneck a bit, then core overclocking should become much more attractive.

2) I'm pretty sure these will do well over 50 MH/s once the mining ISA is fully implemented, and probably at less than Polaris wattage. If I can run my card at 950mV at 1630MHz under full gaming load, then running it at 800mV / 900MHz should be perfectly possible once the controls are fixed. However, this obviously has dire implications for pricing.....
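Back-of-envelope on that 800mV / 900MHz point: if the first-order CMOS rule that dynamic power scales with f × V² roughly holds for the core (a big simplification; it ignores leakage and the HBM entirely), the mining profile should land around 40% of the 1630MHz / 950mV gaming profile's core power:

```python
def rel_dynamic_power(f_mhz, v_mv, f_ref_mhz, v_ref_mv):
    """First-order CMOS estimate: dynamic power scales with f * V^2."""
    return (f_mhz / f_ref_mhz) * (v_mv / v_ref_mv) ** 2

# Proposed mining profile vs the 1630MHz @ 950mV gaming profile above.
ratio = rel_dynamic_power(900, 800, 1630, 950)
print(f"~{ratio:.0%} of the gaming profile's dynamic core power")
```

That works out to just under 40%, which is why the mining-efficiency implications for pricing are so grim.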


----------



## aliquis

At least in this thread, it seems everyone (myself included) has been able to undervolt their card significantly. I undervolted from 1.2V to 1040mV, although I slightly lowered my max core clock to 1652MHz too.

It was similar with the RX 480 in a sense: the card could run very efficiently up to a certain frequency (probably different for each chip). I ran my RX 480 @ 1300MHz @ 1060mV, and that was already slightly above the performance/power-consumption sweet spot for my card.

I think I understand why AMD pushes their cards so hard beyond the efficiency sweet spot: they probably feel they have to beat the GTX 1060 (with the RX 480/580) and the GTX 1080 (with Vega 64) now, so they raise the stock core clocks and voltage to make the card perform at least slightly better against the competing Nvidia product, but efficiency/power consumption shoots through the roof.

It was similar with Fiji too: the Nano (same die as the Fury / Fury X) showed the chip could run very efficiently, and many Fury owners tended to undervolt their cards, but efficiency is surely not what those cards will be remembered for.

Probably the same story with Vega. With the undervolting and HBM2 overclock so far, I am overall satisfied with the card (the Vega launch was a mess, it is obviously too expensive now, etc.), but the card will probably be remembered as a power-hungry, underperforming failure


----------



## theBee2112

Quote:


> Originally Posted by *rdr09*
> 
> I love and hate miners at same time.


Oh yeah eh!? Well I love and hate gamers! They have so much more leisure time than I









At the end of the day, it doesn't matter what we use our cards for, everyone here on OCN just wants to push the limits of this product to suit their use case!


----------



## pmatio

Quote:


> Originally Posted by *dagget3450*
> 
> New owners added to list:
> 
> Energylite RX Vega 64 (AC) (Now waterblocked)
> Paxi RX Vega 64 (AC) (soon to be watercooled)
> Kelen RX Vega 64 x2
> Neutronman RX Vega 64 (AC) (soon to be watercooled)
> Soggysilicon RX Vega 64 (AC) (soon to be watercooled)
> aliquis RX Vega 64 (WC)
> sneida RX Vega 64 (WC)
> DampMonkey RX Vega 64 (AC) (now watercooled)
> CaptainTom RX Vega 64
> Pleskac RX Vega 64 (FE)
> rv8000 RX Vega 64 (AC)
> beatfried RX Vega 64
> SysemTech RX Vega 64 (AC)
> theBee2112 RX Vega 64 (AC)
> Elmy RX Vega 64 (AC) (watercooled now)
> pmc25 RX Vega 64
> Dolk RX Vega 64
> Peter Nixeus RX Vega 64 (AC)
> 
> Welcome to the club! Maybe if i can get a little more time ill make the OP nicer and more info esp for RX Vega. I added some people based on their posts and it may not have all valid info. Please post in here or PM me if your information is incorrect and ill update it as needed. Maybe we can get a little banner also to add to signatures for club soon.


Pls add me to the list:

RX Vega 64 (WC), soon to be fully watercooled; pictures are in my rig segment.


----------



## The EX1

Quote:


> Originally Posted by *theBee2112*
> 
> Now we're getting somewhere. But 350 watts is a bit much. Should be better than 43MH/s on water and more support coming! But it broke WattTool, nothing works in it now.
> 
> @Soggysilicon - sucks that you're having DP issues. I had that happen on a RX 580, and never got it as it should be. If you ever resolve that, please post your findings.


Is this an updated version of the original beta blockchain driver? When was this released?


----------



## kundica

A couple more benches with my LC version. Stock clock, +50% power limit and HBM @ 1100. My card isn't taking too kindly to an OC on the core; might be a power issue or just its limit. Still need more testing.

Time Spy - 8195, GS 8075 - https://www.3dmark.com/3dm/21749482 - Not sure why I'm getting the time measurement issue.

FS Extreme - 11093, GS 12082 - https://www.3dmark.com/3dm/21749600


----------



## Papa Emeritus

Finally got a block on the way, ocuk got some in stock today


----------



## CaptainTom

Quote:


> Originally Posted by *pmc25*
> 
> 1) I don't think it's worth OC'ing the core at the moment. Even with 10Mhz increments past 1630, I have to step up voltage quite quickly. I suspect with driver and firmware revisions it will begin to get a bit more efficient past stock clocks. As of now, when it will do 950mV for totally solid 1630Mhz, it's not worth the gains, as the cards are so memory bandwidth limited in most games / applications.
> 
> Once the drivers / firmware / WattMan / everything get better, and hopefully they manage to reduce the memory bottleneck a bit, then core overclocking should become much more attractive.
> 
> 2) I'm pretty sure these will do well over 50MH/s once the mining ISA is fully implemented, and that probably at less than Polaris wattage. If I can run my card at 950mV at 1630Mhz full gaming load, then running it at 800mV / 900Mhz should be perfectly possible once controls are fixed. However this has obviously dire implications for pricing .....


1) It's worth keeping the core somewhere between 1500-1700MHz, but yes, past that I have seen almost no gains. Although I hesitate to confirm anything because of how buggy core overclocking is right now.

2) 900MHz @ 800mV is what I am running on my mining rig. No drop in hashrate from switching it down from 1700MHz, lol. I suspect I will need to clock it higher once more optimizations are implemented, though, and with the lower temperatures winter will bring me.

Indeed, I expect to get to 45 MH/s in a month or so if I can keep the HBM below 60C; right now 42 is what I am getting (at night). Further optimizations could bring 50+.


----------



## pmc25

Quote:


> Originally Posted by *CaptainTom*
> 
> 2) 900MHz @800mV is what I am running on my mining rig. No drop in hashrate from switching it down from 1700MHz lol. I suspect I will need to clock it higher though once some more optimizations are implemented, and from the lower timings winter will bring me
> 
> 
> 
> 
> 
> 
> 
> . Indeed I expect to get to 45 MH/s if I can keep the HBM below 60c in a month or so, right now 42 is what i am getting (at night). Further optimizations could bring 50+.


I don't think people (including miners) have realised just how heavily these can be undervolted. Once they do, if past price fluctuations say anything, these will be over $1000 ...


----------



## CaptainTom

Quote:


> Originally Posted by *pmc25*
> 
> I don't think people (including miners) have realised just how heavily these can be undervolted. Once they do, if past price fluctuations say anything, these will be over $1000 ...


I don't think people in general realize what Vega really is capable of. I put most of that blame on AMD, but some of it also goes to the people that actually believe silly things like this chart:

https://img.purch.com/rx-vega-mining/o/aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9WL0wvNzAyMjczL29yaWdpbmFsL21pbmluZy5wbmc=

^I have no clue how they even got the hashrate that low lol. Out of the box it gets 34-37 in power-saving mode...


----------



## sugarhell

https://forum.beyond3d.com/posts/1997699/

Primitive shaders are still not enabled on the drivers.


----------



## 113802

Quote:


> Originally Posted by *sugarhell*
> 
> https://forum.beyond3d.com/posts/1997699/
> 
> Primitive shaders are still not enabled on the drivers.


I'm curious how this card would have worked out if Nvidia's software team had worked on it. Releasing unfinished drivers for the card, along with multiple features not working properly...
Quote:


> Q : 6) Many argue that vega is just a refined polaris gpu, how would you respond to this ?
> 
> Raja Koduri - Chief Architect Radeon Technologies Group - Reddit AMA
> 
> A: My software team wishes this was true


----------



## sugarhell

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I'm curious how this card would of worked out if nVidia's software team worked on this. Release unfinished drivers for the card along with multiple features not working properly.


Reminds me of the 7970 release drivers, 11.12 RC11. Oh boy, that was really a joke.


----------



## pmc25

Quote:


> Originally Posted by *CaptainTom*
> 
> I don't think people in general realize what Vega really is capable of. I put most of that blame on AMD, but some of it also goes to the people that actually believe silly things like this chart:
> 
> https://img.purch.com/rx-vega-mining/o/aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9WL0wvNzAyMjczL29yaWdpbmFsL21pbmluZy5wbmc=
> 
> ^I have no clue how they even got the hashrate that low lol. Out of the box it gets 34-37 in power-saving mode...


If reviews are done ENTIRELY by the German office of Tom's Hardware, then you can put some stock in them. If not, they're hopeless.


----------



## CaptainTom

Quote:


> Originally Posted by *sugarhell*
> 
> https://forum.beyond3d.com/posts/1997699/
> 
> Primitive shaders are still not enabled on the drivers.


Nice link. Although it wasn't abundantly clear if even the "auto" mode is turned on yet.

It seemed like most people there thought it was obvious that "Native Vega" mode is already turned on, but I really highly doubt it. That would mean double the geometry IPC of Fiji, and if it had that, we would see AT LEAST 15-20% higher clock-for-clock performance. Heck, even Polaris was shown to have 5-15% higher IPC than Tonga/Fiji!


----------



## pmc25

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I'm curious how this card would of worked out if nVidia's software team worked on this. Release unfinished drivers for the card along with multiple features not working properly.


Not very well, most likely, but probably better given the much larger team sizes and budgets. Neither NVIDIA nor AMD have tried implementing so many new leading-edge features in one step before.


----------



## theBee2112

Quote:


> Originally Posted by *The EX1*
> 
> Is this a updated version of the original beta blockchain driver? When was this released?


It was released sometime yesterday. I had it installed by 8PM EST, with great results so far. Above the photo I uploaded is a link that I quoted from someone else. All of the information is in there.

I now have an EK waterblock on its way! Should be here late next week.

I've had luck getting my voltage down to 800mV as well, but didn't leave it there long, assuming it would be unstable over the long term. It's been stable at 1000mV for a day now, so it's time to drop it some more.
I'll be reporting more mining hash rates and benchmarks later today, along with Firestrike and Unigine, to see if the blockchain drivers affect gaming.


----------



## PontiacGTX

Quote:


> Originally Posted by *sugarhell*
> 
> https://forum.beyond3d.com/posts/1997699/
> 
> Primitive shaders are still not enabled on the drivers.


Will it make a noticeable difference if it is enabled?


----------



## Nuke33

Quote:


> Originally Posted by *PontiacGTX*
> 
> Will it make a noticeable difference if it is enabled?


If it works right it should at least double geometry processing performance, which is not so good right now.


----------



## sugarhell

Quote:


> Originally Posted by *Nuke33*
> 
> If it works right it should at least double geometry processing performance, which is not so good right now.


The peak performance of this pipeline is around 4x. Even if they hit 2x, it should be enough for Vega.

The best part is this one: deferred vertex attribute computation. That is a huge win, but it requires engine modifications and a lot of research.


----------



## PontiacGTX

That might explain why the List 100% culled benchmark results are the same as a Fury X at the same core clock?
http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/#a2

Or is that due to having the same number of tessellator units?


----------



## Newbie2009

Excuse my ignorance: does the primitive shader thing have anything to do with the rendering mode copied from Nvidia's Maxwell?


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> Excuse my ignorance: does the primitive shader thing have anything to do with the rendering mode copied from Nvidia's Maxwell?


I think that might be the Draw Stream Binning Rasterizer / tile-based rendering, and it is not a copy, just a similar approach.
Quote:


> The company describes this rasterizer as an essentially tile-based approach to rendering that lets the GPU more efficiently shade pixels, especially those with extremely complex depth buffers. The fundamental idea of this rasterizer is to perform a fetch for overlapping primitives only once, and to shade those primitives only once. This approach is claimed to both improve performance and save power, and the company says it's especially well-suited to performing deferred rendering.


----------



## sugarhell

Quote:


> Originally Posted by *Newbie2009*
> 
> Excuse my ignorance: does the primitive shader thing have anything to do with the rendering mode copied from Nvidia's Maxwell?


Tile-based rendering started mostly as a mobile rendering technique. The first to do tile-based rendering was [email protected] Nvidia managed a semi-tile-based approach with Maxwell.

Primitive shaders are a completely different thing though. There are two important stages that happen on the GPU in order to draw a pixel on the screen: the vertex shader, which passes the vertex data from the mesh model to the shader, and the fragment shader, which maps the texture data onto the UV data of the vertices. Primitive shaders combine all these steps, including the tessellation/geometry stage, with the advantage that they can cull much faster the triangles that overdraw on the screen or face away from the camera.

That means they can cull triangles before writing them to the buffer. In general, with this pipeline Vega has higher peak geometry performance than a 1080 Ti.


----------



## pmc25

Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads (it doesn't happen after the player has initially loaded)?

It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.

The player will begin loading, then everything freezes, the HBM clock momentarily boosts from 167MHz to full clock, and it unfreezes; as soon as it's unfrozen it drops back to 167MHz.

Drivers are REALLY bad.

Using Opera / Chrome in W7 x64.


----------



## sugarhell

Quote:


> Originally Posted by *pmc25*
> 
> Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads (it doesn't happen after the player has initially loaded)?
> 
> It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.
> 
> The player will begin loading, then everything will freeze, the HBM clock momentarily boosts from 167Mhz to full clock, and it unfreezes, then as soon as it's unfrozen it goes back to 167Mhz.
> 
> Drivers are REALLY bad.


I don't have any problem with my FE


----------



## PontiacGTX

Quote:


> Originally Posted by *sugarhell*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Tile-based rendering started mostly as a mobile rendering technique. The first to do tile-based rendering was [email protected] Nvidia managed a semi-tile-based approach with Maxwell.
> 
> Primitive shaders are a completely different thing though. There are two important stages that happen on the GPU in order to draw a pixel on the screen: the vertex shader, which passes the vertex data from the mesh model to the shader, and the fragment shader, which maps the texture data onto the UV data of the vertices. Primitive shaders combine all these steps, including the tessellation/geometry stage, with the advantage that they can cull much faster the triangles that overdraw on the screen or face away from the camera.
> 
> 
> 
> That means they can cull triangles before writing them to the buffer. In general, with this pipeline Vega has higher peak geometry performance than a 1080 Ti.


Not yet: http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/#a2


----------



## sugarhell

Quote:


> Originally Posted by *PontiacGTX*
> 
> Not Yet http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/#a2


Yes I already know that


----------



## PontiacGTX

Quote:


> Originally Posted by *sugarhell*
> 
> Yes I already know that


but they get similar List triangle throughput when culled.

One curious thing: when nothing is culled, both Strip and List show AMD with higher throughput. Why?

http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/#a2


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads (it doesn't happen after the player has initially loaded)?
> 
> It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.
> 
> The player will begin loading, then everything will freeze, the HBM clock momentarily boosts from 167Mhz to full clock, and it unfreezes, then as soon as it's unfrozen it goes back to 167Mhz.
> 
> Drivers are REALLY bad.
> 
> Using Opera / Chrome in W7 x64.


I don't have that issue but you can try disabling hardware acceleration in Chrome.


----------



## sugarhell

Quote:


> Originally Posted by *PontiacGTX*
> 
> but they get similar List triangle throughput when culled.
> 
> One curious thing: when nothing is culled, both Strip and List show AMD with higher throughput. Why?
> 
> http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/#a2


IF something doesn't need to get culled early, then Vega is fast. If 100% of it needs to be culled, the 4 geometry engines become a bottleneck, as they can only discard 4 triangles per clock.
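To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch. The 4 triangles/clock discard rate is the figure from this post; the ~1.6GHz clock and the ~4x primitive-shader peak are assumptions based on numbers mentioned earlier in the thread, not official specs:

```python
# Rough peak triangle-discard throughput, in billions of triangles
# per second (Gtris/s). All inputs are thread-sourced estimates.
def peak_cull_rate(tris_per_clock, core_clock_mhz):
    """Peak culled triangles per second, in Gtris/s."""
    return tris_per_clock * core_clock_mhz * 1e6 / 1e9

fixed_function = peak_cull_rate(4, 1600)   # 4 geometry engines, 1 tri/clock each
primitive_peak = peak_cull_rate(16, 1600)  # hypothetical ~4x primitive-shader path
```

At 1600MHz that works out to 6.4 Gtris/s for the fixed-function path versus 25.6 Gtris/s for the claimed primitive-shader peak, which is why a 100% culled test can bottleneck on the geometry engines.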


----------



## 113802

Quote:


> Originally Posted by *pmc25*
> 
> Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads (it doesn't happen after the player has initially loaded)?
> 
> It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.
> 
> The player will begin loading, then everything will freeze, the HBM clock momentarily boosts from 167Mhz to full clock, and it unfreezes, then as soon as it's unfrozen it goes back to 167Mhz.
> 
> Drivers are REALLY bad.
> 
> Using Opera / Chrome in W7 x64.


Yes, on every single computer when surfing this website (even on a Mac). I fixed it by using an ad blocker (which I don't normally promote) until they take the Flash-based ad off this site.

Open task manager when this occurs and you will see CPU usage spike up. It's not a GPU issue.

Edit: After re-reading your post sounds like a different issue. Videos play fine for me.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> I don't have that issue but you can try disabling hardware acceleration in Chrome.


Thanks, yes. Have had to.

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Yes, on every single computer when surfing this website (even on a Mac). I fixed it by using an ad blocker (which I don't normally promote) until they take the Flash-based ad off this site.
> 
> Open task manager when this occurs and you will see CPU usage spike up. It's not a GPU issue.
> 
> Edit: After re-reading your post sounds like a different issue. Videos play fine for me.


Indeed, yes, I think it's down to the aggressive down-clocking of the HBM2 at idle. I've had to disable hardware acceleration.


----------



## PontiacGTX

Quote:


> Originally Posted by *sugarhell*
> 
> IF something doesn't need to get culled early, then Vega is fast. If 100% of it needs to be culled, the 4 geometry engines become a bottleneck, as they can only discard 4 triangles per clock.


Then is that why DICE is using compute to better process geometry?


----------



## Whatisthisfor

I am curious if anybody has managed to switch the fan of the Vega AIO with a quieter one?


----------



## Peter Nixeus

Quote:


> Originally Posted by *Whatisthisfor*
> 
> I am curious if anybody has managed to switch the fan of the Vega AIO with a quieter one?


I read somewhere that someone switched their fan to a Noctua = quieter and good airflow.


----------



## Whatisthisfor

Quote:


> Originally Posted by *Peter Nixeus*
> 
> I read somewhere someone switched their fan to a noctua fan = quieter and good air flow.


That's good to know. If only the Noctua fans came in better-looking colors.


----------



## aliquis

I have switched my RX Vega 64 Liquid fan with a Noctua one, and all I can say is that it was a lot more trouble/work than I expected...

To open the shroud you need a Torx screwdriver (Torx T5 is the correct size; I am from Europe and we use metric sizes, so I don't know if it is labelled the same in the US).



When the shroud is removed, the 4-pin fan connector is at the very back end of the PCB (opposite the side where the power cables are connected).
It is not a normal 4-pin connector; if you want to plug in a normal 4-pin PWM fan, you need an adapter (mini 4-pin, VGA 4-pin, GPU 4-pin, or whatever it is called).
The fan cable needs to be long enough to reach from the 4-pin header on the PCB to the opening on the other side where the other cables exit (it's the only open place where you can route the cable out).
Also, the inside is rather narrow, but there is of course enough space for the fan cable.



This is the 4-pin case fan to 4-pin mini adapter.




The Noctua fan: I currently use the NF-P12 PWM (it is completely silent; even maxed out it is barely audible), but I think I will buy the NF-F12 PWM too (it has way more static pressure), though I am not sure if that one will be as silent.

Exchanging the fan will be a lot faster now, because I used an extension cable and I can plug in the new fan outside of the card.


----------



## theBee2112

Quote:


> Originally Posted by *aliquis*
> 
> I have switched my RX Vega 64 Liquid fan with a Noctua one, and all I can say is that it was a lot more trouble/work than I expected...


Is it not just 4 screws?

I can't tell from pics, but is the power to the fan just a cable dangling off or no?


----------



## aliquis

The fan is connected to the 4-pin header on the graphics card PCB. (In that one pic it is not yet connected; I wasn't finished with the cable management inside the shroud yet.) You can read and modify PWM in Wattman etc.; it all works.


----------



## theBee2112

Yeah, I just read your updated post. That's nuts, all that work just to change the fan. That's kind of why I do custom water cooling; AIOs tend to be harder to work on if you want to modify something.

I'd prefer to have a 4-pin just dangle off the fan, as opposed to some overcomplicated way to hide it and plug it into the PCB. Thanks for the pics! +REP


----------



## kundica

Quote:


> Originally Posted by *aliquis*
> 
> I have switched my RX Vega 64 Liquid fan with a Noctua one, and all I can say is that it was a lot more trouble/work than I expected...
> 
> To open the shroud you need a Torx screwdriver (Torx T5 is the correct size; I am from Europe and we use metric sizes, so I don't know if it is labelled the same in the US).
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> When the shroud is removed, the 4-pin fan connector is at the very back end of the PCB (opposite the side where the power cables are connected).
> It is not a normal 4-pin connector; if you want to plug in a normal 4-pin PWM fan, you need an adapter (mini 4-pin, VGA 4-pin, GPU 4-pin, or whatever it is called).
> The fan cable needs to be long enough to reach from the 4-pin header on the PCB to the opening on the other side where the other cables exit (it's the only open place where you can route the cable out).
> Also, the inside is rather narrow, but there is of course enough space for the fan cable.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> This is the 4-pin case fan to 4-pin mini adapter.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> The Noctua fan: I currently use the NF-P12 PWM (it is completely silent; even maxed out it is barely audible), but I think I will buy the NF-F12 PWM too (it has way more static pressure), though I am not sure if that one will be as silent.
> 
> Exchanging the fan will be a lot faster now, because I used an extension cable and I can plug in the new fan outside of the card.
> 
> 
> Spoiler: Warning: Spoiler!


This is great, thanks for posting! Ordering a fan now.


----------



## 113802

Quote:


> Originally Posted by *aliquis*
> 
> The fan is connected to the 4-pin header on the graphics card PCB. (In that one pic it is not yet connected; I wasn't finished with the cable management inside the shroud yet.) You can read and modify PWM in Wattman etc.; it all works.


Thanks for the photos. I ordered a 2150 RPM PWM Gentle Typhoon. This is the right adapter, correct?

https://www.newegg.com/Product/Product.aspx?Item=9SIA9F943Y0385&cm_re=4_pin_male_to_4_mini_gpu-_-9SIA9F943Y0385-_-Product


----------



## aliquis

Looks right (I am 99.9% sure that adapter is correct).

*Edit: Also, something I noticed and want to mention for those who want to exchange the fan on the RX Vega Liquid:*

If you buy a fan with a different RPM range than the stock one, the new fan's RPM is still reported correctly, but the RPM you set in Wattman will no longer match what the fan actually runs at. My new fan has roughly a 300 RPM minimum and a 1300 RPM maximum, completely different from the stock fan's roughly 600 RPM minimum and 3000 RPM maximum. It seems the RPM value you set in Wattman is actually a PWM percentage displayed as RPM: I have set the minimum in Wattman to 2800 (which equals about 1000 RPM on my new fan) and the target to 3300 (which equals my fan's maximum, about 1300 RPM).

That also means the preset power profiles (Balanced, Power Save and Turbo) are kind of useless to me now, because their fan settings are off. This is not a big deal to me, because I use the custom profile all the time anyway, but you should know it in advance. It is only really an issue if your fan has a completely different RPM range than stock; if the ranges are similar, everything should be more or less the same.
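If Wattman's fan "RPM" value really is a PWM percentage scaled against the stock fan's range, as described above, you can roughly translate a Wattman setpoint into the speed a replacement fan will actually run at. A minimal sketch, assuming a linear duty-to-RPM response; the stock and replacement ranges are the approximate figures from this post, not measured specs:

```python
def actual_rpm(wattman_rpm, stock_max=3000, fan_min=300, fan_max=1300):
    """Estimate the real speed of a replacement fan, assuming the value
    Wattman calls 'RPM' is really a PWM duty cycle expressed against the
    stock fan's maximum. All range values are rough thread estimates."""
    duty = min(wattman_rpm / stock_max, 1.0)     # fraction of full PWM
    return fan_min + duty * (fan_max - fan_min)  # linear duty -> RPM assumption
```

Under this linear model a Wattman setting of 2800 comes out around 1230 RPM; the roughly 1000 RPM observed above suggests the real fan response is not perfectly linear, so treat this only as a starting point.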


----------



## Nevril

Guys, does anybody else have the following problem, or know how to solve it?

Testing with Superposition

Stock/Balanced, my Vega 64 Air runs at a clock speed of 1401 with peaks to 1536 (P5/P6 states, I'm guessing).

I'm trying to undervolt it, but as soon as I go even from 1200 to 1199 on P7, the clock rates during the benchmark drop to 1138/1200.
I've already reinstalled everything, cleaned up with DDU, and even tried the registry mod, but nothing seems to be working.

Every AMD preset works quite fine, though. Any ideas? (Drivers 17.8.1, obviously)


----------



## n3squ1ck

Hey Guys,

anyone has the same issue:

I undervolted my 64 to 1110mV @ 1538.
I set my fan curve in Wattman to 400-3500, but it mostly caps at 2400 and throttles, even though it could easily do 1538 @ 3500. It just doesn't.

Any tips?


----------



## feedthenoob

I have the exact same problem. Undervolting causes throttling.


----------



## steadly2004

Quote:


> Originally Posted by *Nevril*
> 
> Guys does anybody else have the following problem/know how to solve it?
> 
> Testing with Superposition
> 
> Stock/Balanced my Vega 64 Air runs with clock speed of 1401 with peaks to 1536. (P5/P6 states I'm guessing)
> 
> I'm trying to undervolt it, but as soon as I go even from 1200 to 1199 on P7, the clock rates during the benchmark drop to 1138/1200.
> I've already reinstalled everything, cleaned up with DDU, and even tried the registry mod, but nothing seems to be working.
> 
> Every AMD preset works quite fine instead. Any ideas? (Drivers 17.8.1 obviously)


Have you tried going all the way down to 1150 or 1100? Also have you tried increasing the power limit?
Quote:


> Originally Posted by *n3squ1ck*
> 
> Hey Guys,
> 
> anyone has the same issue:
> 
> I undervolted my 64 to 1110mV @ 1538.
> I set my fan curve in Wattman to 400-3500, but it mostly caps at 2400 and throttles, even though it could easily do 1538 @ 3500. It just doesn't.
> 
> Any tips?


Why not try a more aggressive curve, like 1000-3500? Also, what's your target temp? Maybe try lowering that? I think if you set a target temp and a target fan speed, it should attempt to reach that temp within that fan speed, but I'm not 100% sure.

What's your power limit at?


----------



## Nevril

Quote:


> Originally Posted by *steadly2004*
> 
> Have you tried going all the way down to 1150 or 1100? Also have you tried increasing the power limit?


Yes, I tried both, all the way down to 1050 and with +10% power limit. Power consumption goes up, but clocks and benchmark score go down.

EDIT: I tried with 3DMark because it was a little easier to manage changes.
As soon as I go to Custom and apply manual voltage control, even without any change to the voltages, my frequency drops to 1138/1200.


----------



## n3squ1ck

Hey,

My power limit is +50%.
I'll try a more aggressive curve. I set it to 1500 -> 3500 and for now it looks better; I'll see if it stays stable.









Edit: nope, doesn't help. I tried 1700 -> 3500, but it's the same: the fan goes down to 2400, hits 85 degrees and thermal throttles.


----------



## Newbie2009

AMD Radeon Crimson ReLive Drivers 17.8.2 Beta

https://www.techpowerup.com/download/amd-radeon-graphics-drivers/

*Support For*
F1 2017
Up to 4% performance improvement measured on Radeon RX Vega 64 graphics when compared to Radeon Software Crimson ReLive edition 17.8.1
PLAYERUNKNOWN'S BATTLEGROUNDS Early Access
Up to 18% performance improvement measured on Radeon RX Vega 64 graphics when compared to Radeon Software Crimson ReLive edition 17.8.1
Destiny 2 Beta
*Fixed Issues*

Display may blank or go black after install upgrade with Radeon RX Vega Series graphics products.
Random corruption may appear in Microsoft desktop productivity applications on Radeon RX Vega series graphics products.
The "Reset" option in Radeon Settings Gaming tab may enable the "HBCC Memory Segment" feature instead of setting it to the default disabled state.
Radeon WattMan may not reach applied overclock states on Radeon RX Vega series graphics.
Unable to create Eyefinity configurations through the Eyefinity Advanced Setup option.
*Known Issues*

Mouse stuttering may be observed on some Radeon RX graphics products when Radeon WattMan is open and running in the background or other third party GPU information polling apps are running in the background.
GPU Scaling may fail to work on some DirectX11 applications.
Windows Media Player may experience an application hang during video playback if Radeon ReLive is actively recording desktop.
Secondary displays may show corruption or green screen when the display/system enters sleep or hibernate with content playing.
After resuming from sleep and playing back video content the system may become unresponsive on Radeon RX Vega series graphics products.
Bezel compensation in mixed mode Eyefinity cannot be applied.
http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.8.2-Release-Notes.aspx


----------



## Dolk

Anyone else getting poor performance in Wolfenstein: The New Order?

I'm seeing between 45-60fps at max settings at 2560x1080. I'm keeping stock settings on my RX Vega until I get my waterblock. My GTX 1080 seemed to have no issue with this game, yet Vega is struggling to perform better than an RX 480.


----------



## n3squ1ck

Weird. For example in PUBG, which is ultra demanding, I get around 50-80 at 3440x1440.


----------



## Newbie2009

Quote:


> Originally Posted by *Dolk*
> 
> Anyone else getting poor performance in Wolfenstein: The New Order?
> 
> I'm seeing between 45-60fps at max settings at 2560x1080. I'm keeping stock settings on my RX Vega until I get my waterblock. My GTX 1080 seemed to have no issue with this game, yet Vega is struggling to perform better than an RX 480.


Just tested it there @ 1600p: very poor performance with v-sync on. Needs a driver update, I'd guess.

It seemed OK with Enhanced Sync on and in-game v-sync off, though, FYI.


----------



## 113802

Someone please overclock the core and give us an example! I'm stuck at work









Here are some signature badges

*Radeon Vega Frontier Edition Owner*
*Radeon RX Vega 64 XTX Owner*
*Radeon RX Vega 64 Owner*
*Radeon RX Vega 56 Owner*

Code:



Code:


[CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon Vega Frontier Edition Owner[/color][/B][/URL][/CENTER]

[CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 XTX Owner[/color][/B][/URL][/CENTER]

[CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 Owner[/color][/B][/URL][/CENTER]

[CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 56 Owner[/color][/B][/URL][/CENTER]


----------



## Whatisthisfor

Pump noise. I had that problem with my first RX Vega Liquid just days ago. I hope the sound of that first card was not normal; I considered it a defect, already RMA'd it, and ordered another one. You can hear the annoying sound in a short video I took before sending the card back to the vendor.

https://www.file-upload.net/download-12669615/Klappern.mp4.html


----------



## Whatisthisfor

Quote:


> Originally Posted by *aliquis*
> 
> I have swtiched my rx vega 64 liquid fan with a noctua one, and all i can say is that it was a lot more trouble/work than i expected...


Thank you for the detailed documentation you provided here. I'll soon get my AIO card and maybe change the fan too. I had hoped for an easier way to do it ;-)


----------



## CaptBhlavious

I have a Vega FE and have been going back and forth with installing the Vega FE drivers and the Vega 64 gaming drivers. The drawback of installing the gaming drivers is that I get no Wattman tab in the Radeon settings. Have any other FE owners figured out a way to solve this? I had been using WattTool to adjust power targets and max fan RPM, but I can't adjust HBM2 voltage or frequency using that tool.


----------



## hellm

it is not beta anymore

17.8.2


----------



## Newbie2009

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Someone please overclock the core and give us an example! I'm stuck at work
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here are some signature badges
> 
> *Radeon Vega Frontier Edition*
> *Radeon RX Vega 64 XTX Owner*
> *Radeon RX Vega 64 Owner*
> *Radeon RX Vega 56 Owner*
> 
> Code:
> 
> 
> 
> Code:
> 
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon Vega Frontier Edition[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 XTX Owner[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 Owner[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 56 Owner[/color][/B][/URL][/CENTER]


SOOooooo, these drivers have muddied overclocking for me even further.

My clocks have gone from 1630 stock to hovering around 1590, but my benchmark score is slightly better.

Upped the power limit to +50%, no change. Restored volts to stock, no change.

Bear in mind my P7 is 1630.

So now you have to overclock the card past 1630 to hit 1630; it will float around 1630, plus or minus a few MHz, and won't lock onto it.

I'm guessing the previous driver wasn't reading the correct core clock and this one is.

If you want to hit a core clock of 1630, you have to set a core target of 1650 or 1660.


----------



## punchmonster

Ordered a Morpheus II and 2 Arctic F12 rev. 2 fans. Once it arrives I'll do a run down of using it with my aircooled Vega64

hope it's worth it


----------



## 113802

Quote:


> Originally Posted by *Newbie2009*
> 
> SOOooooo, these drivers have muddied overclocking for me even further.
> 
> My clocks have gone from 1630 stock to hovering around 1590, but my benchmark score is slightly better.
> 
> Upped the power limit to +50%, no change. Restored volts to stock, no change.
> 
> Bear in mind my P7 is 1630.
> 
> So now you have to overclock the card past 1630 to hit 1630; it will float around 1630, plus or minus a few MHz, and won't lock onto it.
> 
> I'm guessing the previous driver wasn't reading the correct core clock and this one is.
> 
> If you want to hit a core clock of 1630, you have to set a core target of 1650 or 1660.


My core ran at 1640MHz for the entire Firestrike run with +50% power and 1105MHz HBM.

Scored a 26092 GPU score in Firestrike. I had to leave, so I couldn't overclock the core yet.


----------



## pillowsack

So what's the unofficial way of overclocking? I don't think watt tool was working for me. I feel like I have no control over the voltage....


----------



## pmc25

https://twitter.com/i/web/status/900865148633657345


----------



## theBee2112

*Some Benchmarks:*
*Driver: Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23*

*Benchmarks*
_Settings:_ Max fan, +50% power, 1090 mem, stock core, stock voltages. Nothing I tried would stick.
Superposition, 4K Optimized.

Heaven, 2560x1440 Ultra


In this last benchmark I *upped the core to 1887MHz* and kept all of the other settings the same. The core OC actually worked with these drivers!


*Mining*
_Settings:_ Max fan, +35% power, 1100 mem, stock core, stock voltages.

MAX Mh/s (with disregard for power draw)
ETH + DCR I'm getting a steady 43Mh/s and 1280Mh/s respectively, at 350 watts
ZEC I'm getting about 500 Sol/s, at 250 watts

Power efficiency:
_Settings:_ Max fan, +15% power, 1100 mem, stock core, stock voltages.
ETH + DCR I'm getting a steady 39Mh/s and 1180Mh/s respectively, at 175 watts
ZEC I'm getting about 450 Sol/s, at 125 watts
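For reference, dividing the reported ETH hashrates by the reported power draw gives the efficiency of the two profiles; a trivial sketch using only the numbers above:

```python
def efficiency(hashrate_mh, watts):
    """Mining efficiency in MH/s per watt."""
    return hashrate_mh / watts

max_mode = efficiency(43, 350)  # +35% power profile
eco_mode = efficiency(39, 175)  # +15% power profile
```

That comes out to roughly 0.12 MH/s per watt at +35% power versus roughly 0.22 MH/s per watt at +15%: nearly double the efficiency for about a 10% hashrate loss.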

*Findings:*

Upping the power limit above +40% will thermally throttle the card on air with max fan while mining. Best to keep it below that, but above +15%, because the mem clock will not stick with less.
Lots of tearing/stuttering while gaming with a mem clock of 1100MHz.
My Celeron works better than expected! Don't hate on it; it's just a temp system until my waterblock arrives. It will be paired with a 1700X, and I will rerun these tests to see if the CPU was a bottleneck lol.
Voltage control doesn't stay at what you set.
Oh, and the new 17.8.2 driver (non-beta) slightly hurts mining performance, but lowers power draw by about 100 watts.


----------



## kundica

Something interesting going on with the today's new drivers.

Still using the stock clock with a +50% power limit on my LC card, I'm no longer hitting/sustaining 1750, but the card seems to be performing better, running cooler and thus staying quieter. I need to bust out my meter and see if it's pulling less power too. Max temp while benching with 17.8.1 was 58; with 17.8.2 it's 56. Ambient temp is exactly the same.

Some bench comparisons.

Time Spy - stock clock, +50%, HBM 1075
17.8.1 - max clock 1750 - 8137, GS 8017 - https://www.3dmark.com/3dm/21741838
17.8.2 - max clock 1741 - 8171, GS 8042 - https://www.3dmark.com/3dm/21762526

Time Spy - stock clock, +50%, HBM 1000
17.8.1 - max clock 1750 - 8037, GS 7888 - https://www.3dmark.com/3dm/21741734
17.8.2 - max clock 1741 - 8043, GS 7902 - https://www.3dmark.com/3dm/21762230

Firestrike Extreme - stock clock, +50%, HBM 1075
17.8.1 - max clock 1750 - 11071, GS 12026, https://www.3dmark.com/3dm/21741925
17.8.2 - max clock 1734 - 11083, GS 12074, https://www.3dmark.com/3dm/21762466


----------



## pmc25

Quote:


> Originally Posted by *theBee2112*
> 
> *Some Benchmarks:*
> *Driver: Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23*
> 
> *Benchmarks*
> _Settings:_ Max fan, +50% power, 1090 mem, stock core, 1050mV on core and mem
> 
> In this last benchmark I *upped the core to 1887Mhz*, and kept all of the other settings the same. the core OC actually worked with these drivers!


Wow. Can you confirm the clocks in HWiNFO, and also check that the GPU core voltage (probably still mislabeled as GPU memory voltage) is as applied? Is performance actually scaling? You said this was on air?? Was the huge OC on the beta or the non-beta?

Does anyone know which is newer, the Beta or non-Beta?


----------



## theBee2112

Quote:


> Originally Posted by *pmc25*
> 
> Wow. Can you confirm the clocks in HWInfo, and also check that the GPU Core Voltage (probably still reading as GPU Memory voltage) is as applied? Is performance actually scaling? You said this was on air?? Was the huge OC on the Beta or non-Beta?
> 
> Does anyone know which is newer, the Beta or non-Beta?




Looks like it's sticking to me. Slightly lower, though, at 1872MHz; I gave Wattman +15% on the core. Actually pretty stable overclocked like this. If I had a better CPU I'd run Firestrike and Timespy. Should I anyway?

Yes this is on air. I have the fan maxxed out and really good case ventilation. the OC was on the BETA. The performance seems to scale so far. I haven't run many benchmarks yet.


----------



## prom

Quote:


> Originally Posted by *theBee2112*
> 
> 
> Lots of tearing/stuttering while gaming with a mem clock of 1100 Mhz
> My Celeron works better than expected! Don't hate on it, it's just a temp system until my waterblock arrives. Will be paired with a 1700x and I will rerun these tests to see if CPU was a bottleneck lol.


I'm imagining that the Celeron is a bottleneck, but I'm just as curious to know if enabling the HBCC reduces or removes tearing/stuttering in your particular situation.


----------



## pmc25

Quote:


> Originally Posted by *theBee2112*
> 
> Looks like it's sticking to me. Slightly lower though at 1872Mhz; I gave Wattman +15% to core. Actually pretty stable overclocked like this. If I had a better CPU I'd run Firestrike and Timespy. Should I anyways?
> 
> Yes, this is on air. I have the fan maxed out and really good case ventilation. The OC was on the BETA. The performance seems to scale so far. I haven't run many benchmarks yet.


You need to press the sensor button on the main window of HWiNFO64 and then scroll down to the bottom where the GPU sensor readings are. Observe the GPU Core Voltage / GPU Memory Voltage ... 1.356V (or thereabouts) fixed is the HBM voltage; the other is the GPU core voltage.

Observe it whilst under load. Does it spend most of the time at or around your set voltage?

Also, what was highest OC previously?


----------



## theBee2112

Seems that there's no performance to be had over 1850Mhz; the scaling just stops after that. I'm being held back by heat at 71C, and my CPU is going to skew my results a bit, I imagine.
I'll do more testing soon, but at least from what I'm seeing, the OC sticks. Whether or not it scales that well is up for debate. My benchmarks show a 4% improvement from stock to 1872Mhz.

On a side note, enabling HBCC with an 11600 MB memory segment helped some, but there's still some less-noticeable tearing. A smoother experience nonetheless.

EDIT:
The core voltage did not stay at 1050mV. It stayed at default.








HWinfo reports core voltage at 1.356V and mem at 850mV
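For perspective, the clock-to-score scaling reported in this post can be sanity-checked with quick arithmetic. This is only a sketch: the 1600 MHz "stock" boost figure is an assumption; the other numbers are as quoted above.

```python
# Sanity-check the reported OC scaling: stock -> 1872 MHz gave ~4% more
# benchmark performance. 1600 MHz stock boost is an assumed baseline.
stock_clock = 1600       # MHz (assumed stock boost)
oc_clock = 1872          # MHz (reported sticking OC)
score_gain = 0.04        # 4% benchmark improvement (reported)

clock_gain = oc_clock / stock_clock - 1   # fractional clock increase
efficiency = score_gain / clock_gain      # share of the clock bump realized

print(f"clock gain: {clock_gain:.1%}")
print(f"scaling efficiency: {efficiency:.0%}")
```

Under these assumptions, only about a quarter of the ~17% clock increase shows up as score, which lines up with the "scaling just stops" observation above.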


----------



## PontiacGTX

Quote:


> Originally Posted by *theBee2112*
> 
> *Some Benchmarks:*
> *Driver: Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23*
> 
> *Benchmarks*
> _Settings:_ Max fan, +50% power, 1090 mem, stock core, stock voltages. Nothing I tried would stick.
> Superposition, 4K Optimized.
> 
> Heaven, 2560x1440 Ultra
> 
> 
> In this last benchmark I *upped the core to 1887Mhz*, and kept all of the other settings the same. the core OC actually worked with these drivers!
> 
> 
> *Mining*
> _Settings:_ Max fan, +35% power, 1100 mem, stock core, stock voltages.
> 
> MAX Mh/s (with disregard for power draw)
> ETH + DCR I'm getting a steady 43Mh/s and 1280Mh/s respectively, at 350 watts
> ZEC I'm getting about 500 Sol/s, at 250 watts
> 
> Power efficiency:
> _Settings:_ Max fan, +15% power, 1100 mem, stock core, stock voltages.
> ETH + DCR I'm getting a steady 39Mh/s and 1180Mh/s respectively, at 175 watts
> ZEC I'm getting about 450 Sol/s, at 125 watts
> 
> *Findings:*
> 
> Upping power limit above 40% is going to thermal throttle the card on air with max fan while mining. Best keep it below that, but above +15%, because the mem clock will not stick with less.
> Lots of tearing/stuttering while gaming with a mem clock of 1100 Mhz
> My Celeron works better than expected! Don't hate on it, it's just a temp system until my waterblock arrives. Will be paired with a 1700x and I will rerun these tests to see if CPU was a bottleneck lol.
> Voltage control doesn't stay at what you set.
> Oh, and the new 17.8.2 Driver (Non Beta) Kills mining performance slightly, but lowers power draw by about 100 watts.


One detail about Superposition: it seems to favor higher compute-unit count/frequency when comparing AMD vs AMD (it isn't really comparing improvements to the architecture pipeline, unless some features make a difference in Superposition), so there isn't a big difference between Vega and the Fury X there. 3DMark might be a better comparison.
Quote:


> Originally Posted by *theBee2112*
> 
> Seems that there's no performance to be had over 1850Mhz; the scaling just stops after that. I'm being held back by heat at 71C, and my CPU is going to skew my results a bit, I imagine.
> I'll do more testing soon, but at least from what I'm seeing, the OC sticks. Whether or not it scales that well is up for debate. My benchmarks show a 4% improvement from stock to 1872Mhz.
> 
> On a side note, enabling HBCC with an 11600 MB memory segment helped some, but there's still some less-noticeable tearing. A smoother experience nonetheless.
> 
> EDIT:
> The core voltage did not stay at 1050mV. It stayed at default.
> 
> 
> 
> 
> 
> 
> 
> 
> HWinfo reports core voltage at 1.356V and mem at 850mV


@Buildzoid found that at a certain frequency a power-limiting feature kicks in, after which there wasn't a difference in score between 1900+ MHz and a slightly OC'd card.
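The two ETH mining profiles quoted above can be put on a common hashes-per-joule footing with a quick sketch. The hashrate and wattage figures are as reported in the thread (wall-draw estimates, not independently measured):

```python
# ETH hashrate vs. power draw for the two profiles quoted above,
# expressed as kilohashes per joule (figures as reported, not measured).
configs = {
    "max (+35% power)": (43.0, 350.0),        # MH/s, watts
    "efficient (+15% power)": (39.0, 175.0),  # MH/s, watts
}
# MH/s divided by W (J/s) gives MH/J; x1000 converts to kH/J.
kh_per_joule = {name: mhs / watts * 1000 for name, (mhs, watts) in configs.items()}
for name, eff in kh_per_joule.items():
    print(f"{name}: {eff:.0f} kH/J")
```

By this measure the +15% power profile is nearly twice as efficient per joule for only ~9% less hashrate, which is why the thread favors it.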


----------



## Soggysilicon

Quote:


> Originally Posted by *theBee2112*
> 
> Oh yeah eh!? Well I love and hate gamers! They have so much more leisure time than I
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At the end of the day, it doesn't matter what we use our cards for, everyone here on OCN just wants to push the limits of this product to suit their use case!


It's this all day! +1


----------



## 99belle99

I'm on mobile and want to subscribe to thread and don't know how to do it without posting. So useless post ftw.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads in (doesn't happen after the player has initially loaded)?
> 
> It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.
> 
> The player will begin loading, then everything will freeze, the HBM clock momentarily boosts from 167Mhz to full clock, and it unfreezes, then as soon as it's unfrozen it goes back to 167Mhz.
> 
> Drivers are REALLY bad.
> 
> Using Opera / Chrome in W7 x64.


I started seeing this issue with the last 2 driver pushes... x.18.1 and x.18.2 this week. So yes, I have had this issue, specifically in Chrome... it isn't "as bad" as it was with 18.2, but it could be an issue with hardware acceleration in Chrome... Chrome and Windows 10 have been really dodgy for me for the past few months anyway.


----------



## dagget3450

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Someone please overclock the core and give us an example! I'm stuck at work
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here are some signature badges
> 
> *Radeon Vega Frontier Edition Owner*
> *Radeon RX Vega 64 XTX Owner*
> *Radeon RX Vega 64 Owner*
> *Radeon RX Vega 56 Owner*
> 
> Code:
> 
> 
> 
> Code:
> 
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon Vega Frontier Edition Owner[/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 XTX Owner[/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 Owner[/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 56 Owner[/B][/URL][/CENTER]


I added this to the OP, but I changed the Vega color on the Vega FE badge to blue. Any thoughts on these? Updated the list; still behind on RX Vega info. I will also possibly use a Google spreadsheet. I'm not sure if this needs anything else, but maybe we can try to make it an official club.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> 
> https://twitter.com/i/web/status/900865148633657345


HBCC caused a 20% loading penalty launching Warhammer Total War Benchie... and I lost 3 fps consistently across 5 runs... so.... off it goes. The only difference I could see was that it would report 12G of video ram. Of course this is FXAA, MSAA x4 was unacceptable.

This was 3440x1440 Ultra all the bells n' whistles LESS Vsync (native enhanced Fsync sammy CF791). HBCC ON 55.3 avg vs. HBCC OFF 58.5 avg fps (sammy 3200 single rank b die g.skill on a R7 1800X).

I have had some very very tepid success with it in other stuff, but I feel strongly that this is a "future proof" option rather than something worth considering anytime this year. Point being if you don't need dx12, HBCC isn't a very convincing argument to make the switch just yet.

My 2c.


----------



## rdr09

Quote:


> Originally Posted by *Soggysilicon*
> 
> It's this all day! +1


Can't find any at MSRP. But at the same time, you miners bought my old Radeons for more than I paid for them.

http://www.marketwatch.com/story/nvidia-and-amd-are-deluged-with-orders-for-pc-graphics-cards-2017-08-24

At the end of the year i might snag a 64.


----------



## DrZine

New driver, new benching. The only difference the driver made for me is the clock reporting. I did a series of undervolts then tried some overclocks. I will just copypasta my notepad.

TLDR; performance did not change just the clock reporting.

Heaven Benchmarking

fan target set at 4.5k rpm temp target set at 70C
powerplay mod used 220W limit -> AIO 265W limit +100% PL

gpuclk/memclk @Vcore, score

17.8.1
DDU install w/powerplay
Stock Custom profile +fan, 2129
1630/1100 @1.2v +100%, 2235

17.8.2
DDU install no powerplay mod
Stock turbo profile, 1957
1632/1100 @1.2v +50%, 2233

w/ powerplay mod
Stock turbo profile, 2029
Stock Custom profile +fan, 2121 (timespy 7350)
1632/1100 @1.2v +100%, 2247
1632/1150 @1.2v +100%, BSOD
1632/1125 @1.2v +100%, BSOD

HBM just will not push past 1100

1632/1100 1100mv +100%, 2242

Temp stayed at 69c and fan maxed at 3.5k. Lowering temp target to 60c

1632/1100 1050mv +100%, 2245

Clock nearly glued to 1600Mhz. Not gaining on score however. max temp reached 61C. Wonder if gpu is capping to maintain temp. Increasing target back to 70c and increasing fan minimum to 4k.

1632/1100 1000mv +100%, BSOD

Crashes at the same point every time. P-state 5 is hard set to 1100mv. Maybe driver is getting bugged out and crashing the system. running test again with p-state 6 set as the minimum and p-state 7 set to max.

SUCCESS!!
1632/1100 1000mv +100%, 2202

With P-state 6 set as the minimum, the clock never drops below 1580. The bench score is now suffering, however. (Timespy crashed, though.)

Overclock time!
Going full 1.2v +100% on all tests until I max out.

1652/1100, 2266
1667/1100, 2278
1702/1100, 2303
1727/1100, crash
1717/1100, crash
1702/1100, to be sure x3, 2285, 2290, crash.

Not a lot of overclock headroom as I did get a crash at 1700Mhz on the final run. I called it a night at this point.
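Taking the 1.2 V / +100% PL runs from the notes above, the clock-to-score relationship can be quantified. A sketch only: the clocks and Heaven scores are copied from the notes, and the 1632 MHz run is treated as the baseline.

```python
# Heaven score vs. core clock from the 1.2 V +100% PL runs above.
# The appended ratio is the fraction of each clock increase that
# actually shows up as score.
runs = [(1632, 2247), (1652, 2266), (1667, 2278), (1702, 2303)]  # (MHz, score)
base_clk, base_score = runs[0]
scaling = []
for clk, score in runs[1:]:
    clk_pct = clk / base_clk - 1
    score_pct = score / base_score - 1
    scaling.append(score_pct / clk_pct)
    print(f"{clk} MHz: +{clk_pct:.1%} clock -> +{score_pct:.1%} score")
```

Only about 0.6-0.7 of each clock increase is realized as score here, and the ratio falls as the clock rises, consistent with the limited headroom past ~1700 MHz noted above.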


----------



## kundica

I had 4 crashes on 17.8.2 while gaming tonight, 2 of them at stock/balanced mode on my LC 64. Couldn't for the life of me get 17.8.1 (which was stable while gaming for around 4 hours last night) to install without it trying to grab 17.8.2. Found a full 17.8.1 package instead of the minimal one, but it says it doesn't recognize my card. Ended up reinstalling the beta driver. Don't have time to test it extensively tonight, so it'll have to wait until tomorrow.

If anyone figures out/knows a way to get 17.8.1 to install it'll be very much appreciated.

Sent from my ONEPLUS A3000 using Tapatalk


----------



## Whatisthisfor

Quote:


> Originally Posted by *kundica*
> 
> If anyone figures out/knows a way to get 17.8.1 to install it'll be very much appreciated.


The beta driver is very stable, as far as i can say. I would be surprised if there is much difference to 17.8.1.


----------



## Nevril

A quick update after 7 hours spent trying to make my Vega 64 Air work as I wanted.

I've tried different settings both with 17.8.1 and 17.8.2 gaming drivers but Wattman simply refuses to apply anything except power limit changes.

No matter what I did as soon as I tried manual voltage settings the card would downclock to 1138/1200 core (even leaving everything at stock, just by selecting and applying the manual settings without changing them).
On the HBM side of things, same stuff. With everything at stock/auto, just changing the memory clock in any way would make it drop to 500/800Mhz.

Results confirmed using 3DMark and Superposition. In both cases performance was lower but power consumption was the same.

Obviously I've tried reinstalling everything multiple times after running both the ATI Cleanup tool and/or DDU.
Also tried tuning using WattTool or the Windows Registry mod.

As of now an extremely bad experience.
Going on vacation for a week, I hope I'll find a working driver once I'm back.
Being unable to mod the BIOS directly is simply a pain in these situations.
I sincerely hope AMD reconsiders.


----------



## Newbie2009

Quote:


> Originally Posted by *Nevril*
> 
> A quick update after 7 hours spent trying to make my Vega 64 Air work as I wanted.
> 
> I've tried different settings both with 17.8.1 and 17.8.2 gaming drivers but Wattman simply refuses to apply anything except power limit changes.
> 
> No matter what I did as soon as I tried manual voltage settings the card would downclock to 1138/1200 core (even leaving everything at stock, just by selecting and applying the manual settings without changing them).
> On the HBM side of things, same stuff. With everything at stock/auto, just changing the memory clock in any way would make it drop to 500/800Mhz.
> 
> Results confirmed using 3DMark and Superposition. In both cases performance was lower but power consumption was the same.
> 
> Obviously I've tried reinstalling everything multiple times after running both the ATI Cleanup tool and/or DDU.
> Also tried tuning using WattTool or the Windows Registry mod.
> 
> As of now an extremely bad experience.
> Going on vacation for a week, I hope I'll find a working driver once I'm back.
> Being unable to mod the BIOS directly is simply a pain in these situations.
> I sincerely hope AMD reconsiders.


Use WattTool, it's idiot-proof. Also, is your PC set to power-saving mode? Maybe change out the 8-pin connectors.

Also, you probably realise the clocks drop when idle regardless of what you set; you need to test under load to see if it applied.


----------



## Nevril

Quote:


> Originally Posted by *Newbie2009*
> 
> Use WattTool, it's idiot-proof. Also, is your PC set to power-saving mode? Maybe change out the 8-pin connectors.
> 
> Also, you probably realise the clocks drop when idle regardless of what you set; you need to test under load to see if it applied.


Used also WattTool as I stated.
Tests were done with 3DMark Firestrike Extreme in loop and my PC was not in PowerSaving mode.
Tried with both the 8 pin connectors I have on my PSU.

The way it happens just doesn't make any sense.
- Fresh driver install (DDU and/or ATI Cleaning tool)
- Custom profile without changes, just enabling the manual voltage (both with or without changing the voltages themselves!)
- Clock drops from 1401(1536 peak) to 1138(1200 peak). Performance drops accordingly, power doesn't.
- Disable manual voltage, clocks get back to normal.

With WattTool the result is exactly the same: as soon as I change one or both of the P6/P7 voltages, performance drops. Whether increasing or decreasing them, no matter the value. Even if I simply open it up and hit "Set" without changing anything.
Then as soon as I reset, everything works normally.

It is clearly a bug related/triggered by the use of the manual voltage.
And I know it is happening only to me, my usual luck.


----------



## Dam4rusxp

Quote:


> Originally Posted by *pmc25*
> 
> Does anyone else get a freeze of 2-3 seconds (not a crash-type freeze, just a temporary lock-up) every single time a Flash or HTML5 video initially loads in (doesn't happen after the player has initially loaded)?
> 
> It seems to be to do with idle / low GPU utilisation HBM2 memory clocks.
> 
> The player will begin loading, then everything will freeze, the HBM clock momentarily boosts from 167Mhz to full clock, and it unfreezes, then as soon as it's unfrozen it goes back to 167Mhz.
> 
> Drivers are REALLY bad.
> 
> Using Opera / Chrome in W7 x64.


I had that problem as well. A reinstall with DDU fixed it.


----------



## VickB

Hey guys, new here. Just got my Vega 64 air a couple days ago and have been loving it. Been building PCs for about 10 years so I know what I'm doing (most of the time).

I'd like to chime in regarding the freezing-up issue: I also get it when shutting down, but holding Shift and doing a hard shutdown of W10CU gives no issues. Turning off hardware acceleration in Chrome, and in Flash Player on Edge, also solves the issue. I have noticed, however, that launching Skype will freeze for one second and then launch (no doubt due to the ads they have on its homepage causing the freeze).

Shame AMD didn't catch this before releasing the driver. I will also be putting an EKWB block on mine once they become available. Not sure what temps I will get, as I have a full loop with a 360 in push/pull and a 240 in push, so I'm hoping at least the HBM stays much lower than the 91°C I'm getting now.

P.S. Not sure who said to use DDU, but I did a fresh clean install of W10 and it did not solve the problem; I still get freezes when launching Flash videos in Chrome and when launching Skype. It's definitely on AMD's side now, but not a worry. I am using 17.8.1 in balanced mode.


----------



## gupsterg

Quote:


> Originally Posted by *Nevril*
> 
> Being unable to mod the BIOS directly is simply a pain in these situations.
> I sincerely hope AMD reconsiders.


Pretty much zero chance IMO.


----------



## LionS7

Which version of MSI Afterburner works reliably with the Vega Frontier Edition / RX Vega? I mean at least power target, frequency, fan...?


----------



## pmc25

Quote:


> Originally Posted by *theBee2112*
> 
> Seems that there's no performance to be had over 1850Mhz; the scaling just stops after that. I'm being held back by heat at 71C, and my CPU is going to skew my results a bit, I imagine.
> I'll do more testing soon, but at least from what I'm seeing, the OC sticks. Whether or not it scales that well is up for debate. My benchmarks show a 4% improvement from stock to 1872Mhz.
> 
> On a side note, enabling HBCC with an 11600 MB memory segment helped some, but there's still some less-noticeable tearing. A smoother experience nonetheless.
> 
> EDIT:
> The core voltage did not stay at 1050mV. It stayed at default.
> 
> 
> 
> 
> 
> 
> 
> 
> HWinfo reports core voltage at 1.356V and mem at 850mV


You need to look at the memory (actually GPU voltage as the driver reports it to HWiNFO64) reading under load. If you don't have a second monitor, then do a windowed stress test at below screen resolution. If your 1050mV undervolt is being applied, then the "GPU memory voltage" will try to stay close to 1050mV and only oscillate a little either side. It will undervolt much harder at idle (which is the 850mV which you see).


----------



## pmc25

Quote:


> Originally Posted by *Nevril*
> 
> A quick update after 7 hours spent trying to make my Vega 64 Air work as I wanted.
> 
> I've tried different settings both with 17.8.1 and 17.8.2 gaming drivers but Wattman simply refuses to apply anything except power limit changes.
> 
> No matter what I did as soon as I tried manual voltage settings the card would downclock to 1138/1200 core (even leaving everything at stock, just by selecting and applying the manual settings without changing them).
> On the HBM side of things, same stuff. With everything at stock/auto, just changing the memory clock in any way would make it drop to 500/800Mhz.
> 
> Results confirmed using 3DMark and Superposition. In both cases performance was lower but power consumption was the same.
> 
> Obviously I've tried reinstalling everything multiple times after running both the ATI Cleanup tool and/or DDU.
> Also tried tuning using WattTool or the Windows Registry mod.
> 
> As of now an extremely bad experience.
> Going on vacation for a week, I hope I'll find a working driver once I'm back.
> Being unable to mod the BIOS directly is simply a pain in these situations.
> I sincerely hope AMD reconsiders.


Have you got any third party monitoring or OC'ing software like Afterburner installed? If you do, completely uninstall and registry clean it, then uninstall the driver and registry clean, then reinstall the new driver.


----------



## VickB

Btw, for anyone interested, here's my R9 390 compared to Vega. Not sure why it doesn't stay at 1630MHz but varies between 1402 and 1536, but I'm sure once it's on water it will be much better.


----------



## pmc25

Quote:


> Originally Posted by *LionS7*
> 
> Which version of MSI Afterburner works reliably with the Vega Frontier Edition / RX Vega? I mean at least power target, frequency, fan...?


None. Don't touch it.

https://pastebin.com/S5Js7NdZ - see my comment at line / para #11 here.

Possible it's fixed in 17.8.2 but I wouldn't risk it for some time yet.


----------



## kundica

Quote:


> Originally Posted by *Whatisthisfor*
> 
> The beta driver is very stable, as far as i can say. I would be surprised if there is much difference to 17.8.1.


Yeah. I'm back on it for now. It seems 17.8.1 is crashing some peoples' systems as well.


----------



## Nevril

Quote:


> Originally Posted by *pmc25*
> 
> Have you got any third party monitoring or OC'ing software like Afterburner installed? If you do, completely uninstall and registry clean it, then uninstall the driver and registry clean, then reinstall the new driver.


Yup, Afterburner...
Now I'm on vacation, got a flight in a couple of hours. Guess I'll try once I'm back in 10 days.
Thanks for the suggestion


----------



## Paxi

Quote:


> Originally Posted by *Paxi*
> 
> Actually an issue related to PCI-E 2.0 sound reasonable. I already asked a friend with a Rampage 5 Extreme to try it out.
> Already did a clean fresh install of Win10. I also got Win7 still laying around but I don't think there should be a difference.
> 
> Thanks for your suggestion.


Just to keep you updated.

There is definitely an issue with using the current driver on Win10 x64 with a PCI-E 2.0 motherboard! There were already some other people complaining on the post I made on the official AMD forums.
While on one hand this is kind of inexcusable, the issue does not occur on Win7 x64 with the associated driver.

So for now I am running Win7 and my Vega seems to run fine. I will most likely upgrade when Coffeelake is here and switch to Win10 x64 again.

The only thing I've encountered so far is that my system once rebooted before running FireMark (shortly before the loading screen finished) when I had the card connected via a single PCI-E power cable with two 8-pin connectors (the standard one from the RMi750).
I am now using 2 separate ones and did not encounter the issue anymore.

Has anyone encountered any issues while having WattMan running in the foreground? This was maybe also related to my earlier reboot, because afterwards I left it in the background.


----------



## pmc25

New driver is hopeless. Aside from Battlegrounds which is less stuttery and shows maybe a 5% increase in FPS, everything else is 10-40% slower than 17.8.1 WHQL.

950mV / 1630 / 1100 rock solid before.

Now ... it can't even hold 1400Mhz at 1000mV, even pushing power limit to 50%. HBM also drops clock all the time under load (even at stock clock) on 17.8.2.

Certainly for anyone on W7, avoid it like the plague.

Even upping voltage massively and pushing power limit to 50%, to get it to maintain 1600 80% of the time, it's still at least 10% slower in the few benches I've tried.

Totally hopeless. WattMan voltage settings are still a mess and not fixed. HBM Voltage and GPU Voltage are still labeled the wrong way round.


----------



## theBee2112

Quote:


> Originally Posted by *pmc25*
> 
> You need to look at the memory (actually GPU voltage as the driver reports it to HWiNFO64) reading under load. If you don't have a second monitor, then do a windowed stress test at below screen resolution. If your 1050mV undervolt is being applied, then the "GPU memory voltage" will try to stay close to 1050mV and only oscillate a little either side. It will undervolt much harder at idle (which is the 850mV which you see).


Those numbers were while mining. Interesting that it still behaves as if it's idle. I didn't play around too much last night, as my waterblock is due here in 4-6 hrs today. Will re-run my tests and verify things properly on a second monitor as you've suggested. Really confusing that GPU memory voltage is actually GPU voltage. I watched all of Gamers Nexus's videos on the subject, and it seems consistent: they reported a max improvement of 4.6% no matter what OC they picked, while I managed 4%.

Quote:


> Originally Posted by *rdr09*
> 
> Can't find any at msrp. But at same time, you miners bought my old radeons more than i bought them.


I bought $3000 worth of cards above MSRP. Already made back $1900 in 2 months, and I intend to sell the cards back to gamers, probably for about 1/4 of what I paid for them, in a year or so. Win-win: just buy a really cheap card from a miner to game on. I don't feel guilty about mining because I don't have a "farm" like some of the massive miners, and I follow the same rules of 1-2 cards per customer. Mining as a hobby is no evil.
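The payback math behind that claim works out roughly like this. A sketch only, using the figures quoted above and assuming the earning rate stays constant, which real mining difficulty won't guarantee:

```python
# Rough break-even estimate from the figures above: $3000 spent,
# $1900 earned in the first 2 months, cards later resold at ~1/4 of cost.
# Assumes a constant earning rate (difficulty changes will shift this).
cost = 3000.0
earned = 1900.0
months_elapsed = 2.0
monthly_rate = earned / months_elapsed     # $950/month, assumed constant
resale = cost / 4                          # assumed resale value
months_to_break_even = (cost - resale) / monthly_rate
print(f"break-even in ~{months_to_break_even:.1f} months, counting resale value")
```

Under these assumptions the cards pay for themselves within about a quarter, which explains the buying pressure described in the thread.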


----------



## Newbie2009

Quote:


> Originally Posted by *pmc25*
> 
> New driver is hopeless. Aside from Battlegrounds which is less stuttery and shows maybe a 5% increase in FPS, everything else is 10-40% slower than 17.8.1 WHQL.
> 
> 950mV / 1630 / 1100 rock solid before.
> 
> Now ... it can't even hold 1400Mhz at 1000mV, even pushing power limit to 50%. HBM also drops clock all the time under load (even at stock clock) on 17.8.2.
> 
> Certainly for anyone on W7, avoid it like the plague.
> 
> Even upping voltage massively and pushing power limit to 50%, to get it to maintain 1600 90% of the time, it's still at least 10% slower in the few benches I've tried.
> 
> Totally hopeless. WattMan voltage settings are still a mess and not fixed. HBM Voltage and GPU Voltage are still labeled the wrong way round.


Yeah, but it would appear the card wasn't hitting 1630 last time; it's showing real clocks now.

I can still hit around 1630 at the same voltage, but you have to add a few percent overclock. GPU-Z will report 1660 core or whatever, but you will be hitting closer to the original 1630.


----------



## pmc25

Quote:


> Originally Posted by *Newbie2009*
> 
> Yeah, but it would appear the card wasn't hitting 1630 last time; it's showing real clocks now.
> 
> I can still hit around 1630 at the same voltage, but you have to add a few percent overclock. GPU-Z will report 1660 core or whatever, but you will be hitting closer to the original 1630.


Not for me.

Dunno if you're on the W7 x64 driver.

The huge drop in performance suggests it was doing exactly as reported for me, before.

It's impossible to get it to maintain a solid boost (or HBM) clock under load at all, even at 1200mV (and temperature isn't an issue).

If I 'overclock' it to get the boost clock back up, it still doesn't make 1630, and then crashes eventually.

This was clean install and registries all wiped. I'm now going to revert.

Your own testing which showed 4% gain from 1600 to nearly 1900Mhz shows that something is obviously wrong. Or sorry, no, that was someone else.

This experience is more like some of the reviews and users sometimes reporting that it struggles to beat a 1070. Yesterday I was nipping at the heels of 1080Tis in some metrics.


----------



## Newbie2009

Quote:


> Originally Posted by *pmc25*
> 
> Not for me.
> 
> Dunno if you're on the W7 x64 driver.
> 
> The huge drop in performance suggests it was doing exactly as reported for me, before.
> 
> It's impossible to get it to maintain a solid boost (or HBM) clock under load at all, even at 1200mV (and temperature isn't an issue).
> 
> If I 'overclock' it to get the boost clock back up, it still doesn't make 1630, and then crashes eventually.
> 
> This was clean install and registries all wiped. I'm now going to revert.
> 
> Your own testing which showed 4% gain from 1600 to nearly 1900Mhz shows that something is obviously wrong. Or sorry, no, that was someone else.


Yeah I'm on windows 7 mate.

My core with these drivers hovers at 1590-95 vs the 1630 I had. Performance is the same or better. If I add a couple of percent OC it will get within 5-10MHz of 1630. (Increasing power limit or volts won't make it clock higher, lol.)

In my experience with overclocking on the older drivers, as I said, performance past a 5% overclock just wasn't there. The HBM OC on these and the older drivers, though, has been working fine.


----------



## pmc25

I simply can't get it to hold boost clocks at all, at any voltages, via WattMan, and performance is hugely down.

I can get it to hold about 1590Mhz at my previous 950-980mV settings if I use WattTool. However, doing that, as soon as I use WattMan to increase the HBM clock beyond stock 945Mhz, the HBM will never clock higher than 800Mhz, even if I set it at 946Mhz. It will only do stock 945Mhz.


----------



## dagget3450

Quote:


> Originally Posted by *pmc25*
> 
> I simply can't get it to hold boost clocks at all, at any voltages, via WattMan, and performance is hugely down.
> 
> I can get it to hold about 1590Mhz at my previous 950-980mV settings if I use WattTool. However, doing that, as soon as I use WattMan to increase the HBM clock beyond stock 945Mhz, the HBM will never clock higher than 800Mhz, even if I set it at 946Mhz. It will only do stock 945Mhz.


didn't gamers nexus talk about this issue in one of their vega videos?


----------



## dagget3450

Quote:


> Originally Posted by *VickB*
> 
> Btw, for anyone interested, here's my R9 390 compared to Vega. Not sure why it doesn't stay at 1630MHz but varies between 1402 and 1536, but I'm sure once it's on water it will be much better.


Funny how Vega reduces CPU scores... I wonder what that's about...


----------



## Paxi

Quote:


> Originally Posted by *pmc25*
> 
> I simply can't get it to hold boost clocks at all, at any voltages, via WattMan, and performance is hugely down.
> 
> I can get it to hold about 1590Mhz at my previous 950-980mV settings if I use WattTool. However, doing that, as soon as I use WattMan to increase the HBM clock beyond stock 945Mhz, the HBM will never clock higher than 800Mhz, even if I set it at 946Mhz. It will only do stock 945Mhz.


Have you tried the tool "Clockblocker" ?


----------



## pmc25

Quote:


> Originally Posted by *dagget3450*
> 
> didn't gamers nexus talk about this issue in one of their vega videos?


Probably, yes. The behaviour is completely consistent.
Quote:


> Originally Posted by *Paxi*
> 
> Have you tried the tool "Clockblocker" ?


A quick google shows no-one using it ... and given what Afterburner does to Radeon Settings / WattMan, I'd rather not be the guinea pig.

I'm just going to have to revert to 17.8.1.


----------



## VickB

Quote:


> Originally Posted by *dagget3450*
> 
> Funny how Vega reduces CPU scores... i wonder what thats about...


No, that's my bad: on my R9 390 I used 50% core parking (works better in some games and synthetics). I have yet to try it on my Vega, so I will do that eventually. I already found an annoying bug: I set 3DMark with no FRTC in the settings under its own profile, but it didn't take, so I had to do it globally. That also messed up my R6S profile (I use no FRTC for better frametimes in that game), so I had to reset it and set FRTC to off again. It's like the old Crimson bugs are back at it again.

So far, though, the card is an absolute beast compared to my R9 390. I had a bit of a problem installing the driver (the system crashed on me and botched some of my registry files), so I had to redo it. Then I did a fresh install just to see if it would fix the freezing issue in Chrome, but it didn't; and I got a Radeon Settings error while installing it, yet everything is fine, so I don't get it.


----------



## Paxi

Quote:


> Originally Posted by *pmc25*
> 
> Probably, yes. The behaviour is completely consistent.
> A quick google shows no-one using it ... and given what Afterburner does to Radeon Settings / WattMan, I'd rather not be the guinea pig.
> 
> I'm just going to have to revert to 17.8.1.


Well, I already gave it a shot. The only thing it actually does is keep the power state at its highest while running any 3D application.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> I'm just going to have to revert to 17.8.1.


Let me know how you manage to revert to 17.8.1. The minimal package tries to download and install 17.8.2 and the full package from here http://support.amd.com/en-us/download/desktop/previous?os=Windows%207%20-%2064 won't recognize the card. I had to revert back to the launch beta driver.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> Let me know how you manage to revert to 17.8.1. The minimal package tries to download and install 17.8.2 and the full package from here http://support.amd.com/en-us/download/desktop/previous?os=Windows%207%20-%2064 won't recognize the card. I had to revert back to the launch beta driver.


AMD Cleaner to uninstall, then CCleaner registry cleaner (or similar) to clean up the remaining registry guff once you reboot.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> AMD Cleaner to uninstall, then CCleaner registry cleaner (or similar) to clean up the remaining registries guff once you reboot.


I did all of that. No matter what, the minimal package tries to download 17.8.2 and the full 17.8.1 package from the link I provided states it doesn't recognize the card.


----------



## Chaoz

I tried the new drivers and it's quite the improvement. Temps are 10°C lower, fps is more stable at 130fps.


----------



## kundica

Quote:


> Originally Posted by *Chaoz*
> 
> I tried the new drivers and it's quite the improvement. Temps are 10°C lower, fps is more stable at 130fps.


Same here, except I get frequent crashes (complete system resets) while gaming.


----------



## theBee2112

Quote:


> Originally Posted by *kundica*
> 
> I did all of that. No matter what, the minimal package tries to download 17.8.2 and the full 17.8.1 package from the link I provided states it doesn't recognize the card.


Are you able to install the drivers manually through Device Manager? Try uninstalling with DDU first. I haven't had any driver install issues since using DDU.

@pmc25 What's your power limit? Core clocks should stay stable with a high power limit. If not, try setting P7 as MIN and MAX powerstate, so it doesn't try to switch between P5, P6, etc.


----------



## Chaoz

Quote:


> Originally Posted by *kundica*
> 
> Same here except I get frequent crashes(complete system reset) while gaming.


Seems I celebrated too early. Game crashed after I typed it.


----------



## kundica

Quote:


> Originally Posted by *theBee2112*
> 
> Are you able to install the drivers manually through device manager? Try uninstall with DDU first. I haven't had any driver install issues since using DDU.


That would be my next step. I'll try that after I get home from work today.

Quote:


> Originally Posted by *Chaoz*
> 
> Seems I celebrated too early. Game crashed after I typed it.


That sucks to hear, but I'm somewhat relieved there are several others experiencing this issue. I sold my 64 Air and picked up the 64 LC this week. I'd be pretty damn irritated if the card needs RMA.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> I did all of that. No matter what, the minimal package tries to download 17.8.2 and the full 17.8.1 package from the link I provided states it doesn't recognize the card.


Why not download the full driver package rather than just the installer? Then you just choose the local drivers rather than the updated ones when it prompts.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> Why are you not downloading the full driver package rather than just installer? You then just choose the local drivers rather than updated ones when it prompts?


Are you not reading what I'm posting?

I tried that too. Downloaded the full package from this link I provided but the driver fails to recognize the card and therefore does not let me install from local drivers.
Quote:


> Originally Posted by *kundica*
> 
> ...the full package from here http://support.amd.com/en-us/download/desktop/previous?os=Windows%207%20-%2064 won't recognize the card. I had to revert back to the launch beta driver.


----------



## Chaoz

Quote:


> Originally Posted by *kundica*
> 
> That would be my next step. I'll try that after I get home from work today.
> That sucks to hear, but I'm somewhat relieved there are several others experiencing this issue. I sold my 64 Air and picked up the 64 LC this week. I'd be pretty damn irritated if the card needs RMA.


True. It seems that with V-Sync enabled (don't have my DisplayPort cable yet) it doesn't crash at all, since it stays at 60fps.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> Are you not reading what I'm posting?
> 
> I tried that too. Downloaded the full package from this link I provided but the driver fails to recognize the card and therefore does not let me install from local drivers.


Sounds like W10 related strangeness? If you're on W7, then I have no idea.


----------



## pmc25

Quote:


> Originally Posted by *theBee2112*
> 
> Are you able to install the drivers manually through device manager? Try uninstall with DDU first. I haven't had any driver install issues since using DDU.
> 
> @pmc25 What's your power limit? Core clocks should stay stable with a high power limit. If not, try setting P7 as MIN and MAX powerstate, so it doesn't try to switch between P5, P6, etc.


Power limit has nothing to do with it. It will do exactly the same at -5% or +50%.

Thanks for the suggestion re P7 min/max ... however, that works fine until you actually load the GPU with a benchmark or game, at which point it completely ignores it and dynamically varies the frequency again.









I have found out what was causing the HUGE clock swings, though:

WattMan. I was leaving it open yesterday for the graphing, during benchmarks / games / stress tests, as I found it useful.

On 17.8.2 it causes massive throttling if I leave it open. This doesn't happen if I close it before starting. 100% repeatable.

It is now faster than yesterday, albeit with lower clockspeeds being reported - I think that is correct; the driver has just been optimised / overhead reduced IMO.

The annoying thing is that with WattTool setting the voltage, I can actually sustain yesterday's very low voltages (on 17.8.1) at the same sort of frequency as in WattMan today, but WattTool then breaks HBM2 overclocking and it has to be set at 945Mhz. To get a stable 1580-1590Mhz core under load I need 1050mV using WattMan but only about 980mV in WattTool (which was the performance sweet spot yesterday too).

Even at 1200mV and +50% power limit, with temperature not being a problem, under load it totally refuses to go to 1630Mhz. The 1580-1590 was achieved with a +2% GPU frequency OC. Pushing it any higher (and you have to push voltage up) still doesn't result in any higher clocks under load.

Still borked, but I guess acceptable given that it's a bit faster, and generally more stable (if you don't provoke it).
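A rough illustration of why that 980mV vs 1050mV gap matters more than it looks: dynamic power at a fixed clock scales roughly with voltage squared. This is a back-of-the-envelope sketch only (the V² rule is an approximation that ignores leakage and fan power, not a measured figure for Vega):

```python
# Back-of-the-envelope only: dynamic power scales roughly with V^2 at a
# fixed clock and load, so compare the two voltages for ~1580-1590Mhz.
def relative_power(v_low_mv, v_high_mv):
    """Approximate power ratio of the higher voltage vs the lower,
    assuming P ~ V^2 at the same clock (ignores leakage and fan power)."""
    return (v_high_mv / v_low_mv) ** 2

ratio = relative_power(980, 1050)  # WattTool voltage vs WattMan voltage
print(f"1050mV draws roughly {100 * (ratio - 1):.0f}% more power than 980mV")
```

Which is why the WattTool undervolt is worth chasing even when the clocks end up the same.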


----------



## pmc25

Update.

It is worth setting P7 state as both min and max. Using HWiNFO64's graphing, it does make the clock a little more stable under load, even though it doesn't match the set boost clock.

*Everyone should set P3 HBM clock state as min and max when loading it.* Vega is the first graphics card I've ever had that constantly varies and aggressively declocks the memory outside of load, and under load if there is even the slightest lull, thinks nothing of dropping it to 800Mhz or 500Mhz. *Setting P3 clock state for HBM as both min and max DOES lock it to the clock you have set.* 1) This reduces FPS dips. 2) Seems to improve stability quite a bit.

So before you go gaming, benchmarking, or stress testing, do the above.

Performance now stands at over 2x my Fury Nano in some benches. ~1.5x in others.


----------



## theBee2112

A wild single slot Vega 64 appears!



I love this EK block. Every block they make should come with the back plate to make your GPU 1 slot. More incoming


----------



## VickB

Quote:


> Originally Posted by *theBee2112*
> 
> A wild single slot Vega 64 appears!
> 
> 
> 
> I love this EK block. Every block they make should come with the back plate to make your GPU 1 slot. More incoming


Looks good, love that they include the fury x single slot IO plate, looks so much cleaner.


----------



## Mandarb

Pulled the trigger on a Gigabyte RX64 Air as I found one for just over $500 from a French etailer. Hope it arrives safely and I get a sample that can undervolt nicely (but I usually draw the short end of the silicon lottery).
If not I'm probably going to resell it... after all, Vegas here in Switzerland go for $700+ from retailers. It's ridiculous.

So yeah, You'll probably have to bear with me asking lots of questions soon-ish.


----------



## Newbie2009

Quote:


> Originally Posted by *Mandarb*
> 
> Pulled the trigger on a Gigabyte RX64 Air as I found one for just over $500 from a French etailer. Hope it arrives safely and I get a sample that can undervolt nicely (but I usually draw the short end of the silicon lottery).
> If not I'm probably going to resell it... after all, Vegas here in Switzerland go for $700+ from retailers. It's ridiculous.
> 
> So yeah, You'll probably have to bear with me asking lots of questions soon-ish.


Nice, congrats. I think you will be pleasantly surprised


----------



## lmiao

Hi everyone, got my hands on an air-cooled Sapphire Vega 64; loving her so far except for some annoying problems.

First of all, I'm running it on Win7. I did a clean install (also used DDU), but dunno why I can't see the exact core voltages anywhere, only in WattMan.
HWMonitor and HWiNFO show wrong values.

Anyway, going on, I pushed my GPU to a core clock of 1665Mhz/1050mV (if that's really true?) and mem to 1050Mhz, PT +50%. Got ca. 25000 in 3DMark; is this kinda right?
Temperatures at high load are in the 75-80°C range.

The most annoying thing, which I can't get rid of, is an occasional drop (similar to a mini stutter) while gaming. For example, while I'm in The Witcher 3 the game runs very smooth, but sometimes I get a mini spike, like a sort of Mhz drop I suppose.
I also checked GPU-Z logs and they show constant freqs at 1660/1050, apart from the sporadic drops to 1500/800 for example.
I thought about throttling, but isn't 75/78° good, or at least clear of that problem?
Edit: almost forgot that I hear a constant coil whine from the GPU, but when the drop in question occurs the noise disappears for a second. I've got a Corsair 850W as a PSU; enough?
Anyone can help me?

Drivers are 17.8.2 (it was the same with 17.8.1) and the CPU is an i5 2500k OC @ 4.6.
Is it the CPU's fault, i.e. a bottleneck?

Thanks in advance and sorry for the long post
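One way to pin down sporadic drops like that is to let GPU-Z write a sensor log during a gaming session and scan it afterwards instead of eyeballing the graph. A minimal sketch, assuming the log is exported as comma-separated text with a "GPU Clock [MHz]" column (the exact column header is an assumption and may differ by GPU-Z version):

```python
# Scan a GPU-Z-style sensor log for core clock dips below a threshold.
# The "GPU Clock [MHz]" header is an assumption about the export format.
import csv
import io

def find_clock_dips(log_text, threshold_mhz=1600.0):
    """Return (sample_index, clock) for every logged sample whose core
    clock fell below threshold_mhz."""
    dips = []
    for i, row in enumerate(csv.DictReader(io.StringIO(log_text))):
        clock = float(row["GPU Clock [MHz]"])
        if clock < threshold_mhz:
            dips.append((i, clock))
    return dips

# Tiny synthetic log mimicking the 1660 -> 1500 dip described above.
sample_log = "GPU Clock [MHz],GPU Temperature [C]\n1660,75\n1500,78\n1660,76\n"
print(find_clock_dips(sample_log))  # -> [(1, 1500.0)]
```

Correlating the timestamps of those dips with what was happening in-game (loading screens, PSU rail sag, throttle events) narrows down the cause much faster than watching the overlay.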


----------



## Chaoz

Quote:


> Originally Posted by *theBee2112*
> 
> A wild single slot Vega 64 appears!
> 
> 
> 
> I love this EK block. Every block they make should come with the back plate to make your GPU 1 slot. More incoming


What manufacturer made your GPU?
Seems you don't have a molded package. Luckily EKWB has it covered.


----------



## SAMiN

One question, or maybe more: as we know, we can't flash the AIO's BIOS onto the air-cooled version, but can we undervolt the original BIOS with something like Polaris BIOS Editor and reflash it? Will it work? If not, is there (or will there be) any solution?


----------



## gupsterg

Modded VBIOS is no go.

Flashing RX VEGA with FE is no go.

Flashing RX VEGA 64 to 56 has been done, other way not confirmed yet. @buildzoid has RX VEGA 56 perhaps he can confirm.

So far not got complete RX VEGA AIO VBIOS for a RX VEGA 64 owner to try.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> So far not got complete RX VEGA AIO VBIOS for a RX VEGA 64 owner to try.


Here you go. @gupsterg

Sapphire Vega 64 AIO bios. This is the default power setting which should be the high power mode. I need to dump the other one still but it'll have to be a bit later tonight.

EDIT: Just realized I added this to the wrong thread. Posted it to the other thread now. http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/270#post_26305595


----------



## buildzoid

Quote:


> Originally Posted by *gupsterg*
> 
> Modded VBIOS is no go.
> 
> Flashing RX VEGA with FE is no go.
> 
> Flashing RX VEGA 64 to 56 has been done, other way not confirmed yet. @Buildzoid has RX VEGA 56 perhaps he can confirm.
> 
> So far not got complete RX VEGA AIO VBIOS for a RX VEGA 64 owner to try.


I don't have a 56; Steve does.


----------



## Soggysilicon

Quote:


> Originally Posted by *rdr09*
> 
> Can't find any at msrp. But at same time, you miners bought my old radeons more than i bought them.
> 
> http://www.marketwatch.com/story/nvidia-and-amd-are-deluged-with-orders-for-pc-graphics-cards-2017-08-24
> 
> At the end of the year i might snag a 64.


Well, I actually don't mine. I do have a couple of different use cases for my PC though, which include math-heavy simulations for work. I also play games, so AMD products have "tended" to be my go-to, because there isn't all that much difference between the consumer graphics cards and the workstation cards, especially for precision floating-point operations.

On release day I had my cart emptied 3 times for the 64 on Newegg. I actually completed an order and got an email from Newegg, only to get another email saying that they goofed and canceled my order... it just so happened that I had an inventory website up which pinged me, and I managed to snag a Sapphire card Monday night. The single cards (less the bundle) were gone in just a few seconds.

So I got prey and wolf 2... as a gift...









So yeah, it was a lottery on Monday at 6 am PST, especially if you didn't bot the ordering process... I took half the day off work just to try to get one...

Vega HBM yields are not so good, and Apple jumped in with a big order... so I think these things are just going to be hard to get.

It's just what the card is though. Game-wise it's comparable, not really competitive, but like Ryzen it has some flexibility that makes it interesting to more than one market segment.


----------



## Soggysilicon

Quote:


> Originally Posted by *DrZine*
> 
> New driver, new benching. The only difference the driver made for me is the clock reporting. I did a series of undervolts then tried some overclocks. I will just copypasta my notepad.
> 
> TLDR; performance did not change just the clock reporting.
> 
> Heaven Benchmarking
> 
> 1652/1100, 2266
> 1667/1100, 2278
> 1702/1100, 2303
> 1727/1100, crash
> 1717/1100, crash
> 1702/1100, to be sure x3, 2285, 2290, crash.
> 
> Not a lot of overclock headroom as I did get a crash at 1700Mhz on the final run. I called it a night at this point.


That air cooler is holding you back.

1708/1105 +50% power... solid as a rock. Much beyond 1708 is dodgy at best, and 1105 is my "hard" limit at the moment; it crashes instantly going beyond it. I've run these settings for hours of mixed work / benches / gaming without a snag... EK-FC, big loop, tiny noise. Core freq is held back by the memory; Vega needs to be fed more memory bandwidth, all the gains are happening there. Maybe HBCC going forward will address that?


----------



## Soggysilicon

Quote:


> Originally Posted by *VickB*
> 
> Hey guys new here. Just got my Vega 64 air a couple days ago and have been loving it. Been building PCs for about 10 years so i know what I'm doing (most of the time)
> 
> I'd like to chime in in regards to the freezing up issue, i also get it when shutting down but holding shift and doing a hard shut down of W10CU and there's no issues. Turning off hardware acceleration in chrome and on flashplayer on edge also solves the issue. I have noticed however that launching skype will freeze for one sec and then launch (no doubt due to the ads they have on its homepage causing the freeze)
> 
> Shame AMD didn't catch this before releasing the driver. I will also be putting an ekwb on mine once they become available. Not sure what temps i will get as i have a full loop with a 360 in a push/pull and a 240 in push so I'm hoping at least HBM stays much lower then the 91°C I'm getting now.
> 
> P.S. Not sure who said to use DDU but i did a fresh clean install of W10 and it did not solve the problem, still get freezing up when launching flash videos on chrome and launching skype. It's def on AMDs side now but not a worry. I am using 17.8.1 in balanced mode.


Your proposed loop is similar to what I am running with the 64 EK-FC, and I idle at 22c and load is less than 36-37c benching, gaming... whatever. Once I got the block on and running, I was much happier with the card. I am sure your experience will be similar. Cheers!


----------



## Soggysilicon

Quote:


> Originally Posted by *LionS7*
> 
> What version of MSI Afterburner is relative working with Vega Frontier Edition, RX Vega ? I mean at least power target, frequency, fan... ?


RX Vega? I uninstalled AB... it does not work and causes issues. Down the road I expect this to change, but it's early days atm.


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> I did all of that. No matter what, the minimal package tries to download 17.8.2 and the full 17.8.1 package from the link I provided states it doesn't recognize the card.


If you want to use the unsigned drivers you need to boot windows into the pseudo safe mode with "driver signature enforcement" OFF. This should get you up and going.


----------



## steadly2004

steadly2004--- 5930k @ 4.6 VEGA 64 (air) 11,371

https://www.3dmark.com/fs/13457500



Here is my submission to the TOP 30 on Firestrike Extreme. I know some of you guys can outscore me. Let's see it! I don't see any VEGA in the top 30 here on OCN.









Graphics score was 12,896. Tess turned off in drivers, but it didn't throw a flag on the score reporting. P6 @ 1100 and P7 @ 1150. HBM frequency @ 1080. Min fan 1000, max fan 4000. Max temp 85, target 70. Power limit +50%.


----------



## theBee2112

Quote:


> Originally Posted by *Chaoz*
> 
> What manufacturer made your GPU?
> Seems you don't have a molded Package. Luckily EKWB has it covered
> 
> 
> 
> 
> 
> 
> 
> .


It's an XFX card, and I see what you did there.

Guys, don't question my sanity, but I spent all night bending tubes building a monstrosity. A Vega 64 and GTX 1070, both waterblocked in the same system, with a Ryzen 1700X. Just for the lulz.



PS: It works flawlessly. Both drivers installed and each card works independently. I'm going to keep it like this lol. Off to testing.


----------



## theBee2112

Double post, my bad


----------



## Chaoz

Quote:


> Originally Posted by *theBee2112*
> 
> It's an XFX card, and I see what you did there.
> 
> Guys, don't question my sanity, but I spent all night bending tubes building a monstrosity. A Vega 64 and GTX 1070, both waterblocked in the same system, with a Ryzen 1700X. Just for the lulz.
> 
> 
> 
> PS: It works flawless. Both drivers installed and each card will work independently. I'm going to keep it like this lol. Off to testing.


Ah, mine's a Sapphire. And holy ****, that looks insane.

Be sure to test it with Ashes of the Singularity; that's a game that can benefit from having an AMD and an Nvidia GPU in one system.


----------



## theBee2112

The poor 360 rad is in push/pull, and it's overheating if I really push it: 61C for the whole system under 100% synthetic load on both GPU and CPU. Going to add another 120mm in push/pull at the back of the case, just to make it look even crazier and help with the heat soak.


----------



## steadly2004

Not sure if you guys care, but I got my graphics score over 13,000 on Firestrike Extreme.

http://www.3dmark.com/fs/13457823

13,314 graphics score.

I challenge you VEGA owners to surpass my score. It's ON AIR... you should be able to beat it if you're on WATER. I'll be on water soon enough...


----------



## dagget3450

I'll post a quick run here (dual GPU), but Firestrike isn't really working properly on Ryzen. That aside, I did a slight OC on this run, mostly for fun.

ryzen [email protected] - 3200 cl14 - Vega FE Air x2 CF undervolt core1600 + 50%PL + 1050HBM - tess disabled, beta fracking 17.6 launch drivers

https://www.3dmark.com/3dm/21784037?


----------



## dagget3450

Picked up a 1080 Ti for my Ryzen box, and I am going to put the Vegas back on my 5960X. AMD GPUs still seem to be fighting with DX11 overhead, so in those cases I think the Vegas will be better off on Intel for now.


----------



## rdr09

Quote:


> Originally Posted by *Soggysilicon*
> 
> Well I actually don't mine. I do have a couple different use cases for my PC though which include math heavy simulations for work that I do. I also play games, so AMD products have "tended" to be my go to because their isn't all that much difference between the consumer graphics cards and the work station cards; especially with precision floating point operations.
> 
> On release day I had my cart emptied 3 times for the 64 on newegg, actually completed an order and got an email from newegg, only to get another email saying that they goofed and canceled my order... it just so happened that I happened to have an inventory website up which pinged me and I managed to snag a sapphire card Monday night. The single cards (LESS bundle where gone in just a few seconds).
> 
> So I got prey and wolf 2... as a gift...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So yeah, it was a lottery on Monday at 6 am PST, especially if you didn't bot the ordering process... I took half the day off work just to try to get one...
> 
> Vega HBM has not so good yields, and Apple jumped in with a big order... so I think these things are just going to be hard to get.
> 
> Its just what the card is though, game wise... its comparable, not really competitive, but like Ryzen it has some interesting flexibility that makes it interesting to more than one market segment.


I understand. Just pulling your legs. Not really gonna buy, not till the end of the year. Work requires good image quality and I find AMD delivers. They said the same thing about Hawaii when it first came out and I almost got a 780 Ti. Miners bought three of my Tahitis, which paid for the Hawaiis.

Please continue to fine tune Vega for the rest of us. Ha!


----------



## theBee2112

Quote:


> Originally Posted by *Chaoz*
> 
> Be sure to test it with Ashes Of The Singularity, that's a game that can benefit from having AMD and nvidia GPU in 1 system.


Indeed, it scales. Two tests in DX12, one with dual GPU enabled, one without. Settings were everything maxed out to ultra, 8x MSAA, 2560x1080, and a stock-clocked 1700X.
Playback was smooth enough; I'm sure I'd manage 60FPS with MSAA at 4x and Vsync enabled. That coil whine on Vega is real. On water it's super noticeable while gaming.

Test with dual GPU
== Total Avg Results =================================================
Total Time: 60.006760 ms per frame
Avg Framerate: 45.578503 FPS (21.940168 ms)
Weighted Framerate: 45.186546 FPS (22.130480 ms)
CPU frame rate (estimated if not GPU bound): 47.538975 FPS (21.035372 ms)
Percent GPU Bound: 79.295258 %
Driver throughput (Batches per ms): 7441.356445 Batches
Average Batches per frame: 48535.093750 Batches
Average Particles simulated per frame 397954
===========================================================

Test without dual GPU (VEGA 64 ONLY - 1000MHz Mem, +35% Power, Stock Core)
== Total Avg Results =================================================
Total Time: 60.011871 ms per frame
Avg Framerate: 30.825512 FPS (32.440662 ms)
Weighted Framerate: 30.250401 FPS (33.057415 ms)
CPU frame rate (estimated if not GPU bound): 54.616917 FPS (18.309345 ms)
Percent GPU Bound: 99.774338 %
Driver throughput (Batches per ms): 13801.579102 Batches
Average Batches per frame: 46402.296875 Batches
Average Particles simulated per frame 397677
=============================================================
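As a quick cross-check on the numbers above: the per-frame times AotS prints in parentheses are just 1000/FPS, and dividing the two average framerates gives the explicit multi-GPU scaling. The arithmetic only, using the figures from the two runs:

```python
# Sanity-check the AotS results above: frame time (ms) is 1000 / FPS, and
# dual-GPU scaling is the ratio of the two average framerates.
def frame_time_ms(fps):
    return 1000.0 / fps

dual_fps = 45.578503    # Avg Framerate with dual GPU
single_fps = 30.825512  # Avg Framerate, Vega 64 only
print(f"{frame_time_ms(dual_fps):.6f} ms")       # matches the 21.940168 ms reported
print(f"scaling: {dual_fps / single_fps:.2f}x")  # ~1.48x from adding the 1070
```

So the 1070 adds roughly 48% on top of the Vega 64 alone in this scene, consistent with the "Percent GPU Bound" figures showing the dual-GPU run partly CPU-limited.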


----------



## kundica

My LC card crashed again about 5 min into gaming with the launch Beta driver this time. I had the clocks and mem at stock with a +50% power limit. Temp did not exceed 61.

Running the card in balanced mode at stock clocks it was fine for several hours, but it seems my card can't handle +50% beyond benching with any driver. I might try to UV it tomorrow to see if it'll help.

Sent from my ONEPLUS A3000 using Tapatalk


----------



## VickB

Quote:


> Originally Posted by *Soggysilicon*
> 
> Your proposed loop is similar to what I am running with 64 EK-FC, and I am idle at 22c and load is less then 36~37 benching, gaming... whatever. Once I got the block on and running, I was much happier with the card. I am sure your exp. will be similar. Cheers!


Good lord man, you must live somewhere really cold haha. 22°C at idle means your ambient is probably like 18-19°C (or you just have AC haha). I'm in southern France; it's still around 28°C outside during the day and I have no AC, just natural breeze. In the summer my r9 390 reaches 46°C under load or thereabouts, and in winter around 39°C. What are the HBM temps like under water, and how much has this card raised your water temps, if at all?


----------



## SAMiN

Quote:


> Originally Posted by *gupsterg*
> 
> Modded VBIOS is no go.
> 
> Flashing RX VEGA with FE is no go.
> 
> Flashing RX VEGA 64 to 56 has been done, other way not confirmed yet. @Buildzoid has RX VEGA 56 perhaps he can confirm.
> 
> So far not got complete RX VEGA AIO VBIOS for a RX VEGA 64 owner to try.


Damn it! I hope that changes in time!


----------



## lmiao

Quote:


> Originally Posted by *lmiao*
> 
> Hi everyone, got my hands on an air-cooled Sapphire Vega 64; loving her so far except for some annoying problems.
> 
> First of all, I'm running it on Win7. I did a clean install (also used DDU), but dunno why I can't see the exact core voltages anywhere, only in WattMan.
> HWMonitor and HWiNFO show wrong values.
> 
> Anyway, going on, I pushed my GPU to a core clock of 1665Mhz/1050mV (if that's really true?) and mem to 1050Mhz, PT +50%. Got ca. 25000 in 3DMark; is this kinda right?
> Temperatures at high load are in the 75-80°C range.
> 
> The most annoying thing, which I can't get rid of, is an occasional drop (similar to a mini stutter) while gaming. For example, while I'm in The Witcher 3 the game runs very smooth, but sometimes I get a mini spike, like a sort of Mhz drop I suppose.
> I also checked GPU-Z logs and they show constant freqs at 1660/1050, apart from the sporadic drops to 1500/800 for example.
> I thought about throttling, but isn't 75/78° good, or at least clear of that problem?
> Edit: almost forgot that I hear a constant coil whine from the GPU, but when the drop in question occurs the noise disappears for a second. I've got a Corsair 850W as a PSU; enough?
> 
> Drivers are 17.8.2 (it was the same with 17.8.1) and the CPU is an i5 2500k OC @ 4.6.
> Is it the CPU's fault, i.e. a bottleneck?
> 
> Thanks in advance and sorry for the long post


Anyone can help me?


----------



## VickB

Quote:


> Originally Posted by *lmiao*
> 
> Anyone can help me?


I'm not hearing coil whine, but I have a FreeSync display so I cap my fps. During Firestrike I hear the fan more than the coil whine (if there is any, I haven't heard it), but I also have 12 fans in my build; it should tell you how quiet it is if I hear the Vega at 2600rpm over my Noctua fans haha.

As far as HWiNFO goes, just make sure to download the latest BETA version. I think it's still off though, and I think WattMan is the best way to measure that. Me, I know better than to overclock with a new architecture and new BIOS/drivers (I bought a 1700X on day 1 and didn't overclock it for a while).

Also, the temps might not seem bad, but it may start to throttle earlier so as not to get any hotter. Try to undervolt and see if you still have the problem. HBM memory gets incredibly hot, so even though it's low wattage it does add heat to the small cooler. These cards do seem to thrive under water though.


----------



## lmiao

Quote:


> Originally Posted by *VickB*
> 
> Im not hearing coil whine but i have a freesync display so i cap my fps, during firestrike i hear the fan more the coil whine (if there is any i havent heard it) but i also have 12 fans in my build, should tell me how quiet it is if i hear the vega at 2600rpm more then i do my Noctua fans haha.
> 
> As far as hwinfo just make sure to download the latest BETA version, i think its still off though and i think wattman is the best to measure that. Me myself I know better then to overclock with a new architecture and new BIOS/Drivers (i bought a 1700x on day 1 and didnt overclock it for a while)
> 
> Also the temps might not seem bad but it may start to throttle earlier as to not get any hotter. Try to undervolt and see if you still have the problem. HBM memory gets incredibly hot so even though its low wattage it does add heat to the small cooler. These cards do seem to thrive under water though.


Well, the problem is not the noise of the coil whine; the only thing that concerns me right now is that kind of drop I spoke about while playing.
As I said, when the drop occurs I hear the coil whine stop for a second, like it isn't receiving power anymore? Could it be a PSU problem?

I also ran the Firestrike stress test 3 times; all runs had 99.5% stability and the logs never showed a frequency drop. Max temp reached was 81° during the tests.

Lastly, how can I know if the WattMan voltages are real if I can't check them anywhere x_x

Thanks for the reply btw!

edit: I am also on a FreeSync monitor and my fps are capped


----------



## Sicness

Germans, check out Mindfactory's Mindstar: https://www.mindfactory.de/product_info.php/8GB-XFX-Radeon-RX-Vega-64-Black-Aktiv-PCIe-3-0-x16--Retail-_1186799.html

XFX Vega 64 Air can be had for €509 for 10 more hours, or as long as supplies last.


----------



## VickB

Quote:


> Originally Posted by *Sicness*
> 
> Germans, check out Mindfactory's Mindstar: https://www.mindfactory.de/product_info.php/8GB-XFX-Radeon-RX-Vega-64-Black-Aktiv-PCIe-3-0-x16--Retail-_1186799.html
> 
> XFX Vega 64 Air can be had for 509e for 10 more hours or as long as supplies last.


Pretty much the same price I got mine for haha. I even ordered on launch day from ldlc.com; they still have the coupon, but they ran out of stock as of today (the Gigabyte Vega 64 was in stock yesterday).


----------



## gupsterg

Been on AMD since 5xxx series. My last nVidia cards were GTX 280 and GTX 8800, before that AMD as well.

I'm seriously considering GTX 1080. As my latest rig is gonna be full custom WC I can get a MSI GTX 1080 Sea Hawk EK X for ~£500 delivered, the FOC game is of no real value in purchase decision.

VEGA 64 is hard to find at £449 in the UK, and I don't wish to buy outside the UK. On OCuk several VEGA 64 cards have bitten the dust too quickly, so from an RMA POV it's gotta be a UK purchase for me. A WC block is ~£100; if I don't reuse the stock backplate it's ~£25 for one. So even if I got a VEGA 64 for £449 it's another £100-£125 to have it as WC. Gibbo on OCuk had highlighted VEGA 56 will be £349 on launch day; whether this holds true, no idea. Again, ~£100-£125 to make that card WC.

The GTX 1080 seems the better price/performance and power-usage choice to me at this time. I can get the MSI GTX 1080 Sea Hawk EK X from Amazon, which I always prefer for such purchases: even if the manufacturer has pants RMA service, Amazon supports me well for the first year with a full refund and no usage deduction. And as it's WC'd out of the box, the warranty is secure as well.
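Totting up the figures above, the watercooled totals work out as follows (just the arithmetic from the prices quoted in this post; prices obviously fluctuate):

```python
# Cost comparison using the figures quoted above (GBP).
vega64_reuse_plate = 449 + 100        # VEGA 64 at £449 + ~£100 WC block
vega64_new_plate = 449 + 100 + 25     # ... plus ~£25 if not reusing the backplate
vega56 = 349 + 100                    # VEGA 56 if Gibbo's £349 launch price holds
gtx1080_seahawk = 500                 # MSI Sea Hawk EK X, block included
print(vega64_reuse_plate, vega64_new_plate, vega56, gtx1080_seahawk)
# -> 549 574 449 500
```

So a watercooled VEGA 64 lands £49-£74 above the pre-blocked 1080, while a blocked VEGA 56 would undercut it, if the £349 price materialises.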

I do believe the GTX 1080 has sunk as low as it will get, given the 'climate' in the GPU market. As VEGA is in short supply at its current pricing, I doubt it will place pressure on nVidia/etailers to drop those cards' prices. Monitoring ebay listings, I reckon I can get £300 for the Fury X. Part of me thinks it's a GPU swap I don't really need, but then I think I can dispose of the Fury X without a loss and jump to a newer GPU, like I did from Hawaii to Fiji.

Yeah, I'll miss FreeSync, but as the GTX 1080 has somewhat better performance it may not be an issue. I won't be swapping my MG279Q, as it took four screen samples to end up with a decent IPS panel and I don't wanna go through that pain again.

The fly in the ointment is that VEGA still isn't using its full capabilities; AFAIK 'shader intrinsics' are still not active?

What say ye members?

Do I wait for AMD to sort out the drivers to make a fairer assessment? Seems like VEGA has been the longest waiting game ever.


----------



## VickB

Quote:


> Originally Posted by *gupsterg*
> 
> Been on AMD since 5xxx series. My last nVidia cards were GTX 280 and GTX 8800, before that AMD as well.
> 
> I'm seriously considering GTX 1080. As my latest rig is gonna be full custom WC I can get a MSI GTX 1080 Sea Hawk EK X for ~£500 delivered, the FOC game is of no real value in purchase decision.
> 
> VEGA 64 is hard to find at £449 in the UK, don't wish to buy outside of UK. On OCuk several VEGA 64 cards have bitten the dust too quickly. So on RMA POV gotta be UK purchase for me. WC block is ~£100, if I don't reuse stock back plate it's ~£25 for one. So even if I got a VEGA 64 for £449 it's another £100-£125 to have it as WC. Gibbo on OCuk had highlighted VEGA 56 will be £349 on launch day, if this holds true no idea. Again ~£100-£125 to make that card WC.
> 
> The GTX 1080 seems a better price/performance and power usage choice to me at this time. I can get the MSI GTX 1080 Sea Hawk EK X from Amazon, which I always prefer for such purchases, as even if manufacturer has pants RMA service, Amazon for 1st year support me well with full refund without usage deduction. As WC'd 'out of box' warranty is secure as well.
> 
> I do believe the GTX 1080 has sunk as low as it will get, given the 'climate' in GPU market. As VEGA is in short supply and it's pricing level, I doubt it will place pressure on nVidia/etailers to drop those cards prices. Monitoring ebay listing I reckon I can get £300 for Fury X. Somewhat I think it's relatively not needed GPU swap out for me, but then I think I can dispose of Fury X without a loss and jump to a newer GPU like I did from Hawaii to Fiji.
> 
> Yeah I'll miss FreeSync but as the GTX 1080 has some better performance may not be an issue. I won't be swapping my MG279Q as it took 4 screens samples to end up with a decent IPS panel and don't wanna go through that pain.
> 
> The fly in the ointment is VEGA still isn't using it's full capabilities, AFAIK 'intrinsic shaders' is still not active?
> 
> What say ye members?
> 
> Do I wait for AMD to solve drivers to make a fairer assessment? seems like VEGA has been the longest waiting game ever.


Hard decision, but I would go with the Vega 56. From the reviews I've seen, even undervolted and overclocked it goes toe to toe with the 1080 (which I must say is VERY impressive). Power consumption seems to be about the same, and the 64's drops drastically when undervolted (me, I don't really care, I've got a two-year-old 1000W PSU, so no worries).

Plus, if you have FreeSync and cap your fps, your card won't be running full tilt all the time anyway (ultrawide 75Hz). Trying to get 144fps at 1440p maxed out in games just won't happen, so it's probably best to cap them to something like 60 or 75 depending on the game; you'll use way less power.

I got lucky and snagged one on release day for 508€; they're going for 620€ now. The prices between the UK and mainland Europe are just ridiculous, though since the GBP has dropped significantly it's pretty much cheaper to buy in the UK now. It's a tough decision, but I'd wait for the 56 and put it on water. Full loops are quite amazing. I'm bypassing the card for now (no Vega block yet) and my 1700X pretty much stays at ambient even while gaming, with two rads for it alone, haha.


----------



## gupsterg

I agree tough to decide at present.

Deciding to go from Hawaii to Fiji was easy for me. I didn't have WC parts and the rig was air cooled; for the cost of the card plus a block for Hawaii, the Fury X was costing the same, and it ran cooler and quieter with some performance gain. Again, by usually buying via promos/cashback deal sites, etc., I got Fiji at a sweet price and sold Hawaii with no loss.

I cap at ~88FPS as the MG279Q FreeSync range is 35-90Hz. It can be modded to 144Hz using CRU, but I don't bother: if the GPU can hold FPS within that range, FreeSync really isn't needed IMO.
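The capping rule of thumb above (sit just under the panel's FreeSync ceiling so the frame rate never leaves the adaptive-sync window) can be sketched as follows. This is only an illustration: the `freesync_cap` helper and the 2Hz safety margin are my own assumptions, not anything from the post.

```python
# Why cap at ~88 FPS on a 35-90 Hz FreeSync panel: keep the frame rate
# inside the FreeSync window with a small safety margin, so the GPU
# never overshoots the range and drops out of adaptive sync.

def freesync_cap(range_min_hz: float, range_max_hz: float,
                 margin_hz: float = 2.0) -> float:
    """Suggest an FPS cap just under the panel's FreeSync ceiling."""
    cap = range_max_hz - margin_hz
    if cap < range_min_hz:
        raise ValueError("margin leaves no usable FreeSync window")
    return cap

print(freesync_cap(35, 90))  # -> 88.0, i.e. the ~88 FPS cap used above
```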

The biggest reason for the GTX 1080 is how it comes as is: the WC block has a value of £100 in my eyes, so the card is technically £400. Whichever way I look at it, it's £50 more than the best-price VEGA 56 and £50 less than the best-price VEGA 64, and the warranty remains intact as the card is WC'd from the factory.

Performance of the GTX 1080 is a non-issue; VEGA not having its full functionality enabled is the unknown element. Will AMD make good on it? Will it be a feature on paper that we never see in action, or when it does come to fruition will the gains be good? Will something else be on the market by the time it's usable that makes it null and void?

I never moved from Hawaii to Fiji for a 'feature' as such; it was basically a cooler, quieter card, a low-cost swap with a performance gain. So I'm sort of aiming to do the same, 'staying ahead' of the curve by selling HW before it's made a loss.


----------



## Digitalwolf

Quote:


> Originally Posted by *gupsterg*
> 
> I agree tough to decide at present.
> 
> To decide to go from Hawaii to Fiji was easy for me. I didn't have WC parts, rig was air cooled, for cost of block for Hawaii and it, the Fury X was costing the same. Ran cooler, quieter with some performance gain. Again due to usually buying at promos/cashback deal sites, etc I got Fiji at a sweet price and sold Hawaii with no loss.
> 
> I cap at ~88FPS as the MG279Q FS range is 35-90Hz. It can be modded to 144Hz using CRU. I don't as if GPU can do FPS for that range FS is really not needed IMO.
> 
> Biggest reason for GTX 1080 is how it comes as is, WC block has a value of £100 in my eyes, so card is technically £400. So which ever way I look at it it's £50 more than best price VEGA 56 and £50 less than VEGA 64 at best price. Warranty remains intact as card is WC'd from factory.
> 
> Performance of GTX 1080 is non issue, VEGA not having full functionality enabled is the unknown element. Will AMD make good on it? will it be a feature on paper that we don't see in action or when it does come to fruition will gains be good? will something else be on the market by the time it is used, that makes it null and void?
> 
> I never moved from Hawaii to Fiji for say a 'feature', it was basically cooler, quieter card, low cost swap with performance gain. So sort of aiming to do the same, in that 'staying ahead' of curve were when I sell HW it's made a loss.


I wanted to go with a Vega card as I have the Asus MG279Q as well. I got lucky on mine; the first one has been with me since around the launch of the Fury X, and I don't want to change my monitor.

Part of the issue for me was that I had two Fury X cards for a few years. One day I walked into the room and saw a miniature-sun effect in my case, so I shut everything down. Either the motherboard or the cards had caused what looked like an explosion, and this was on the stock BIOS; I never OC'd my Fury X cards. I have a custom loop, but it wasn't related: there were no leaks, i.e. the loop level was still full. The GPU slots on the motherboard blew out on the power side of the PCIe slot. I never tried to RMA because I had put blocks on the cards and was never sure which part caused the problem.

I bought an RX 480 as a gap card, and then prices went crazy on eBay. I ended up with a Zotac card because the Sea Hawk, which I wanted, was out of stock at the time. I got the 1080 because the block is factory mounted and thus fully under warranty.

I still want a Vega card, but comparing US prices to the UK you'll find the same relative outcome: by the time you add the block, a Vega 64 costs more, and even if a Vega 56 costs the same... do you still have your warranty after you mount the block?

I didn't even sell my RX 480 to buy the 1080 because I felt I needed to; it was simply because I had ended up paying pretty much $0 for the card and I could add it to my loop. Oh, and to be redundant... I have a full warranty.


----------



## gupsterg

VEGA with a block is pretty much no warranty AFAIK; from when I looked into blocking a card back on Hawaii, I doubt anything has changed. Most manufacturers deem even a TIM change to void the warranty, even if the factory cooler is reused. The only manufacturer I found that honors the warranty on a cooler swap, back when Polaris was released and I was looking at product pages/FAQs, was XFX in the US/Canada, not the UK. I found MSI stated on their Facebook page that you can when someone asked, but you can also find info saying they don't. So it's a somewhat grey area.

I have seen posts of members having successful RMAs where the TIM was swapped and the factory cooler reused, and also where the card was blocked, reverted and RMA'd. I just feel it's a grey area where I may or may not have an issue.

The Fury X I've had since March '16 has, touch wood, been sound. I've never removed a single screw from the card, so it's as it came. Yes, I have used an undervolt stock-clock VBIOS and an OC-clock overvolt one, but as I had dual BIOS, I figured worst case the card dies, I flip the switch to the other side and hopefully the RMA team never checks whether one side has a modded VBIOS.

The GTX 1080 Sea Hawk EK X seems like a no-brainer to me. I just wish there was info about what effect on performance there will be once the VEGA driver has all capabilities firing.

I also think the GTX 1080 drivers will be so mature by now that I shouldn't run into huge headaches. I bought both Hawaii and Fiji well after launch, so the driver experience was good. Based on how much of a pain Ryzen has been at launch, and reading VEGA members' shares so far, I think I'll enjoy the GTX 1080 more in that respect.


----------



## dagget3450

Quote:


> Originally Posted by *gupsterg*
> 
> VEGA with block is pretty much no warranty AFAIK, I doubt when I looked into blocking a card when on Hawaii anything has changed. Most manufacturers deem even TIM change warranty void even if cooler same as factory reused. Only manufacturer which I found when Polaris was released and looking at product pages/faq was XFX in the US/Canada honor warranty on cooler swap, not UK. I found MSI on their facebook page stated you can when someone asked, but you can also find info that they don't. So somewhat grey area.
> 
> I have seen posts of members having successful RMA where TIM swapped and factory cooler used, then also where card blocked and reverted and RMA'd. I just feel it's a grey area where I may or may not have an issue.
> 
> The Fury X I've had since Mar 16, touch wood been sound. Not removed a single screw ever from card so as is as it came. Yes I have used undervolt stock clock VBIOS and OC clock overvolt one. As I had dual bios I thought worst happens card dies I'll flip switch to other side and hopefully RMA team never check if one side has mod VBIOS.
> 
> The GTX 1080 Sea Hawk EK X seems like a no brainer to me. I just wish there was the info about what affect on performance will there be when VEGA driver has all capabilities firing.
> 
> I also think with the GTX 1080 the drivers will be so mature by now that I shouldn't run into huge headaches. Both Hawaii and Fiji I bought not at launch, so it was a good experience for drivers. Based on how much of pain Ryzen has been at launch and reading VEGA members shares so far I think I'll enjoy the GTX 1080 more in that aspect.


Me personally, I wouldn't go with a GTX 1080 over Vega, though I look at it differently than you. I would only go with a 1080 Ti or better on the nVidia side, because it's faster by a decent margin; the 1080 is too close to Vega 56/64 performance IMO. For me, if performance is close I'll take AMD, and with time it will only surpass the nVidia offering from that launch window.


----------



## Papa Emeritus

Quote:


> Originally Posted by *gupsterg*
> 
> VEGA with block is pretty much no warranty AFAIK, I doubt when I looked into blocking a card when on Hawaii anything has changed. Most manufacturers deem even TIM change warranty void even if cooler same as factory reused. Only manufacturer which I found when Polaris was released and looking at product pages/faq was XFX in the US/Canada honor warranty on cooler swap, not UK. I found MSI on their facebook page stated you can when someone asked, but you can also find info that they don't. So somewhat grey area.


I've asked MSI's European support directly how they feel about removing coolers, swapping thermal paste, etc. They said it's a non-issue as long as the card itself is not physically damaged.


----------



## gupsterg

@dagget3450

The 1080 Ti is out of my price range.

I'd be happy with VEGA performance, but when I look at the GTX 1080 Sea Hawk EK X offering, it currently makes more sense on all fronts. VEGA surpassing the GTX 1080 seems like a variable I would be waiting on, and also something that may or may not happen.

Let's take the 980 Ti vs Fury X as an example; these were the cards I was considering when I had Hawaii. They pretty much traded blows. Did the Fury X surpass the 980 Ti by a justifiable margin? Nope. At the time, the Fury X's price/performance and AIO swung the purchase.

Now again it's a similar comparison, but the VEGA purchase has the disadvantages highlighted in prior posts.

TPU don't recycle results from another review, so they're a good reference on this point (test setup link). 1440p is the res I use; the Fury X on average across the test games has ~3% more performance than the 980 Ti (link). Now, the games in TPU's Fury X launch review differ from those in the VEGA review; if we disregard that point (which is significant), we could say the Fury X gained ~17% in a roundabout way over two years. In the RX 480 review (2016) the 980 Ti is still ahead (link).

So how I see it is that by the time I may wish to jump ship, an AMD card is only just matching/beating the competitor card it launched against. Too long IMO, and as at this purchase point the competitor product is the better price/performance item and fits the bill, it's more compelling.

@Papa Emeritus

Thank you for the info. Currently I suspect OCuk will have the Sapphire VEGA 56 on launch day, as they had Sapphire on the 64's launch day. Only one other etailer in the UK did the VEGA 64 at £449 on launch day; all the others were £550+. Sapphire definitely don't allow a TIM/cooler swap, though I have read of successful RMA cases where the user had swapped TIM/cooler. Now, if I need a particular AIB VEGA card to keep the warranty with a block, I believe my chances of snagging a promo card will be significantly reduced, maybe even zero.


----------



## punchmonster

As a Fury X owner, comparing Vega and Fury X is absurd. The Fury X was never able to catch an overclocked 980 Ti at launch, and it wasn't as drastically new an architecture. Vega can catch an overclocked 1080, and it is a new architecture.

With proper cooling it's already flat-out faster than an overclocked 1080, before any overclocking or undervolting.
Quote:


> Originally Posted by *gupsterg*
> 
> @dagget3450
> 
> 1080 Ti out of price range for purchase.
> 
> I'd be happy with VEGA performance, but when I look at the GTX 1080 Sea Hawk EK X offering it makes more sense on all fronts currently. Surpassing GTX 1080 by VEGA seems like a variable I will be waiting on and also something that may or may not happen.
> 
> Let's take 980 Ti vs Fury X as an example, these were the cards when I had Hawaii that I was considering. Pretty much they traded blows, did the Fury X surpass the 980 Ti by a justifiable margin nope. The Fury X at the time due to it's price/performance and AIO swung the purchase.
> 
> Now again it's a similar compare, but the VEGA purchase has the disadvantage highlighted in prior posts.
> 
> TPU don't recycle results from another review, so on this point good to ref, test setup link. Now 1440P is the res I use, so Fury X on average from test games has ~3% more performance than 980 Ti, link. Now the games in the launch review of Fury X on TPU differ from VEGA review, if disregard that point (which is significant) we could say Fury X gained ~17% in a round about way over 2yrs. In the RX 480 review (2016) the 980 Ti is still ahead, link.
> 
> So how I see it is by the time I may wish to jump ship on an AMD card is about when it matches/beats a competitor card. Too long IMO now.
> 
> @Papa Emeritus
> 
> Thank you for the info. Currently I suspect OCuk will have Sapphire VEGA 56 on launch day, as they had Sapphire on 64 launch day. Only one other etailer in the UK did VEGA 64 at £449 on launch day, all others was £550+. Sapphire definitely don't allow TIM/cooler swap, but I have read successful RMA cases where user has swapped TIM/cooler. Now if I need a particular AIB VEGA card to have warranty with block I do believe my chances of snagging a promo card will be significantly reduced, may even be 0 chance.


----------



## Nuke33

+1 @punchmonster

My undervolted 64 Liquid with unlocked PT is around 3% faster than an MSI 1080 OC X.
With these crappy drivers, no less.


----------



## gupsterg

@punchmonster

I'm not comparing Fury X to VEGA.

What I'm doing, in a way, is using Fury X vs 980 Ti as a comparison of the gains AMD FineWine™ brings, to speculate where VEGA may be as it ages.

I agree VEGA has 'features' which Fiji lacks, so I would not make that comparison. Those 'features' are not in full use, and that makes the decision of which card to go for difficult.

If those 'features' take, what, six months to come to fruition, was the wait worth it? How well supported will those features be? These are the things swirling in my mind, making it difficult.

I bought Hawaii because, simply put, the competitor's card at the time was not the right purchase in my eyes. I bought the Fury X as, again, it was a better 'complete package' than an air-cooled 980 Ti at the price point and wasn't lacking in performance. Now, as stated before, the GTX 1080 and VEGA 56/64, when accounting for WC, are all at a similar price and trade blows on performance, but with the GTX 1080 I will be fully covered on the warranty aspect. This last aspect is what is swaying the purchase at present.

Perhaps I'm wrong to pose these posts, as truly none of us can answer what VEGA's performance would be if everything it has were 'firing on all cylinders'.

@Nuke33

The VEGA Liquid here is ~£700, so you can see why I'd go for a GTX 1080 Sea Hawk EK X at £500. I cannot justify ~£200 extra for it; if I had those extra funds I'd go for a GTX 1080 Ti at ~£630.

Sapphire VEGA AIO - £700, PowerColor VEGA AIO - £780, Gigabyte VEGA AIO - £800, OCuk VEGA 64 with WC block - £804.


----------



## punchmonster

As long as you're in the EU (which the UK still is), they can't deny you an RMA for swapping the cooler, as long as they can't prove you damaged the card.
Quote:


> Originally Posted by *gupsterg*
> 
> @punchmonster
> 
> I'm not comparing Fury X to VEGA.
> 
> What I'm doing in a way is using Fury X vs 980 Ti as compare of gains AMD FineWine™ does to speculate where VEGA maybe as it ages.
> 
> I agree VEGA has 'features' which Fiji lacks, so would not make that compare. Those 'features' are not in full use. That makes the decision difficult to make on which card to go for.
> 
> If those 'features' takes what 6mths to come to 'fruition' was the wait worth it? how well supported will those features be? these are the things swirling in my mind making it difficult.
> 
> I bought Hawaii as simply put the card at the time by competitor was not right purchase in my eyes. I bought Fury X as again it was a better 'complete package' than air cooled 980 Ti at price point and wasn't lacking in performance. Now as stated before the GTX 1080 and VEGA 56/64, when accounting for WC are all at the similar price, performance is trading blows with each, but I will fully be covered on warranty aspect. This last aspect is what is swaying purchase at present.
> 
> Perhaps I'm wrong to pose these posts, as truly none of us can answer what would VEGA be performance wise if all it has is 'firing on all cylinders'.
> 
> @Nuke33
> 
> VEGA Liquid here is ~£700, so you can see why I'd go for a GTX 1080 Sea Hawk EK X for £500. I can not justify ~£200 extra for it, if I had that extra funds I'd go GTX 1080 Ti for ~£630.


----------



## gupsterg

I agree and disagree. From others' shares, too many times it becomes a battle, which I don't have the time for.

This is also why Amazon UK has become my port of call for what I regard as higher-value purchases. Their CS is so good it boggles my mind: if you're truly unhappy, wham, full refund.

Perhaps I'll just wait and see what goes on for a bit more before buying.

Thanks guys for an understanding discussion, which to me seemed a balanced one.


----------



## dagget3450

Quote:


> Originally Posted by *gupsterg*
> 
> I agree and disagree. From others shares too many times it becomes a battle which I don't have the time for.
> 
> This is also why Amazon UK become my 'port of call for what I regard as higher value purchases. They seem to have such good aspects of CS that it boggles my mind, if truly unhappy, wham full refund.
> 
> Perhaps I'll just wait and see what goes on for a bit more before buying what I do.
> 
> Thanks guys for understanding discussion, which to me seemed a balanced discussion.


That's why I said "me personally"; I realize our needs are different. Even still, I don't see a 1080 as a worthy upgrade from a Fury X considering the cost. Now, if you sell your Fury X, will you get enough to cover the upgrade costs? And is the Fury X not doing enough for you as it stands?


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> @punchmonster
> 
> I'm not comparing Fury X to VEGA.
> 
> What I'm doing in a way is using Fury X vs 980 Ti as compare of gains AMD FineWine™ does to speculate where VEGA maybe as it ages.
> 
> I agree VEGA has 'features' which Fiji lacks, so would not make that compare. Those 'features' are not in full use. That makes the decision difficult to make on which card to go for.
> 
> If those 'features' takes what 6mths to come to 'fruition' was the wait worth it? how well supported will those features be? these are the things swirling in my mind making it difficult.
> 
> I bought Hawaii as simply put the card at the time by competitor was not right purchase in my eyes. I bought Fury X as again it was a better 'complete package' than air cooled 980 Ti at price point and wasn't lacking in performance. Now as stated before the GTX 1080 and VEGA 56/64, when accounting for WC are all at the similar price, performance is trading blows with each, but I will fully be covered on warranty aspect. This last aspect is what is swaying purchase at present.
> 
> Perhaps I'm wrong to pose these posts, as truly none of us can answer what would VEGA be performance wise if all it has is 'firing on all cylinders'.
> 
> @Nuke33
> 
> VEGA Liquid here is ~£700, so you can see why I'd go for a GTX 1080 Sea Hawk EK X for £500. I can not justify ~£200 extra for it, if I had that extra funds I'd go GTX 1080 Ti for ~£630.
> 
> Sapphire VEGA AIO - £700, PowerColor VEGA AIO - £780, Gigabyte VEGA AIO - £800, OCuk VEGA 64 with WC block - £804.


I get your reasoning; at those prices it is really hard to justify a half-baked driver and the power and coil-whine issues. I bought mine for 700€ on launch day, but I think I just got lucky: only two hours after I ordered, the price jumped to 750€, and now it is at an absurd 800€.

I would have been content with my GTX 980 Ti and could have waited for the next gen, but what really made the difference in keeping Vega is the difference in DPC/system latency. I am mainly an FPS gamer, so low latency is a must, and nVidia's drivers are really awful in that regard: with the GTX 980 Ti I had double the latency compared to Vega. I do not want to know how bad it would be on Pascal; it seems to be a lot worse than Maxwell.

And even efficiency is not a concern, since I get almost Pascal-level efficiency with a 950mV undervolt.
Another bonus for me is AMD's very good open-source Linux driver support.
nVidia's shady politics aside, their open-source support is basically non-existent: everything is proprietary, be it CUDA, PhysX, G-Sync, their driver binaries...

So, all that taken into account, Vega is the obvious choice for me, but I get why you might see it differently. At the end of the day one should buy what suits them most; fanboyism is such a plague. The comment section on WccfTech is the best example.


----------



## VickB

Quote:


> Originally Posted by *Nuke33*
> 
> I get your reasoning, at those prices it is really hard to justify a half baked driver and power, coil whine issues. I bought mine for 700€ at launch date, but I think I just got lucky. Only 2 hours after I ordered the price jumped to 750€ and now it is at an absurd 800€.
> 
> I would have been content with my GTX980ti and could have waited for the next gen, but what really made the difference to keep Vega is the difference in DPC/System Latency. I am mainly a FPS gamer so low latency is a must. Nvidias drivers are really awful in that regard. With GTX980ti I have double the latency compared to Vega. I do not want to know how bad it would be on pascal, they seem to be a lot worse than maxwell.
> 
> And even efficiency is not of concern since I almost get pascal level efficiency with 950mv undervolt.
> Another bonus for me is the very good linux support for opensource drivers from AMD.
> Nvidias shady politics aside, their support for OpenSource is basically non existent. Everything is proprietary, be it CUDA, Physx,Gsync, their driver binaries....
> 
> So that all taken into account Vega is the obvious choice for me, but I get why you might see it differently. At the end of the day one should buy what suits them the most, fanboy-ism is such a plague.
> Comment section on WccfTech is the best example


Where on earth do you live that it was 700€ on launch day and 800€ now? That's insane; it's only 620€ here.


----------



## gupsterg

Quote:


> Originally Posted by *dagget3450*
> 
> Thats why i said "me personally" i realize our needs are different.


Indeed, accepted and no doubt.
Quote:


> Originally Posted by *dagget3450*
> 
> Even still i dont see a 1080 as a worthy upgrade from FuryX considering the cost. Now if you sell your furyx and get enough to cover the upgrade costs? Is the Furyx not doing enough for you as it stands?


If I sell the Fury X in the near future, I make no loss on the sale; the longer I delay, the more it becomes a loss situation IMO. So selling up is more down to this, plus my answer to the other member below.

When I disposed of the 290/X slightly before the 390/X launch, there was no huge performance gap, but as Hawaii was 4GB vs 8GB I believe the resale value was affected slightly afterwards. Shortly after I sold Hawaii, resale values plummeted to a level where I would have made a loss. I added ~£75 to the pot to have the Fury X, which as said before was viable due to the performance/cooling solution.

VEGA is priced so 'hot' at present that the Fury X will remain viable for me to sell at no loss. As supply improves in the coming months, I believe it will take a hit in resale value and I would make a loss.

I'll start with price first.

VEGA 56 ~£350, if OCuk keep to the highlighted launch-day price.
VEGA 64 ~£449 is being used for the comparison, but there's really zero chance of getting this price in the coming months; only two etailers in the UK had it on launch day.

As the air blowers have zero value to me, I must add £100 to go WC, but I'll just knock that off the GTX 1080 Sea Hawk EK X for now to make a valid comparison, so that card is in my eyes £400, sitting between the two. By adding ~33% to the £300 pot from selling the Fury X, I get a ~32% performance gain per the chart in TPU's GTX 1080 FE review (link). The Sea Hawk has higher clocks than the reviewed card; I'm discounting that and whatever OC it will gain on WC. There's still a 22% gap between the Fury X and GTX 1080 on the chart in the VEGA 64 review (link).

So there we have my reasoning; I see no reason not to jump from Fury X to GTX 1080 IMO. The only 'fly in the ointment' is VEGA's true performance level, as there are 'features' currently not being fully utilized.
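The arithmetic above can be put in one place as a quick sketch. All figures are the poster's own estimates quoted in this thread (£300 Fury X resale, £500 Sea Hawk, £100 block value, ~32% gain from the TPU chart), not measured data:

```python
# Rough price/performance sketch using the figures quoted above.
# Every number here is an estimate from the post, not a measurement.

fury_x_resale = 300.0    # expected £ from selling the Fury X
sea_hawk_price = 500.0   # £ for the GTX 1080 Sea Hawk EK X delivered
block_value = 100.0      # £ value assigned to the factory WC block

# Knock the block's value off to compare against "bare" VEGA prices.
effective_card_price = sea_hawk_price - block_value   # £400
top_up = effective_card_price - fury_x_resale          # extra cash needed

top_up_pct = top_up / fury_x_resale * 100              # ~33% added to pot
perf_gain_pct = 32                                     # ~32% per TPU chart

print(f"Top-up: £{top_up:.0f} ({top_up_pct:.0f}% of resale) "
      f"for ~{perf_gain_pct}% more performance")
```

In other words, roughly a third more money for roughly a third more performance, which is why the swap looked close to break-even to the poster.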

@Nuke33

I play FPS predominantly; I was unaware of the nVidia driver issues but will research. +rep for the share.

If I use the etailer I plan to, it's a simple no-quibble return within 30 days even after use.

I enjoyed my time with BIOS mods on Hawaii/Fiji; VEGA is locked hard, so jumping to nVidia with its lacklustre BIOS modding (from what I've read so far) is a non-issue.

What has also recently made up my mind to sell off Fiji is what AMD have done in the v17.x.x drivers. For ~8 months an HBM overclock applies, but the driver blocks any performance gain. The gain from the HBM clock is low, *but* when you OC the GPU core on Fiji it loses some performance scaling with voltage increase, and the HBM clock gets that back. I have done 250+ Fury X 3DMark runs, so I know what the card's performance should be; the thread is on OCN. I recently made a thread on OCuk, as an AMD rep is more active there, and the answer was: sorry, we will not give Fiji back the performance gain from an HBM clock increase. *Buy VEGA, as that has HBM OC support from AMD.* On Fiji I could use a VBIOS mod or a 3rd-party app, from the June '15 drivers until the January '17 ones, to gain performance from an HBM clock increase; I no longer can, and I have no idea why it's locked now.

To me, our small number of purchases makes little difference to a company however we perceive its 'ethics/practices', so as the purchase decision affects me more in terms of what I get for my £, I pretty much have to disregard going for the company with the better 'ethics/practices'. At the end of the day I am just a consumer and a sale to company xyz.


----------



## VickB

Quote:


> Originally Posted by *Nuke33*
> 
> Germany
> 
> I am talking about the Liquid cooled model though not AIR.


Ah, gotcha, then yeah, very lucky. Here the liquid-cooled ones have been 749€ from launch day till now; prices haven't changed. I got my air card for 508€, so I'm happy as hell. I ran Firestrike and don't seem to hear any coil whine (especially with the fan going at like 2500rpm). I do have a dozen fans, since I'm watercooled with two rads, but they're quiet and I'm still not hearing coil whine, and I have pitch-perfect hearing too, so I don't know. Guessing people have like no case fans and can hear everything, haha.

If this is what you guys sound like, I've never heard mine do that, lol.

Edit: So yeah, I just tried with all the fans off in my case (the only ones left on were the GPU fan and rear exhaust; the D5 pump isn't even audible) and I couldn't hear anything but the fan on the GPU. Doesn't seem like mine has coil whine.


----------



## Nuke33

@gupsterg

You are welcome, thanks for the rep.

I tried Fiji once at launch but wasn't satisfied at all; the ASIC I got was a potato, it neither overclocked nor undervolted very well.

Kinda strange they locked down HBM perf, since it's the one remarkable feature of Fiji. Maybe to boost next-gen sales.

Edit: Saw your highlights, so never mind my last sentences.

I can further elaborate on the latency issues and/or solutions if you like. I got mine as low as 16-20μs at the highest set multiplier for the CPU.

@VickB

508€ is a very good price.

Sadly, I can hear a very distinct coil whine at stock volts. I am sensitive to noise myself; my system is a silent build, and during the night the card is barely audible while gaming, but the AIO solution and the coil whine on Vega kind of ruin this silence. I am looking into modding the AIO and fixing the coil whine.
Undervolting helps a lot with coil whine, but it still happens occasionally.
Frame-limiting to 75fps solves it for me since I own a 75Hz monitor, but I wouldn't count on that above 100fps.


----------



## Nuke33

Quote:


> Originally Posted by *VickB*
> 
> Edit: So yea i just tried with all the fans off in my case (only one was the gpu and rear exhaust, the d5 pump isnt even audible) and i couldn't hear anything but the fan on the gpu, doesn't seem like mine has coil whine.


Lucky you, your Vega seems like a keeper


----------



## VickB

Quote:


> Originally Posted by *Nuke33*
> 
> @gupsterg
> 
> You are welcome, thanks for the rep
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I tried Fiji once at launch but wasn't satisfied at all. The ASIC I got was a potato; it did not overclock or undervolt very well.
> 
> Kinda strange they locked down HBM perf, since it's the one remarkable feature of Fiji. Maybe to boost following-gen sales
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I can further elaborate on latency issues and/or solutions if you like. I got mine as low as 16-20μs at the highest set multiplier for the CPU.
> 
> @VickB
> 
> 508€ is a very good price
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sadly I can hear very distinct coil whine at stock volts. I am sensitive to noise myself; my system is a silent build. During the night it is barely audible while gaming. The AIO solution and the coil whine on Vega kind of ruin this silence. I am looking into modding the AIO and fixing the coil whine.
> Undervolting helps a lot with coil whine, but it still happens occasionally.
> Frame-limiting to 75fps solves it for me since I own a 75Hz monitor, but I wouldn't count on it above 100fps.


Yeah, that makes sense. I'm hoping it doesn't start showing up when I put the block on, but neither my 7850 nor my R9 390 had coil whine, and the 390 was on air for a year then on water. I'm guessing it may have to do with the power supply as well; I can't imagine this being every single Vega, but I've been wrong before.

My hearing is sensitive enough that I need to dial down one of my fans because at 1150rpm it hums but at 1100 or 1200 it doesn't; pretty sure I'd notice coil whine haha. I even closed all the windows (double-glazed, gas-filled) and my doors, turned my fan and TV off, and stuck my head in my case (it's a TT X5, so the GPU is mounted vertically instead of horizontally). I'm not sure if that would make a difference, but I couldn't hear anything except the fan. If it were louder than the fan I'd totally hear it, but if it's barely audible over the fans it doesn't bother me.

I've heard it before on the many PCs I've built, but the past 4 cards for me have had none (5770/7850/390/64), so hopefully it stays that way.

P.S. As soon as I saw the price I jumped on it just in case. I received it on the 22nd I think, even though I ordered it on the 15th, so they had to wait a few days to get them in stock; France is a bit behind in technology.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> I enjoyed my time with bios mod on Hawaii/Fiji, VEGA is locked hard, so jumping to nVidia with lack lustre bios mod (from what I've read so far) is non issue.


You have to be aware though that Pascal has a similar lock on its VBIOS. You need an external flasher to circumvent the signature checks. Something like this:

https://www.amazon.de/dp/B01DZC36GY/ref=wl_it_dp_o_pC_S_ttl?_encoding=UTF8&colid=2PL1YQW39J3H5&coliid=ILL959JN0SW1I

https://www.amazon.de/TEST-CLIP-1-27MM-SOIC-POMONA/dp/B018CPWJTO/ref=sr_1_1?ie=UTF8&qid=1503762876&sr=8-1&keywords=pomona+5250

I haven't tried this method on Vega, but from what I read here it seems more complicated than on Pascal.
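On a related note: whenever you dump a BIOS with one of those SPI flashers, it is worth sanity-checking the image before comparing or re-flashing anything. This is purely an illustrative sketch (not part of any flasher tool), based on the standard legacy option-ROM header: legacy PCI option ROMs start with the signature bytes 0x55 0xAA, and the third header byte gives the image size in 512-byte blocks.

```python
def rom_info(image: bytes):
    """Return (valid, size_bytes) for a legacy PCI option-ROM image.

    A legacy option ROM starts with the signature bytes 0x55 0xAA,
    and byte 2 encodes the image size in 512-byte blocks.
    """
    if len(image) < 3 or image[0] != 0x55 or image[1] != 0xAA:
        return (False, 0)
    return (True, image[2] * 512)

# e.g. a dump whose header claims 0x80 blocks -> a 64 KiB image
print(rom_info(bytes([0x55, 0xAA, 0x80]) + bytes(13)))  # (True, 65536)
```

If the signature is missing or the claimed size doesn't match the dump length, re-read the chip before touching anything.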


----------



## Nuke33

Quote:


> Originally Posted by *VickB*
> 
> Yea that makes sense, I'm hoping when i put the block on it doesnt start showing up but neither my 7850 nor r9 390 had coil whine and the 390 was on air for a year then on water. I'm guessing it may have to do with the power supply as well I can't imagine this just being every single Vega, but I've been wrong before.
> 
> I'm sensitive enough of hearing that i need to dial down one of my fans because at 1150rpm it hums but at 1100 or 1200 it doesnt, pretty sure Id notice coil whine haha. I even closed all the windows (double sided filled with gas) and my doors and turned my fan and tv off and stuck my head in my case (its a TT x5 so the gpu is mounted vertically instead of horizontally) I'm not sure if that would make a difference but i couldn't hear anything just the fan. If its supposed to be louder then the fan then id totally hear it but its barely audible then over the fans it doesnt bother me.
> 
> I've heard it before on the many PCs ive built but the past 4 cards for me have had none, 5770/7850/390/64 so hopefully it stays that way.
> 
> P.S. As soon as i saw the price i jumped on it just in case, i received it on the 22nd i think even though i ordered it the 15th so they had to wait a few days to get em in stock, France is a bit behind in technology.


Fingers crossed









My fans behave exactly the same. They aren't by chance Noctua PPAs?

I am actually surprised there are still Vegas in stock in Germany. Either AMD actually managed to ship enough, or miners haven't caught up with the latest developments.


----------



## roybotnik

FE air owner here (feel free to add me to the list @dagget3450). Which drivers are FE owners currently using? Nothing since the launch-day drivers exposes the gaming/pro mode toggle for me anymore, which means no Wattman. This is extremely frustrating given the lack of any third-party overclocking tool support right now. I want to be able to undervolt and tweak the fan settings, sigh.

AMD has obviously been pretty busy releasing all sorts of beta drivers and whatever, but I would really appreciate some support for my $1000 GPU... It's been two months.

It also seems like the beta listed here has the incorrect version (it says 17.8.2 but the download is the 17.8.1 beta?): http://support.amd.com/en-us/download/frontier?os=Windows%2010%20-%2064


----------



## Chaoz

Quote:


> Originally Posted by *Nuke33*
> 
> Fingers crossed
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My fans behave exactly the same. They aren't by chance Noctua PPAs?
> 
> I am actually surprised there are still Vegas in stock in Germany. Either AMD actually managed to ship enough, or miners haven't caught up with the latest developments.


I bought mine in Germany as well; 3 days after release they were still in stock. I had no choice but to buy mine in Germany, even though I live in Belgium: due to problems with the suppliers here, there was no stock anywhere in Belgium. But props to Alternate Germany. Bought mine on Wednesday and it was already delivered on Friday at noon.

I got a Sapphire air-cooled Black version, as I had already bought a waterblock a week or so in advance, so I didn't care what the GPU looked like. Too bad mine whines like a mf under 100% load in-game. It doesn't seem to do that when I enable V-Sync.

Still waiting on my DisplayPort cable so I can use FreeSync.


----------



## Nuke33

Hellm just posted that he got info on clock throttling. It seems to be HBM heat related.

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/270#post_26306449


----------



## gupsterg

Quote:


> Originally Posted by *Nuke33*
> 
> @gupsterg
> 
> You are welcome, thanks for the rep
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have tried Fijii once on launch but wasn't satisfied at all. The asic I got was a potato, it did not overclock nor undervolt very well.
> 
> Kinda strange they locked down HBM perf, since it´s the one remarkable feature of Fijii. Maybe to boost following gen sales
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I can further elaborate on Latency issues and/or solutions if you like. I got mine as low as 16-20μs on highest set multiplier for the cpu.


I will PM for info on nVidia DPC, if you don't mind.

I went through 11 Fiji cards. 1100MHz +/- 20MHz was average on them; one card did 1135MHz for a week and then couldn't hold the OC. The card I kept was the best, and it still does 1145MHz after a year+ of ownership. With the voltage increase it needs for fully stable 1145MHz it is 1175MHz bench-run stable, but not gaming stable for lengthy use. Most cards did HBM 545MHz with a slight voltage bump or at stock; only 1 card was competently stable at 600MHz HBM for gaming, but it clocked poorly on core. Real shame about the drivers on Fiji IMO.
Quote:


> Originally Posted by *Nuke33*
> 
> You have to be aware though that Pascal has a similar lock on its VBios. You need an external flasher to circumvent signature checks. Something like this:
> 
> https://www.amazon.de/dp/B01DZC36GY/ref=wl_it_dp_o_pC_S_ttl?_encoding=UTF8&colid=2PL1YQW39J3H5&coliid=ILL959JN0SW1I
> 
> https://www.amazon.de/TEST-CLIP-1-27MM-SOIC-POMONA/dp/B018CPWJTO/ref=sr_1_1?ie=UTF8&qid=1503762876&sr=8-1&keywords=pomona+5250
> 
> I haven't tried this method on Vega, but from what I read here it seems more complicated than on Pascal.


Yes, noted this. I have seen results of ~2GHz on GTX 1080 under water without a modified BIOS when I searched the owners thread. Even if I don't get that, 1850MHz will be AOK, which is this card's stock boost.

On VEGA, even if you use an external flash tool, the 'security chip' will detect a flashed modified BIOS and will not allow it to be used. You are currently limited to using another card's unmodified VBIOS.

There has been 1 confirmed successful use of the RX VEGA AIO VBIOS on an RX VEGA AIR card. The issue, though, is that you have no access to the GPU core / HBM temp throttle limits in Wattman, so as the RX VEGA AIO has lower limits, I reckon a VEGA AIR user may encounter throttling issues if airflow is not good.

RX VEGA on water flashed to RX VEGA AIO should be fine and dandy IMO.

Real shame AMD have locked down VBIOS modding on VEGA so hard.


----------



## gupsterg

Quote:


> Originally Posted by *Nuke33*
> 
> Hellm just posted he got infos on clock throttling. Seems to be HBM heat related.
> 
> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/270#post_26306449


Throttle temps were posted a week+ ago in that same thread by me, linked and discussed here before.

Here is some more pertinent info.

RX VEGA AIR 56/64 temp limits in PowerPlay

Code:

(89°C)    USHORT usSoftwareShutdownTemp;
(105°C)   USHORT usTemperatureLimitHotSpot;
(74°C)    USHORT usTemperatureLimitLiquid1;
(74°C)    USHORT usTemperatureLimitLiquid2;
(95°C)    USHORT usTemperatureLimitHBM;
(115°C)   USHORT usTemperatureLimitVrSoc;
(115°C)   USHORT usTemperatureLimitVrMem;
(100°C)   USHORT usTemperatureLimitPlx;
(85°C)    USHORT usTemperatureLimitTedge;

RX VEGA 64 AIO temp limits in PowerPlay

Code:

(74°C)    USHORT usSoftwareShutdownTemp;
(105°C)   USHORT usTemperatureLimitHotSpot;
(74°C)    USHORT usTemperatureLimitLiquid1;
(74°C)    USHORT usTemperatureLimitLiquid2;
(95°C)    USHORT usTemperatureLimitHBM;
(115°C)   USHORT usTemperatureLimitVrSoc;
(115°C)   USHORT usTemperatureLimitVrMem;
(100°C)   USHORT usTemperatureLimitPlx;
(70°C)    USHORT usTemperatureLimitTedge;

As the values are contained in PowerPlay, editing the table in the registry mod to a new value should apply. Be aware each registry file by hellm is stock, so users need to mod the PL, etc. as they want.
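For what it's worth, the edit itself boils down to overwriting a little-endian USHORT inside the binary soft-PowerPlay blob before importing it back into the registry. A minimal sketch of that byte patch in Python; the offset used in the demo is invented purely for illustration, and the real offsets depend on your card's table layout, so verify against your own dump first:

```python
import struct

def patch_ushort(table: bytes, offset: int, value: int) -> bytes:
    """Return a copy of a PowerPlay blob with the little-endian USHORT
    at `offset` (e.g. a temperature limit in degrees C) set to `value`."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("USHORT out of range")
    return table[:offset] + struct.pack("<H", value) + table[offset + 2:]

# Demo on a synthetic blob: pretend the 95 degC HBM limit sits at offset 8.
blob = bytes(8) + struct.pack("<H", 95) + bytes(4)
patched = patch_ushort(blob, 8, 85)  # lower the limit to 85 degC
print(struct.unpack_from("<H", patched, 8)[0])  # 85
```

The same approach works for any of the USHORT fields in the temp-limit structs above, as long as you have confirmed where they sit in your particular dump.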


----------



## Energylite

Quote:


> Originally Posted by *gupsterg*
> 
> Real shame AMD have locked down VBIOS mod on VEGA so hard.


Yeah, it's a shame, but I believe one day we could break down this security to mod the BIOS.


----------



## roybotnik

Quote:


> Originally Posted by *Nuke33*
> 
> Hellm just posted he got infos on clock throttling. Seems to be HBM heat related.
> 
> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/270#post_26306449


Aye, I found the same thing and posted it in a few other places. It's unlikely that the HBM is actually hitting 95C, but the card will throttle when the HBM temp as reported by HWInfo64 hits 95C. Who knows what that temp actually is, but it definitely throttles at that point regardless of GPU temp or power usage.

I am curious if any Vega 64 owners can check what that temp looks like for them. I wonder if it's any cooler with the 4-hi stacks vs the 8-hi stacks on FE.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> I will PM for info on nVidia DPC, if you don't mind
> 
> 
> 
> 
> 
> 
> 
> .


Sure, go ahead








Quote:


> Originally Posted by *gupsterg*
> 
> I went through 11 Fiji cards. 1100MHz +/- 20MHz was average on them; one card did 1135MHz for a week and then couldn't hold the OC. The card I kept was the best, and it still does 1145MHz after a year+ of ownership. With the voltage increase it needs for fully stable 1145MHz it is 1175MHz bench-run stable, but not gaming stable for lengthy use. Most cards did HBM 545MHz with a slight voltage bump or at stock; only 1 card was competently stable at 600MHz HBM for gaming, but it clocked poorly on core. Real shame about the drivers on Fiji IMO.


Wow, 11 cards... you must be really patient.
I only tried a couple of GTX 980 Tis till I found a good one with 74% ASIC quality.
Yeah, it's sad AMD resorts to the same methods as nVidia.
But you should still get a good return on investment with the mining craze nowadays. I see many Furys being bought at around 350-400€ on eBay.
Quote:


> Originally Posted by *gupsterg*
> 
> Yes, noted this. I have seen results of ~2GHz on GTX 1080 under water without a modified BIOS when I searched the owners thread. Even if I don't get that, 1850MHz will be AOK, which is this card's stock boost.
> 
> On VEGA, even if you use an external flash tool, the 'security chip' will detect a flashed modified BIOS and will not allow it to be used. You are currently limited to using another card's unmodified VBIOS.
> 
> There has been 1 confirmed successful use of the RX VEGA AIO VBIOS on an RX VEGA AIR card. The issue, though, is that you have no access to the GPU core / HBM temp throttle limits in Wattman, so as the RX VEGA AIO has lower limits, I reckon a VEGA AIR user may encounter throttling issues if airflow is not good.
> 
> RX VEGA on water flashed to RX VEGA AIO should be fine and dandy IMO.
> 
> Real shame AMD have locked down VBIOS modding on VEGA so hard


Yeah, VEGA on water with the AIO BIOS is probably the most reasonable way to get the best out of Vega imo.

Truly a shame. I don't believe for a second it's because of Secure Boot. It is more likely a means to prevent people from flashing Vega 56 to Vega 64 performance, like it was possible with the RX 470.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> Throttle temps were posted a week+ ago in that same thread by me, linked and discussed here before.
> 
> Here is some more pertinent info.
> 
> RX VEGA AIR 56/64 temp limits in PowerPlay
> 
> Code:
> 
> (89°C)    USHORT usSoftwareShutdownTemp;
> (105°C)   USHORT usTemperatureLimitHotSpot;
> (74°C)    USHORT usTemperatureLimitLiquid1;
> (74°C)    USHORT usTemperatureLimitLiquid2;
> (95°C)    USHORT usTemperatureLimitHBM;
> (115°C)   USHORT usTemperatureLimitVrSoc;
> (115°C)   USHORT usTemperatureLimitVrMem;
> (100°C)   USHORT usTemperatureLimitPlx;
> (85°C)    USHORT usTemperatureLimitTedge;
> 
> RX VEGA 64 AIO temp limits in PowerPlay
> 
> Code:
> 
> (74°C)    USHORT usSoftwareShutdownTemp;
> (105°C)   USHORT usTemperatureLimitHotSpot;
> (74°C)    USHORT usTemperatureLimitLiquid1;
> (74°C)    USHORT usTemperatureLimitLiquid2;
> (95°C)    USHORT usTemperatureLimitHBM;
> (115°C)   USHORT usTemperatureLimitVrSoc;
> (115°C)   USHORT usTemperatureLimitVrMem;
> (100°C)   USHORT usTemperatureLimitPlx;
> (70°C)    USHORT usTemperatureLimitTedge;
> 
> As the values are contained in PowerPlay the registry mod table edited to a new value should apply. Be aware each registry file by hellm is stock, so user need to mod PL, etc as they want.


Oh, I must have read past that. +Rep


----------



## VickB

Yeah, good info. Anyone care to post their HBM temps from HWiNFO64? Mine is hitting 92°C playing Car Mechanic Simulator 2018 (a very demanding game, not sure why, but it looks AMAZING). I'm hoping under water it will be closer to 50°C.

Out of curiosity, what are you guys using to undervolt your cards? Wattman? WattTool? Afterburner?


----------



## Nuke33

Quote:


> Originally Posted by *VickB*
> 
> Yeah, good info. Anyone care to post their HBM temps from HWiNFO64? Mine is hitting 92°C playing Car Mechanic Simulator 2018 (a very demanding game, not sure why, but it looks AMAZING). I'm hoping under water it will be closer to 50°C.
> 
> Out of curiosity, what are you guys using to undervolt your cards? Wattman? WattTool? Afterburner?


Wattman + modified PowerPlay tables


----------



## gupsterg

Quote:


> Originally Posted by *Energylite*
> 
> Yeah it's a shame, but i believe one day we could break down this security to mod the bios


The signature could be cracked, but I believe AMD must be able to update the 'security chip' via the driver and then relock it.

For example, when the issue of high power draw from the PCI-E slot occurred on the Polaris reference PCB, they reprogrammed the voltage controller via the driver.
Quote:


> Originally Posted by *roybotnik*
> 
> Aye, I found the same thing and posted it in a few other places. It's unlikely that the HBM is actually hitting 95C, but the card will throttle when the HBM temp as reported by HWInfo64 hits 95C. Who knows what that temp actually is, but it definitely throttles at that point regardless of GPU temp or power usage.
> 
> I am curious if any Vega 64 owners can check what that temp looks like for them. I wonder if it's any cooler with the 4-hi stacks vs the 8-hi stacks on FE.


There was a post with RX VEGA HWiNFO showing HBM temp. It was a higher temp than core, link.

The error (if there is one) in my view would be from the AMD driver. On Fiji, the on-die SMC became what OS apps must 'message' to gain data, change voltage, etc. As VEGA FE has i2c completely disabled, the OS software would not be able to talk to the VRM control chip for anything, so again for, say, VRM temps, etc. it would be the SMC being 'messaged'. RX VEGA does have i2c open, from the insight we have gained from The Stilt. Martin Malik does have dev access to AMD AFAIK, so he would have got his app correctly set to 'message' the SMC IMO.
Quote:


> Originally Posted by *Nuke33*
> 
> Sure, go ahead


Cheers








Quote:


> Originally Posted by *Nuke33*
> 
> Woow, 11 cards
> 
> 
> 
> 
> 
> 
> 
> .. you must be really patient.
> I only tried a couple GTX980Tis till I found a good one with 74%asic quality.
> Yeah it´s sad AMD resorts to the same methods as nVidia.
> 
> But you should still get a good return on investment with the mining craze nowadays. I see many Furys being bought at around 350-400€ on ebay.


Yeah, I had spare time; it was interesting to experience each card.

ASIC Quality on AMD is LeakageID; read the section in the OP of the Hawaii BIOS mod thread and you will see The Stilt's info. This was also true for Fiji and Polaris.
Quote:


> Originally Posted by *Nuke33*
> 
> Yeah, VEGA on water with AIO bios is probably the most reasonable way to get the best out of Vega imo.
> 
> Truly a shame. I don´t believe for a second it´s because of Secure Boot. It is more likely a means to prevent people from flashing Vega56 to Vega64 performance like it was possible with RX470.


I doubt it was for Secure Boot as well. There were 2-3 layers of security for a 'pure UEFI' environment; this post covers it for Polaris and earlier cards with UEFI/GOP in the VBIOS.


----------



## theBee2112

Quote:


> Originally Posted by *VickB*
> 
> Yea good info. Anyone care to post their HBM temps under hwinfo64? Mine is hitting 92°C playing Car Mechanic Simulator 2018 (very demanding game not sure but looks AMAZING). I'm hoping under water it will be closer to 50°C.
> 
> Out of curiosity, what are you guys using to undervolt your cards. Wattman? Wattool? Afterburner?


Here's mine. It's on water with the EK block. Been mining on and off and running benchmarks for the last 12 hours.


I haven't been able to adjust my voltages at all. Tried all 3 of those.


----------



## Soggysilicon

Quote:


> Originally Posted by *VickB*
> 
> Good lord man you must live somewhere really cold haha. 22°C at idle means your ambient is probably like 18-19°C (or you just have AC haha). I'm in southern France, its still around 28°C outside during the day and i have no AC just natural breeze. In the summer my r9 390 reaches 46°C under load or there abouts and in winter reaches around 39°C. What are the HBM temps like under water and how much has this card raised your water temps if at all?


"Dad's Room" is an upstairs bonus room which has its own LG Smart Inverter and 2x Lasko Wind Curve fans. I'm also in the process of wallpapering the walls, ceiling, and doors with foam recovered from materials at work (4'x4' mil-spec ESD foam), which helps with sound and adds some extra insulation. The computer has 2 radiators: a 360x60 in push/pull and a single-pass 240x30 in push/pull, with approx. 3/4 gal. of fluid in the loop and res(s).

Vega idles better in lower power states than the card it replaced, a 280X Tahiti-based card. The Ryzen 1800X is better on power than the proc it replaced, which was a 965BE quad core. Temp-wise under load, Vega/Ryzen does warm the loop up to around 27-28°C while gaming (at least in Warhammer TW). I think I finally "bumped" 32 last night (proc block thermocouple on the outside of the block) when one of my sensors chirped and the fans came up... so I bumped the AC down from 72°F to 70°F, as the ambient air had come up to 74°F and then settled back to 72°F. Set to 72°F on the inverter, the room will drop to 70°F, or 20-21°C.

I can keep HWMonitor open and watch the HBM, but I haven't bothered, as it hits a hard frequency limit for me at 1105 with stock Wattman. (Assuming there are internal temp sensors in the HBM to look at.)


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> Yeah, I had spare time; it was interesting to experience each card.
> 
> ASIC Quality on AMD is LeakageID; read the section in the OP of the Hawaii BIOS mod thread and you will see The Stilt's info. This was also true for Fiji and Polaris.


Thanks, will look into that








Quote:


> Originally Posted by *gupsterg*
> 
> I doubt it was for Secure Boot as well. There were 2-3 layers of security for 'pure UEFI' environment, this post covers it for Polaris and earlier cards with UEFI/GOP in VBIOS.


Very enlightening post you linked









It would be interesting to get a statement from AMD regarding Linux and BIOS modding, since Linux generally does not use Secure Boot.

I am not a fan of "security chips". In my experience those are more useful as government backdoors than as enhanced security.
Intel vPro or TPM, for example.


----------



## VickB

Quote:


> Originally Posted by *theBee2112*
> 
> Here's mine. It's on water with the EK block. Been mining on and off and runing benchmarks for last 12 hours.
> 
> 
> I haven't been able to adjust my voltages at all. Tried all 3 of those.


Damn, still quite toasty for being under water. I don't have any VRM temps in my HWiNFO64; wondering if it's been updated again in the past couple of days.


----------



## gupsterg

Quote:


> Originally Posted by *Nuke33*
> 
> Would be interesting to get a statement from AMD regarding Linux and Bios modding, since it generally does not use secure boot.


On Linux you cannot use a modified VBIOS either.

The 'security chip' on VEGA means any modified VBIOS is detected prior to mobo POST and the card will not work. In the VEGA BIOS thread there is a post on how a member modified the Linux kernel to load a VBIOS at OS load. This is similar to the WinOS registry mod, where if the driver 'sees' a PowerPlay table in the OS it uses that instead of the VBIOS.

Some who didn't want a modified VBIOS, or say had a single-VBIOS card, have used the registry PowerPlay mod to change clocks, etc. on past AMD cards. Again, this loophole can be closed by AMD anytime IMO.


----------



## kundica

Hey guys,

I was dumping the low-power AIO BIOS for the other thread and decided I should run some benchmarks. Results are pretty much the same using +50% power limit with the stock core clock and 1100 HBM. I noticed that with the low-power BIOS the clock dips to 1668 every so often, while the high-power BIOS sustains 1750. I'm currently using the beta launch driver.

HP bios - 8185, GS 8072 - https://www.3dmark.com/3dm/21764905
LP bios - 8176, GS 8062 - https://www.3dmark.com/3dm/21796571

I've been having issues with my card crashing while gaming with +50% power limit, or anything higher than balanced mode, while running the high-power BIOS. With 17.8.2 it's worse. I've seen reports of others having this issue as well; some even RMA'd their cards, but it's not clear whether new cards fixed the issue. If LP resolves the issue there might be something to it.

Also, I just saw a post on reddit asking about HBM voltage in Wattman for the LC 64 card. They said their card showed 950 but that they've seen reports of 1050 in Wattman from others. Isn't the HBM voltage 1350? Anyway, the poster said that setting it to 1050 seemed to resolve his crashes.


----------



## Soggysilicon

Quote:


> Originally Posted by *roybotnik*
> 
> Aye, I found the same thing and posted it in a few other places. It's unlikely that the HBM is actually hitting 95C, but the card will throttle when the HBM temp as reported by HWInfo64 hits 95C. Who knows what that temp actually is, but it definitely throttles at that point regardless of GPU temp or power usage.
> 
> I am curious if any Vega 64 owners can check what that temp looks like for them. I wonder if it's any cooler with the 4-hi stacks vs the 8-hi stacks on FE.




HWMonitor does not seem to separate the thermals... but for what it's worth.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> On Linux you cannot use a modified VBIOS either. The 'security chip' on VEGA means any modified VBIOS is detected prior to mobo POST and the card will not work. In the VEGA BIOS thread there is a post on how a member modified the Linux kernel to load a VBIOS at OS load. This is similar to the WinOS registry mod, where if the driver 'sees' a PowerPlay table in the OS it uses that instead of the VBIOS.
> 
> Some who didn't want a modified VBIOS, or say had a single-VBIOS card, have used the registry PowerPlay mod to change clocks, etc. on past AMD cards. Again, this loophole can be closed by AMD anytime IMO.


Yeah, I got that. What I meant was whether AMD has a reasonable explanation for why Linux users have to suffer those restrictions when they only secure MS environments.
I think they will probably say something like it is not economically viable to open up to Linux users and the like.
But it would be interesting to see their attitude towards that, especially regarding future cards.

Yeah, I wanted to try that myself on Arch Linux, but I haven't gotten around to compiling a new kernel.
Maybe a similar approach as with Hackintosh bootloaders would net some results. If I remember correctly, the bootloader simulates a UEFI environment, so some sort of VBIOS emulation/translation which takes precedence might do the trick.


----------



## gupsterg

I guess the answer to Linux users would be as you think. Every time I've used Linux off a USB stick, the GUI is so slick that WinOS seems like a bloated OS, if you get what I mean. I think it feels as fast in general use as WinOS on an SSD.

What keeps me from going fully Linux is that some apps I use in WinOS have no equivalent version for Linux.

Recently I saw a post on another forum: a user has a fully Linux build and needs a firmware update for an EC on the mobo, which can only be done by a WinOS app. In short, the mobo vendor has no Linux app, so he must go install WinOS.


----------



## roybotnik

Card firmware is part of the driver (at least for ROCm). I wonder if this has any relevance to the VBIOS discussion?

https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-317200589

https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-324692325


----------



## lmiao

Can someone suggest a water cooling kit for GPUs? I'd like to buy one for my Vega, but I've never had one, so I don't really know where to begin.

I heard EK are good; do they have GPU kits?


----------



## Soggysilicon

Quote:


> Originally Posted by *roybotnik*
> 
> Aye, I found the same thing and posted it in a few other places. It's unlikely that the HBM is actually hitting 95C, but the card will throttle when the HBM temp as reported by HWInfo64 hits 95C. Who knows what that temp actually is, but it definitely throttles at that point regardless of GPU temp or power usage.
> 
> I am curious if any Vega 64 owners can check what that temp looks like for them. I wonder if it's any cooler with the 4-hi stacks vs the 8-hi stacks on FE.


Here ya go.


----------



## VickB

Quote:


> Originally Posted by *Soggysilicon*
> 
> Here ya go.


What rad setup are you running, and what CPU? Those are some nice temps; I'd love those on my card once it's on water. I'm wondering if I can get that once I get my block. Your HWiNFO64 looks more like mine; I don't have VRM temps and such like @theBee2112 does, it's a bit weird.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> I guess so answer to Linux users would be as you think. Everytime I've used Linux off a USB stick and GUI use is so slick WinOS seems like bloat OS if you get what I mean. I think this feels as fast in general use as WinOS on SSD.
> 
> What keeps me from going fully Linux is some apps I use in WinOS there is no same version for Linux.
> 
> Recently I saw a post on another forum. User has fully Linux build, needs a firmware update for a EC on mobo, which can only be done by WinOS app, in short mobo vendor has no Linux app so he must go install WinOS.


I definitely get what you mean, Windows is a bloated piece of s***








Win10 is better in that regard, but only as the Enterprise edition; Home is still awful.

Yes, that's what keeps me from switching completely too, mainly gaming and the Windows-only apps.
Many programs and even games can be emulated quite nicely with Wine, but the functionality of hardware tools in particular, and performance in games, is not that great or even broken.

A friend of mine recently built me a minimal bootable Windows 7 ISO with all sorts of benchmarking and monitoring tools, including drivers for nVidia and Intel hardware. If I were to switch completely, that would be my fallback for hardware update tools which rely on Windows.


----------



## DrZine

Quote:


> Originally Posted by *Soggysilicon*
> 
> Here ya go.


Are you overvolting with PowerPlay? 1.294V GPU core. I have never seen mine go past 1.15V, and my temps never go past 75°C with the fans maxed out.


----------



## Soggysilicon

Quote:


> Originally Posted by *VickB*
> 
> What rad setup are you running and what cpu? Those are some nice temps would love those on my card once on water. I'm wondering if i can get that once i get my block. Your hwinfo64 looks more like mine, i dont have VRM temps and such like @theBee2112 does its a bit weird.


This is the sig rig, Yukikaze Modernization Program









http://www.overclock.net/lists/display/view/id/6683224

I grabbed whatever version popped up in the search engine to answer the poster's question. Suspect a version or setting tweak?


----------



## Soggysilicon

Quote:


> Originally Posted by *DrZine*
> 
> Are you overvolting with powerplay? 1.294v gpu core. I have never seen mine go past 1.15v and my temps are never past 75C with the fans maxed out.


Nope... well, not specifically. It's auto in Wattman at +50%... I literally moved 3 sliders, hit apply... and go... mad skillz....


----------



## Nuke33

Quote:


> Originally Posted by *roybotnik*
> 
> Card firmware is part of the driver (at least for ROCm). I wonder if this has any relevance to the VBIOS discussion?
> 
> https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-317200589
> 
> https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-324692325


It's unlikely one could alter the BIOS lock with a modified firmware. Those firmware blobs are kind of like an OS for the card, whereas the VBIOS is still standalone.
You could probably override VBIOS settings with it, but that defeats the purpose of a VBIOS mod, which is that you don't have to load any alterations during OS boot.
Some sort of firmware is also being loaded during Windows boot, I think.


----------



## CaptainTom

Quote:


> Originally Posted by *VickB*
> 
> Im not hearing coil whine but i have a freesync display so i cap my fps, during firestrike i hear the fan more the coil whine (if there is any i havent heard it) but i also have 12 fans in my build, should tell me how quiet it is if i hear the vega at 2600rpm more then i do my Noctua fans haha.
> 
> As far as hwinfo just make sure to download the latest BETA version, i think its still off though and i think wattman is the best to measure that. Me myself I know better then to overclock with a new architecture and new BIOS/Drivers (i bought a 1700x on day 1 and didnt overclock it for a while)
> 
> Also the temps might not seem bad but it may start to throttle earlier as to not get any hotter. Try to undervolt and see if you still have the problem. *HBM memory gets incredibly hot so even though its low wattage it does add heat to the small cooler.* These cards do seem to thrive under water though.


It's not that HBM produces a ton of heat _total_, it's that the top layer (this is 3D memory) is much more easily cooled than the bottom layer attached to the die. So in effect you have to cool it aggressively to make sure the bottom layer doesn't overheat.


----------



## gupsterg

@Nuke33

I hear you and concur.
Quote:


> Originally Posted by *roybotnik*
> 
> Card firmware is part of the driver (at least for ROCm). I wonder if this has any relevance to the VBIOS discussion?
> 
> https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-317200589
> 
> https://github.com/RadeonOpenCompute/ROCm/issues/147#issuecomment-324692325


To me it reads the same as what wolf9466 linked: each time the OS loads, the 'FW' from the OS is used. It's similar to CPU microcode updating on Windows, where the usual route is that the motherboard BIOS carries the updates.


----------



## VickB

Quote:


> Originally Posted by *Soggysilicon*
> 
> This is the sig rig, Yukikaze Modernization Program
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.overclock.net/lists/display/view/id/6683224
> 
> I grabbed whatever version popped up in the search engine to answer the posters question. Suspect a version, or setting tweak?


Alright cool, same as me: a 360/240, both in push/pull, so my temps should be similar. It's gotta be a tweak, I guess; I just tried both 5.55 and 5.56 and neither gave me those temps. Unless it's the cards themselves, not sure. I did write to Martin in the HWiNFO64 thread to see what he comes up with.
Quote:


> Originally Posted by *CaptainTom*
> 
> It's not that HBM produces a ton of heat _total_, it's that the top layer (this is 3D memory) is much more easily cooled than the bottom layer attached to the die. So in effect you have to cool it aggressively to make sure the bottom layer doesn't overheat.


Makes sense, and water is much better at doing so. It's a shame that some HBM memory modules sit 40 nanometers lower than the chip; I could see that being a problem. I know TIM fills it in, but as we know, TIM does not work that well the thicker it is. I wonder if I should try using CLU, or is that a totally bad idea?


----------



## CaptainTom

So is it confirmed that 17.8.2 brings big performance boosts, but also crashing?

As someone who also mines, I have decided to keep using the blockchain driver for now. Actually due to HBCC still being enabled I get better framerates too lol, and it is very stable.


----------



## Nuke33

Quote:


> Originally Posted by *VickB*
> 
> Makes sense, and water is much better at doing so. its a shame that some HBMs memory modules sit 40nanometers lower then the chip, i could see that being a problem. I know TIM fills it in but as we know TIM does not work that well the thicker it is. I wonder if i should try using CLU or is that a totally bad idea.


What is CLU?


----------



## gupsterg

Coollaboratory Liquid Ultra


----------



## Nuke33

@gupsterg
Ah okay, thanks









@VickB
It probably would be better than the same amount of regular TIM, but not by much without cooler pressure.
Personally I think it is not worth the risk of spilling somewhere it shouldn't.

Maybe a very thin copper spacer would work.


----------



## VickB

Quote:


> Originally Posted by *Nuke33*
> 
> @gupsterg
> Ah okay, thanks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> @VickB
> It probably would be better than the same amount of regular TIM but not by much, without cooler pressure.
> Personally I think it is not worth the risk of spilling somewhere it shouldn´t.
> 
> Maybe a very thin copper spacer would work.


Would, but then you'd need CLU/TIM on both sides, which would defeat the purpose. I'd tape over the transistors though, or nail-polish them like I've done to CPUs I've delidded before. Maybe TIM on the die and CLU on the HBM modules? That's if mine aren't level. Not sure yet.


----------



## Nuke33

Quote:


> Originally Posted by *VickB*
> 
> Would but then youd need CLU/TIM on both sides would defeat the purpose. Id tape over the transistors though or nail polish em like ive done to CPUs ive delided before. Maybe TIM on the die and clu on the HBM modules? Thats if mine aren't level. Not sure yet.


True, but in my experience two very thin layers of TIM with good pressure are still better than one layer of TIM (even LM) with poor pressure.
Covering the necessary places with nail polish would work, but I recommend RTV silicone. Unlike nail polish, you can remove it without aggressive chemicals/alcohol.
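To put a rough number on why bond-line thickness matters so much here, steady-state conduction across a TIM layer gives dT = q·t/k. The sketch below uses illustrative assumptions (about 50 W through a ~1 cm² HBM footprint, a generic paste conductivity of k = 5 W/m·K), not measured Vega values:

```python
# Temperature drop across a TIM layer: dT = q * t / k
# q: heat flux (W/m^2), t: bond-line thickness (m), k: conductivity (W/(m*K)).
# All numbers below are illustrative assumptions, not measured Vega figures.

def tim_delta_t(watts, area_m2, thickness_m, k=5.0):
    """Temperature drop (in K) across a TIM layer of the given thickness."""
    return watts / area_m2 * thickness_m / k

area = 1e-4  # ~1 cm^2 per HBM stack (assumed)
for t_um in (20, 100, 500):
    print(f"{t_um} um -> {tim_delta_t(50, area, t_um * 1e-6):.1f} C across the TIM")
```

A 20 µm bond line costs a couple of degrees; a half-millimeter gap of paste costs tens, which is why a thick TIM fill over a recessed stack performs so poorly compared to good mounting pressure.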


----------



## theBee2112

Quote:


> Originally Posted by *VickB*
> 
> What rad setup are you running and what cpu? Those are some nice temps would love those on my card once on water. I'm wondering if i can get that once i get my block. Your hwinfo64 looks more like mine, i dont have VRM temps and such like @theBee2112 does its a bit weird.


I'm on the Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23 driver, while as far as I understand you're using 17.8.2. I'm trading some gaming performance for mining performance, and it shows. I've tried the newest 17.8.2 drivers and then went back; that could account for the differences in HWiNFO. Also, my temps are a lot higher than others are posting, for various reasons, so trust their numbers over mine if you're just gaming with one card.

I've just figured out voltage control and OC using the PowerPlay reg edit, because WattTool, Wattman, and AB are all broken in this driver. Shame.


----------



## VickB

Quote:


> Originally Posted by *theBee2112*
> 
> I'm on the Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23 driver, while as far as I understand, you're using 17.8.2. I'm trading some gaming performance for mining performance, and it shows. I've tried the newest 17.8.2 drivers and then went back. This could attribute to the differences in HWinfo. Also my temps are a lot higher than others are posting for various reasons, so trust their numbers over mine if you're just gaming with 1 card.
> 
> I've just figured out voltage control and OC using powerplay reg edit, because wattool, wattman ,and AB are all broken in this driver. Shame.


I'm on 17.8.1 no need for me to go on 8.2 unless i buy F1 2017 lol.


----------



## CaptainTom

Quote:


> Originally Posted by *VickB*
> 
> Makes sense, and water is much better at doing so. its a shame that some HBMs memory modules sit 40nanometers lower then the chip, i could see that being a problem. I know TIM fills it in but as we know TIM does not work that well the thicker it is. I wonder if i should try using CLU or is that a totally bad idea.


Honestly it's about time there is a revolution in computer cooling. If you look at people's water/LN2 results it becomes painful how much more powerful cards/cpus could be if they didn't have to worry about thermals.

I mean quite literally a Fury X could have matched a 1080 Ti if it could just have been cooled enough to hit 2000/800 clocks. That's a bigger uplift than a die shrink.


----------



## Chaoz

Quote:


> Originally Posted by *VickB*
> 
> I'm on 17.8.1 no need for me to go on 8.2 unless i buy F1 2017 lol.


It does give a boost in other games as well. My FPS is much more stable, even in BF1, and my temps are 10°C lower than with 17.8.1.


----------



## Nuke33

@VickB
I just reread your initial post, disregard my previous answers. I thought you were talking about 0.4mm. Obviously copper wouldn't work at such a thickness, not by hand anyway.
40nm is so little, I don't think it matters much. But LM would probably net slightly better thermals.


----------



## Nuke33

Quote:


> Originally Posted by *Chaoz*
> 
> I bought mine in Germany aswell 3 days after release they were still in stock. I had no choice but to buy mine in Germany, even though I live in Belgium. Due to problems with the suppliers here, there was no stock anywhere in Belgium. But props to Alternate Germany. Bought mine on Wednesday and it was already delivered on Friday at noon.
> 
> I got a Sapphire Air Black version, as I already bought a waterblock a week or so in advance. So didn't care what the GPU looked like. Too bad mine whines like a mf under 100% in-game. It doesn't seem to do that when I enable V-Sync.
> 
> Still waiting on my Displayport cable so I can use FreeSync
> 
> 
> 
> 
> 
> 
> 
> .


Alternate is really great, got mine there too.
Did you receive your two free game codes, though?
Somehow they forgot to include them in my shipment.


----------



## VickB

Quote:


> Originally Posted by *Chaoz*
> 
> It does give a boost in other games aswell. My FPS is much more stable even in BF1 and my temps are 10°C lower than with the 17.8.1.


I wonder if actual temps are lower or just reported temps, kinda like Ryzen does lol. Maybe 17.8.2 is giving less voltage to the card? Not sure; seems like people are having more issues than good with .2, so I'll stay away from it for now.


----------



## Chaoz

Quote:


> Originally Posted by *Nuke33*
> 
> Alternate is really great, got mine there too.
> Did you receive your 2 Free gamecodes though?
> Somehow they forgot to include them in my shipment.


Unfortunately, because I don't live in Germany I didn't get my codes. I mailed asking for them, but it seems I had to buy mine locally to be eligible for those game codes.

Quote:


> Originally Posted by *VickB*
> 
> I wonder if thats actual temps are lower or if reported temps are lower, kinda like Ryzen does lol. Maybe 17.8.2 is giving less voltage to the card? Not sure, seems like people are having more issues then good with .2 so ill stay away from it for now.


Dunno, I really have no clue.
It's still reporting that it's at 1.2V.

I've had some issues in the beginning but not anymore. I've been playing BF1 for a few hours on end with no crash at all.


----------



## Gdourado

Thinking of ditching my 980ti for a Vega 64 as I am eyeing a new 32 inch freesync 2 monitor.
How is the noise of the Vega 64 with the stock cooler?

Cheers!


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Gdourado*
> 
> Thinking of ditching my 980ti for a Vega 64 as I am eyeing a new 32 inch freesync 2 monitor.
> How is the noise of the Vega 64 with the stock cooler?
> 
> Cheers!


At 2000 rpm the noise is not bad but it throttles like crazy. It performs much better at 3000 rpm but then it sounds like a vacuum cleaner.


----------



## twan69666

Quote:


> Originally Posted by *CaptainTom*
> 
> Some new discoveries and questions from tests I have run:
> 
> 3) *Anyone getting an odd (and random) mega performance drop?* I have been getting this while playing BF1 almost once a day. My framerate goes from 144Hz to ~27Hz and the only fix is a reboot. Temperatures go down and the clocks are reported as the same, but ALL performance is terrible. Even in Ethereum mining my performance dropped to an insane 7 MH/s!


I got my Vega 64 Air last week and stuck it in my mining rig until my water block shows up. After being away for a few days, I'm noticing the exact same thing: I'm dropping from 43 MH/s to 14 MH/s, and only a reboot will fix it. Anyone else having this issue? I have yet to check the mining forums, but I figured I'd ask here first since this will be a gaming-only card for me once I get my EK block.


----------



## Nuke33

Quote:


> Originally Posted by *Chaoz*
> 
> Unfortunately because I don't live in Germany I didn't get my codes. I mailed asking for them but seems I had to buy mine locally to bz eligable for those game cards.


I see, thanks for the info.


----------



## CaptainTom

Quote:


> Originally Posted by *twan69666*
> 
> I got my Vega 64 Air last week and stuck it in my mining rig until my water block shows up. After being away for a few days, I'm noticing the exact same thing. I'm dropping from 43 mh/s -> 14 mh/s. Only a reboot will fix it. Anyone else having this issue? I have yet to check the mining forums but I figured I'd ask here before them since this is a gaming only card for me once I get my EK block


It's the memory downclocking to 500MHz, most likely. Make sure you have the power limit turned up, and set the memory voltage to 800mV.

This does not explain the GAMING performance drop, though. Although tbh it has gone away since I switched back to the mining drivers lol.


----------



## LocoDiceGR

How is the fan noise on the air RX Vega, guys?

With everything at default while gaming?

I searched for a YouTube video but nothing came up, sadly. I really like the design of the Limited Edition, but I'm afraid of the noise.

Thanks!


----------



## CaptainTom

Quote:


> Originally Posted by *Gdourado*
> 
> Thinking of ditching my 980ti for a Vega 64 as I am eyeing a new 32 inch freesync 2 monitor.
> How is the noise of the Vega 64 with the stock cooler?
> 
> Cheers!


Quote:


> Originally Posted by *IvantheDugtrio*
> 
> At 2000 rpm the noise is not bad but it throttles like crazy. It performs much better at 3000 rpm but then it sounds like a vacuum cleaner.


I can honestly say I do not know what people are talking about when it comes to noise. Imo it doesn't get loud till ~3500RPM, and even then it is a dull hum, not an annoying noise. In fact, it's really the sound of the air rushing through my case that is making the most noise at this point lol.

I would say the 7970 reference cards sounded much louder at full RPM (although they used less energy and never needed full fan), and these are NOT comparable to the infamous GTX 480. Also, if you undervolt these things they never need more than 2500 RPM.


----------



## dagget3450

Are the fans on the Rx vega airs delta brand like the Vega FE?


----------



## Worldwin

Quote:


> Originally Posted by *dagget3450*
> 
> Are the fans on the Rx vega airs delta brand like the Vega FE?


Yes. Check Tom's Hardware's review, they have an image of the fan.

http://www.tomshardware.com/reviews/amd-radeon-rx-vega-64,5173-3.html


----------



## Whatisthisfor

Fortunately the AIO version is rather quiet after I capped the fan to a 400-1600rpm range. I set P6 and P7 to 1562MHz @ 1070mV; with these values the GPU reaches 67°C max. The fan is really loud and annoying when it is allowed to reach high rpm, but with the above values it stays mostly silent. With the old RX Vega beta drivers that were available at launch, the card mostly followed the P6/P7 clock speeds in games; with the new 17.8.2 the clock speed hops between lower P-states, which of course cannot be set manually in Wattman yet. The new driver seems to have a mind of its own and mostly does not follow the clock values set.

From a typical user perspective this is not as bad as it sounds, because fps in games do not suffer much from the P-state hopping. I hope that behaviour changes with upcoming releases, though.


----------



## Irev

Hey guys, the latest driver doesn't OC well for me. I just set power target +50 and a high fan speed, ran a benchmark, and the system locked up, and I didn't even touch voltage or core/HBM clocks. It didn't do that with the last driver.

Also, what's the best profile to run the Vega 64 Air on, Balanced or Turbo? Is it worth running Turbo, do you actually get more fps?


----------



## steadly2004

Quote:


> Originally Posted by *Irev*
> 
> hey guys the latest driver doesnt OC well for me... just set power target +50 and high fan speed and did a benchmark and the system locked up... and didnt even touch voltage or core/hbm clocks.... it didnt do that with the last driver.
> 
> Also whats the best profile to run the vega64 air on? Balanced / Turbo ??? is it worth running turbo do you actually get more fps?


The highest OC I could get was with a 50mV undervolt and upping the HBM voltage. Plus a custom fan profile; that's pretty important if you're on air.


----------



## Blameless

Quote:


> Originally Posted by *gupsterg*
> 
> The 'security chip' on VEGA means any modified VBIOS prior to post of mobo is detected and card will not work. In the VEGA bios thread is post how a member modified Linux kernel to load VBIOS at OS load. This is similar to the WinOS registry mod where when driver 'sees' a PowerPlay in OS it uses that instead of VBIOS.


Has anyone tried hot-plugging these parts? Couldn't that theoretically get past the firmware lock by not requiring the system to POST after reinitializing the card?
Quote:


> Originally Posted by *gupsterg*
> 
> Some who didn't want a modified VBIOS or say had single VBIOS card have used registry PowerPlay mod to change clocks, etc on past AMD cards. Again this loop hole can be closed by AMD anytime IMO
> 
> 
> 
> 
> 
> 
> 
> .


The possibility that a future driver may simply ignore most of the PowerPlay table is a major concern of mine.


----------



## W1zzard

Quote:


> Originally Posted by *Irev*
> 
> Also whats the best profile to run the vega64 air on? Balanced / Turbo ??? is it worth running turbo do you actually get more fps?


My own testing in my review shows that it makes no significant difference. The card will clock higher, using more power at first, and then run into the power limit earlier and more often, reducing overall performance, depending on the game.


----------



## Irev

Quote:


> Originally Posted by *W1zzard*
> 
> My own testing in my review show that it makes no significant difference. The card will clock higher, using more power at first, and then run into the power limit earier and more often, reducing overall performance, depending onthe game


Hmm, I'm now thinking of just running the Vega 64 Air in power save mode because of temps. I'll lose a few fps, but it should be much quieter and cooler. Looking at your review now, mate.

Another Q: how do I tell which BIOS is which? Which one is the primary BIOS, the switch position toward the fan or away from it?


----------



## gupsterg

Come to the aid of your fellow/prospective VEGA owners, join W1zzard for GPU-Z VEGA beta testing.

https://www.techpowerup.com/forums/threads/vega-beta-testers-needed.236537/
Quote:


> Originally Posted by *Blameless*
> 
> Has anyone tried hot-plugging these parts? Couldn't that theoretically get past the firmware lock by not requiring the system to POST after reinitializing the card?


Not that I know of. No idea.
Quote:


> Originally Posted by *Blameless*
> 
> The possibility that a future driver may simply ignore most of the PowerPlay table is a major concern of mine.


Mine too.

Yeah, nVidia was locking down this aspect before AMD, so perhaps it's just becoming the norm. Shame.


----------



## n3squ1ck

Anyone have an idea which VRM temp is normal on a Vega 64 for gaming?
I think I'm hitting around 101°C while GPU temp is around ~80°C.


----------



## Sicness

Quote:


> Originally Posted by *gupsterg*
> 
> Come to the aid of your fellow/prospective VEGA owners, join W1zzard for GPU-Z VEGA Beta testing


On my way


----------



## lmiao

Can someone help me with how to set up a liquid cooling system? I don't have one and never have. I'd like to put my Vega on LC because temp and noise are way too high, but I don't really know which components are good for a decent loop. I went to the EK site and did the wizard that picks their products to form a kit, but it was very expensive, ca. €300 for the GPU alone. Any suggestions?


----------



## n3squ1ck

For a decent custom loop you'll need at least €500/USD.


----------



## lmiao

Holy crap! Just for a GPU? Isn't there something out of the box, not necessarily custom?


----------



## Nuke33

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Fortunately the AIO version is rather quiet, after i cut the fan to 400-1600rpm max. I set P6 and P7 to 1562MHz @ 1070mV. With these values, the GPU reaches max 67° Grad Celsius only. The fan is really loud and annoying when it is allowed to reach high rpm, but with above values it stays mostly silent. With the old RX Vega beta drivers, which where available at launch, the card mostly followed the clock speed @ P6/P7 in games, with the new 17.8.2 though the clock speed hops between lower P-states, which ofc. cannot be set manually in Wattman yet. The new driver seems to have its own mind and mostly does not follow the clock values set.
> 
> From a typical user perspective this is not as bad as it sounds, because fps do not suffer very much in games from that P-states hopping. I hope that behaviour to change with upcoming releases though.


You could try a PowerPlay registry mod.

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/270#post_26306207

The link is for a 950mV P3-P7 PowerPlay table.
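For anyone curious what the registry side of such a mod looks like, here is a rough sketch that wraps a dumped binary PowerPlay table into an importable `.reg` file. Assumptions to verify on your own machine before importing: the display-adapter instance subkey (`0000` below) and the value name `PP_PhmSoftPowerPlayTable` match your system, and the table itself is valid; a bad table can hang the driver.

```python
# Sketch: wrap a binary Soft PowerPlay table into a .reg file for import.
# The class GUID below is the standard Windows display-adapter class key;
# the instance subkey ("0000") and value name are assumptions -- check
# yours in regedit before importing anything.

def make_reg(pp_table: bytes, adapter: str = "0000") -> str:
    key = ("HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\Class"
           "\\{4d36e968-e325-11ce-bfc1-08002be10318}\\" + adapter)
    # REG_BINARY values in .reg files are comma-separated hex bytes.
    hex_bytes = ",".join(f"{b:02x}" for b in pp_table)
    return ("Windows Registry Editor Version 5.00\n\n"
            f"[{key}]\n"
            f'"PP_PhmSoftPowerPlayTable"=hex:{hex_bytes}\n')

# Usage sketch: read a dumped table and write the .reg file.
# with open("pp_table.bin", "rb") as f:
#     open("vega_pp.reg", "w").write(make_reg(f.read()))
```

After importing, a driver restart (or reboot) is needed for the driver to pick the table up, since it is read at load time rather than from the VBIOS.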


----------



## Chaoz

Quote:


> Originally Posted by *lmiao*
> 
> Someone can help me how to set a liquid cooling system? I don't have one and never had, i'd like to put my vega on LC because temp and noise are way too high.. but i don't really know which components are good to make a decent one. I went to EK site, did the wizard that choose their products to form a kit, but it was very expensive, ca. 300€ only for gpu. Any suggestion?


It's not that expensive. You can get a CPU kit for a few hundred euros and just add a water block for your GPU plus a couple of extra fittings, and you're done:

https://www.ekwb.com/shop/ek-kit-l360-r2-0
https://www.ekwb.com/shop/water-blocks/vga-blocks/full-cover-for-amd-radeon/radeon-vega-series
https://www.ekwb.com/shop/ek-acf-fitting-10-13mm-nickel

Grand total: €395.61


----------



## Nuke33

Quote:


> Originally Posted by *Blameless*
> 
> Has anyone tried hot-plugging these parts? Couldn't that theoretically get past the firmware lock by not requiring the system to POST after reinitializing the card?


I did not, but it is a very good idea.
Do eGPUs need to be initialized during boot, or are they completely hot-pluggable?
Maybe someone with an eGPU case could help with that.


----------



## Sicness

There's a high initial cost for the rad, fittings, tubing, pump, etc., but once you've got that covered, you come out way cheaper when you change your CPU or GPU over the next few years. Outside of insanely huge CPUs like TR, you can carry a block over between platforms just by using different brackets. I've been using my Watercool CPU block on sockets 1155, 2011-3, and AM4. Brackets for other platforms or sockets are usually <€20/USD.

Most of the parts are an investment for years.


----------



## kundica

So my Vega 64 AIO crashing problem has turned into an ordeal. I filed an RMA with Newegg for a card exchange, however, they're telling me the card is out of stock despite there being bundles with the card available. As a result, I have to return the card for a refund which is further complicated by the fact that I bought it in a bundle. It looks like they'll take back all the items if I send everything but minus $120 for the game codes I haven't used. What a scam. I'm waiting for them to open to fight with them some more.

An AMD rep on the OCUK forum said that my card is probably faulty, that he had to return his first card since it would crash on anything but powersaver. Not a great sign since a good number of people with the LC version are having this issue.

At this point my options are slim. The 64 Air card I had has already been sold/shipped.


----------



## saygram

It's possible to make custom loops even cheaper by buying parts from China. I understand this is not for everyone, as you get very little support.


----------



## SAMiN

Is this legit?


https://www.reddit.com/r/6w5lnn/successful_flash_of_vega_64_liquid_bios_onto_vega/


----------



## kundica

Quote:


> Originally Posted by *SAMiN*
> 
> Is this legit?
> 
> 
> __
> https://www.reddit.com/r/6w5lnn/successful_flash_of_vega_64_liquid_bios_onto_vega/


He used my BIOS dump and says it worked. Someone else said it isn't working though, so I don't know.


----------



## milkbreak

Has Sapphire announced any custom 56 or 64s yet? Has anyone aside from Asus, period?


----------



## lmiao

Quote:


> Originally Posted by *Chaoz*
> 
> It's not that expensive. You can get kits for a couple of €100 for CPU and just add a waterblock for your gpu with a couple of extra fittings and you're done:
> 
> https://www.ekwb.com/shop/ek-kit-l360-r2-0
> https://www.ekwb.com/shop/water-blocks/vga-blocks/full-cover-for-amd-radeon/radeon-vega-series
> https://www.ekwb.com/shop/ek-acf-fitting-10-13mm-nickel
> 
> Grand Total € 395.61


Thanks a lot! I'm looking forward to it, maybe for Christmas.

Edit: are there big differences between the other kits, for example Extreme or Gaming?


----------



## Chaoz

Quote:


> Originally Posted by *lmiao*
> 
> Thanks a lot! I'll looking forward to it, maybe for christmas..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Edit: there are a lot of differences between the others kits, for example extreme or gaming?


Np, there is a slight difference: radiator thickness and a better pump.


----------



## lmiao

Quote:


> Originally Posted by *Chaoz*
> 
> Np, there is a slight difference. Radiator thickness and better pump.


So the basic kit is enough for a slight overclock and gaming?


----------



## Chaoz

Quote:


> Originally Posted by *lmiao*
> 
> So the basic kit is enough for a slight overclock and gaming?


Yes, make sure to get the kit with a 360 rad. The usual rule is a minimum of 120mm of radiator per component, but Vega, due to its high TDP, needs at least 240mm, so a 360 radiator should be sufficient.

I have a 360 and a 480 rad for my 5820K and Vega 64. Temps max out at 50°C for the CPU and 40°C for the GPU.

Those kits also have the possibility of being upgraded in the future if you want. You could add another rad if you have enough room in your case and money in your wallet.
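The sizing rule above (roughly one 120mm radiator section per component, doubled for a very high-TDP card like Vega 64) can be sketched as a quick sanity check. The 250 W cutoff and the TDP figures below are illustrative assumptions, not official numbers:

```python
# Rough radiator budget per the rule of thumb in the post: ~120 mm of
# radiator per component, doubled for very high-TDP parts like Vega 64.
# The 250 W threshold and the TDP values are illustrative assumptions.

def radiator_needed_mm(components):
    """components: iterable of (name, tdp_watts) pairs.
    Returns the minimum total radiator length in mm."""
    return sum(240 if tdp >= 250 else 120 for _name, tdp in components)

loop = [("5820K", 140), ("Vega 64", 295)]
print(radiator_needed_mm(loop), "mm")  # 360 mm -> a 360 kit is the sensible minimum
```

More radiator area than the minimum mainly buys lower fan speeds and noise rather than dramatically lower temps, which is why the upgrade path matters more than over-buying up front.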


----------



## geoxile

Anyone know what time EST the Vega 56 orders open up tomorrow?


----------



## LocoDiceGR

Quote:


> Originally Posted by *geoxile*
> 
> Anyone know what time EST the Vega 56 orders open up tomorrow?


17 Hours from now.


----------



## lmiao

Quote:


> Originally Posted by *Chaoz*
> 
> Yes, make sure to get the kit with a 360 rad. It usually is min. 120mm radiator per component but the Vega due to its high TDP needs atleast 240m. So 360 radiator should be sufficient.
> 
> I have a 360 and 480 rad for my 5820K and Vega 64. Temps are maxed at 50°C for CPU and 40°C for GPU.
> 
> Those kits also have the possibility to be upgraded in the future if you want. You could add another rad if you have enough room in your case and money in your wallet
> 
> 
> 
> 
> 
> 
> 
> .


Very clear! I guess I'll go for the 360 rad; I don't really have much money right now. If it's enough I'll stick with it, otherwise an upgrade will come in the future. Anyway, thanks for the help!


----------



## Chaoz

Quote:


> Originally Posted by *lmiao*
> 
> very clear! I guess i'll go for the 360 rad, don't really have much money right now..if it will be enough i'll stick with it, otherwise an upgrade will come in the future. Anyway thanks for the help!


Np, glad to help out. It will definitely be enough for a CPU and a GPU. Always better to have the option to upgrade available, even if you don't really need it.


----------



## pmc25

Installed my EKWB ...

It's refusing to boot with monitors plugged in, but it does boot with none plugged in. By not booting I mean it won't even turn on.

I haven't used either of the two fan headers on the card... could that have something to do with it?


----------



## seanmacvay

Quote:


> Originally Posted by *kundica*
> 
> He used my bios dump as says it worked. Someone else said it isn't working though so I don't know.


I was able to flash your BIOS on my Vega 64 with an EK water block. It does technically work, but I have to underclock the core by 5% or it will crash in every benchmark.

Other than that, though, it is still a big improvement over the stock BIOS!


----------



## theBee2112

Quote:


> Originally Posted by *seanmacvay*
> 
> I was able to flash your bios on my Vega 64 with an EK waterblock. It does technically work, but I have to underclock the core by 5% or it will crash on every benchmark.
> 
> Other than that though, it is still a big improvement over the stock bios!


Could you outline your process a bit, please? I've done a BIOS flash before on an RX 580; just wondering if there's anything different about the process.

I'm looking to do the same thing. I have an XFX air-cooled Vega 64 with an EKWB on it right now.

I'm guessing the secure-boot thing only applies when you try to modify the VBIOS, rather than load one from a different card.


----------



## Soggysilicon

Quote:


> Originally Posted by *lmiao*
> 
> Someone can help me how to set a liquid cooling system? I don't have one and never had, i'd like to put my vega on LC because temp and noise are way too high.. but i don't really know which components are good to make a decent one. I went to EK site, did the wizard that choose their products to form a kit, but it was very expensive, ca. 300€ only for gpu. Any suggestion?




AIOs in the past few years have made water cooling very affordable with reasonable performance. Vega, being so new, only has a few options: you buy the LC version and get what you get (some folks seem to have done well modifying the fan), you modify an existing solution (a custom bracket on a universal block, or modify an AIO), or you get a full-cover block and build/fit a loop around it.

Modifying the bracket on a universal or AIO would be the cheapest route, but it's going to be "janky" and may still need some stick-on heatsinks.

Next up is the LC version, but I have read the fan is sorta "meh", and it's a 120... so... it's better than the blower, I suppose.









Last up is the full cover: Alphacool has one and EK has one... I think there is another one out in the wild, but who knows. Everything I have seen is copper, so that means your loop by default will be copper-based... so your price just went up, no scrub aluminum parts.

Gonna need a pump, res (or combo), tube, fittings, a way to mount, blocks, some fans, decoupling (rubber / old fan shrouds), towels, a jumper wire for the 12V, water... lights of course... 'cause it makes it go faster...

Not to be discouraging, but for what this is going to cost *just* to cool off Vega, there may be other more compelling options... like a 1080 Ti... may just want to wait for an AIO to drop in the next couple of months...


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Installed my EKWB ...
> 
> It's refusing to boot with monitors plugged in, but does with none plugged in. By not booting I mean won't even turn on.
> 
> I haven't used either of the two fan headers on the card ... has that perhaps got something to do with it?


There is the one fan header and the LED header, neither of which needs to be plugged in for the card to post.

When I installed the stock backplate I used the extra washers they included and mixed the screws. I also used a torque pistol-grip screwdriver to ensure I was at the minimum torque all around, in a typical cross pattern. Reseat your power connectors and make sure each 8-pin PCIe is on its own cable. Double-check with an HDMI cable; the DP outputs on this Vega card have been janky out of the box. The EK block has no springs, so there is no linear torque when attaching the block to the PCB/die... that means best judgement... with any luck you haven't cracked a ball under the package, i.e. bricked the card. Wish you the best.


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> AIOs in the past few years have made water cooling very affordable and with reasonable performance. Vega, being so new, only has a few options... you buy the LC version and get what you get (some folks seem to have done well modifying the fan), you modify an existing solution (custom bracket on a universal or modify an AIO), or you get a FC and build / fit a loop around it.
> 
> Modifying the bracket on a universal or AIO would be the cheapest route, but it's going to be "janky", and you may still need some stick-on heat sinks.
> 
> Next up is the LC version, but I have read the fan is sorta "meh", that and it's a 120... so... it's better than the blower I suppose.
> 
> Last up is the FC; Alphacool has a cover and EK has one... I think there is another one out in the wild... but who knows. Everything I have seen is copper, so that means your loop by default will be copper based... so your price just went up, no scrub aluminum parts.
> 
> Gunna need a pump, res (or combo), tube, fittings, a way to mount, blocks, some fans, decoupling (rubber / old fan shrouds), towels, jumper wire for the 12v, water... lights of course... cause it makes it go faster...
> 
> Not to be discouraging but for what this is going to cost *just* to cool off Vega, there may be other more compelling options... like a 1080Ti... may just want to wait for an AIO to drop in the next couple months...


That's the reason why EK kits exist. To make it easy for everyone who wants a basic loop just for cooling and not aesthetics.

The kits are decent enough and all he has to do is, like I already said, buy a separate EK Vega block and a couple of extra fittings, and he's done. Everything you need is included in the box, so there's no need to buy additional fans and decouplers and such. A quick-disconnect option is also a great solution.

As I also said, the kit + extra fittings and Vega block cost around €400 max. No need to make it difficult with all the technicalities. It's a lot easier than it looks.


----------



## theBee2112

Quote:


> Originally Posted by *kundica*
> 
> He used my bios dump and says it worked. Someone else said it isn't working though, so I don't know.


IT WORKS!!

Just tried it... I flashed my XFX Air cooled black edition, using your vbios, and atiwinflash 2.77. Windows 10 x64

Only difference is that I'm using the Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23 drivers. I am not experiencing the same crashing seanmacvay describes. It's maintaining 1750MHz. Will run more benchmarks.


----------



## Sicness

Quote:


> Originally Posted by *theBee2112*
> 
> IT WORKS!!
> 
> Just tried it... I flashed my XFX Air cooled black edition, using your vbios, and atiwinflash 2.77. Windows 10 x64
> 
> Only difference is that I'm using the Win10-64Bit-Crimson-ReLive-Beta-Blockchain-Workloads-Aug23 drivers. I am not experiencing the same crashing seanmacvay describes. It's maintaining 1750MHz. Will run more benchmarks.


Congrats! Can you elaborate on how exactly you flashed the BIOS? Did you run the GUI version as admin or did you take the DOS route?


----------



## pillowsack

Quote:


> Originally Posted by *Sicness*
> 
> Congrats! Can you elaborate on how exactly you flashed the BIOS? Did you run the GUI version as admin or did you take the DOS route?


I just flashed my XFX Vega 64 with the same bios, can confirm it works. Gonna play overwatch at 1700 now.

I just did it from Windows. I have a thumbdrive handy, but the Windows app is handier. Just close most of the stuff you have open.


----------



## theBee2112

Quote:


> Originally Posted by *Sicness*
> 
> Congrats! Can you elaborate on how exactly you flashed the BIOS? Did you run the GUI version as admin or did you take the DOS route?


I'm embarrassed to say I took the lazy way and used the GUI lol.


----------



## Newbie2009

Quote:


> Originally Posted by *pillowsack*
> 
> I just flashed my XFX Vega 64 with the same bios, can confirm it works. Gonna play overwatch at 1700 now.
> 
> I just did it from windows, I have a thumbdrive handy but windows app is handier. Just close most of the stuff you have open.


So is this definitely allowing a higher power limit?


----------



## theBee2112

Quote:


> Originally Posted by *Newbie2009*
> 
> so is this definitely allowing higher power limit?


I seem to be getting the same performance with -15% power limit now as I was with +30% power limit before on the stock vbios. I just checked HWiNFO and it looks like the biggest difference I see between the stock and AIO bios is that the VRAM voltages are slightly higher.

My PSU trips immediately (or soon thereafter) when I put it to +50 now.


----------



## ontariotl

So is it worth flashing to the AIO bios? I have my EK wb coming in on Tuesday.


----------



## pmc25

Quote:


> Originally Posted by *Soggysilicon*
> 
> There is the one fan header and the LED header, neither of which needs to be plugged in for the card to post.
> 
> When I installed the stock back plate I utilized the extra washers they included and mixed the screws. I also used a torque pistol-grip screwdriver to ensure I was at the minimum torque all around, in a typical cross pattern. Reseat your power connectors and ensure each 8-pin PCIe is on its own cable. Double check with an HDMI cable; the DP outputs on this Vega card have been janky out of the box. The EK block has no springs, so there is no linear torque when attaching the block to the pcb/die... that means best judgement... with any luck you haven't cracked a ball under the package, i.e. bricked the card. Wish you the best.


Thanks for the recommendations.

Turned out it was the PSU's power cable .. it's decided it doesn't like it anymore, despite it working on two other PSUs.

Whilst the card is under water and at very low temperatures, unfortunately on the janky 17.8.2 WattMan you can't get higher clocks at lower voltages, and the boost clock still seems to be bugged in so far as it won't go to max boost under load unless you significantly 'overclock' it, even though load temps are 35-40C.

Wish WattTool and HBM clocks weren't bugged. Would be good to see what it could do under water and at lower voltages.

All that said, performance has increased significantly. Way less dips, even at the same stable (solid) clocks, voltage and power limit as on air.

Up to 20% average in some games, 'only' 5% in others. Max and min are way up, and stutter seems gone.

Either micro stutter has been eliminated by slightly less oscillation (just a few Mhz) in GPU core clock, or the HBM is operating at significantly lower timings and that's having a big effect in some games.

Metro 2033 redux 3 run bench using same settings is now 2.43x faster than my Fiji Nano at 1045/545!
Quote:


> Originally Posted by *theBee2112*
> 
> I seem to be getting the same performance with -15% power limit now as I was with +30% power limit before on the stock vbios. I just checked HWiNFO and it looks like the biggest difference I see between the stock and AIO bios is that the VRAM voltages are slightly higher.
> 
> My PSU trips immediately (or soon thereafter) when I put it to +50 now.


The sensor labeled as GPU memory voltage, or the sensor labeled as GPU Core Voltage? The one that doesn't fluctuate (GPU Core Voltage) is actually the HBM Voltage. The one that fluctuates (GPU Memory Voltage) is actually GPU Core Voltage.
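Since the swapped labels keep tripping people up, here is a tiny sketch that just encodes the mapping described above. The helper name and dictionary are illustrative only, not part of HWiNFO; they assume the pre-fix labeling behavior reported in this thread.

```python
# Hedged sketch: older HWiNFO builds reportedly swapped these two Vega labels.
# This dict simply encodes the correction described above; names are illustrative.
LABEL_FIXUP = {
    "GPU Core Voltage": "HBM Voltage",         # the steady reading is actually HBM
    "GPU Memory Voltage": "GPU Core Voltage",  # the fluctuating reading is actually core
}

def actual_rail(displayed_label: str) -> str:
    """Return the rail a mislabeled sensor actually measures (pass-through otherwise)."""
    return LABEL_FIXUP.get(displayed_label, displayed_label)

print(actual_rail("GPU Memory Voltage"))  # prints: GPU Core Voltage
```

Newer HWiNFO betas label these correctly, so the remap only applies to readings taken on the older builds.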


----------



## theBee2112

Quote:


> Originally Posted by *pmc25*
> 
> The sensor labeled as GPU memory voltage, or the sensor labeled as GPU Core Voltage? The one that doesn't fluctuate (GPU Core Voltage) is actually the HBM Voltage. The one that fluctuates (GPU Memory Voltage) is actually GPU Core Voltage.


Glad you got the card working. It was the one labelled GPU Memory Voltage; it was 0.950V before, now it's upwards of 1.1V. I guess that one is GPU core voltage then, 'cause it changes. The other one is at 1.356V and has never changed. I've read this a few times now and still keep mixing them up.

@ontariotl: Do it for sure, but make sure you have a good PSU, 900-1000 watts or more. I haven't tested power draw yet, but I imagine it has gone up to sustain the higher clocks. It may have helped my temps a bit too. I ran Firestrike, Timespy, Superposition, and a few games. All of them showed a 5-15% increase in FPS/scores.

@kundica: I missed your post where you said you had to RMA the card and whole bundle. That really sucks. No chance at an exchange at all for a different bundle? At least the vbios has gone off to do great things. It has not been RMA'd in vain.


----------



## kundica

Quote:


> Originally Posted by *theBee2112*
> 
> @kundica: I missed your post where you said you had to RMA the card and whole bundle. That really sucks. No chance at an exchange at all for a different bundle? At least the vbios has gone off to do great things. It has not been RMA'd in vain.


They were not very flexible. I was trying to get them to charge me for a card so I could do an advance RMA just on the card and not the other gear, but they said they wouldn't, and that to even consider it I'd need to have a high-volume account. I haven't used the bundled gear; my intention was always to sell it. I could replace my C6H with the bundled Crosshair VI Extreme, but that doesn't seem like a smart upgrade given the difference in cost and how well my C6H is dialed in. Anyway, I did a return on the entire bundle and ordered the same bundle as a replacement. If the next card is jacked too, I guess I'll just return everything again and wait until I can grab a reference card close to MSRP or get an AIB card.

I could've stopped delivery on the Air 64 card I sold and then refunded the money to the buyer, but that would be a dick move.

Glad the bios worked for you. I suspect some people have to clock down or lower the power limit because of normal variance between Air cards.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Thanks for the recommendations.
> 
> Turned out it was the PSU's power cable .. it's decided it doesn't like it anymore, despite it working on two other PSUs.
> 
> Whilst the card is under water and at very low temperatures, unfortunately on the janky 17.8.2 WattMan you can't get higher clocks at lower voltages, and the boost clock still seems to be bugged in so far as it won't go to max boost under load unless you significantly 'overclock' it, even though load temps are 35-40C.
> 
> Wish WattTool and HBM clocks weren't bugged. Would be good to see what it could do under water and at lower voltages.
> 
> All that said, performance has increased significantly. Way less dips, even at the same stable (solid) clocks, voltage and power limit as on air.
> 
> Up to 20% average in some games, 'only' 5% in others. Max and min are way up, and stutter seems gone.
> 
> Either micro stutter has been eliminated by slightly less oscillation (just a few Mhz) in GPU core clock, or the HBM is operating at significantly lower timings and that's having a big effect in some games.
> 
> Metro 2033 redux 3 run bench using same settings is now 2.43x faster than my Fiji Nano at 1045/545!
> The sensor labeled as GPU memory voltage, or the sensor labeled as GPU Core Voltage? The one that doesn't fluctuate (GPU Core Voltage) is actually the HBM Voltage. The one that fluctuates (GPU Memory Voltage) is actually GPU Core Voltage.


Hey man, good to hear you found the issue. Bricking the card would have been a complete disaster... kundica has been saying that Newegg won't break up bundles to swap out returns... so it could be "month(s)" to get a replacement.

Even as OC'd as I am, I would have liked to have seen another 5% from the card stock and 10% more than what I am getting OC'd... mostly in the lows. I really don't want to compromise settings to stay above 48 fps/Hz and keep enhanced FreeSync from kicking off. Maybe we will see that in the months to come with drivers. The LC bios may be going in the right direction, but I remain unconvinced, as I haven't seen any HWiNFO(s) w/ benchies that are comparable -> superior to where I am atm.

It's early days though, I'm sure things will improve. So was it your 3-prong wall power cable (assuming 50-60Hz single-phase 110V AC) or one of the PCIe cables?


----------



## Soggysilicon

Quote:


> Originally Posted by *theBee2112*
> 
> I'm embarrassed to say I took the lazy way and used the GUI lol.


1750 stable on the core? Hrrmmmm, and it's stable?







Benchies and gaming? You wouldn't mind posting a HWinfo, would you?


----------



## theBee2112

Quote:


> Originally Posted by *Soggysilicon*
> 
> 1750 stable on the core? Hrrmmmm, and it's stable?
> 
> Benchies and gaming? You wouldn't mind posting a HWinfo, would you?




This was under 100% load.

I've run Timespy, Firestrike, and Superposition. It's mining right now, 'cause I'm going to bed soon. It seems to keep 1750 no matter what it's doing, as long as it has the power. It went to 1667MHz for mining, so I just upped the power limit +10% and it went back up and stayed.


----------



## seanmacvay

Quote:


> Originally Posted by *Sicness*
> 
> Congrats! Can you elaborate on how exactly you flashed the BIOS? Did you run the GUI version as admin or did you take the DOS route?


I was able to flash using the GUI in admin mode with no issues. Just played about 4 hours of Prey with the card at 1670MHz and the memory at 1100MHz.


----------



## seanmacvay

Quote:


> Originally Posted by *Newbie2009*
> 
> so is this definitely allowing higher power limit?


I'm pretty certain it is. With the power limit set to +50% my system is drawing nearly 700 watts from the wall under full synthetic load.
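The ~700W wall figure roughly checks out on the back of an envelope. Every number below is an assumption for illustration, not a measured spec: a ~264W GPU core power limit for the LC bios, ~150W for the CPU and rest of the system, and ~80% PSU efficiency at this load.

```python
# Back-of-envelope sanity check of the ~700W wall reading. All inputs are
# assumptions: LC-bios core power limit, system overhead, PSU efficiency.
def wall_draw(core_limit_w: float, offset_pct: float,
              system_w: float = 150.0, psu_eff: float = 0.80) -> float:
    gpu_w = core_limit_w * (1 + offset_pct / 100)  # power-limit slider is a % offset
    return (gpu_w + system_w) / psu_eff            # wall draw includes PSU losses

print(round(wall_draw(264, 50)))  # prints 682 with these assumptions
```

That lands in the high 600s, which is in the same ballpark as the reported "nearly 700 watts from the wall"; with a slightly less efficient PSU or a hungrier CPU it clears 700.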


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> Hey man good to hear you found the issue. Bricking the card would of been a complete disaster... kundica has been saying that Newegg won't break up bundles to swap out returns... so it could be "month(s)" to get a replacement.


Sort of. I think if they'd had any cards in stock when my card arrived at Newegg's RMA center, they would have replaced it. The issue is that the cards keep dropping out of stock (at least the LC 64 does), even in bundles. If the RMA had been designated for replacement and no cards were in stock when it arrived, they would have refunded me, but then I'd be stuck with the bundle items I only bought because of the card.


----------



## Mandarb

Quote:


> Originally Posted by *lmiao*
> 
> Can someone suggest a water cooling kit for gpus? I'd like to buy one for my vega, but i never had one.. so i don't really know where to begin.
> 
> I heard ek are good, do they have gpu kits?


Alphacool also released a full cover kit. Doesn't show up on mobile though.. search their products on desktop.


----------



## Chaoz

Quote:


> Originally Posted by *Mandarb*
> 
> Alphacool also released a full cover kit. Doesn't show up on mobile though.. search their products on desktop.


This one with the AIO kit?
https://www.alphacool.com/shop/-neue-produkte-/22291/alphacool-eiswolf-120-gpx-pro-ati-rx-vega-m01-black?c=21224

I actually doubt a 120 rad is sufficient for an OC'ed Vega, tbh.

Btw, you can force your mobile browser to show the desktop version.


----------



## Mandarb

Quote:


> Originally Posted by *Chaoz*
> 
> This one with the AIO kit?
> https://www.alphacool.com/shop/-neue-produkte-/22291/alphacool-eiswolf-120-gpx-pro-ati-rx-vega-m01-black?c=21224
> 
> I actually doubt a 120 rad is sufficient for an OC'ed Vega, tbh.
> 
> Btw, you can force your mobile browser to show the desktop version.


Yes, and true, but I'm at work and not supposed to be staring at my phone. ^^

Can also buy the plate and whatever radiator separately.


----------



## Chaoz

Quote:


> Originally Posted by *Mandarb*
> 
> Yes, and true, but I'm at work and not supposed to be staring at my phone. ^^
> 
> Can also buy the plate and whatever radiator separately.


That's all extra cost for no valid reason. €170 isn't cheap for a kit with just one 120mm rad. The best option for him is to cool his entire PC with a 360.


----------



## Newbie2009

Is 1200mV the max voltage on the AIO version also?


----------



## kundica

The new beta of HWiNFO64, v5.57-3235, lists HBM and core voltage correctly now.


----------



## Nuke33

Quote:


> Originally Posted by *Chaoz*
> 
> This one with the AIO kit?
> https://www.alphacool.com/shop/-neue-produkte-/22291/alphacool-eiswolf-120-gpx-pro-ati-rx-vega-m01-black?c=21224
> 
> I actually doubt a 120 rad is sufficient for an OC'ed Vega, tbh.
> 
> Btw, you can force your mobile browser to show the desktop version.


I just spoke with a sales guy from Alphacool.
Eiswolf AIOs for Vega are supposed to be in stock in about 2 weeks for Germany.


----------



## Nuke33

Here is a review of the Eiswolf:
http://www.tomshardware.com/reviews/radeon-rx-vega-64-water-cooling,5177.html


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> New Beta for HWiNFO64 v5.57-3235 lists HBM and core voltage correctly now.


Can confirm.

I'd suggest people install it, then there won't be any more confusion.


----------



## pillowsack

Quote:


> Originally Posted by *Newbie2009*
> 
> Is 1200mV the max voltage on the AIO version also?


I would like to know too.

I can't get mine stable at 1680+, which is just like the stock bios. I did the registry edit trying to get 1250mV core voltage, but I can't seem to get the powerplay mod working.

Does anyone know if it's possible to get this working on the latest driver?
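For anyone else attempting the registry route: the mod people refer to is the "soft PowerPlay table" override. The sketch below only builds the location; the class GUID and value name are the ones commonly reported for AMD display adapters on Windows, so treat them as assumptions and export/back up the key before touching anything.

```python
# Hedged sketch of where the soft PowerPlay table override is commonly said to
# live. GUID, slot numbering, and value name are assumptions from community
# reports, not an official AMD spec.
DISPLAY_CLASS_GUID = "{4d36e968-e325-11ce-bfc1-08002be10318}"  # display adapter class

def soft_pptable_location(adapter_index: int) -> tuple:
    """Return (HKLM-relative key path, value name) for a display adapter slot."""
    key = (r"SYSTEM\CurrentControlSet\Control\Class"
           "\\" + DISPLAY_CLASS_GUID + "\\" + f"{adapter_index:04d}")
    return key, "PP_PhmSoftPowerPlayTable"

# Actually writing it (Windows only, elevated, hypothetical) would look roughly like:
#   import winreg
#   key, name = soft_pptable_location(0)
#   with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key, 0, winreg.KEY_SET_VALUE) as h:
#       winreg.SetValueEx(h, name, 0, winreg.REG_BINARY, modified_table_bytes)
```

A driver restart (or reboot) is needed before the driver picks the table up, and a bad table can hang the driver, so keep a copy of the original binary blob.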


----------



## Newbie2009

Quote:


> Originally Posted by *pillowsack*
> 
> I would like to know too.
> 
> I can't get mine stable at 1680+, which is just like the stock bios. I did the registry edit trying to get 1250mV core voltage, but I can't seem to get the powerplay mod working.
> 
> Does anyone know if it's possible to get this working on the latest driver?


Hmm, I would have presumed that is how the AIO could reach higher clocks, rather than just PowerTune.


----------



## pmc25

I wouldn't waste too much time overclocking or attempting to on current drivers and BIOS. Not worth it IMO.

I'm sure with better BIOS / drivers / hopefully better WattMan & third party tools, that will change, but it's very limited in scope and performance gain atm.


----------



## lmiao

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> AIOs in the past few years have made water cooling very affordable and with reasonable performance. Vega, being so new, only has a few options... you buy the LC version and get what you get (some folks seem to have done well modifying the fan), you modify an existing solution (custom bracket on a universal or modify an AIO), or you get a FC and build / fit a loop around it.
> 
> Modifying the bracket on a universal or AIO would be the cheapest route, but it's going to be "janky", and you may still need some stick-on heat sinks.
> 
> Next up is the LC version, but I have read the fan is sorta "meh", that and it's a 120... so... it's better than the blower I suppose.
> 
> Last up is the FC; Alphacool has a cover and EK has one... I think there is another one out in the wild... but who knows. Everything I have seen is copper, so that means your loop by default will be copper based... so your price just went up, no scrub aluminum parts.
> 
> Gunna need a pump, res (or combo), tube, fittings, a way to mount, blocks, some fans, decoupling (rubber / old fan shrouds), towels, jumper wire for the 12v, water... lights of course... cause it makes it go faster...
> 
> Not to be discouraging but for what this is going to cost *just* to cool off Vega, there may be other more compelling options... like a 1080Ti... may just want to wait for an AIO to drop in the next couple months...


Sorry to bother, but what does FC stand for? Anyway, if I understood right, listening to the various suggestions, waiting for an AIO or FC cooling system isn't worth it; instead it's better to invest in a custom kit like EK that can be upgraded and customized n times in the future, even if it costs more money, right?

Anyway, thanks to all who brought precious ideas!


----------



## kundica

Please delete.


----------



## theBee2112

Quote:


> Originally Posted by *pillowsack*
> 
> I would like to know too.
> 
> I can't get mine stable at 1680+, which is just like the stock bios. I did the registry edit trying to get 1250mV core voltage, but I can't seem to get the powerplay mod working.


What's your power limit set at? For a test, put it to +50% and everything else default. See if anything gets better.
Mine's been holding now for about 16 hours. It's damn hot, but holds 1750MHz at 1.18V


----------



## L36

Is anyone else experiencing hard system crashing while playing GTA 5 after 20 mins or so? Temps look good.


----------



## kundica

Quote:


> Originally Posted by *L36*
> 
> Is anyone else experiencing hard system crashing while playing GTA 5 after 20 mins or so? Temps look good.


Which version of the card, driver, and card settings in Wattman?

My LC version crashes on anything over Balanced. I currently have a second one en route and am returning the current one.


----------



## L36

Quote:


> Originally Posted by *kundica*
> 
> Which version of the card, driver, and card settings in Wattman?
> 
> My LC version crashes on anything over Balanced. I currently have a second one en route and am returning the current one.


Interesting. I have set my card to turbo mode. It's water cooled with an EK block.

I'll put it back on balanced and report back. I have the latest non-WHQL driver.


----------



## pmc25

Install the latest beta of HWiNFO64 and tell us what your GPU core voltage under load is. Also what GPU and HBM clocks.


----------



## pillowsack

Quote:


> Originally Posted by *theBee2112*
> 
> What's your power limit set at? For a test, put it to +50% and everything else default. See if anything gets better.
> Mine's been holding now for about 16 hours. It's damn hot, but holds 1750MHz at 1.18V


I just tried that and couldn't get it going. What driver are you on?

I think I should really stop getting so impatient and just wait for better OCing to be available... This is pretty stupid. I bought a $500 graphics card and a waterblock and I can't change the core voltage....


----------



## theBee2112

Quote:


> Originally Posted by *pillowsack*
> 
> I just tried that and couldn't get it going. What driver are you on?
> 
> I think I should really stop getting so impatient and just wait for better OCing to be available... This is pretty stupid. I bought a $500 graphics card and a waterblock and I can't change the core voltage....


Damn. Was hoping. It must be the difference in drivers. I'm using the BETA blockchain drivers from Aug 23. 17.30.something. Not the 17.8.1 or 17.8.2 most people here are using.

While I may be able to sustain the core clocks, I'm sacrificing gaming performance to gain a little mining performance. My gaming benchmarks are lackluster, and much lower than those using 17.8.1+. Also, only power limit and mem clocks are available in wattman. No WattTool, and no AB.
I would NOT recommend this driver to anyone who's gaming and looking for MAX FPS. Now if you're a hobbyist... well try it out. I hear you though, I'm impatient too. All I want is to be able to OC without messing with powerplay.


----------



## paulc010

Quote:


> Originally Posted by *lmiao*
> 
> Sorry to bother, but what does FC stand for? Anyway, if I understood right, listening to the various suggestions, waiting for an AIO or FC cooling system isn't worth it; instead it's better to invest in a custom kit like EK that can be upgraded and customized n times in the future, even if it costs more money, right?
> 
> Anyway, thanks to all who brought precious ideas!


FC = full cover, as in a full-cover water block; although with the way Vega is designed, the AIO is pretty much a full-cover water block with only a 120mm radiator.


----------



## 113802

Quote:


> Originally Posted by *Newbie2009*
> 
> is 1200mv the max for voltage on the aio version also?


1250mV is the max, but it doesn't matter. The card never runs above 1712MHz no matter what I try, with a temperature of 52C and a power limit of 100%.

The card can run benchmarks at 1852MHz without crashing using 17.8.2, but the frequency runs low.
Quote:


> Originally Posted by *theBee2112*
> 
> Damn. Was hoping. It must be the difference in drivers. I'm using the BETA blockchain drivers from Aug 23. 17.30.something. Not the 17.8.1 or 17.8.2 most people here are using.
> 
> While I may be able to sustain the core clocks, I'm sacrificing gaming performance to gain a little mining performance. My gaming benchmarks are lackluster, and much lower than those using 17.8.1+. Also, only power limit and mem clocks are available in wattman. No WattTool, and no AB.
> I would NOT recommend this driver to anyone who's gaming and looking for MAX FPS. Now if you're a hobbyist... well try it out. I hear you though, I'm impatient too. All I want is to be able to OC without messing with powerplay.


Your card isn't running 1750MHz; 17.8.2 finally fixed the frequency reading. The AIO mostly runs at 1668MHz. You can go back in this thread and see I also thought it was.


----------



## Newbie2009

Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1250mV is the max, but it doesn't matter. The card never runs above 1712MHz no matter what I try, with a temperature of 52C and a power limit of 100%.
> 
> The card can run benchmarks at 1852MHz without crashing using 17.8.2, but the frequency runs low.
> Your card isn't running 1750MHz; 17.8.2 finally fixed the frequency reading. The AIO mostly runs at 1668MHz. You can go back in this thread and see I also thought it was.


That's strange. Bugged still I guess.


----------



## theBee2112

Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1250mV is the max, but it doesn't matter. The card never runs above 1712MHz no matter what I try, with a temperature of 52C and a power limit of 100%.
> 
> The card can run benchmarks at 1852MHz without crashing using 17.8.2, but the frequency runs low.
> Your card isn't running 1750MHz; 17.8.2 finally fixed the frequency reading. The AIO mostly runs at 1668MHz. You can go back in this thread and see I also thought it was.


Why is nothing as it seems?









So HWINFO64, wattman, and the core frequencies displayed during Superposition and Heaven are all wrong when they show 1750? Sometimes if power limit isn't high enough it will drop to 1668, and I can see that reflected in all of the places I mentioned. Why would my power draw go up if there's no difference?

When I get home I will be downloading the new beta HWiNFO64 and then trying out 17.8.2, to see if it's as you describe.


----------



## pillowsack

Quote:


> Originally Posted by *theBee2112*
> 
> Why is nothing as it seems?
> 
> So HWINFO64, wattman, and the core frequencies displayed during Superposition and Heaven are all wrong when they show 1750? Sometimes if power limit isn't high enough it will drop to 1668, and I can see that reflected in all of the places I mentioned. Why would my power draw go up if there's no difference?
> 
> When I get home I will be downloading the new BETA HWINFO64 and then try out 17.8.2, and see as you describe.


Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1250mV is the max, but it doesn't matter. The card never runs above 1712MHz no matter what I try, with a temperature of 52C and a power limit of 100%.
> 
> The card can run benchmarks at 1852MHz without crashing using 17.8.2, but the frequency runs low.
> Your card isn't running 1750MHz; 17.8.2 finally fixed the frequency reading. The AIO mostly runs at 1668MHz. You can go back in this thread and see I also thought it was.


I'm just going to put my original bios back on and do a DDU wipe I guess









(it seems impossible to overclock at this point so I'm going to give up for a bit.)

(Does anyone have the stock high-power bios for the XFX Vega 64? I didn't realize GPU-Z only saves the legacy bits.)


----------



## Blameless

Quote:


> Originally Posted by *Newbie2009*
> 
> Hmm, I would have presumed that is how the AIO could reach higher clocks, rather than just PowerTune.


The AIO gets higher clocks mostly via lower temperatures and a higher power limit, not voltages.


----------



## pmc25

I think some people's cards are over-stating clocks. Others aren't.

One thing's sure though, they really do not want to use the boost clocks set on 17.8.2 under load.


----------



## Soggysilicon

Quote:


> Originally Posted by *lmiao*
> 
> Sorry to bother, but what does FC stand for? Anyway, if I understood right, listening to the various suggestions, waiting for an AIO or FC cooling system isn't worth it; instead it's better to invest in a custom kit like EK that can be upgraded and customized n times in the future, even if it costs more money, right?
> 
> Anyway, thanks to all who brought precious ideas!


Full cover, as opposed to a GPU-only / universal block.

Steve over at GN's 56CU Hybrid is an example of a universal AIO.

In the past I have preferred to use universal blocks with copper heat sinks peppered all over the place, as opposed to FC, for a few reasons... The universal is portable across multiple cards, and the flow rates are far superior to the FC blocks, which are very restrictive. The flow direction (which fitting is the inlet or outlet) is also irrelevant because it's only contacting the GPU IHS; the drive FETs and VRM are designed to handle a good bit of heat, and water cooling them isn't all that useful or practical when heat sinks are capable of dissipating 2-4 watts or more with a little air circulation.

Newer pumps, per the fashion, are relatively low flow due to the focus seemingly being on noise control and quiet operation. They also have lower current draw, and as such could be fitted to most motherboards without much risk of damaging the board. A lot of them are PWM or respond well to voltage tweaking. Newer motherboards, like the C6H, have a dedicated header for >1A pumps, but this has certainly not been the case in the past.

Even at that I still run the pump off 12v power rather than supplying via the motherboard. Same with fans... just don't like running highly inductive loads from the motherboard.

Flow rate is in no small part a major contributor to a WC setup's thermal performance. My take on all this is that WC is an interesting hobby, but I can't say that I would ever recommend it as having some great "get more out than you put in" proposition.

Once you get past the initial price hurdles and have some stuff "sitting around", it's like any hobby: you can get a lot out of it. It's the gateway, or cost of entry, which is going to be the biggest barrier for most folks. If you're looking for a one-shot solution, a good AIO will do the trick; if you're interested in WC as a long-term investment or hobby, then piecemeal out a setup and visit the OCN water cooling forums.
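On the flow-rate point, a quick bit of arithmetic is useful: the coolant's bulk temperature rise across a block is P / (mass flow x specific heat). The 300W heat load and 1.0 GPM flow below are illustrative assumptions, not measurements of any particular card or loop.

```python
# Bulk coolant temperature rise across a block: delta_T = P / (m_dot * c).
# Numbers are illustrative assumptions (300W load, 1.0 US GPM of water).
C_WATER = 4186.0                # J/(kg*K), specific heat of water
KG_PER_S_PER_GPM = 3.785 / 60   # 1 US gal/min of water is ~0.063 kg/s

def coolant_delta_t(heat_w: float, flow_gpm: float) -> float:
    """Coolant bulk temperature rise (K) across a heat load at a given flow."""
    return heat_w / (flow_gpm * KG_PER_S_PER_GPM * C_WATER)

print(round(coolant_delta_t(300, 1.0), 2))  # prints 1.14 (K) for these assumptions
```

The takeaway is that the bulk water barely warms even at modest flow; where flow rate earns its keep is in convection inside the block itself, which is why restrictive FC blocks versus free-flowing universals is worth arguing about at all.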


----------



## gupsterg

Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1250Mv is the max but it doesn't matter.


That was the max Fiji was allowed via WattMan. The max VID for a DPM state in the VBIOS/registry was 1.3V, or the driver would BSOD at OS launch. The only way to gain extra was a VDDC offset via the VRM chip, using:-

i) VBIOS mod.
ii) i2c command.
iii) 3rd party apps using VDDC offset.

Don't think I'll be joining the VEGA club at all.

Today's launch price of the ref VEGA 56 in the UK is £380. This pretty much seals the deal: an MSI GTX 1080 EK X is a better buy for me.

VEGA 56 £380 + WB = ~£480
MSI GTX 1080 EK X = ~£500
VEGA 64 £450 + WB = ~£550 (this price is unobtainable; OCuk are supposed to do £480 tomorrow as a one-off again, so ~£580 inc WB).

I believe now is the right time to dispose of Fury X for various reasons.


----------



## PontiacGTX

Here they tested the watercooled card with an air cooler? Why not the other way around?
Quote:


> Originally Posted by *gupsterg*
> 
> That was max Fiji was allowed via WattMan. VID for a DPM in VBIOS/registry max was 1.3V or driver BSOD at OS launch. Only way to gain extra was a VDDC offset via VRM chip, using:-
> 
> i) VBIOS mod.
> ii) i2c command.
> iii) 3rd party apps using VDDC offset.
> 
> Don't think I'll be joining VEGA club at all.
> 
> Today launch price of ref VEGA 56 in the UK is £380. This pretty much seals the deal a MSI GTX 1080 EK X is a better buy for me.
> 
> VEGA 56 £380 + WB = ~£480
> MSI GTX 1080 EK X = ~£500
> VEGA 64 £450 + WB = ~£550 (This price is unobtainable, OCuk are supposed to do £480 tomorrow as one off again, so ~£580 inc WB).
> 
> I believe now is the right time to dispose of Fury X for various reasons.


More like £530, a £50 difference
https://uk.pcpartpicker.com/product/8MVBD3/msi-geforce-gtx-1080-8gb-video-card-gtx-1080-sea-hawk-x


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> Today launch price of ref VEGA 56 in the UK is £380.


Isn't that the expected price for the UK?


----------



## gupsterg

Quote:


> Originally Posted by *PontiacGTX*
> 
> More like 530GBP 50GBP difference
> https://uk.pcpartpicker.com/product/8MVBD3/msi-geforce-gtx-1080-8gb-video-card-gtx-1080-sea-hawk-x


Nope, OCuk £500. OCuk has had this price since 9th Aug; it was supposed to be a one week deal, link. I know of another etailer that price matches and they would do the same. A 3rd UK etailer has also had it at £499 for a few weeks and still does now. I can get this card with ease, whereas VEGA is catching a promo.

From OCuk I get FOC delivery as a forum member, the other places I can also.

There has been a mass flurry today on ebay UK of Fiji cards. I often check the 'climate' on there, IMO many that have listed have either gone VEGA 56/64 or opted for green if they deemed AMD offering not as great as I am.
Quote:


> Originally Posted by *kundica*
> 
> Isn't that the expected price for the UK?


See spoiler in this post, was supposed to be £350. Even at £350 I was debating going GTX 1080 EK X, now at £380 (~+10%) I see really no point in going VEGA 56.

If bios mod had been open I would have probably swung towards AMD. As it isn't, there is no plus to AMD vs nVidia. The particular nVidia card also boosts to 1847MHz 'out of box'. In an earlier post I highlighted that on TPU a GTX 1080 FE with lower clocks is ~33% faster than Fury X, and a later review showed ~23% with differing games. So I reckon I'll get my money's worth in performance gain. More mature drivers, OC support, etc IMO as well. Yeah I'll miss FreeSync but that may not be a biggie, I'm not gonna go G-Sync, that's for sure.

AMD have been too slow to bring VEGA to market; I could live with that and have done. But drivers still lack full enablement of the HW's features. What people are experiencing as issues is not putting me off, it's that you guys will still be waiting on getting those features.

Will all games benefit?
How much of gain will be had?
How long will the wait be?

So to me the nVidia offering is better after so many years being on AMD.


----------



## Newbie2009

Quote:


> Originally Posted by *Blameless*
> 
> The AIO gets higher clocks mostly via lower temperatures and a higher power limit, not voltages.


Think I'll flash to the AIO bios once the drivers have settled down. Overclocking is still a bit weird.


----------



## pmc25

If you can get a Vega 64 at a similar price to the 1080, I'd go for that ... whilst tweaking stuff is a massive bind atm, and drivers / BIOS are poor, I'd imagine with 2-3 months of releases it should get a lot quicker, and pull well clear of a 1080.
Quote:


> Originally Posted by *Newbie2009*
> 
> Think I'll flash to the AIO bios once the drivers have settled down. Overclocking is still a bit weird.


I'd be staggered if there aren't official BIOS updates for all Vega SKUs, and maybe more than one.


----------



## gupsterg

When you add in the cost of a waterblock for VEGA, the MSI GTX 1080 EK X at £500 is a no brainer IMO.

There is no chance I'll be happy with VEGA on air; an AIO was the minimum for my purchase route. And there is no chance of a VEGA AIO hitting the same price point as the nVidia offering.

I have been spoilt by Fury X cooling solution. Even AIB 3 fan Fiji/Hawaii offerings were not equal to Fury X cooling IMO/experience.

Waiting for AIB VEGA could mean resale value slides further on Fury X. I don't see AIB VEGA with better cooling going to be as sweet a price either.


----------



## Newbie2009

Quote:


> Originally Posted by *pmc25*
> 
> If you can get a Vega64 at similar price to the 1080, I'd go for that ... whilst tweaking stuff is a massive bind atm, and drivers / BIOS are poor, I'd imagine with 2-3 months of releases it should get a lot quicker, and be completely out of touch of a 1080.
> I'd be staggered if there aren't official BIOS updates for all Vega SKUs, and maybe more than one.


Bios updates for GPUs? Don't recall that happening in the past.


----------



## pmc25

Quote:


> Originally Posted by *Newbie2009*
> 
> Bios updates for GPUs? Don't recall that happening in the past.


Usually released via AIBs rather than AMD / NVIDIA, but Fiji had official vBIOS update from AMD.

Think Vega is fairly unprecedented in recent years re: how prototypical everything is at launch, though, so I feel it's unlikely there won't be.


----------



## gupsterg

Yeah Fiji had 'official' VBIOS update and some AIBs placed on their own sites as well.

But be aware the main and only purpose of the 'official' Fiji VBIOS was adding UEFI support. All command tables were the same between that VBIOS and earlier. All data tables also, it was only that UEFI/GOP module was added.

There is a huge thread on AMD Community where Fiji owners experienced display corruption. Some of those people tried new VBIOS and no change in issue. Most of the owners got very frustrated as AMD could not reproduce fault. Many tried differing monitors, cables, OS, drivers and still issue persisted. Some vowed never to buy AMD again. I had only one card do it infrequently, that card got RMA'd.

I had 8 Fury X cards, all bought over a course of 6-8mths from Feb 2016, as Fury X launched June 2015 you'd expect AIBs had updated VBIOS within production line. Nope they had not, all cards had same non UEFI VBIOS, same build. None had the 'official' AMD updated VBIOS.

I also have all serials of cards and can tell you production year / week of card from those serials and again there were no Fury X that I purchased in late 2016 with a 2016 year of production. All were pretty much end of 2015.

The 2x Fury Nitro OC edition that I bought in Jan 2017, after Amazon had had it on pre-order as out of stock, were Week 45 Year 2015 and Week 38 Year 2016. Same VBIOS on each. The later card, which we would hypothesise as newer silicon, had similar OC ability to the earlier card. Both were no better on GPU core OC; HBM did sustain 545MHz with stock voltage, 600MHz was out of their reach.

Do not expect a differing VBIOS to improve silicon OC ability. I experienced none on Hawaii or Fiji. It is basically data values for xyz elements that will allow you a better OC if the silicon manages it (ie PL, Voltage, etc), so if each VBIOS had the same setup, I hit the same clocks. If I took the release VBIOS for Fiji and a later one, then benched at the same clocks, etc, they benched the same. The driver makes more of a difference, and it depends on the test case.


----------



## Nuke33

I just noticed severe lagging and HBM downclocks when using hwinfo64 during gaming. BF1 was lagging like hell every few seconds.








As soon as I closed it everything ran smooth again.

GPU-Z shows no such behaviour


----------



## Chaoz

Quote:


> Originally Posted by *Nuke33*
> 
> I just spoke with a sales guy from Alphacool.
> Eiswolf AIOs for Vega are supposed to be in stock in about 2 weeks for Germany.


Okay, great. But it's not for me. I already have mine watercooled.


----------



## ashman95

I just finished the Vega FE EK block, all is running well, haven't gone over 40C running SPECviewperf or Prey. I've been mostly undervolting - seems this card's power play tables are linked to performance even after switching to water, can't get over 1680MHz, can't push frequency over 4% without crashing - the early drivers could be the big issue here. Beta drivers look stable but Wattman is disabled, will try to use WattTool with the new drivers.


----------



## Nuke33

Quote:


> Originally Posted by *Chaoz*
> 
> Okay, great. But it's not for me. I already have mine watercooled.


----------



## Sufferage

Quote:


> Originally Posted by *Nuke33*
> 
> I just noticed severe lagging and HBM downclocks when using hwinfo64 during gaming. BF1 was lagging like hell every few seconds.
> 
> 
> 
> 
> 
> 
> 
> 
> As soon as I closed it everything ran smooth again.
> 
> GPU-Z shows no such behaviour


Try disabling monitoring of GPU VRM temp, I read somewhere that it's causing those lags.


----------



## pmc25

Quote:


> Originally Posted by *ashman95*
> 
> I just finished the Vega FE EK block, all is running well, haven't gone over 40C running SPECviewperf or Prey. I've been mostly undervolting - seems this card's power play tables are linked to performance even after switching to water, can't get over 1680MHz, can't push frequency over 4% without crashing - the early drivers could be the big issue here. Beta drivers look stable but Wattman is disabled, will try to use WattTool with the new drivers.


No use. Whilst you can push lower voltage and keep it cooler at the same clocks with WattTool, it bugs HBM ... can only be set to 945Mhz, not even 946, or it stays at 800Mhz.


----------



## PontiacGTX

Quote:


> Originally Posted by *Nuke33*
> 
> I just noticed severe lagging and HBM downclocks when using hwinfo64 during gaming. BF1 was lagging like hell every few seconds.
> 
> 
> 
> 
> 
> 
> 
> 
> As soon as I closed it everything ran smooth again.
> 
> GPU-Z shows no such behaviour


and msi ab?


----------



## AlphaC

Quote:


> Originally Posted by *PontiacGTX*
> 
> 
> 
> Here they used the watercooled card using an air cooler? why not backwards?
> More like 530GBP 50GBP difference
> https://uk.pcpartpicker.com/product/8MVBD3/msi-geforce-gtx-1080-8gb-video-card-gtx-1080-sea-hawk-x


Seems the REAL TDP of the card is ~ 190W (much like a R9 285 or the HD 7870XT) and they pushed it to 210W due to yields & marketing.


http://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-22.html


----------



## drufause

Quote:


> Originally Posted by *L36*
> 
> Is anyone else experience hard system crashing while playing GTA 5 after 20 mins or so? Temps look good.


Do you know what your volts are getting to? You have a high voltage processor with an 860 watt power supply.


----------



## rancor

Add me to the list for a Vega 64.







Water block should be here tomorrow.

Side note, did someone want an I2C dump? I have never done it before but I could try.


----------



## punchmonster

Getting my Morpheus II tomorrow. Do you guys want an in depth review of it on the Vega64?


----------



## steadly2004

Quote:


> Originally Posted by *punchmonster*
> 
> Getting my Morpheus II tomorrow. Do you guys want an in depth review of it on the Vega64?


Yes.... do it


----------



## ashman95

Can't touch HBM with WattTool - 1100MHz in Wattman. BUT getting 1670MHz consistently with WattTool and the Beta drivers in SPECviewperf and Prey, then I broke it going for 1735MHz - can't maintain 1735MHz even with temps under 40C. Beta drivers and WattTool seem to be more stable than the 17.1 drivers and Wattman. Looks as if there's some kind of symmetry needed to get to 1700MHz+, or AMD set the power tables on air and LC differently - that sucks, until they open the cards up for water blocks.... Was getting 37 MH/s mining eth on air, haven't mined since fitting the EK block.

P6 - 1600MHz, 1070mV
P7 - 1670MHz, 1120mV


----------



## pillowsack

Quote:


> Originally Posted by *ashman95*
> 
> Can't touch HBM with WattTool - 1100MHz in Wattman. BUT getting 1670MHz consistently with WattTool and the Beta drivers in SPECviewperf and Prey, then I broke it going for 1735MHz - can't maintain 1735MHz even with temps under 40C. Beta drivers and WattTool seem to be more stable than the 17.1 drivers and Wattman. Looks as if there's some kind of symmetry needed to get to 1700MHz+, or AMD set the power tables on air and LC differently - that sucks, until they open the cards up for water blocks.... Was getting 37 MH/s mining eth on air, haven't mined since fitting the EK block.
> 
> P6 - 1600MHz, 1070mV
> P7 - 1670MHz, 1120mV


how are you setting voltage?


----------



## Irev

Anyone know if i could run 2x vega64 air in CF on a 850w gold psu?


----------



## LocoDiceGR

Quote:


> Originally Posted by *Irev*
> 
> Anyone know if i could run 2x vega64 air in CF on a 850w gold psu?


You won't see too much support from AMD in terms of CF.

Just buy 1.


----------



## drufause

Quote:


> Originally Posted by *Irev*
> 
> Anyone know if i could run 2x vega64 air in CF on a 850w gold psu?


I'm not sure you could do this very well. I am running an i7-3960X, X99 and Vega 64 WC and I'm hitting 650 watts in games. So I would estimate running both at full power would pull 800 watts easily.
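For anyone doing this kind of back-of-the-envelope PSU sizing, here's a quick sketch. All the per-component wattages are assumptions pulled from typical review figures, not measurements of any particular system:

```python
# Back-of-the-envelope PSU check for 2x Vega 64 (air) CrossFire.
# All wattages below are assumptions, not measurements.

VEGA64_AIR_PEAK_W = 300   # assumed worst-case board power per card
CPU_OC_W = 180            # assumed overclocked HEDT CPU under gaming load
REST_OF_SYSTEM_W = 70     # assumed board, RAM, drives, fans
HEADROOM = 0.80           # keep sustained load under ~80% of the PSU rating

def total_draw(num_gpus):
    """Estimated sustained system draw in watts."""
    return num_gpus * VEGA64_AIR_PEAK_W + CPU_OC_W + REST_OF_SYSTEM_W

def psu_ok(psu_watts, num_gpus):
    """True if the estimate stays inside the ~80% comfort zone."""
    return total_draw(num_gpus) <= psu_watts * HEADROOM

print(total_draw(2))   # 850 -- right at an 850w unit's full rating
print(psu_ok(850, 2))  # False
print(psu_ok(850, 1))  # True
```

With these assumed numbers, a 2-card setup lands right at the PSU's rating and well past the comfort zone, while a single card is fine, which matches the advice in this thread.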


----------



## gupsterg

Quote:


> Originally Posted by *rancor*
> 
> Side note did someone want an I2C dump? I have never done it before before but I could try.


Kundica already posted some, but alas no data gained. Mumak's post has some insight for this occurrence.


----------



## SlushPuppy007

Quote:


> Originally Posted by *Irev*
> 
> Anyone know if i could run 2x vega64 air in CF on a 850w gold psu?


850watt is not enough.


----------



## Irev

ah that's rough.... a year ago I decided to get a 850w psu instead of a 1000w because I thought power consumption would drop with all the new cards coming out HAHA

I was wrong!!!

Oh well, guess I'll just stick to a single VEGA 64 until Navi.

Another thought: would CF VEGA 64 work on 850w if both cards are set to power save mode?


----------



## Newbie2009

Quote:


> Originally Posted by *Irev*
> 
> ah that's rough.... a year ago I decided to get a 850w psu instead of a 1000w because I thought power consumption would drop with all the new cards coming out HAHA
> 
> I was wrong!!!
> 
> Oh well, guess I'll just stick to a single VEGA 64 until Navi.
> 
> Another thought: would CF VEGA 64 work on 850w if both cards are set to power save mode?


You would be ok. Doom 60fps maxed with framerate cap is pulling <300w whole system for me.


----------



## pmc25

@Irev - if you're going to undervolt, 850W is more than fine. I'd really think about watercooling though for Vega, if you have the budget.

17.8.2 is really borked in terms of clocks / voltages in WattMan.

17.8.1 I too was drawing around 350W total system load, when at 950mV core. Quite a lot of that is probably CPU, since I have my 3770k at 1.41V for 4.8Ghz (vDroop is epic on my uATX gigabyte board).


----------



## Irev

another quick Q is the bios on the v64.... which way does the dipswitch go for the standard bios vs 2nd bios ??


----------



## Nuke33

Quote:


> Originally Posted by *Sufferage*
> 
> Try disabling monitoring of GPU VRM temp, read somewhere that it's causing that lags.


Thanks for the info. Will try it as soon as I get home.
Quote:


> Originally Posted by *PontiacGTX*
> 
> and msi ab?


Have not used it while gaming so far. Will try that too later.


----------



## punchmonster

Well, installed my Morpheus II. Went from thermal throttling at 85ºC at 50% power limit to hovering around 60ºC. HBM2 went from 80ºC to 60ºC under load. This is all with default fan settings, so my new fans are at roughly ~700RPM and practically silent.

I'll report on underclocking/overclocking results in a bit.


----------



## abe_joker

Quote:


> Originally Posted by *punchmonster*
> 
> Well installed my Morpheus II. Went from thermal throttling at 85ºC at 50% power limit to hovering around 60º. HBM2 went from 80ºC to 60ºC under load. This is all with default fan settings, so with my new fans roughly ~700RPM and practically silent
> 
> I'll report on underclocking/overclocking results in a bit.


How did you do it? Does the Morpheus II cover the chip completely? And how did you cool the VRMs? We'll be waiting for some benchmarks!


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> Well installed my Morpheus II. Went from thermal throttling at 85ºC at 50% power limit to hovering around 60º. HBM2 went from 80ºC to 60ºC under load. This is all with default fan settings, so with my new fans roughly ~700RPM and practically silent
> 
> I'll report on underclocking/overclocking results in a bit.


How are the VRM temperatures and the doubler temperature? Do you have an infrared thermometer to measure the temps if there is no sensor?

Also, did it allow a higher OC?

What TIM did you use? Also, have you thought about using liquid thermal compound on the core? I wonder if this is safe since the interposer on the GPU is exposed.


----------



## punchmonster

I've got tiny heatsinks on the VRMs. Besides with this beefy power delivery I doubt VRMs would even strain without them.

Also yes the morpheus covers the entire chip.

Having some strange occurrences with HBM temps though so trying to sort that out. It's very spiky.
Quote:


> Originally Posted by *PontiacGTX*
> 
> How are the VRM temperatures and the doubler temperature? Do you have an infrared thermometer to measure the temps if there is no sensor?
> 
> Also, did it allow a higher OC?
> 
> What TIM did you use? Also, have you thought about using liquid thermal compound on the core? I wonder if this is safe since the interposer on the GPU is exposed.


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> I've got tiny heatsinks on the VRMs. Besides with this beefy power delivery I doubt VRMs would even strain without them.
> 
> Also yes the morpheus covers the entire chip.
> 
> Having some strange occurrences with HBM temps though so trying to sort that out. It's very spiky.


VRM is one of the main things you will have to keep cooled http://www.guru3d.com/articles-pages/amd-radeon-rx-vega-64-8gb-review,10.html

mainly the stock heatsink uses the front plate to cool the VRMs, and the backplate takes part of the heat as well


----------



## ashman95

These settings are working well. Looks like the pattern is to go wide on the early frequencies and taper off as you near P6/P7. Going to go for 1700-1800MHz as I fool around with the mid frequencies, then try Wattman again later for HBM mem frequencies.

Watt Tool


----------



## ashman95

These settings are working well:
P1 - 991MHz, 900mV
P2 - 1138MHz, 950mV
P3 - 1290MHz, 1000mV
P4 - 1416MHz, 1050mV
P5 - 1540MHz, 1100mV
P6 - 1620MHz, 1070mV
P7 - 1670MHz, 1120mV


----------



## punchmonster

As you can see here I've got all of them cooled



Also I guess I did TIM badly the first time. This time I spread the TIM carefully on the HBM modules (I have a molded die by the way) and now the temperature is a rock solid 62ºC on the HBM2 under full memory load and the GPU tops out at 50ºC(!!!!)

Even at max RPM the fans are inaudible even with the case open.

Quote:


> Originally Posted by *PontiacGTX*
> 
> VRM is one of the main things you will have to keep cooled http://www.guru3d.com/articles-pages/amd-radeon-rx-vega-64-8gb-review,10.html
> 
> mainly the stock heatsink uses the front plate to cool the VRMs and the backplate takes part of the heat aswell


----------



## Dolk

Please make sure you get full contact with some force on HBM dies. Lowest level of that 2.5D chip can get up to 90C easily. AMD talked a bit about temps during Hot Chips this year.


----------



## ashman95

Quote:


> Originally Posted by *pillowsack*
> 
> how are you setting voltage?


Watt Tool
These settings are looking good, gonna boost the mid frequencies, see where they break

P1 - 991MHz, 900mV
P2 - 1138MHz, 950mV
P3 - 1290MHz, 1000mV
P4 - 1416MHz, 1050mV
P5 - 1540MHz, 1100mV
P6 - 1620MHz, 1070mV
P7 - 1670MHz, 1120mV
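For reference, the P-state table above can be written down as plain data and sanity-checked. A tiny sketch (this only models the numbers from the post - WattTool itself is a GUI, and nothing here talks to the driver):

```python
# The WattTool P-state table from the post above, as data, plus a quick
# check that core clocks still rise monotonically across DPM states even
# where the voltage dips (P6 is undervolted below P5).

PSTATES = [  # (state, core MHz, mV)
    (1, 991, 900),
    (2, 1138, 950),
    (3, 1290, 1000),
    (4, 1416, 1050),
    (5, 1540, 1100),
    (6, 1620, 1070),
    (7, 1670, 1120),
]

def freqs_monotonic(table):
    """True if each DPM state clocks higher than the last."""
    mhz = [f for _, f, _ in table]
    return all(a < b for a, b in zip(mhz, mhz[1:]))

print(freqs_monotonic(PSTATES))  # True
```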


----------



## abe_joker

I've seen aftermarket coolers for the Vega 64, but what about AIO coolers? I'm thinking about putting an AIO on my Vega 64 but I dunno which one is compatible or what it would require. Any ideas?


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> As you can see here I've got all of them cooled
> 
> 
> 
> Also I guess I did TIM badly the first time. This time I spread the TIM carefully on the HBM modules (I have a molded die by the way) and now the temperature is a rock solid 62ºC on the HBM2 under full memory load and the GPU tops out at 50ºC(!!!!)
> 
> Even at max RPM the fans are inaudible even with the case open.


someone linked your picture on reddit

__
https://www.reddit.com/r/6wrdph/morpheus_ii_on_vega_rx_64/










also, if you wanted to reduce core temperature, you could use liquid thermal compound/paste and it will drop the temperature by at least 4-5C or so


----------



## punchmonster

Currently running the card at 1630MHz/1075mV with memory at 1100MHz; this snap was taken during a torture test. It holds clocks rock solid and the HBM2 isn't even loosening its timings.



As you can see temps are very reasonable, so I'd rather not risk liquid metal. Currently using non-conductive TIM and it's doing fine really. My card can't get over 1105MHz memory regardless of temp anyway, so lower temps won't help me beyond this point.
Quote:


> Originally Posted by *PontiacGTX*
> 
> someone linked your picture on reddit
> 
> __
> https://www.reddit.com/r/6wrdph/morpheus_ii_on_vega_rx_64/
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> also, if you wanted to reduce core temperature, you could use liquid thermal compound/paste and it will drop the temperature by at least 5c or so


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> currently running the card at 1630Mhz/1075mV with memory at 1100Mhz, this snap is taken during a torture test. It holds clocks rock solid and the HBM2 isn't even loosening it's timings.
> 
> 
> 
> As you can see temps are very reasonable so I'd rather not risk liquid metal. Currently using non-conductive TIM and it's doing fine really. My card can't get over 1105Mhz memory regardless of temp anyways so lower temps wont help me beyond this point.


Is the HBM2 stack making contact with the heatsink? Because the GPU core seems to be running cooler.


----------



## punchmonster

Yes. It's a molded die, the top is uniform. The HBM2 stacks always run a bit hotter than the core. Was the case for my Fiji too.
Quote:


> Originally Posted by *PontiacGTX*
> 
> Is the HBM2 stack making contact with the heatsink? Because the GPU core seems to be running cooler.


----------



## abe_joker

Quote:


> Originally Posted by *punchmonster*
> 
> Yes. It's a molded die, the top is uniform. The HBM2 stacks always run a bit hotter than the core. Was the case for my Fiji too.


Could you try to do some benchmarks? and compare them to previous benchmarks if you did some? (also, did you UV?)


----------



## ducegt

CLU gave me zero difference from Gelid Extreme on an R9 285 under load. Idle was about 5 to 7C lower. Waste of time and not worth any risk for sure.


----------



## PontiacGTX

Quote:


> Originally Posted by *ducegt*
> 
> CLU made gave me zero difference from gelid extreme on a R9 285 under load. Idle was about 5 to 7C lower. Waste of time and not worth any risk for sure.


Probably the GPU wasn't really making proper contact with the new TIM application, or the fans ran at a different speed. Many people have reported that the temperature drop can be around 3-5C, and it depends from GPU to GPU.


----------



## punchmonster

Stock core clocks, 1070mV undervolt.

Here's TimeSpy


Here's Rainbow Six: Siege, up from 138 fps average previously.


Both were done with a Ryzen 1700 at stock clocks, so I might have a bunch of CPU-bound frames. If you want any other benchmarks that are free done, tell me which.

Quote:


> Originally Posted by *abe_joker*
> 
> Could you try to do some benchmarks? and compare them to previous benchmarks if you did some? (also, did you UV?)


----------



## abe_joker

Edit:

Didn't read correctly. Thanks for sharing the benchmark! What was your score in timespy before?


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> Stock core clocks, 1070mV undervolt.
> 
> Here's TimeSpy
> 
> 
> Here's Rainbow Six: Siege, up from 138 fps average previously.
> 
> 
> Both were done with a Ryzen 1700 at stock clocks, so I might have a bunch of CPU-bound frames. If you want any other benchmarks that are free done, tell me which.


So did you flash the VEGA 64 Liquid Cooling's BIOS on your card to increase the power limit?


----------



## punchmonster

Nope, this is the stock air BIOS. I have no need for the extra power limit right now at 1440p. If it's needed down the road I'll do it.
Quote:


> Originally Posted by *PontiacGTX*
> 
> So did you flash the VEGA 64 Liquid Cooling's BIOS on your card to increase the power limit?


no idea, sorry







forgot to save my results
Quote:


> Originally Posted by *abe_joker*
> 
> Edit:
> 
> Didn't read correctly. Thanks for sharing the benchmark! What was your score in timespy before?


----------



## ontariotl

My goodies finally arrived!


----------



## Pholostan

Couldn't you just keep the stock baseplate and backplate? Or is the stock baseplate too tall? I was thinking of using a Heatkiller universal waterblock on a reference Vega, and then keep the stock baseplate for VRM cooling.

And why heatsinks on the chokes? They don't really need it.


----------



## PontiacGTX

Quote:


> Originally Posted by *Pholostan*
> 
> And why heatsinks on the chokes? They don't really need it.


----------



## pillowsack

I just wanted to say I just went back to the LC Sapphire Vega 64 bios for one reason:

On the Air 64 bios with my EKWB block the card will throttle itself just from doing 1660/1100, but with LC bios i'm holding 1640/1100 steady.

I can't get the core to stay at 1660 like my other OC, but it's better than my HBM dropping to 167MHz instead of 1100. I just haven't done anything with voltage because I feel like it isn't changing squat, but this is the best I can get.

I also downclocked my 6800K to 4.2ghz for power usage/heat purposes.


----------



## abe_joker

Quote:


> Originally Posted by *pillowsack*
> 
> I just wanted to say I just went back to the LC Sapphire Vega 64 bios for one reason:
> 
> On the Air 64 bios with my EKWB block the card will throttle itself just from doing 1660/1100, but with LC bios i'm holding 1640/1100 steady.
> 
> I can't get the core to stay at 1660 like my other oc, but it's better than my HBM going to 167mhz versus 1100. I just haven't done anything with voltage because I feel like it isn't changing squat, but this is the best I can get.
> 
> I also downclocked my 6800K to 4.2ghz for power usage/heat purposes.


How can I change the bios of my Air version?


----------



## pillowsack

Quote:


> Originally Posted by *abe_joker*
> 
> How can I change the bios of my Air version?


ATIWinflash 2.77 works with Vega. Make sure to save a copy of your bios from ATIWinflash. You can flash in Windows with administrative rights, just make sure you close pretty much everything on your desktop, most importantly anything that interacts with the GPU (Afterburner, HWiNFO, Wattman/WattTool).

BIOS file:

Sapphire_Vega64_AIO_HP.zip 135k .zip file


ATIWinflash 2.77

ATIWinflash2.77.zip 1215k .zip file


----------



## abe_joker

Quote:


> Originally Posted by *pillowsack*
> 
> ATIWinflash 2.77 works with Vega. Make sure to save a copy of your bios from ATIWinflash. You can flash in windows with administrative rights, just make sure you close mostly anything on your desktop and most importantly anything that interacts with GPU(afterburner, HWinfo, wattman/wattool)
> 
> Sapphire_Vega64_AIO_HP.zip 135k .zip file


I can flash Sapphire's onto my XFX?


----------



## pillowsack

Quote:


> Originally Posted by *abe_joker*
> 
> I can flash Sapphire's onto my XFX?


Yes, all the reference cards are basically the same.

I'm just going to assume that the air bios only allows x amount of watts or TDC before throttling. The LC bios does not do that, although my card can't run the 1700 frequency yet.

Make sure you downclock, or test and see if it will run 1700.


----------



## CaptainTom

Quote:


> Originally Posted by *kundica*
> 
> New Beta for HWiNFO64 v5.57-3235 lists HBM and core voltage correctly now.


It seems to be more accurate in general now, but are we sure this is correct?

It is still listing my HBM voltage as 1.356v, and it is saying my GPU is only using 200w. Although I guess it is set to only 1000mV on both memory and core.


----------



## kundica

Quote:


> Originally Posted by *CaptainTom*
> 
> It seems to be more accurate in general now, but are we sure this is correct?
> 
> It is still listing my HBM voltage as 1.356v, and it is saying my GPU is only using 200w. Although I guess it is set to only 1000mV on both memory and core.


The default voltage for HBM on the 64 is 1.35v. The voltage you can change in Wattman is not the direct HBM voltage but something else, VRM maybe? I'm not sure.


----------



## CaptainTom

Quote:


> Originally Posted by *kundica*
> 
> The default voltage for HBM on the 64 is 1.35v. The voltage you can change in Wattman is not the direct HBM voltage but something else, VRM maybe? I'm not sure.


I have been off of this forum for a couple days. I recall some people debating if the voltage fields in Wattman actually applied to the correct part of the card (HBM voltage actually applies to core voltage).

Did anyone confirm if this is true?


----------



## ashman95

OK, I'm finding the issue is keeping the memory from throttling, i.e. maintaining 945MHz all the way through benches - when running SPECviewperf there are 2 occasions when HBM throttles: when the script is loading, and then again while it is running. I've managed to increase SPEC scores by playing with the frequencies and voltages, using the Beta drivers (seemed more reliable than 17.1 Vega Frontier). Also installed the EK Frontier waterblock; temps haven't gone over 43C.
***Seems either increasing the frequency at the power stage where throttling occurs will most often keep memory from throttling. I say 'either' because sometimes, if the frequency is high or the voltage is either too high or too low on a power stage, it can affect where or at what power stage the benchmark operates. I will be pushing to 1700mhz & 18mhz after I run tests in Prey - often, just because I can run countless SPEC benches, all is lost after playing a game. Will keep the same settings from the SPEC tests.


----------



## CryWin

I managed to grab a Sapphire Vega 56 on Amazon for $399 yesterday. No estimated ship date or any information though.


----------



## Tyrael

Hey guys,
I have a question regarding water cooling the RX vega 56. I bought one and I want to use a water cooler on it. I am already using a nzxt x62 on my 1800x, so it has to be a second loop. I found two solutions at this point:

EK-FC Radeon Vega and Alphacool Eiswolf 120 GPX Pro ATI RX Vega M01 - Black.

Anyone of you guys has experience with this companies?
Which one is more preferable?
On the EK solution, which pump /radiator should I get? Like I said it is a loop only for the gpu.


----------



## Chaoz

Quote:


> Originally Posted by *Tyrael*
> 
> Hey guys,
> I have a question regarding water cooling the RX vega 56. I bought one and I want to use a water cooler on it. I am already using a nzxt x62 on my 1800x, so it has to be a second loop. I found two solutions at this point:
> 
> EK-FC Radeon Vega and Alphacool Eiswolf 120 GPX Pro ATI RX Vega M01 - Black.
> 
> Anyone of you guys has experience with this companies?
> Which one is more preferable?
> On the EK solution, which pump /radiator should I get? Like I said it is a loop only for the gpu.


I'm currently using the EK Acetal Nickel block. It looks really good; I can recommend it. Almost everything in my loop is from either EKWB or Bitspower.

Alphacool is quite a good company. I haven't bought anything from them myself, but a few friends have and they say the quality is quite decent.
The Alphacool kit is an AIO for your GPU.

This is what it would cost for you to liquid cool your GPU.


Tbh, if you're going for the EK block, make your PC a full custom loop. It would look odd to have a custom loop on your GPU with an X62 on your CPU, and a CPU block only adds about $60 on top of the cost of a GPU block + rad, etc.

This is what it would cost for a full loop, CPU included.


You could get an EKWB kit, so you only have to get the Vega EK block and 2 extra fittings, that's it.


----------



## Mandarb

Well, my RX64 Air arrived today, started playing with it.

Am I right that WattMan voltage control is broken? Tried undervolting it but there's no change in behaviour.

Does WattTool work? Currently trying with it but not yet sure if it does.

Edit: well, WattTool appears to work for undervolting. Clocks seem more stable.
Does WattTool also work for memory OC? It says 500MHz without me touching it, which seems wrong.


----------



## betaflame

Quote:


> Originally Posted by *punchmonster*
> 
> Well installed my Morpheus II. Went from thermal throttling at 85ºC at 50% power limit to hovering around 60º. HBM2 went from 80ºC to 60ºC under load. This is all with default fan settings, so with my new fans roughly ~700RPM and practically silent
> 
> I'll report on underclocking/overclocking results in a bit.


Can I bother you to measure the height from the top of the GPU PCB to the bottom of the Fans in mm? (full height).

I want to know if I could fit that in an Ncase M1.

Thanks


----------



## theor14

Is it possible to keep the backplate with the Morpheus II?


----------



## punchmonster

43mm
Quote:


> Originally Posted by *betaflame*
> 
> Can I bother you to measure the height from the top of the GPU PCB to the bottom of the Fans in mm? (full height).
> 
> I want to know if I could fit that in an Ncase M1.
> 
> Thanks


you cannot
Quote:


> Originally Posted by *theor14*
> 
> Is it possible to keep the backplate with the Morpheus II?


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> 43mm
> you cannot


he means measure the thickness of the whole card(or height depends on how you place the card)


----------



## Mandarb

Alright, been playing around with the card. I can't undervolt any further than 1000mV; WattMan does work, and WattTool works exactly the same in this regard.

Currently I set the card to 1000mV and memory to 1100MHz. HWiNFO Beta confirms the changes and shows the card hitting those targets. The core frequency hovers around 1540MHz as it thermal throttles.

I guess the 1000mV only works because the card isn't going higher than 1550MHz; otherwise I would probably run into issues. Haven't seen any corruption yet, but I've only run Unigine Heaven, Unigine Superposition and Fire Strike. No crashes to report.

Here's a screen from HWinfo:


http://imgur.com/LFSOK


Edit: I just noticed I'm getting the mouse lag in the browser too. What was that issue? A bug in the driver?


----------



## Mandarb

Quote:


> Originally Posted by *punchmonster*
> 
> Currently running the card at 1630MHz/1075mV with memory at 1100MHz; this snap was taken during a torture test. It holds clocks rock solid and the HBM2 isn't even loosening its timings.
> 
> 
> 
> As you can see temps are very reasonable, so I'd rather not risk liquid metal. Currently using non-conductive TIM and it's doing fine really. My card can't get over 1105MHz memory regardless of temp anyway, so lower temps won't help me beyond this point.


Where would you see that the timings are getting loosened?


----------



## betaflame

Quote:


> Originally Posted by *punchmonster*
> 
> 43mm


Sorry, I meant the entire assembled card with cooler and fans (red line in picture):


----------



## ontariotl

Still haven't installed my Vega 64 back into the PC. For a few hours I've been doing some modding on the waterblock. I'll hold off installing it until tomorrow, when the backplate for my motherboard finally arrives. I hate buying a used motherboard without a backplate.

Then I'm debating on flashing my card with the AIO bios as well.


----------



## IF6WAS9

I've replaced the stock fan on my Liquid Vega 64 with a Corsair ML120, and I'm wondering if anyone has an educated guess as to whether or not I can run a second fan off of the card. The stock fan is 0.225A and the replacement fans would be the same. I know motherboard headers generally support 1 or 2 amps, but I've no idea about video cards.
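A rough sanity check is just summed rated fan current against the header's budget. A minimal sketch, where the 1A header limit is an assumption for illustration only (the card's actual fan-header rating isn't published):

```python
# Current-budget check for running multiple fans off one header.
# HEADER_LIMIT_A is an assumed value for illustration; the card's real
# fan-header limit is not published, so verify before relying on it.

FAN_CURRENT_A = 0.225   # rated draw of one Corsair ML120 (amps)
HEADER_LIMIT_A = 1.0    # assumed fan-header budget (amps)

def fans_within_budget(num_fans: int,
                       fan_current: float = FAN_CURRENT_A,
                       header_limit: float = HEADER_LIMIT_A) -> bool:
    """True if the combined rated draw stays within the header budget."""
    return num_fans * fan_current <= header_limit

print(fans_within_budget(2))  # two ML120s draw 0.45A combined
```

Rated current also ignores startup inrush, so leave some margin either way.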


----------



## Chaoz

Quote:


> Originally Posted by *ontariotl*
> 
> Still haven't installed my Vega 64 back into the PC. For a few hours I've been doing some modding with the waterblock. I'll hold off installing it back in the PC tomorrow when my backplate for the motherboard finally arrives. I hate buying a used motherboard without a backplate.
> 
> Then I'm debating on flashing my card with the AIO bios as well.


Was thinking about doing that as well. Is it difficult to put an LED behind the cover?


----------



## ontariotl

Quote:


> Originally Posted by *Chaoz*
> 
> Was thinking about doing that aswell. Is it difficult to put a led behind the cover?


It's a little tedious but not impossible. I had to shave the 3mm LEDs on both sides, and still had to Dremel the inside of the plate cover to have them fit properly with the sticker overlay. I thought about ordering some SMD LEDs, as they would be smaller; maybe I'll mod it again at a later date. I also had to go with 3 LEDs for it to light the lettering properly.


----------



## Roboyto

You can add me as a PowerColor Vega 64 owner



Have my EK block here, just haven't installed yet. Playing around in Uber L337 Leaf Blower mode first









Running 17.8.1 on Win 10 x64, as 17.8.2 was super crashy. Running in my HTPC with a bargain ASRock AB350M and a 1700 OC'd to 3.7GHz with 8GB of 3200 DDR4.

I know clock reporting is probably off, at least from what I'm reading, but my Time Spy score kept increasing up to a 1742 core clock. Anything beyond that caused a crash and hard lock/blue screen.

Haven't messed with undervolting yet... but as it sits, it doesn't appear that my HBM is very capable of an OC. 970 is the best I've had for an increase in scores; beyond that it's a decrease or crashes/lockups.

Max Settings: 1742 core 970 HBM - Stock Voltage - 50% Power - 3000-3250 Forced Fan Speed

Best TimeSpy - 7719 overall 7651 GS

Best Firestrike - 17672 overall 23950 GS

Best FS Xtreme - 10511 overall 11464 GS

FS Xtreme Tess OFF - 11143 overall 12373 GS

Someone had mentioned that disabling Tess in the drivers did not show up as an invalid result, but mine came up that way: https://www.3dmark.com/3dm/21854897?

*FireStrike Spreadsheet*



*Time Spy Spreadsheet*



*Firestrike*



*FS Xtreme*



*FS Xtreme Tess OFF*



*TimeSpy*


----------



## punchmonster

HBM2 is VERY temperature sensitive. You're probably just being temp limited.
Quote:


> Originally Posted by *Roboyto*
> 
> You can add me as a PowerColor Vega 64 owner
> 
> 
> 
> 
> Have my EK block here, just haven't installed yet. Playing around in Uber L337 Leaf Blower mode first
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Running 17.8.1 on Win 10 x64 as 17.8.2 was super crashy. Running in my HTPC with bargain ASRock AB350M and 1700 OC'd to 3.7 with 8GB 3200 DDR4
> 
> I know clock reporting is probably off, at least from what I'm reading, but Time Spy score kept increasing up to 1742 core clock. Anything thereafter would cause crash and hard lock/blue screen.
> 
> Haven't messed with undervolting yet...but as it sits it doesn't appear that my HBM is very capable of an OC. 970 is best I've had for an increase in scores. Beyond that it is a decrease or crashes/lockups.
> 
> Max Settings: 1742 core 970 HBM - Stock Voltage - 50% Power - 3000-3250 Forced Fan Speed
> 
> Best TimeSpy - 7719 overall 7651 GS
> Best Firestrike - 17672 overall 23950 GS
> Best FS Xtreme - 10511 overall 11464 GS
> FS Xtreme Tess OFF - 11143 overall 12373 GS
> 
> Someone had mentioned disabling Tess in drivers did not display as invalid result, but mine came up that way: https://www.3dmark.com/3dm/21854897?
> 
> *FireStrike Spreadsheet*


----------



## Roboyto

Quote:


> Originally Posted by *punchmonster*
> 
> HBM2 is VERY temperature sensitive. You're probably just being temp limited.


How accurate is temp sensor in HWInfo? With my leaf blower spinning at 3500 dB per second







I'm peaking in high 80's


----------



## PontiacGTX

Quote:


> Originally Posted by *Roboyto*
> 
> How accurate is temp sensor in HWInfo? With my leaf blower spinning at 3500 dB per second
> 
> 
> 
> 
> 
> 
> 
> I'm peaking in high 80's


3500 dBA? You mean RPM?


----------



## Roboyto

Quote:


> Originally Posted by *PontiacGTX*
> 
> 3500Dba? you mean RPM?


Yes....and no







After my ears stop bleeding I'll let you know


----------



## Chaoz

Quote:


> Originally Posted by *ontariotl*
> 
> It's a little tedious but not impossible. I had to shave the 3mm led on both sides and still had to dremel the inside of the plate cover to have them fit with the sticker overlay properly. I thought about ordering some SMD's as they would be smaller. Maybe mod it again at a later date. I also had to go with 3 led's for it to light the lettering properly.


Cool, thanks very much. Might try it now that I know it's possible.

You can unscrew the Radeon bracket without draining, right? I don't want to drain my loop again.


----------



## ontariotl

Quote:


> Originally Posted by *Chaoz*
> 
> Cool, thanks very much. Might try it now that I know it's possible.
> 
> You can unscrew the Radeon bracket without flushing, right? As I don't want to flush ly loop again.


Yeah you can just remove the bracket with the two allen key screws. It will come out easily and no need to drain as it's just a cover. I then used the allen key to poke the sticker out through the hole in the back.


----------



## Roboyto

Quote:


> Originally Posted by *punchmonster*
> 
> HBM2 is VERY temperature sensitive. You're probably just being temp limited.


I plan on running some CLU or Conductonaut since my block is nickel plated. Either some thermal tape or HondaBond automotive liquid gasket is going around the die to protect the sensitive parts.

Thermal tape worked excellently on my delidded Haswell chips..super easy to remove if necessary.

And I just did some CLU on my steal-of-a-deal "refurb/open-box" (AKA new) G1 Gaming GTX 1060 6GB that I snatched for $199. Spread some HondaBond for protection and then a thin coat of CLU on the die. Got a 2°C temp drop at idle, even with the fans not spinning. Temp under load didn't change, but the fans spin around 150 RPM slower on the auto fan curve. Should have set a fixed fan speed to see how temps under load were affected... hindsight I tell ya, gets me every time


----------



## punchmonster

Very. It starts throttling speed and voltage at 80°C, and it loosens timings at around 75°C, so ideally you want to keep your HBM below 70°C.
Quote:


> Originally Posted by *Roboyto*
> 
> How accurate is temp sensor in HWInfo? With my leaf blower spinning at 3500 dB per second
> 
> 
> 
> 
> 
> 
> 
> I'm peaking in high 80's
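The thresholds punchmonster describes above can be captured in a tiny helper for eyeballing HWiNFO logs. The cut-offs are his observations from this thread, not official AMD figures:

```python
def hbm_state(temp_c: float) -> str:
    """Classify an HBM2 temperature against the thresholds observed above."""
    if temp_c >= 80:
        return "throttling clocks and voltage"
    if temp_c >= 75:
        return "loosening timings"
    if temp_c >= 70:
        return "warm, headroom shrinking"
    return "ok"

print(hbm_state(85))  # high 80s means the memory is already throttling
```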


----------



## Roboyto

Quote:


> Originally Posted by *punchmonster*
> 
> Very. It starts throttling speed and voltage at 80. It loosens timings at 75ish. So you want to keep your HBM below 70 ideally


Roger that. Will be back with more results once I pull Vega out of HTPC and drop into other rig with water loop.


----------



## Soggysilicon

Quote:


> Originally Posted by *Irev*
> 
> another quick Q is the bios on the v64.... which way does the dipswitch go for the standard bios vs 2nd bios ??


Who knows... my manual was so generic it mentioned an install disc that the card, of course, did not come with, let alone any indication as to which slider position is BIOS 1 and which is 2.


----------



## Soggysilicon

Quote:


> Originally Posted by *Tyrael*
> 
> Hey guys,
> I have a question regarding water cooling the RX vega 56. I bought one and I want to use a water cooler on it. I am already using a nzxt x62 on my 1800x, so it has to be a second loop. I found two solutions at this point:
> 
> EK-FC Radeon Vega and Alphacool Eiswolf 120 GPX Pro ATI RX Vega M01 - Black.
> 
> Anyone of you guys has experience with this companies?
> Which one is more preferable?
> On the EK solution, which pump /radiator should I get? Like I said it is a loop only for the gpu.


EK blocks are excellent (save the fiasco some years back with dodgy nickel plating). I own 4 of them and have never had a day's trouble outside normal maintenance and wear 'n' tear. As far as pumps go, Laing pumps, or rebrands of Laing pumps, are all I would ever recommend.


----------



## abe_joker

Quote:


> Originally Posted by *punchmonster*
> 
> HBM2 is VERY temperature sensitive. You're probably just being temp limited.


How did you flash the BIOS in both the primary and the backup?


----------



## Soggysilicon

Quote:


> Originally Posted by *ontariotl*
> 
> Still haven't installed my Vega 64 back into the PC. For a few hours I've been doing some modding with the waterblock. I'll hold off installing it back in the PC tomorrow when my backplate for the motherboard finally arrives. I hate buying a used motherboard without a backplate.
> 
> Then I'm debating on flashing my card with the AIO bios as well.


Hey, Nice Work YO!


----------



## Zero4549

Just got my Vega 64.

Is there any way to adjust the clocks and voltages for GPU states 0-6 and memory states 0-2 in WattMan? I seem to only be able to adjust the last state in either category.

I really want to undervolt across the board like I did with my RX 480, not just on the boost step :/
(I can appreciate the removal of the convoluted acoustic limit thing though, even if I did eventually learn how to actually use it)


----------



## Irev

anyone know when a working version of MSI AB will release for VEGA?


----------



## LionS7

Does somebody know where I can find the Vega Frontier Edition Liquid BIOS? Or do I need to send a message to PCPer...


----------



## ashman95

I am able to get to 1700MHz running SPECview tests; I used Excel to plot a linear graph. Following Vega's original settings as a guide, the lower frequencies are spaced wide apart while the higher frequencies are within about 3% of each other. It would be worth keeping different profiles for each program; it seems Prey will run @ 1700MHz for a while but then crash.
Post your frequency profiles please.


----------



## ashman95

Been testing different frequency profiles -


----------



## Spectre73

Question regarding the EK Block for Vega with factory backplate.

The manual of the block does not say how to install the factory backplate. Are there (longer) screws included for the backplate or how is it installed?

The Installation by EK only covers the block without any backplate at all.


----------



## ashman95

Quote:


> Originally Posted by *Spectre73*
> 
> Question regarding the EK Block for Vega with factory backplate.
> 
> The manual of the block does not say how to install the factory backplate. Are there (longer) screws included for the backplate or how is it installed?
> 
> The Installation by EK only covers the block without any backplate at all.


I didn't get the backplate for my EK block, but it came with everything I needed, so I would bet yours will too. BTW, my temps haven't gone over 43°C under any circumstances.


----------



## Tyrael

Quote:


> Originally Posted by *Chaoz*
> 
> I'm currently using the EK Acetal Nickel block. It looks really good. I can recommend it. Almost everything in my loop is from either EKWB or Bitspower.
> 
> Alphacool is quite a good company, haven't bought anything from them but a few friends have and they say the quality is quite decent.
> The Alphacool kit is an AIO for your GPU.
> 
> This is what it would cost for you to liquid cool your GPU.
> 
> 
> Tbh, if you're going for the EK block, make your PC a full custom loop. It would look stupid if you have a custom loop on your GPU with an X62 on your CPU. A CPU block costs $60 more on your overall cost of a gpu block + rad etc.
> 
> This is what it would cost for a full loop, CPU included.
> 
> 
> You could get an EKWB kit, so you only have to get the Vega EK block and 2 extra fittings, that's it.


Thank you very much. I will have a deeper look at it.


----------



## ashman95

Quote:


> Originally Posted by *Tyrael*
> 
> Thank you very much. I will have a deeper look at it.


Check the instructions; you may have to reuse the Vega screws.


----------



## Spectre73

Quote:


> Originally Posted by *ashman95*
> 
> I didn't get the backplate for my EK block but it came with everything I needed- I would bet on it BTW my temps haven't gone over 43C under any circumstance.


Maybe I was not very clear. You can order an EK backplate, or you can reuse the factory (preinstalled) backplate.

The EK manual only covers the installation of the waterblock WITHOUT any backplate at all.

Since the screws are all the same length (or so I think), can I install the factory backplate? I do not know if the factory screws will fit the EK block, or if the EK screws will fit with the factory backplate, since it adds height, and that is not covered in the manual. Sorry, I am not a native speaker, if my point does not come across.


----------



## Newbie2009

Quote:


> Originally Posted by *Irev*
> 
> anyone know when a working version of MSI AB will release for VEGA?


How long is a piece of string? Has it ever worked well with AMD cards? AMD keeps breaking it.
Quote:


> Originally Posted by *Spectre73*
> 
> Question regarding the EK Block for Vega with factory backplate.
> 
> The manual of the block does not say how to install the factory backplate. Are there (longer) screws included for the backplate or how is it installed?
> 
> The Installation by EK only covers the block without any backplate at all.


Quote:


> Originally Posted by *Spectre73*
> 
> Maybe I was not very clear. You can order an EK backplate or you can reuse the factory (preinstalled) backplate.
> 
> The EK manual only covers the Installation of the waterblock WITHOUT any backplate at all.
> 
> Since the screws are all the same length (or so I think) can l install the (factory) backplate? I do not know if the factory screws will fit the EK block or if the EK screws will fit with the factory backplate, since it adds height that is not covered in the manual. Sorry, I am no native Speaker, if my Point does not come across.


You can use stock backplate with the block no problem.


----------



## Tyrael

Quote:


> Originally Posted by *betaflame*
> 
> Can I bother you to measure the height from the top of the GPU PCB to the bottom of the Fans in mm? (full height).
> 
> I want to know if I could fit that in an Ncase M1.
> 
> Thanks


Did you have to modify any screws, or could you just mount it on the GPU?


----------



## Mandarb

Seems to be pretty good. Memory currently runs at 1095MHz and the GPU at 1560MHz @ 1000mV. I can't really check how much voltage I'd need for any frequency beyond that, as it throttles because of temps and never goes higher.

I'd need to mod it with an aftermarket cooler to find out, and now I'm wondering whether it's a good idea to void my warranty or not...

I do have a Vega RX56 incoming too and will see how that does on stock, then sell off the one I'm not going to keep.

Suggestions? Also, I'd like to keep my PC on air (Define R5, NH-D15S).

Edit: ****ed up whether by writing wether. ^^'


----------



## ashman95

Quote:


> Originally Posted by *Spectre73*
> 
> Maybe I was not very clear. You can order an EK backplate or you can reuse the factory (preinstalled) backplate.
> 
> The EK manual only covers the Installation of the waterblock WITHOUT any backplate at all.
> 
> Since the screws are all the same length (or so I think) can l install the (factory) backplate? I do not know if the factory screws will fit the EK block or if the EK screws will fit with the factory backplate, since it adds height that is not covered in the manual. Sorry, I am no native Speaker, if my Point does not come across.


I'll let you know; I was thinking of putting the Vega backplate on.
Quote:


> Originally Posted by *Mandarb*
> 
> 
> 
> Seems to be pretty good. Memory runs currently at 1095MHz and GPU 1560MHz @ 1000mV. Can't really check how much voltage I'd need for any frequency beyond that as it throttles because of temps and never goes higher.
> 
> I'd need to mod it with an aftermarket cooler to find out, and now I'm thinking wether it's a good idea to void my warranty or not...
> 
> I do have a Vega RX56 incoming too and will see how that does on stock, then sell off the one I'm not going to keep.
> 
> Suggestions? Also, I'd like to keep my PC on air (Define R5, NH-D15S).


Nothing like a water cooled Vega - I have Vega FE


----------



## Newbie2009

Quote:


> Originally Posted by *Mandarb*
> 
> 
> 
> Seems to be pretty good. Memory runs currently at 1095MHz and GPU 1560MHz @ 1000mV. Can't really check how much voltage I'd need for any frequency beyond that as it throttles because of temps and never goes higher.
> 
> I'd need to mod it with an aftermarket cooler to find out, and now I'm thinking wether it's a good idea to void my warranty or not...
> 
> I do have a Vega RX56 incoming too and will see how that does on stock, then sell off the one I'm not going to keep.
> 
> Suggestions? Also, I'd like to keep my PC on air (Define R5, NH-D15S).


Same mV; mine will hit about 1610 under water. You might get higher if you apply a 1-2% overclock; the newest drivers and overclocks act weird. The card won't go past a certain clock range if you don't overclock, even if it's not close to your target clock.


----------



## Mandarb

Quote:


> Originally Posted by *Newbie2009*
> 
> Same MV mine will hit about 1610 under water. You might get higher if you apply a 1-2% overclock, the newest drivers and overclocks act weird. Won't go past a certain clock range if you don't overclock, even if not close to your target clock.


Basically what you're saying is that I won't get a lot higher frequency out of the card, I'll just make it quieter?


----------



## punchmonster

73mm, but that simply depends on what fans you get.
Quote:


> Originally Posted by *betaflame*
> 
> Sorry, I meant the entire assembled card with cooler and fans (red line in picture):


Mostly depends on what you want. You can hit ~1750 trivially, but you'll have to increase voltage. Most are aiming for lower temps/noise and stable clocks though.
Quote:


> Originally Posted by *Mandarb*
> 
> Basically what you're saying is that I won't get a lot higher frequency out of the card, I'll just make it quieter?


----------



## 113802

So I modded my RX Vega 64 XTX so I can easily replace the radiator fan. Unfortunately Amazon lost the 2150 RPM PWM Gentle Typhoons I ordered.

Things needed:

https://www.amazon.com/gp/product/B005ZKZEQA/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
https://www.amazon.com/gp/product/B01FFFHDKO/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1


----------



## Newbie2009

Quote:


> Originally Posted by *Mandarb*
> 
> Basically what you're saying is that I won't get a lot higher frequency out of the card, I'll just make it quieter?


No I'm saying it will not try to hit your target clock unless you overclock it past your target clock.

Madness I know.


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> So I modded my RX Vega 64 XTX so I can easily replace the radiator fan. Unfortunately Amazon lost the 2150 RPM PWM Gentle Typhoons I ordered.
> 
> Things needed:
> 
> https://www.amazon.com/gp/product/B005ZKZEQA/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> https://www.amazon.com/gp/product/B01FFFHDKO/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1


I ordered the same adapter, but I haven't modded my card yet since it's being replaced. The new one should arrive today. The extension is great since it allows the fan connection to exist outside the chassis if you want to swap fans. I'll be using a be quiet! silentwings 3 120mm 2200rpm PWM fan. My fingers are crossed the new card doesn't suck.

Also, I just read this post on OCuk from AMDMatt regarding clocks on Vega. Thought you all might find it informative.
Quote:


> Clock fluctuation is a feature of ACG = Advanced Clock Generator, which is enabled for DPM states 5/6/7. The clock will naturally fluctuate when there are current spikes. When current spikes occur, voltage droops; to avoid instability during the droop, frequency must be decreased to match. This is why you see the fluctuation in the frequency.
> 
> To maximise performance, set a +50% power limit and increase the state 7 frequency, and voltage if required. I use state 6 as the stock max boost frequency and undervolt by -0.075V. This works well for my Vega and allows a nice perf increase over the stock boost clock/voltage (1752MHz)
> 
> I recommend enabling HBCC globally too, even with just 16GB of system memory.


He posted a video and screenshot along with it using 17.8.2 drivers.


Spoiler: Warning: Spoiler!
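AMDMatt's recipe boils down to three knobs: raise the power limit, push state 7, and pin state 6 to the stock boost clock minus 75mV. A sketch of it, where the stock P-state table and the size of the state 7 bump are made-up illustration values, not a dump from any real card:

```python
# Sketch of AMDMatt's tuning recipe: +50% power limit, state 7 pushed
# higher, state 6 pinned to the stock max boost clock undervolted 75mV.
# All numbers here are illustrative placeholders.

stock_boost_mhz = 1630
pstates = {6: {"mhz": 1536, "mv": 1150},
           7: {"mhz": 1630, "mv": 1200}}

def apply_recipe(states, boost_mhz, undervolt_mv=75, state7_bump_mhz=120):
    tuned = {k: dict(v) for k, v in states.items()}  # don't mutate input
    tuned[6]["mhz"] = boost_mhz          # state 6 = stock max boost clock
    tuned[6]["mv"] -= undervolt_mv       # the -0.075V undervolt
    tuned[7]["mhz"] += state7_bump_mhz   # raise state 7's target
    return tuned, 50                     # +50% power limit

tuned, power_limit = apply_recipe(pstates, stock_boost_mhz)
print(tuned, power_limit)
```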


----------



## beatfried

Do the EK blocks come with the single-slot bracket?
Couldn't find that info in the description..


----------



## pillowsack

Quote:


> Originally Posted by *beatfried*
> 
> do the ek blocks come with the 1 slot blends?
> couldn't find that info in the description..


My EKWB block came with some LONG screws, and enough MEDIUM and SMALL screws to install the waterblock alone, or, in my case, with the stock backplate (medium screws).

It also did come with the 1 slot bracket if you want to install it.


----------



## ashman95

Check my post: so far I've hit 1700MHz (SPECview has no problems). Prey and Sniper 3 crashed after some play. These are settings that worked with Prey, Sniper 3, Wolfenstein TOB, and SPECview 12.1, using the beta drivers and WattTool to set frequencies and power.
I'm about to switch to the 17.1 drivers to play with HBM frequencies. So far temps have not gone over 43°C (EK block, dedicated GPU loop).

I can definitely get better SPEC scores with a dedicated SPEC profile. These settings are a water-cooled, general-purpose profile.


----------



## Mandarb

How far can you OC your Vegas? Mine crashes at stock voltage (1200mV) @ 1650MHz.


----------



## rednow

Big problems with the P2-P5 GPU power states and the P2 HBM power state on the recent 17.8.2 drivers, but I managed to get this...


----------



## Newbie2009

Have to say, Vega 64 runs The Division way better than my old 290X Crossfire.


----------



## ontariotl

So I went to bed last night not happy with the result of my LED backlight, as it wasn't as bright as I would like (I'm only using three 3mm LEDs), so I decided to work on Revision 2 this morning while I wait for the package with my motherboard accessories to arrive today.

I took an LED strip, removed the epoxy coating on the top and removed the adhesive on the rear. Cut it up into single LEDs to align one with each letter of the decal. Then I Dremeled out the backplate so the LEDs could sit in the hole, with black tape holding them in place until it is screwed back onto the waterblock.




I did install a LED for the EK logo and was going to set it as blue, but it needed more dremeling and I decided to leave it for now.


In bright or dark, no hotspots or dimmed lettering.




Finally in my PC!





Just did some valley benchmark for temps. I set my overclock at 50% power, 1702 GPU, and 1005 for HBM2. Clocks were running at 1675 (still fluctuates here and there) and memory ran fine at 1005. Temps were 36C max.

Eventually I need to do more tweaking, but for now it's time to mount this Samsung CF791 on the wall and clean up this damn room!


----------



## punchmonster

you sure you don't want a slight diffuser for that? It looks a bit uneven. Just take a clear plastic something and sandpaper it a bit to make a diffusion sheet.
Quote:


> Originally Posted by *ontariotl*
> 
> So I went to bed last night not happy with the result of my LED backlight as it wasn't as bright as I would like as I'm only using three 3mm so I decided to work on Revision 2 this morning while I wait for my package with my accessories to install on the motherboard to arrive today.
> 
> I took a LED strip and removed the epoxy coating on the top and removed the adhesive on the rear. Cut them up to one a piece to align the LED's up with each letter of the decal. Then dremelled out the backplate so the LEDs could sit in the hole with black tape holding it in place until it is screwed back on to the waterblock.
> 
> I did install a LED for the EK logo and was going to set it as blue, but it needed more dremeling and I decided to leave it for now.
> 
> In bright or dark, no hotspots or dimmed lettering.
> 
> Finally in my PC!
> 
> Just did some valley benchmark for temps. I set my overclock at 50% power, 1702 GPU, and 1005 for HBM2. Clocks were running at 1675 (still fluctuates here and there) and memory ran fine at 1005. Temps were 31C max.
> 
> Eventually I need to do more tweaking, but for now its to mount this Samsung CF791 on the wall and clean up this damn room!


----------



## ontariotl

Quote:


> Originally Posted by *punchmonster*
> 
> you sure you don't want a slight diffuser for that? It looks a bit uneven. Just take a clear plastic something and sandpaper it a bit to make a diffusion sheet.


I'm already using a white piece of paper behind the lettering to act as a diffuser and it works perfectly. The camera still shows hotspots; to the naked eye there are none.


----------



## Mandarb

Le sigh.

I was happy with the undervolt on my Vega 64.

Now I'm having the issue that the card sometimes jumps to 1650MHz, and that frequency is only stable at 1200mV. Undervolt the card and it crashes and burns, because now it can boost up. Don't undervolt and it's hot but stable, because it can't boost that high.

Too bad you can't define a boost ceiling.:/


----------



## lightofhonor

Got a PowerColor 64 about a week ago and have been fairly happy so far, coming from a 380 2GB









Going to Waterblock soon. My 1700X needs it too


----------



## kundica

Quote:


> Originally Posted by *Mandarb*
> 
> Le sigh.
> 
> I was happy with the undervolt on my Vega 64.
> 
> Now I'm having the issue that the card sometimes jumps to 1650MHz, and that frequency is only stable at 1200mV. Undervolt the card, it crashes and burns because now it can boost up. Don't undervolt and it' hot but stable because it can't boost that high.
> 
> Too bad you can't define a boost ceiling.:/


What's your DP7 set to?


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> What's your DP7 set to?


I noticed the voltage has a huge influence on determining where it boosts to.

Even with -0.5% WattMan frequency it spikes up as soon as I enter 1080mV or higher.

Meaning: it's running at around 1575MHz, and suddenly it jumps to 1630MHz+, which crashes the card as it hits a wall at around 1620MHz, at which point it rapidly needs 1200mV. It crashes at around 1650MHz with 1200mV.

With -0.5% OC and 1070mV (from 1060-1075mV) it does seem to target 1575MHz, spiking to 1586MHz max, and thus staying stable.

Toughest bench I've thrown at it so far has been Timespy, before that I thought stable mem OC was 1095MHz, seems to be 1080MHz. It also caused the spike at some point.


----------



## ashman95

I just tested a bunch of stuff. OK, so I switched back to WattMan to mess with HBM frequencies. WattMan has a BUG!! But I overcame that. These (water-cooled) settings are good. I ran SPECview (it scores an official run), the highest I've got so far, except for medical: I did get a 119 before, don't know why only 110 now. These settings are balanced from lowest to highest and should run anything without crashing. Played Prey and Sniper Elite 3 for about 30 mins each @ 2560x1440; no issues, quick response.

So these benches and this game play were with the original 17.1 drivers (the first Vega FE drivers). HBM2 frequency was set to 1100MHz, EK waterblock; in game play temps reached a max of 47°C.


----------



## Roboyto

Quote:


> Originally Posted by *lightofhonor*
> 
> Got a PowerColor 64 about a week ago and have been fairly happy so far, coming from a 380 2GB
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Going to Waterblock soon. My 1700X needs it too


That's a nice boost in graphical prowess!


----------



## ashman95

People, you're banging your heads against a wall. Frequencies and mV settings have to follow a roughly linear curve (look at my other posts); ALL frequencies and mV settings are connected and relative to each other, or else you will run into issues!! Excel examples below.


----------



## punchmonster

This is simply not true. My card is working perfectly as expected with non-linear P-states. You've just got buggy drivers.
Quote:


> Originally Posted by *ashman95*
> 
> People you're banging your heads against a wall- frequencies and mV settings have to follow a linear like line (look at my other posts), ALL frequencies and mV settings are connected and relative to each other- or else you will run into issues!! excel examples below


----------



## ashman95

Quote:


> Originally Posted by *punchmonster*
> 
> This is simply not true. My card is working perfectly as expected with non-linear P-states. You've just got buggy drivers.


Have you gotten to 1700MHz? Have you benched your settings? I'm not talking about stock settings!


----------



## Soggysilicon

Quote:


> Originally Posted by *Spectre73*
> 
> Question regarding the EK Block for Vega with factory backplate.
> 
> The manual of the block does not say how to install the factory backplate. Are there (longer) screws included for the backplate or how is it installed?
> 
> The Installation by EK only covers the block without any backplate at all.


I used the included washers in the EK-WB packaging on the helicoils to create a slight spacing between the stock backplate and the card itself. If I recall I used the M2.5x6 included screws (comes with 13 or so) to affix the backplate to the WB. This does not require anything more than a simple interference fit. The initial resistance offered by the OEM screws in large part is the torque seal. Install in a cross pattern and profit.


----------



## PontiacGTX

Quote:


> Originally Posted by *rednow*
> 
> Big problems with P2-P5 gpu power states and P2 hbm power state with recent 17.8.2 drivers but managed to get this ...


how is it compared to stock?


----------



## Soggysilicon

Quote:


> Originally Posted by *Mandarb*
> 
> How far can you OC your Vegas? Mine crashes at stock voltage (1200mV) @ 1650MHz.




Much past this it becomes a crap shoot while gaming... Ultimate engine, if that matters...


----------



## dagget3450

Updated owners list in the OP. If you're not in the list, please post in this thread or PM me.

New owners added!

roybotnik Vega FE (AC)
ashman95 Vega FE (custom water)

CryWin RX Vega 56 (AC)
Tyrael RX Vega 56 (AC)

Nevril RX Vega 64 (AC)
VickB RX Vega 64 (AC)
Mandarb RX Vega 64 (AC)
lmiao RX Vega 64 (AC)
Nuke33 RX Vega 64 (WC)
twan69666 RX Vega 64 (AC)
Whatisthisfor RX Vega 64 (WC)
n3squ1ck RX Vega 64 (AC)
seanmacvay RX Vega 64 (WC)
rancor RX Vega 64 (AC) (soon to be watercooled)
abe_joker RX Vega 64
IF6WAS9 RX Vega 64 (WC)
Roboyto RX Vega 64 (AC) (waterblocked)
Zero4549 RX Vega 64
lightofhonor RX Vega 64 (AC) (soon to be waterblocked)

Thought we would see way more Vega 56 owners by now....
Quote:


> Originally Posted by *gupsterg*
> 
> Come to the aid of your fellow/prospective VEGA owners, join W1zzard for GPU-Z VEGA Beta testing
> 
> 
> 
> 
> 
> 
> 
> .
> 
> https://www.techpowerup.com/forums/threads/vega-beta-testers-needed.236537/
> Not that I know of. No idea.
> Mine too.
> 
> Yeah nVidia has been locking down this aspect before AMD, so perhaps its just becoming the norm. Shame.


I'll add this to the OP - TY for the link.

Quote:


> Originally Posted by *Irev*
> 
> Anyone know if i could run 2x vega64 air in CF on a 850w gold psu?


Last I recall, RX Vega didn't work in CrossFire yet? (Maybe that's old driver information?)

Quote:


> Originally Posted by *punchmonster*
> 
> Well installed my Morpheus II. Went from thermal throttling at 85ºC at 50% power limit to hovering around 60º. HBM2 went from 80ºC to 60ºC under load. This is all with default fan settings, so with my new fans roughly ~700RPM and practically silent
> 
> I'll report on underclocking/overclocking results in a bit.


Really curious how well this is working temp wise and clocking.

Quote:


> Originally Posted by *ontariotl*
> 
> Still haven't installed my Vega 64 back into the PC. For a few hours I've been doing some modding with the waterblock. I'll hold off installing it back in the PC tomorrow when my backplate for the motherboard finally arrives. I hate buying a used motherboard without a backplate.
> 
> Then I'm debating on flashing my card with the AIO bios as well.
> 
> 
> 
> Spoiler: Warning: Spoiler!


Quote:


> Originally Posted by *ontariotl*
> 
> So I went to bed last night not happy with the result of my LED backlight as it wasn't as bright as I would like as I'm only using three 3mm so I decided to work on Revision 2 this morning while I wait for my package with my accessories to install on the motherboard to arrive today.
> 
> I took a LED strip and removed the epoxy coating on the top and removed the adhesive on the rear. Cut them up to one a piece to align the LED's up with each letter of the decal. Then dremelled out the backplate so the LEDs could sit in the hole with black tape holding it in place until it is screwed back on to the waterblock.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> I did install a LED for the EK logo and was going to set it as blue, but it needed more dremeling and I decided to leave it for now.
> 
> 
> In bright or dark, no hotspots or dimmed lettering.
> 
> 
> 
> 
> Finally in my PC!
> 
> 
> 
> 
> 
> 
> 
> Just did some valley benchmark for temps. I set my overclock at 50% power, 1702 GPU, and 1005 for HBM2. Clocks were running at 1675 (still fluctuates here and there) and memory ran fine at 1005. Temps were 36C max.
> 
> Eventually I need to do more tweaking, but for now its to mount this Samsung CF791 on the wall and clean up this damn room!


Looks awesome!


----------



## milan616

Quote:


> Originally Posted by *dagget3450*
> 
> Thought we would see way more Vega 56 owners by now....


Probably would if there were any in the US. None of the orders from Amazon, Newegg or Best Buy even have shipping dates yet. Looks like only European retailers had and shipped the 56.


----------



## dagget3450

Quote:


> Originally Posted by *milan616*
> 
> Probably would if there were any in the US. None of the orders from Amazon, Newegg or Best Buy even have shipping dates yet. Looks like only European retailers had and shipped the 56.


Yeah, I would say right now they must not have sold many, based on how many Vega 64 owners we have now... maybe soon though


----------



## punchmonster

yes
Quote:


> Originally Posted by *ashman95*
> 
> Have you got to 1700mhz? Have you benched your settings? I'm not talking about stock settings!


Temps hover around 55ºC core at stock settings/voltage and 48ºC with a 1025mV undervolt. Positively chilly.
The HBM hovers around 65-70ºC with stock settings and the core undervolt while ETH mining.
Quote:


> Originally Posted by *dagget3450*
> 
> ...
> Really curious how well this is working temp wise and clocking.
> ...


----------



## kundica

Quote:


> Originally Posted by *Mandarb*
> 
> I noticed the voltage has a huge influence in determining where it boosts up to.
> 
> Even with -0.5% WattMan frequency it spikes up as soon as I enter 1080mV or higher.
> 
> Meaning: it's running at around 1575MHz, and suddenly it jumps to 1630MHz+, which crashes the card as it hits a wall at around 1620MHz, at which point it rapidly needs 1200mV. It crashes at around 1650MHz with 1200mV.
> 
> With -0.5% OC and 1070mV (from 1060-1075mV) it does seem to target 1575MHz, spiking to 1586MHz max, and thus staying stable.
> 
> Toughest bench I've thrown at it so far has been Timespy, before that I thought stable mem OC was 1095MHz, seems to be 1080MHz. It also caused the spike at some point.


What's your DP7 set to though? It'll help me understand how your card is OCd


----------



## dagget3450

Quote:


> Originally Posted by *punchmonster*
> 
> yes
> temps hover around 55ºC core at stock settings/voltage and 48ºC with 1025mV undervolt. Positively chilly.
> The HBM hovers around 65-70ºC with stock settings and core undervolt while eth mining.


Those are pretty nice temps. How about fan noise? Do the fans need to run at high RPMs to keep it cool, or does it manage well?


----------



## punchmonster

These fans top out at 1350 RPM; I run them at 700 and they are practically inaudible.
Quote:


> Originally Posted by *dagget3450*
> 
> That is pretty nice temps. how about fan noise, do they need to be set high rpms to keep cool or it manages well?


----------



## lightofhonor

Quote:


> Originally Posted by *Roboyto*
> 
> That's a nice boost in graphical prowess ?


I try to double the speed at least whenever I get a new GPU. Less than that and I end up being disappointed







Vega 56 would have done that, but eh what's money?


----------



## Roboyto

Quote:


> Originally Posted by *lightofhonor*
> 
> I try to double the speed at least whenever I get a new GPU. Less than that and I end up being disappointed
> 
> 
> 
> 
> 
> 
> 
> Vega 56 would have done that, but eh what's money?


Something you can't take with you when you're six feet under and worm fodder.


----------



## bogdi1988

Add me as an owner with Sapphire Vega 64 air.

Question: anyone have issues with AtiWinFlash?

Running a Windows 10 PC (build 16278) with a Ryzen 1700, a Vega 64 air, and 17.8.2 video drivers. I wanted to try ATIWinFlash 2.77 (and even older versions) to test out a BIOS update and keep getting the following error. I made sure to run it with right click -> Run as administrator.


When I click OK, I get the following error. Clicking Continue, the app just closes.



If I run ATIFlash from CMD with admin rights, I get the same OS-requirements error, and when I click Continue I get this in the CMD prompt:


Any help is appreciated


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> What's your DP7 set to though? It'll help me understand how your card is OCd


-0.5% OC and 1070mV in WattMan (switched from WattTool as that allows no HBM2 OC). Should be 1621MHz in WattTool if this works how I think it works.


----------



## lmiao

I think I've got a problem: my HBM reaches 80/81° while playing (e.g. Witcher 3) and sometimes I see artifacts. Is it the temperature or the overclock (1045)? Core is 1682MHz/1080mV.

I guess it's impossible to touch HBM voltage right now, yeah?

Got a Sapphire Vega 64 reference AC


----------



## kundica

My replacement LC 64 card arrived yesterday. After some testing and gaming for several hours it doesn't seem to exhibit the issues my other card was having.
Quote:


> Originally Posted by *Mandarb*
> 
> -0.5% OC and 1070mV in WattMan (switched from WattTool as that allows no HBM2 OC). Should be 1621MHz in WattTool if this works how I think it works.


Oh right, you wrote that before. Instead of using a percent try setting DP7 to what you want max clock to peak at and DP6 to something more comfortable.
Quote:


> Originally Posted by *lmiao*
> 
> I think i got a problem, my HBM reach 80/81° while playing (ex. witcher 3) and sometimes i see artifacts.. it's because of temperature or overclock (1045)? Core is 1682mhz/1080mv
> 
> I guess it's impossibile to touch HBM voltage right now, ye?
> 
> Got a Sapphire vega 64 reference AC


Set it to stock and see if it still does it. Sounds like OC to me combined with heat.


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> Oh right, you wrote that before. Instead of using a percent try setting DP7 to what you want max clock to peak at and DP6 to something more comfortable.[...]


Oooooh, I never noticed you could set it to dynamic and then edit the frequencies manually!


----------



## FelixB

I have just bought a Vega 56 from Overclockers UK and appear on https://forums.overclockers.co.uk/threads/the-rx-vega-56-owners-thread.18789712/

I am happy to help with any reasonable requests for testing the hardware.


----------



## pmc25

Quote:


> Originally Posted by *lmiao*
> 
> I think i got a problem, my HBM reach 80/81° while playing (ex. witcher 3) and sometimes i see artifacts.. it's because of temperature or overclock (1045)? Core is 1682mhz/1080mv
> 
> I guess it's impossibile to touch HBM voltage right now, ye?
> 
> Got a Sapphire vega 64 reference AC


HBM temps are heavily influenced by the amount of heat being spat out of the GPU core. Remember this isn't like GDDRx where memory is separate. The GPU die and HBM2 dies are on the same package (interposer), very close to each other and share the same heatsink.

I suggest you reduce GPU clock and voltage to around 1600MHz / 1000mV and boost your HBM2 clock to around 1100MHz (most cards begin to crash at 1105-1110; almost all will hit 1090MHz). It's faster this way anyway, as the card is bandwidth bottlenecked.
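The bandwidth-bottleneck point above is easy to sanity-check, since Vega's reference cards use a 2048-bit HBM2 bus and bandwidth scales directly with memory clock. A quick sketch (the 2048-bit width is the reference-card spec; the clocks are just the example figures from this thread):

```python
# Memory bandwidth for a 2048-bit HBM2 bus (reference Vega spec), in
# GB/s, at a few example memory clocks. HBM2 is double data rate,
# hence the factor of 2.
BUS_BITS = 2048

def bandwidth_gbs(mem_clock_mhz: float) -> float:
    bytes_per_cycle = BUS_BITS / 8            # 256 bytes per transfer
    return bytes_per_cycle * 2 * mem_clock_mhz * 1e6 / 1e9

for clk in (945, 1000, 1100):  # stock V64 HBM2, and two common OC targets
    print(f"{clk}MHz HBM2 -> {bandwidth_gbs(clk):.1f} GB/s")
```

So going from stock 945MHz to 1100MHz is roughly a 16% bandwidth bump, which is why trading a bit of core clock for HBM clock tends to pay off on these cards.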


----------



## lmiao

I'm going to give this a try then... However, I'm still afraid the temperature will be very high anyway, because the reference cooler sucks. Another solution is to raise the fan RPM to around 4000, but then it will be like a reactor :S


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> My replacement LC 64 card arrived yesterday. After some testing and gaming for several hours it doesn't seem to exhibit the issues my other card was having.
> Oh right, you wrote that before. Instead of using a percent try setting DP7 to what you want max clock to peak at and DP6 to something more comfortable.


DP7 1612MHz @ 1070mV yields the same results as boost clocks (around 1570MHz), going higher to 1620MHz gets the card to peak to 1620 resulting in a crash. Increasing voltage causes the card to not boost as high and run around 1540-1550MHz.

Edit: also, my Vega 56 has just arrived. Going to see how that one does on Sunday, then decide which one I sell off.


----------



## kundica

Quote:


> Originally Posted by *Mandarb*
> 
> DP7 1612MHz @ 1070mV yields the same results as boost clocks (around 1570MHz), going higher to 1620MHz gets the card to peak to 1620 resulting in a crash. Increasing voltage causes the card to not boost as high and run around 1540-1550MHz.
> 
> Edit: also, my Vega 56 has just arrived. Going to see how that one does on Sunday, then decide which one I sell off.


How much it boosts will vary based on the app and other factors. It won't always achieve your max boost, but using the 7th step as a max boost is a good guide. See this post from AMDMatt on the OCuk forum: https://forums.overclockers.co.uk/threads/the-rx-vega-64-owners-thread.18789713/page-67#post-31110626
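For anyone new to the DPM-state talk in this thread: what's being tuned is a ladder of eight GPU P-states (DPM0-DPM7), each a clock/voltage pair, and the card boosts opportunistically up the ladder, so the DPM7 entry acts as the effective clock ceiling. A sketch of that idea, with entirely made-up example numbers (not anyone's actual settings):

```python
# Hypothetical Vega DPM ladder: eight P-states, each (MHz, mV).
# The driver picks a state based on load/temperature/power headroom,
# never exceeding the DPM7 entry - which is why setting DPM7 directly
# is a more predictable way to cap boost than a percentage offset.
dpm_states = {
    0: (852, 800),    # idle-ish base state
    1: (991, 900),
    2: (1084, 950),
    3: (1138, 980),
    4: (1200, 1000),
    5: (1401, 1020),
    6: (1536, 1050),  # "something more comfortable" for sustained load
    7: (1612, 1070),  # peak boost target - the effective ceiling
}

max_clock, max_mv = dpm_states[7]
print(f"Peak boost is capped by DPM7: {max_clock}MHz @ {max_mv}mV")
```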


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> How much it boosts will vary based on the app and other factors. It won't always achieve your max boost, but using the 7th step as a max boost is a good guide. See this post from AMDMatt on the OCuk forum: https://forums.overclockers.co.uk/threads/the-rx-vega-64-owners-thread.18789713/page-67#post-31110626


Aye, with the reference cooler it's a matter of temperature and thus voltage.

I decided for myself that I can live with a maximum of 2700rpm. It can boost higher when I enable jetengine mode and increase voltage, but that's not desirable.

I'll check out my Vega56 next and see what is possible with that one. Then decide which one I keep and which one I sell. When partner cards arrive and the pricing is revealed I will weigh price of one of those and risk of installing an aftermarket solution on the reference card I kept.

For as long as I can make back what I paid, it's all on the table.


----------



## elderblaze

My 56 arrives tomorrow. I'm sure most who bought on launch day are just barely receiving their cards unless they paid for overnight shipping.


----------



## SAMiN

Just placed an order for a Vega 56 through OCUK at the launch price as a business customer! Hopefully I can get it soon!


----------



## ontariotl

I decided to flash the AIO bios on switch 2 of my Air card since it's now waterblocked. However I've run into this issue. I'm doing something wrong.


----------



## abe_joker

Could someone explain the switches thing? I read the following on OcUK
Quote:


> Left is default bios, normal performance and push the switch to the right for the lower wattage setup or alternate bios.
> Left is the exhaust grill end of the card. Right is the end pointing towards the power cables for the card. You want the switch to the left most of the time


What does that mean?


----------



## theBee2112

Quote:


> Originally Posted by *ontariotl*
> 
> I decided to flash the AIO bios on switch 2 of my Air card since it's now waterblocked. However I've run into this issue. I'm doing something wrong.


Change the position of the BIOS switch, and try flashing the other one.
Also, what's your version of ATIWinFlash?

If I remember correctly, I flashed my powersave BIOS to the AIO BIOS, (Switch towards the PCIE power plugs) and kept the standard Air Cooled high power BIOS (switch in left position, facing back of case). Works fine, higher power limit, and the clocks *seem* higher. At least benchmarks show a tangible improvement with the AIO BIOS. It's still on there a week later, no issues.

I've heard in the past that some cards will only allow you to flash 1 of the two switch positions. (for fallback reasons)


----------



## ontariotl

Quote:


> Originally Posted by *theBee2112*
> 
> Change the position of the BIOS switch, and try flashing the other one.
> Also, what's your version of ATIWinFlash?
> 
> If I remember correctly, I flashed my powersave BIOS to the AIO BIOS, (Switch towards the PCIE power plugs) and kept the standard Air Cooled high power BIOS (switch in left position, facing back of case). Works fine, higher power limit, and the clocks *seem* higher. At least benchmarks show a tangible improvement with the AIO BIOS. It's still on there a week later, no issues.
> 
> I've heard in the past that some cards will only allow you to flash 1 of the two switch positions. (for fallback reasons)


Yeah, I thought about trying the other switch. I'm using version 2.77 of WinFlash.

I wanted exactly what you have: the powersave BIOS flashed to the AIO BIOS. I wanted to keep the Air high-power BIOS intact.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> Yeah I thought about trying the other switch. I'm using the 2.77 version of flash.
> 
> I wanted exactly what you have is the powersave bios flash to the AIO bios. I wanted to keep the Air high power bios intact.


Did you try running as admin?


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> Did you try running as admin?


Yes I did, or WinFlash wouldn't even have loaded.

OK, I got it now. I left it on the high power switch and it worked. Just testing now. It keeps crashing on Turbo with the Valley benchmark, so it looks like it's not going to work well with my card. More testing needs to be done.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> Ok I got it now. I left it on high power switch and it worked. Just testing now. It keeps crashing on Turbo with Valley benchmark, so it looks like it's not going to work well with my card. More testing needs to be done.


Manually lower the clock. The main reason to run the AIO bios on the Air card is for the added power. Lower the clock and work your way up until you find your max stable.


----------



## theBee2112

Quote:


> Originally Posted by *ontariotl*
> 
> Yes I did or Winflash wouldn't even load.
> 
> Ok I got it now. I left it on high power switch and it worked. Just testing now. It keeps crashing on Turbo with Valley benchmark, so it looks like it's not going to work well with my card. More testing needs to be done.


I might be remembering wrong. I guess I did flash the high power BIOS, if that's the one that worked for you. Could you post some updates on your performance? I seem to be the only one here still using the AIO BIOS on my card, since I saw the last two people say they reverted back. Let me know if you can get 1750MHz stable.

All I remember is getting the same error as you, and just moving the switch over made it work. I'm just hoping you backed up both of your BIOSes with ATIWinFlash and not GPU-Z.


----------



## pillowsack

I still want to ramp up the voltage and see what I get... Max temp in games of like 46C with my CPU at 4.4....


----------



## theBee2112

Quote:


> Originally Posted by *pillowsack*
> 
> I still want to ramp up the voltage and see what I get... Max temp in games of like 46C with my CPU at 4.4....


Just saw your post where you said you kept the AIO BIOS. Rock on!

Isn't voltage control possible by powerplay? It's explained here: http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> Manually lower the clock. The main reason to run the AIO bios on the Air card is for the added power. Lower the clock and work your way up until you find your max stable.


Yeah I started doing that. I just thought I would be able to reach the AIO speed now that I'm under water. I can only dream.
Quote:


> Originally Posted by *theBee2112*
> 
> I might be remembering wrong. I guess I did flash the high power BIOS if that's the one that worked for you. Could you post some updates on your performance? I seem the be the only one here still using the AIO bios on my card, since i saw the last two people say they reverted back. Let me know if you can get 1750MHz stable
> 
> All I remember is getting the same error as yourself, and just moving the switch over and it worked. I'm just hoping you backed up both of your BIOS with ATIWinflash and not GPU-Z


I think that's exactly what it was: the high power BIOS needs to be flashed on the high power switch. And I did back up both my high and low power BIOSes through WinFlash, as I noticed the size difference when I backed up with GPU-Z.

So far, as I mentioned before, Turbo is a no-go, so probably no 1750 for me. I was about to flash back to the Air BIOS, but I found that with the AIO BIOS the RAM stays at a higher clock when I overclock. When I tried with the Air BIOS and set it higher than 1020, the VRAM would revert back to 800 or 500 depending on the speed of the GPU.
With the AIO BIOS, I can hit 1105 with my GPU at P6 @ 1702 and P7 @ 1722. Any higher and it will go back to 800 or 500. Funny how my GPU then clocks steady at 1705, but with the HBM2 at 1105 it hovers around 1695 with a few spikes up to 1705.


----------



## Energylite

I'm back with that fcking beautiful rig after 7hrs of work on it #Vegasme


















For now, I'm going to edit the PowerTable (any recommendations?) and try to reach 1730/1100. If I can, then I hope for 1800/1100, but I don't think I can reach it (maybe more for the HBM; I'll see).

Quote:


> Originally Posted by *gupsterg*
> 
> Any members with RX VEGA able to see if MSI AB gets i2cdump,
> 
> 
> 
> and/or do AIDA64 SMBus dump,
> 
> 
> 
> . Does AIDA64 show VID per DPM in registers dump,
> 
> 
> 
> . To be able to bring up the menu to select dumps go to view menu and enable status bar and then right click status bar, even an evaluation version of AIDA64 will get dumps if it supports VEGA.


Sorry Gup, I can't find what you want (i2c) in AIDA64 Extreme, and it's the same for MSI AB (4.3). I hope someone finds what you wanted.


----------



## pillowsack

Quote:


> Originally Posted by *theBee2112*
> 
> Just saw your post where you said you kept the AIO BIOS. Rock on!
> 
> Isn't voltage control possible by powerplay? It's explained here: http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003


I tried that and have no idea if I did it right or if it works with 17.8.2 drivers. Yes, heck yeah the AIO bios. I wanna get the voltage upped though so it can run at the same speeds.

I did a benchmark at 4.4ghz and this is what I got:

Core clock 1,686 MHz Memory bus clock 1,100 MHz

https://www.3dmark.com/3dm/21853420?

https://www.3dmark.com/spy/2258948



I have no throttling, but it's still a shame that the Vega 56 has caught up to me. Do I need to downgrade drivers just to ramp the voltage up? That means I lose PUBG FPS too.


----------



## abe_joker

Quote:


> Originally Posted by *pillowsack*
> 
> Core clock 1,686 MHz Memory bus clock 1,100 MHz


I got an AC Vega. Do you think I should flash the LC BIOS to the low (right) switch and OC like yours? Or do you think I'll get throttling? Someone told me it might not be worth it, since the LC BIOS will only consume more energy but still throttle.


----------



## pillowsack

Quote:


> Originally Posted by *abe_joker*
> 
> I got a AC vega. Do you think I should flash LC bios on the low (right) switch and OC like yours? OR do you think I'll get throttling? Someone told me it might not be worth it since the LC Bios will only consume more energy but throttle


Right now 17.8.2 makes my card throttle too low and not even hold the basic overclock I have on the HP Air BIOS.

With water it seems to allow a higher TDP or something. I have to apply memory clocks in Crimson, and then I use WattTool to set P6 and P7 to 1660 and 1700. It never hits 1700, but this seems to hold it at 1680, which is great.


----------



## ontariotl

A little more testing and I can confirm that with the 17.8.2 drivers I just can't hit 1750 with the AIO BIOS. It's not a total loss though, like I've mentioned before, as I can run the HBM2 at 1105 without it dropping to 500-800 during test runs.

From my testing, it looks like 1730 is my ceiling. I've set my P6 @ 1702 and P7 @ 1722 and it seems to be OK, but it needs further testing. In Firestrike, the highest clock I witnessed was 1729 without crashing. In games, Witcher 3 hovers between 1669 and 1691, and Rise of the Tomb Raider is around 1704-1711.

Here is my old FS Ultra benchmark with my Air BIOS.



And now with the AIO BIOS with the settings mentioned above.



I am kinda bummed it won't reach 1750, but considering what the BIOS has done to help me, I guess I can't complain. Plus, the full EK WB is a lot quieter with either the Air or the AIO BIOS with the 120 rad/fan combo.


----------



## Arizonian

Hi guys, just stopping in to make this thread *[Official]*. Thank you dagget3450 for a nice OP with info.









Been watching the thread and following owners results. I'm still interested myself, waiting on the Sapphire Nitro while I gather funds.

Keep the new GPU pics coming for us tech voyeurs.


----------



## Mandarb

So, what I wonder:

You can only switch the BIOS while the PC is switched off? And you can only read and flash the BIOS the card is currently set to?

So how do you reflash a botched BIOS flash when you can only run the non-corrupted BIOS and can't switch and flash to the corrupted one?


----------



## aliquis

Are you guys really running your cards overclocked/pushed to the limit?

I did some testing with my card, but from what others have shared, I think the numbers are similar:

Already at about 1500MHz the scaling becomes really bad in terms of power consumption: roughly speaking, for every extra MHz I need about 1mV more, and that raises power consumption by about 0.8W (measured).

So I can run my card at about 1500MHz at 970mV, with a total system power draw of about 330W. If I want 1600MHz I need to raise the voltage to about 1070mV, but then total power consumption is ~410W.
If I go for a higher clock rate, about 1700MHz, I need about 1170mV and total system power is about 500W... that's almost 200W extra for 200MHz, nearly doubling the GPU's power consumption for just some extra performance.

If you run the card at 1500MHz or lower with an appropriately low voltage, it is a very efficient card, but if you raise the clock rate, that scaling of 0.8-1W per extra MHz is just unreasonable, at least for me.
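The scaling above is easy to lay out as a back-of-the-envelope calculation from the quoted system-power readings (1500MHz/330W, 1600MHz/410W, 1700MHz/500W). These are whole-system numbers, so they slightly overstate the GPU's share, but the trend is what matters:

```python
# Marginal power cost per extra MHz, from the measured system-power
# figures quoted in the post above.
points = [
    (1500, 330),  # (core clock MHz, measured total system watts)
    (1600, 410),
    (1700, 500),
]

base_mhz, base_w = points[0]
for mhz, watts in points[1:]:
    extra_mhz = mhz - base_mhz
    extra_w = watts - base_w
    print(f"+{extra_mhz}MHz over {base_mhz}MHz costs +{extra_w}W "
          f"({extra_w / extra_mhz:.2f}W per extra MHz)")
```

That works out to roughly 0.8W per MHz for the first 100MHz and 0.85W per MHz averaged over the full 200MHz, which backs up the "0.8-1W per extra MHz" figure.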


----------



## ontariotl

Quote:


> Originally Posted by *Mandarb*
> 
> So, what I wonder:
> 
> You can only switch the BIOS while the PC is switched off? And you can only read and flash the BIOS the card is currently set to?
> 
> So how do you reflash a botched BIOS flash when you can only run the non-corrupted BIOS and can't switch and flash to the corrupted one?


Correct me if I'm wrong.

I believe you boot with the BIOS that works fine. Once you are in an OS environment, you flip the BIOS switch to the botched flash and then run the flash program in an attempt to fix the issue. Then reboot and see if it posts with a screen and loads into the OS.


----------



## gupsterg

Quote:


> Originally Posted by *Energylite*
> 
> Sorry Gup, I can't find what you want (i2c) on Aida64 extreme and it's the same for MSI AB (4.3). I hope someone finds what you wanted


No worries, kundica did them, and unfortunately the software seems to lack the support to do it, or it's an issue on Vega.

Nice rig BTW







, damn those tubing bends sent my eyes googly







.


----------



## Mandarb

Quote:


> Originally Posted by *ontariotl*
> 
> Correct me if I'm wrong.
> 
> I believe you load the bios that works fine to get it to boot. Once you are in a O/S environment, you flip the bios switch to the botched flash and then run the flash program in an attempt to fix the issue. Then reboot and see if it posts with a screen and loads into O/S.


Yeah, that's how it works, just tried it.


----------



## buildzoid

Just gonna drop this here.

I finally got an RX V64 (thank you Alza)

I flashed the V56 BIOS to check HBM voltage.

VFE runs 1.35V HBM
V64 runs 1.35V HBM
V56 runs 1.25V HBM

Since flashing the V56 BIOS on a V64 sets the V56's HBM voltage, I'd hazard a guess that flashing the V64 BIOS on a V56 will get you 1.35V HBM. That extra 100mV on the HBM should yield an extra 100MHz of HBM clock, since VFEs and V64s regularly hit 1000-1100MHz HBM whereas V56s normally top out around 950MHz.

Since the cards have a BIOS switch, I see no risk in people flashing V56s with air V64 BIOSes. I see no point in the V64 LC BIOS, as that is literally just a BIOS with higher clocks and power limits, both of which can be achieved using the soft power play table registry edit.

Also, I have a suspicion that AMD is binning the LC cards, so the clocks in those BIOSes might not necessarily run on the air cards without extra Vcore.


----------



## dagget3450

Quote:


> Originally Posted by *Arizonian*
> 
> Hi guys, just stopping in to make this thread *[Official]*. Thank you dagget3450 for a nice OP with info.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Been watching the thread and following owners results. I'm still interested myself, waiting on the Sapphire Nitro while I gather funds.
> 
> Keep the new GPU pics coming for us tech voyeurs.


Thank you! The OP still needs a lotta love(mainly rx vega info) but it will get there soon..
Quote:


> Originally Posted by *buildzoid*
> 
> Just gonna drop this here.
> 
> I finally got an RX V64 (thank you Alza)
> 
> I flashed the V56 BIOS to check HBM voltage.
> 
> VFE runs 1.35V HBM
> V64 runs 1.35V HBM
> V56 runs 1.25V HBM
> 
> Since flashing the V56 BIOS on a V64 sets the V56's HBM voltage, I'd hazard a guess that flashing the V64 BIOS on a V56 will get you 1.35V HBM. That extra 100mV on the HBM should yield an extra ~100MHz HBM clock, since VFEs and V64s regularly hit 1000-1100MHz HBM whereas V56s normally top out around 950MHz.
> 
> Since the cards do have a BIOS switch, I see no risk in people flashing V56s with air V64 BIOSes. I see no point in the V64 LC BIOS, as that is literally just a BIOS with higher clocks and power limits, both of which can be achieved using the soft PowerPlay table registry edit.
> 
> Also, I have a suspicion that AMD is binning the LC cards, so the clocks in that BIOS might not necessarily run on the air cards without extra Vcore.


Thank you guys for posting the goods. I will try to get these good info posts in the op soon.


----------



## Mandarb

I'm currently running my RX56 through its paces on the stock BIOS to determine the best undervolt OC, and will flash the RX64 BIOS once I'm done, then repeat.

You will hear back from me.


----------



## CaptainTom

So I flashed a Vega 64 BIOS to my new Vega 56 (so now I have 3 Vegas), and I can confirm:


Yes, it totally works! Even when I allow +50% power the Vega 56 has ZERO stability issues.
Now the 56 can hit 1000MHz on the HBM whereas it could only hit ~950 before. The additional voltage definitely helps.
The HBM maxes out at 1000MHz (All of my 64's can hit 1105MHz). So indeed while a flashed Vega 56 can hit the performance of a _stock_ 64, it is lower binned.
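Rough arithmetic on what that 950 → 1000MHz HBM bump is worth: Vega 10's HBM2 sits on a 2048-bit bus and is double data rate, so peak bandwidth is simply clock × 512 bytes per cycle. A quick sketch (the clock figures are the ones reported in this thread, not official specs):

```python
# Peak HBM2 bandwidth for Vega 10's 2048-bit bus.
# bandwidth = clock * 2 (DDR) * 2048 bits / 8 = clock * 512 bytes/cycle.
def hbm2_bandwidth_gbps(clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a 2048-bit HBM2 bus at the given clock."""
    return clock_mhz * 2 * 2048 / 8 / 1000  # MHz * bytes/cycle -> GB/s

stock_56 = hbm2_bandwidth_gbps(950)   # ~486 GB/s, typical stock-BIOS ceiling
flashed  = hbm2_bandwidth_gbps(1000)  # ~512 GB/s after the 64 BIOS flash
good_64  = hbm2_bandwidth_gbps(1105)  # ~566 GB/s, the best 64s reported here
```

As a sanity check, a reference Vega 64 at its 945MHz stock memory clock comes out to ~484 GB/s with this formula, which matches AMD's advertised figure.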


----------



## Mandarb

I really do get the short end of every silicon lottery...

My 1800X can do 3.9GHz max, my RX64 can do max 1080MHz on HBM2 and I'm trying my RX56 and it seems to be the worst ever, even 900MHz HBM2 just crashed. **** you, gods of RNG. You even hate me in bloody World of Warships.


----------



## CaptainTom

Quote:


> Originally Posted by *Mandarb*
> 
> I really do get the short end of every silicon lottery...
> 
> My 1800X can do 3.9GHz max, my RX64 can do max 1080MHz on HBM2 and I'm trying my RX56 and it seems to be the worst ever, even 900MHz HBM2 just crashed. **** you, gods of RNG. You even hate me in bloody World of Warships.


Well I have only tried the 56 in Ethereum + Siacoin mining. It's a heavy load to be sure, but gaming always seems to be a better stability test _for gaming_.

For instance, my R9 380s will do 1660MHz with lowered memory timings as long as they're headless cards mining, but if you attach a display output they crash.

Hey, 1080MHz will bring nearly the same performance increase.


----------



## surfinchina

Hi,
I've got a vega FE with EK waterblock.
I decided to flash the 64 AIO rom and it didn't really make a difference. Is there something else I need to do?

Oh, the difference it did make is that the GPU isn't recognised by HWMon any more, or by any benchmarks; they tell me the GPU doesn't support them.
Plus one of my monitors doesn't come on.

Oddly, I run it on a hackintosh and it works a wee bit faster on that; both monitors work and the benchmarks run fine.
I can't overclock via the registry or software on account of running macOS, so I'm looking for a BIOS solution. Any ideas?

pete


----------



## springs113

Well, Vega 64 (x2) air owner here, XFX + Sapphire. I don't know if I want to actually tinker with the BIOS; I'm quite comfortable with its performance. I just hope AMD gets the drivers right sooner rather than later. An extra 10% increase would be great. Support for CrossFire would be awesome too. Get the Vega features to work (all of them). But like I've told everyone who has asked me about Vega... it is smooth as butter. The fluidity is astounding, and when paired with Ryzen it's just as awesome. Not to sound like a fanboy or anything, but I'm a bit giddy about having an all-AMD system. I've tested out Vega with my 5930K and I much prefer to play with Ryzen instead.


----------



## Mandarb

Wow, I opened the toilet, reached deep into it and pulled out the biggest turd possible in the guise of a RX56.

Stock BIOS it can reach 895MHz on the HBM2; after that it crashes in Time Spy. Flashed with the RX64 BIOS it's only stable at 950MHz. And to think that other people reach this on the stock BIOS..

Well, I know which of the cards I'll sell..


----------



## Roboyto

Quote:


> Originally Posted by *aliquis*
> 
> Are you guys really running your cards overclocked/pushed to the limit?
> 
> I did some testing with my card, but from what others shared, I think the numbers are similar:
> 
> Already at about 1500MHz the scaling becomes really bad in terms of power consumption: roughly speaking, for every extra MHz I need about 1mV more, and that raises the power consumption by about 0.8W (measured).
> 
> So I can run my card at about 1500MHz at 970mV, total system power draw about 330W. If I want 1600MHz I need to raise the voltage to about 1070mV, but then the total power consumption is ~410W.
> If I go for a higher clock rate of about 1700MHz, I need about 1170mV, total system power about 500W... that's almost 200W extra for 200MHz; it almost doubles the GPU power consumption for just some extra performance.
> 
> If you run the card at 1500MHz or lower with an appropriately low voltage, it is a very efficient card, yet if you raise the clock rate, that scaling of 0.8-1W per extra MHz is just unreasonable, at least for me.


Yup. This is what has been seen on a couple review sites as well.

Personally it doesn't bother me. I have a 1kW PSU and my dual 240mm rads should be plenty to keep the temps well within reason. At least they were for a blazing fast 1300/1700 R9 290 pushed to the brink, and they handled my Fury X without any trouble at all.

It looks like my PowerColor is hitting ~1740 on the air cooler with stock voltage and +50% power. The HBM is definitely temperature limited at 970 right now when pushing the core to the brink. Haven't undervolted yet, but I just dropped the waterblock on... gotta fill the loop up. I'm more concerned with flashing the AIO BIOS and seeing how far this thing will go with zero concern for power consumption for now. I'll find that sweet spot of power target, voltage and clocks like I always do... but maximum performance comes first
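aliquis's voltage/power numbers line up with the usual dynamic-power rule of thumb, P ≈ C·f·V². A toy check against his reported operating points (his figures, not a measurement; the constant C cancels out when comparing points):

```python
# Rough check of the overclock scaling observation using the dynamic-power
# approximation P ~ f * V^2. Operating points are the reported ones
# (core MHz -> Vcore in volts); everything else is back-of-envelope.
points = {1500: 0.970, 1600: 1.070, 1700: 1.170}

def rel_power(mhz: int) -> float:
    """Dynamic power relative to the 1500MHz / 970mV operating point."""
    base = 1500 * 0.970 ** 2
    return mhz * points[mhz] ** 2 / base

# A ~7% clock bump to 1600MHz already costs ~30% more dynamic power, and
# 1700MHz costs ~65% more -- in the right ballpark for the reported
# "almost 200W extra for 200MHz" on a card drawing a couple hundred watts.
for mhz in sorted(points):
    print(mhz, round(rel_power(mhz), 2))
```

This is why the undervolted ~1500MHz sweet spot keeps coming up in this thread: the f·V² term grows much faster than the frame rate does.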


----------



## CaptainTom

Quote:


> Originally Posted by *springs113*
> 
> Well, Vega 64 (x2) air owner here, XFX + Sapphire. I don't know if I want to actually tinker with the BIOS; I'm quite comfortable with its performance. I just hope AMD gets the drivers right sooner rather than later. An extra 10% increase would be great. Support for CrossFire would be awesome too. Get the Vega features to work (all of them). But like I've told everyone who has asked me about Vega... it is smooth as butter. The fluidity is astounding, and when paired with Ryzen it's just as awesome. Not to sound like a fanboy or anything, but I'm a bit giddy about having an all-AMD system. I've tested out Vega with my 5930K and I much prefer to play with Ryzen instead.


I will 100% back you up with the addendum of "What games do you play?"

I play BF1 the most, and even at Vega's garbage stock settings it manages to be only 10% weaker than the 1080 Ti in DX 11. With undervolted 1630/1105 clocks I am easily around the performance of even an overclocked 1080 Ti in my favorite online shooter - And I only paid $560 for this thing! (Sold the games lol)

Everything else I still play like Doom, Deus Ex, and Metro: LL shows the same results. Oh and this is making me $150/month mining lol.


----------



## kundica

After feeling confident with the performance of my replacement AIO 64, I decided to update to 17.8.2 and the new card started crashing within a few minutes of launching a game. I rolled back to 17.8.1 and haven't crashed. I'm happy it seems stable on 17.8.1, but I'm definitely worried since I've seen AMDMatt over at OCuk post several videos of his AIO 64 running just fine on 17.8.2. If this card starts to have issues again, I'll probably be done with Vega and just get a 1080 Ti. Whatever is going on with these AIO cards is absurd.


----------



## punchmonster

Quote:


> Originally Posted by *kundica*
> 
> After feeling confident with the performance of my replacement AIO 64, I decided to update to 17.8.2 and the new card started crashing within a few minutes of launching a game. I rolled back to 17.8.1 and haven't crashed. I'm happy it seems stable on 17.8.1, but I'm definitely worried since I've seen AMDMatt over at OCuk post several videos of his AIO 64 running just fine on 17.8.2. If this card starts to have issues again, I'll probably be done with Vega and just get a 1080 Ti. Whatever is going on with these AIO cards is absurd.


Have you considered that maybe it's your power supply causing issues? 17.8.2 is more all over the place with LLC, so it could simply be exposing an issue with the PSU.

Personally I run 17.8.1 regardless since it's simply a better driver build.


----------



## kundica

Quote:


> Originally Posted by *punchmonster*
> 
> Have you considered that maybe it's your power supply causing issues? 17.8.2 is more all over the place with LLC, so it could simply be exposing an issue with the PSU.
> 
> Personally I run 17.8.1 regardless since it's simply a better driver build.


Of course, it's why I removed my cable extensions. It's highly unlikely though. I have a 6 week old Seasonic 1000w Prime Platinum.


----------



## springs113

Quote:


> Originally Posted by *kundica*
> 
> Of course, it's why I removed my cable extensions. It's highly unlikely though. I have a 6 week old Seasonic 1000w Prime Platinum.


Don't get me wrong, but Seasonic isn't immune to defects; it's the nature of electronics, it happens. Since you pointed out that you were stable on 17.8.1, maybe it's just the drivers not playing well with your setup... it happens. Second, unless you and AMDMatt had the exact same setup you really can't compare his card to yours, and even then you still can't guarantee they are binned the same. I'm on 17.8.2 with a Vega 64, and the only real issue I have is that when Asus' AI Suite is installed my system stumbles; other than that everything seems OK. Vega on the other hand is one hot chip, which is why my copies are going under water.

Edit: forgive my penmanship; I guess my smartphone isn't as smart as it should be with its auto-correction. I have basically been awake for 27+ hours and I'm currently at work, I just can't get enough of tech talk right now.


----------



## Whatisthisfor

Quote:


> Originally Posted by *Mandarb*
> 
> Wow, I opened the toilet, reached deep into it and pulled out the biggest turd possible in the guise of a RX56.
> 
> Stock BIOS it can reach 895MHz on HBM 2, after that it crashes in Timespy. Flashed with the RX64 BIOS it's only stable at 950MHz. And to think that other people reach this on the stock BIOS..
> 
> Well, I know which of the cards I sell..


Is the HBM2 on your RX 56 SK Hynix or Samsung? Although I am not sure if GPU-Z reports that accurately yet. There is a beta version with support for Vega which does, but one has to register.


----------



## Mandarb

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Is the HBM2 on your RX 56 SK Hynix or Samsung? Although I am not sure if GPU-Z reports that accurately yet. There is a beta version with support for Vega which does, but one has to register.


It reports Micron just the same as on the 64. So, no, can't tell.


----------



## gupsterg

Quote:


> Originally Posted by *buildzoid*
> 
> Just gonna drop this here.
> 
> I finally got an RX V64 (thank you Alza)
> 
> I flashed the V56 BIOS to check HBM voltage.
> 
> VFE runs 1.35V HBM
> V64 runs 1.35V HBM
> V56 runs 1.25V HBM
> 
> Since flashing the V56 BIOS on a V64 sets the V56's HBM voltage I'd hazzard a guess that flashing the V64 BIOS on a V56 will get you 1.35V HBM. That extra 100mv on the HBM should yield an extra 100MHz HBM clock. Since VFEs and V64s regularly hit 1000-1100 HBM clock whereas V56s normally top out around 950.
> 
> Since the cards do have a BIOS switch I see no risk in people flashing V56s with air V64 BIOSs. I see no point in the V64 LC BIOS as that is literally just a BIOS with higher clocks and power limits. Both of which can be achieved using the soft power play table registry edit.
> 
> Also I have a suspicion that AMD is binning the LC cards so the clocks in those BIOS might not necessarily run on the air cards without extra Vcore.


+rep for info








IMO HBM2 is again like HBM1, in that it has needed voltage increases to hit the clocks AMD needs. I suspect yields/quality may again not have been great.

I had been perplexed by your previous mentions, and GN stating HBM2 as 1.3V on VEGA 56, as I had seen it as 1.25V in the PowerPlay table of the VBIOS you shared. So it's great to read this update








All the RX VEGA cards share the same VoltageObjectInfo when I last checked, so the HBM2 voltage must be set by the PowerPlay value. Is HBM voltage control working in WattMan now? If you modify the HBM voltage via the registry PowerPlay mod, does it work?


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> Of course, it's why I removed my cable extensions. It's highly unlikely though. I have a 6 week old Seasonic 1000w Prime Platinum.


I agree with springs that even a fairly new power supply can still be defective. I had my Seasonic X-1250 go down when pushed by my former 290Xs, and it was a fairly new power supply as well.

However, since you are having no issues with 17.8.1, I don't think that's the case here. Neither does it have to do with your AIO replacement card; if it was the card, it would still screw up with either driver. 17.8.2 just has issues and that's all there is to it. Something on your system is bugging out the drivers. It happens. Unless you can reinstall your OS with just the basic drivers for all the peripherals, plus 17.8.2 and one game that you know crashes (no other monitoring or overclocking software, antivirus, anti-malware etc.) to test it, you won't get any answers. I'd just wait for the 17.9.1 drivers (holy crap, it's September already!!)


----------



## Sicness

I finally managed to flash the LC BIOS to my Vega 64 Air, I don't know what caused the update to fail earlier. An interesting bit I noticed - the original Air BIOS reports revision C1, while the LC BIOS gives me C0. Either way, the BIOS is running fine and my Vega is kept happy by the EKWB block


----------



## erase

I'm thinking there is plenty of performance at a lower wattage to be had by lowering the core and pushing up the vRAM speed.

Problem is the power limit applies to the entire package; there is no way to isolate the core from the memory.

Is there any way to lower the power limit on Vega without the HBM2 throttling its speed back?


----------






## Whatisthisfor

Quote:


> Originally Posted by *Mandarb*
> 
> It reports Micron just the same as on the 64. So, no, can't tell.


Yes, with the current version it tells me the same. But the upcoming GPU-Z version will fully support Vega, and it says I have Samsung HBM (Vega AIO).

There is a rumour that the Vega 56 has SK Hynix HBM2, with lower clock speed ofc.


----------



## kundica

Quote:


> Originally Posted by *springs113*
> 
> Don't get me wrong, but Seasonic isn't immune to defects; it's the nature of electronics, it happens. Since you pointed out that you were stable on 17.8.1, maybe it's just the drivers not playing well with your setup... it happens. Second, unless you and AMDMatt had the exact same setup you really can't compare his card to yours, and even then you still can't guarantee they are binned the same. I'm on 17.8.2 with a Vega 64, and the only real issue I have is that when Asus' AI Suite is installed my system stumbles; other than that everything seems OK. Vega on the other hand is one hot chip, which is why my copies are going under water.
> 
> Edit: forgive my penmanship; I guess my smartphone isn't as smart as it should be with its auto-correction. I have basically been awake for 27+ hours and I'm currently at work, I just can't get enough of tech talk right now.


Quote:


> Originally Posted by *ontariotl*
> 
> While I agree with springs, that whether a power supply is fairly new it can still be defective. I had my X-1250 by Seasonic go down when pushed by my former 290x's and it was a fairly new power supply as well.
> 
> However, since you are having no issues with 17.8.1 I don't think this is your case. Neither does it have to do with your AIO replacement card. If it was the card, it would still screw up with either driver. 17.8.2 just has issues and that's all there is to it. Something is on your system that is bugging out the drivers. It happens. Unless you can reinstall your O/S and just the basic drivers for all the peripherals as well as 17.8.2 and one game that you know crashes (no other monitoring or overclocking, antivirus, anti malware software etc) to test it, you won't get any answers. I'd just wait for 17.9.1 drivers (holy crap it's September already!!)


I should've included more info last night when I posted but I was in a rush to get some ZZZzzz.

Before my first AIO 64 I had the Air 64. My Air 64 ran on 17.8.2 for a few days just fine (minus the known issues) until I swapped in the AIO 64. I had rolled back to 17.8.1 to do some testing when I first installed the AIO 64. The AIO 64 ran fine for a few days with +50% power limit, but once I updated to 17.8.2 it started crashing at anything but Balanced or lower settings. I rolled back to 17.8.1 and the beta driver but the crashes remained.

I returned that card and bought another AIO 64. I started that setup off with a clean driver (17.8.1) and used the card for a day and a half with probably 6 hours total gaming without any issues at +50% power limit and my HBM at 1000. I decided to update to 17.8.2, but once I did, the card started crashing again in a very similar way to my first card on anything but Balanced mode or lower. It usually happens within 5 minutes of loading into a game. I rolled back to 17.8.1 after the crashes last night and the card was fine for the 90 minutes I had to test.


----------



## dagget3450

Quote:


> Originally Posted by *surfinchina*
> 
> Hi,
> I've got a vega FE with EK waterblock.
> I decided to flash the 64 AIO rom and it didn't really make a difference. Is there something else I need to do?
> 
> Oh, the difference it did make was that the GPU isn't recognised by HWMon any more, or any benchmarks. Tells me the GPU doesn't support them.
> Plus one of my monitors doesn't go.
> 
> Oddly, I run it on hackintosh and it works a wee bit faster on that, both monitors work and the benchmarks run fine.
> I can't overclock in registry or software on account of running mac os x, so I'm looking for a bios solution. Any ideas?
> 
> pete


I was totally wondering if this was possible. First thing I thought though is that VRAM would drop to 8GB instead of 16GB??? I think the Vega FE would need a Vega FE AIO BIOS instead of the RX 64 one?


----------



## Tgrove

Some decently priced liquid 64s on eBay

http://www.ebay.com/itm/202042573888


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> I should've included more info last night when I posted but I was in a rush to get some ZZZzzz.
> 
> Before my first AIO 64 I had the Air 64. My Air 64 ran on 17.8.2 for a few days just fine (minus the known issues) until I swapped in the AIO 64. I had rolled back to 17.8.1 to do some testing when I first installed the AIO 64. The AIO 64 ran fine for a few days with +50% power limit, but once I updated to 17.8.2 it started crashing at anything but Balanced or lower settings. I rolled back to 17.8.1 and the beta driver but the crashes remained.
> 
> I returned that card and bought another AIO 64. I started that setup off with a clean driver (17.8.1) and used the card for a day and a half with probably 6 hours total gaming without any issues at +50% power limit and my HBM at 1000. I decided to update to 17.8.2, but once I did, the card started crashing again in a very similar way to my first card on anything but Balanced mode or lower. It usually happens within 5 minutes of loading into a game. I rolled back to 17.8.1 after the crashes last night and the card was fine for the 90 minutes I had to test.


I don't know what is going on with your system, but there is something fishy about it. I could see one bad Vega, but two in a row? Nah, that isn't right. You have something else in play that is causing these issues.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> I don't know what is going on with your system but there is something fishy about it. I could see one bad vega, but 2 more in a row. Nah, that isn't right. You have something else in play that is causing these issues.


Have you not seen how many people have issues with the AIO 64? My cards exhibit the exact issues others are having. The OCuk forum is full of people who have returned their AIO cards. ****, the AMD rep himself stated his first AIO cards were faulty.


----------



## FelixB

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Is the HBM2 of your RX 56 SK Hynix or Samsung? Although i am not sure if GPU-Z reports that accurate yet. There is a beta version with support for Vega which does, but one has to register .


I have tried the beta GPU-Z on my consumer Sapphire Vega 56 and it reports HBM2 Samsung (KHA843801B). The author has told me that this is reliable.

Therefore, it appears that AMD is shipping the Vega 56 with Samsung memory for the present at least.

Poor Hynix


----------



## Whatisthisfor

Quote:


> Originally Posted by *FelixB*
> 
> I have tried the beta GPU-Z on my consumer Sapphire Vega 56 and it reports HBM2 Samsung (KHA843801B). The author has told me that this is reliable.
> 
> Therefore, it appears that AMD is shipping the Vega 56 with Samsung memory for the present at least.
> 
> Poor Hynix


Chances are still that both deliver HBM2 for Vega 56. Hynix did a great job with HBM1 for Fury X, but with HBM2 it seems they were out of luck. Or maybe they did not put enough effort into it? Who knows. AMD seems to have waited a long time for Hynix to get HBM2 done.

But there is no reason why Hynix should not just sell AMD what they have: their HBM2 doesn't clock as high as Samsung's, but that's also not necessary for Vega 56. I would like them to sell their HBM2 to AMD and maybe Nvidia. It must have been very expensive for them to develop HBM2.

And AMD is surely interested in lowering prices for HBM2, which is their baby, so they will help Hynix in my opinion, and they may need Hynix as a partner in the future.


----------



## Whatisthisfor

Any opinions on the voltage for HBM2 on Vega 64? Should one touch it or leave it alone? I heard touching it may confuse the driver, so does anyone have bad experience with over- or undervolting HBM2? Currently I've left it at 950mV @ 1000MHz.


----------



## Mandarb

Quote:


> Originally Posted by *kundica*
> 
> I should've included more info last night when I posted but I was in a rush to get some ZZZzzz.
> 
> Before my first AIO 64 I had the Air 64. My Air 64 ran on 17.8.2 for a few days just fine (minus the known issues) until I swapped in the AIO 64. I had rolled back to 17.8.1 to do some testing when I first installed the AIO 64. The AIO 64 ran fine for a few days with +50% power limit, but once I updated to 17.8.2 it started crashing at anything but Balanced or lower settings. I rolled back to 17.8.1 and the beta driver but the crashes remained.
> 
> I returned that card and bought another AIO 64. I started that setup off with a clean driver (17.8.1) and used the card for a day and a half with probably 6 hours total gaming without any issues at +50% power limit and my HBM at 1000. I decided to update to 17.8.2, but once I did, the card started crashing again in a very similar way to my first card on anything but Balanced mode or lower. It usually happens within 5 minutes of loading into a game. I rolled back to 17.8.1 after the crashes last night and the card was fine for the 90 minutes I had to test.


Have you tried running DDU to get rid of 17.8.1 before installing 17.8.2?

Also, have you made sure your HBM overclock is completely stable? Time Spy is the benchmark I found most demanding on the memory; it will reveal any issues with a memory overclock by the end of the stress test.

Edit: didn't see the reply about people with Water Vegas having issues.


----------



## Whatisthisfor

Quote:


> Originally Posted by *Tgrove*
> 
> Some decently prices liquid 64s on ebay
> 
> Http://www.ebay.com/itm/202042573888


Looks good to me, had to pay almost 800 Euro for my Sapphire AIO here in Old Europe /Germany ;-)


----------



## kundica

Quote:


> Originally Posted by *Mandarb*
> 
> Have you tried running DDU to get rid of 17.8.1 before installing 17.8.2?
> 
> Also, have you made sure your HBM overclock is completely stable? Time Spy is the benchmark I found most demanding on the memory; it will reveal any issues with a memory overclock by the end of the stress test.


Going to do a reclean when I get home this afternoon. My card was crashing on 17.8.2 with the core and memory at stock.


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> Have you not seen how many people have issues with the AIO 64? My cards exhibit the exact issues others are having. The OCuk forum is full of people who have returned their AIO cards. ****, the AMD rep himself stated his first AIO cards were faulty.


Yes, I had read how many had issues, and not many complained after the replacement. Besides the power supply debate and some who were unhappy that they didn't get to speeds of 1750, which is not guaranteed, of course there will be red flags. There always are with any product (the EVGA fiasco with 1080s, for example).

Granted it could be defective, but you reported you had no problems with the card running on 17.8.1 and it only happened when 17.8.2 was installed. Yes, AMDMatt had his running on 17.8.2, but his setup is different from yours. 17.8.2 has had more complaints than just AIO owners returning their cards for one reason or another. Even myself, after reading this reply I've gone back to 17.8.1 for now, as I have found performance better with it. It's not that some genie in a bottle gave me more speed than 17.8.2, as I can't overclock any more than I could before, but the numbers don't lie in my benchmarks.

Just stay with 17.8.1 and wait for better drivers.


----------



## Mandarb

Well, I have tweaked my Vega RX64 a bit more after putting my Vega RX56 up for sale on a Swiss eBay-like site (ricardo.ch). Hope someone pays my retail price plus the site's fees as the base bid.









Anyways, my current best settings are:

P6: 1492 @ 1000mV
P7: 1572 @ 1030mV

Memory: 1080MHz

Fan: 2700RPM with 70°C target and 85°C threshold

Power: +50%

With that I got these GPU scores in 3DMark:

Time Spy: 7355 https://www.3dmark.com/spy/2314222

Firestrike: 23011 https://www.3dmark.com/fs/13511933

Firestrike Extreme: 11279 https://www.3dmark.com/fs/13511978

Firestrike Ultra: 5757 https://www.3dmark.com/fs/13512020


----------



## lightofhonor

For those interested in water-blocking your Vega: EKWB will update their Fluid Gaming line with Vega support on Oct 1st, according to their support team.

"The product your are looking for: *EK-FC Vega ALU* will be added to our fluid gaming website *on approx. 1st of October 2017.*"

Good news if you don't have any waterblock at all. Everyone with copper or nickel can disregard







Their 1080 block is like 20% cheaper through this line, so I'd assume Vega will be the same.


----------



## alecmg

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Any opinions on the voltage for HBM2 on Vega 64? Should one touch it or leave it alone? I heard touching it may confuse the driver, so does anyone have bad experience with over- or undervolting HBM2? Currently I've left it at 950mV @ 1000MHz.


I believe voltage adjustment for HBM2 on Vega either doesn't work at all or adjusts the memory controller voltage. The memory chips use 1.25V or 1.35V (on the 56 and 64 respectively), nowhere near the 1.05V value shown.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> Granted it could be defective, but you reported you had no problems with the card running in 17.8.1 and it only happened when 17.8.2 was installed.


I didn't on my first AIO card either. It wasn't until I updated to 17.8.2 that I started having issues and they followed me back to 17.8.1.

Regarding clocks... I've seen people concerned about not reaching the same clocks on 17.8.2 as on the earlier BIOS, but I don't care about the clock if performance is better. I posted a while back showing increased performance on 17.8.2 compared to 17.8.1 while running lower clocks. That said, the Sapphire LC 64 is advertised with a DPM7 state of 1750, so there shouldn't be any reason the card can't reach that. Since these cards are liquid cooled, heat shouldn't be an issue either.


----------



## Gdourado

So far, how are the overclocks on the Vega 56?
And how would a Vega 56 compare at 1440p against a 980 Ti that does 1500MHz on the core?

Cheers!


----------



## Whatisthisfor

Quote:


> Originally Posted by *alecmg*
> 
> I believe voltage adjustment for HBM2 on Vega either doesn't work at all or adjusts the memory controller voltage. The memory chips use 1.25V or 1.35V (on the 56 and 64 respectively), nowhere near the 1.05V value shown.


Interesting. How do you know these values are 1.35V for the Vega 64?


----------



## Mandarb

Quote:


> Originally Posted by *Gdourado*
> 
> So far, how are the overclocks on the Vega 56?
> And how would a Vega 56 compare at 1440p against a 980 Ti that does 1500MHz on the core?
> 
> Cheers!


I never really OCed my RX56; when I checked how it was doing, it was able to run about 30MHz higher than my RX64 under the same cooling constraints. Sadly the HBM wasn't very good and only reached 895MHz.


----------



## alecmg

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Interesting. How do you know these values are 1.35V for the Vega 64?


somewhere, probably /r/Amd

Also, the default voltage for the HBM2 standard is 1.3V, so it can never be 1.05V or 0.95V


----------



## Mandarb

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Interesting. How do you know these values are 1.35V for the Vega 64?


The current HWiNFO beta reads out voltage and frequency for both memory and GPU correctly. RX56 cards, which state 1.25V, read out 1.35V once flashed to the RX64 BIOS.


----------



## kundica

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Interesting. How do you know these values are 1.35V for the Vega 64?


Voltage meter.


----------



## gupsterg

Quote:


> Originally Posted by *FelixB*
> 
> I have tried the beta GPU-Z on my consumer Sapphire Vega 56 and it reports HBM2 Samsung (KHA843801B). The author has told me that this is reliable.
> 
> Therefore, it appears that AMD is shipping the Vega 56 with Samsung memory for the present at least.
> 
> Poor Hynix


So far every VBIOS I have seen for any VEGA card, RX or FE, has support for only that HBM IC. A VBIOS can have support for more than one type of IC, but on VEGA it does not.

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Any opinions on the voltage for HBM2 on Vega 64? Should one touch it or leave it alone? I heard touching it may confuse the driver, so does anyone have bad experience with over- or undervolting HBM2? Currently I've left it at 950mV @ 1000MHz.
> 
> Quote:
> 
> 
> 
> Originally Posted by *alecmg*
> 
> I believe voltage adjustment for HBM2 on Vega either doesn't work at all or adjusts the memory controller voltage. The memory chips use 1.25V or 1.35V (on the 56 and 64 respectively), nowhere near the 1.05V value shown.
> 
> Quote:
> 
> 
> 
> Originally Posted by *Whatisthisfor*
> 
> Interesting. How do you know these values are 1.35V for the Vega 64?
> 
> Quote:
> 
> 
> 
> Originally Posted by *kundica*
> 
> Voltage meter.

Kundica is right; it matches what Buildzoid has stated and what is shown as the HBM voltage value in the VBIOS. So it seems to me the memory voltage in WattMan is not really useful at present. Perhaps it's VDDCI.

Kundica, as you have a DMM/VEGA and know the measuring points, if I made a registry mod with lowered HBM RAM voltage would you test it?


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> Kundica as you have DMM/VEGA, know measuring points, if I made a registry mod with lowered HBM RAM voltage would you test it?


Sorry, I didn't measure. I was only pointing out how we knew the voltage.


----------



## Skinnered

Quote:


> Originally Posted by *kundica*
> 
> I should've included more info last night when I posted but I was in a rush to get some ZZZzzz.
> 
> Before my first AIO 64 I had a the Air 64. My Air 64 ran on 17.8.2 for a few days just fine(minus the known issues) until I swapped in the AIO 64. I had rolled back to 17.8.1 to do some testing when I first installed the AIO 64. The AIO 64 ran fine for a few days with +50% power limit but once I updated to 17.8.2 it started crashing at anything but Balanced or lower settings. I rolled back to 17.8.1 and the beta driver but the crashes remained.
> 
> I returned that card and bought another AIO 64. I started that setup off with a clean driver (17.8.1) and used the card for a day and a half with probably 6 hours total gaming without any issues at +50% power limit and my HBM at 1000. I decided to update to 17.8.2, but once I did, the card started crashing again in a very similar way to my first card on anything but Balanced mode or lower. It usually happens within 5 minutes of loading into a game. I rolled back to 17.8.1 after the crashes last night and the card was fine for the 90 minutes I had to test.


Same here. I have two LQ's waiting to be CrossFired, and it crashes almost every time, especially when editing "power" in WattMan. I've checked that my PCIe rails are dedicated, but no dice. I can run a custom profile by only OC'ing the HBM memory (@ ~1075), however. Last night I tried upping the voltages a bit (P6 1200mV and P7 1250mV) and now I can run power at +50%, letting the core speed rise a bit higher (mostly 165x-17xx) without crashes, but it takes quite a lot of power this way. I blame the 17.8.2s; although some games are faster, the boost core speed is lower and WattMan behaves strangely...


----------



## gupsterg

Quote:


> Originally Posted by *kundica*
> 
> Sorry, I didn't measure. I was only pointing out how we knew the voltage.


I have seen that HWiNFO shows HBM2 voltage as it did for HBM1. I know on Fury X if I modify the HBM1 voltage it will adjust in HWiNFO. If you're up for testing then I'm willing to do the mod file.


----------



## PontiacGTX

http://www.gamersnexus.net/guides/3040-amd-vega-56-hybrid-results-1742mhz-400w-power/page-2


----------



## kundica

Quote:


> Originally Posted by *Skinnered*
> 
> Same here. I have two LQ's waiting to be CrossFired, and it crashes almost every time, especially when editing "power" in WattMan. I've checked that my PCIe rails are dedicated, but no dice. I can run a custom profile by only OC'ing the HBM memory (@ ~1075), however. Last night I tried upping the voltages a bit (P6 1200mV and P7 1250mV) and now I can run power at +50%, letting the core speed rise a bit higher (mostly 165x-17xx) without crashes, but it takes quite a lot of power this way. I blame the 17.8.2s; although some games are faster, the boost core speed is lower and WattMan behaves strangely...


Interesting. I didn't try upping the voltage. What clock did you set P6 and P7 to? When I get home in several hours I plan to do a full driver cleaning then test 17.8.2 some more.

Do both of your cards behave the same?

Quote:


> Originally Posted by *gupsterg*
> 
> I have seen HWiNFO shows HBM2 voltage as it did for HBM1. I know on Fury X if I modify HBM1 voltage it will adjust in HWiNFO. If you're up for testing then I'm willing to do the mod file.


Sure. I won't be home for another 3 or 4 hours though and want to focus on the 17.8.2/AIO crashing issue I'm having.


----------



## abe_joker

So I've been keeping an eye on WattMan while I play. For some reason my video card drops from 1000MHz on the memory to 500MHz and makes my game stutter. Same with my GPU MHz... it goes all over the place. Any idea?


----------



## Skinnered

Quote:


> Originally Posted by *kundica*
> 
> Interesting. I didn't try upping the voltage. What clock did you set P6 and P7 to? When I get home in several hours I plan to do a full driver cleaning then test 17.8.2 some more.
> Do both of your cards behave the same?


1702 and 1752 mhz, see below

Wattman1.jpg 954k .jpg file


Yes, since the 17.8.2's

Reaching speeds near or above 1700 needs a lot of juice with the 17.8.2s, I guess?


----------



## gupsterg

Quote:


> Originally Posted by *PontiacGTX*
> 
> 
> 
> http://www.gamersnexus.net/guides/3040-amd-vega-56-hybrid-results-1742mhz-400w-power/page-2


I reckon with the HBM2 on VEGA 56 given the same voltage as VEGA 64 it will shine even more. Maybe I should get one ....
Quote:


> Originally Posted by *kundica*
> 
> Sure. I won't be home for another 3 or 4 hours though and want to focus the 17.8.2/AIO crashing issue I'm having.


I appreciate the help; if it works, others will be able to enjoy the mod. As stated above, V56 owners could benefit from the increase, allowing them better MHz on HBM2.

WattMan Memory Voltage is showing VDDCI IMO (this was the case on Polaris as well). 1000mV was what Fiji and Hawaii had also. Just before it in the table is the HBM2 voltage.



Here is RX VEGA 64 AIO stock PowerPlay but HBM2 as 1.25V.

RX_VEGA_64_AIO_Soft_PP_HBM2_1.25V.zip 1k .zip file


Do a restart after applying, not a shutdown and power-up. The reason I say this is that if you're on W10 and 'Fast start' is enabled, a shutdown/power-up does not load a fresh kernel and the changes do not apply.


----------



## Energylite

Quote:


> Originally Posted by *Skinnered*
> 
> Reaching speeds near or above 1700 needs a lot of juice with the 17.8.2s, I guess?


Yeah, I think so; even if I touch the power table I can barely reach 1700MHz rofl
Quote:


> Originally Posted by *abe_joker*
> 
> So I've been keeping an eye on WattMan while I play. For some reason my video card drops from 1000MHz on the memory to 500MHz and makes my game stutter. Same with my GPU MHz... it goes all over the place. Any idea?


WOW, I've no idea dude


----------



## kundica

Quote:


> Originally Posted by *Skinnered*
> 
> 1702 and 1752 mhz, see below
> 
> Wattman1.jpg 954k .jpg file
> 
> 
> Yes, since the 17.8.2's
> 
> Reaching speeds near or above 1700 needs a lot of juice with the 17.8.2s, I guess?


This is interesting. It could mean that the newest bios isn't handling the voltage to the core correctly. Can you run some tests while monitoring the voltage to your core through HWinfo64?

I noticed in AMDMatt's videos he has P7 set to 1250 @1802 yet the card never hits 1250 even when it hits or exceeds max clock(in menus he maxes at about 1852). While hovering in 1777 range the voltage is 1188. His P6 is set to 1125mv at 1752.


----------



## buildzoid

I messed around with the mem voltage slider in Wattman. It does nothing. I went from 0.9 to 1.2 and none of the VRMs on the card registered any change with my DMM. VDDCI stayed at 0.9V, Vpp at 1.8V, Vdisp at 0.9V, and VHBM at 1.35V. It also had no impact on the max stable HBM clock. It could be some kind of on-die regulator, but since it also doesn't affect OC in any way I would just ignore it.

BTW anyone here capable of doing 6K+ in Unigine Heaven? My VFE does it at 1650/1050 but the V64 only does about 5.7K at 1700/1100.


----------



## paulc010

Quote:


> Originally Posted by *abe_joker*
> 
> So I've been keeping an eye on WattMan while I play. For some reason my video card drops from 1000MHz on the memory to 500MHz and makes my game stutter. Same with my GPU MHz... it goes all over the place. Any idea?


Which card is it? I'd up the fan speed to "brutal" and see if this behaviour goes away. HBM2 will throttle down when it hits 80C or so.


----------



## abe_joker

I used Turbo and it didn't crash. I go custom with the following settings and it crashes in a few seconds:


----------



## PontiacGTX

Quote:


> Originally Posted by *gupsterg*
> 
> I reckon with the HBM2 on VEGA 56 given the same voltage as VEGA 64 it will shine even more. Maybe I should get one ....
> 
> I appreciate the help; if it works, others will be able to enjoy the mod. As stated above, V56 owners could benefit from the increase, allowing them better MHz on HBM2.
> 
> 
> WattMan Memory Voltage is showing VDDCI IMO (this was the case on Polaris as well). 1000mV was what Fiji and Hawaii had also. Just before it in the table is the HBM2 voltage.
> 
> 
> 
> Here is RX VEGA 64 AIO stock PowerPlay but HBM2 as 1.25V.
> 
> RX_VEGA_64_AIO_Soft_PP_HBM2_1.25V.zip 1k .zip file
> 
> 
> Do a restart after applying, not a shutdown and power-up. The reason I say this is that if you're on W10 and 'Fast start' is enabled, a shutdown/power-up does not load a fresh kernel and the changes do not apply.


Performance-per-watt is quite good once the card is undervolted. The only drawback is that the overclocking ability seems very limited (the HBM2 runs at a lower clock and overclocks to a lower frequency than on VEGA 64, the BIOS greatly limits the overclock, the reference cooler doesn't keep the HBM2 cool enough, and the GPU doesn't have much temperature headroom for the OC).
Quote:


> Originally Posted by *buildzoid*
> 
> I messed around with the mem voltage slider in Wattman. It does nothing. I went from 0.9 to 1.2 and none of the VRMs on the card registered any change with my DMM. VDDCI stayed at 0.9V, Vpp at 1.8V, Vdisp at 0.9V, and VHBM at 1.35V. It also had no impact on the max stable HBM clock. It could be some kind of on-die regulator, but since it also doesn't affect OC in any way I would just ignore it.
> 
> BTW anyone here capable of doing 6K+ in Unigine Heaven? My VFE does it at 1650/1050 but the V64 only does about 5.7K at 1700/1100.


What would happen if a VEGA 56 used the VEGA 64 BIOS? Just wondering if the VEGA 56 would allow a higher memory OC with the VEGA 64 BIOS.


----------



## Energylite

Quote:


> Originally Posted by *buildzoid*
> 
> I messed around with the mem voltage slider in Wattman. It does nothing. I went from 0.9 to 1.2 and none of the VRMs on the card registered any change with my DMM. VDDCI stayed at 0.9V, Vpp at 1.8V, Vdisp at 0.9V, and VHBM at 1.35V. It also had no impact on the max stable HBM clock. It could be some kind of on-die regulator, but since it also doesn't affect OC in any way I would just ignore it.


What is the maximum clock that you can achieve on your HBM?
Quote:


> Originally Posted by *buildzoid*
> 
> BTW anyone here capable of doing 6K+ in Unigine Heaven? My VFE does it at 1650/1050 but the V64 only does about 5.7K at 1700/1100.


Tell me your settings on Heaven, I'm gonna do a run when I'm home


----------



## buildzoid

Quote:


> Originally Posted by *Energylite*
> 
> What is the maximum clock that you can achieve on your HBM?
> Tell me your settings on Heaven, I'm gonna do a run when I'm home


I'm just running the HWbot Heaven Preset.

HBM tops out at 1100 for both my FE and V64.

Right now I'm messing with the VFE power play tables on the V64. The VFE tables seem a lot better behaved than the 64's


----------



## shadowxaero

Arrived today, 4 days ahead of schedule. Sadly the water block doesn't arrive till Tuesday so I can't play with it v.v.......#firstworldproblems


----------



## buildzoid

Ok I can now confirm that the V64 BIOS on a V56 will improve HBM OC. A friend with a V56 that topped out at 960 on the stock BIOS is now pushing 1100 on the V64 BIOS from my V64.


----------



## surfinchina

Quote:


> Originally Posted by *dagget3450*
> 
> I was totally wondering if this was possible. First thing I thought though is that the VRAM would drop to 8GB instead of 16GB??? I think the Vega FE would need a Vega FE AIO BIOS instead of the RX 64 one?


I used the FE aio bios. I also tried an rx aio bios and it flashed, just no signal, so the connectors are different I guess.


----------



## Gdourado

How is the quality of the cooler on Vega 56?
At a time I had a reference 290x.
It was a great performance card, but I had to sell it because I was forced to play with headphones.
Is the Vega cooler more silent?
Are the materials better? Is it a heatpipe cooler? Vapor chamber? Or just a standard heatsink with a fan pushing air through?


----------



## erase

Are there any hardware limitations to decoupling the HBM2 voltage and overclock while still being able to power limit the core? Would this be something that could be software- or BIOS-modded in future, or are we stuck with it like this forever?


----------



## poisson21

Soon to be installed here: 2 MSI RX Vega 64s; just waiting for my second EK waterblock.

Just after that I think I'll try an AIO BIOS. (Can you put any AIO BIOS on a card if it is a reference card?)


----------



## Energylite

Quote:


> Originally Posted by *buildzoid*
> 
> Ok I can now confirm that the V64 BIOS on a V56 will improve HBM OC. A friend with a V56 that topped out at 960 on the stock BIOS is now pushing 1100 on the V64 BIOS from my V64.


Sweet, how does it perform against a V64 stock?
Quote:


> Originally Posted by *buildzoid*
> 
> I'm just running the HWbot Heaven Preset.
> 
> HBM tops out at 1100 for both my FE and V64.
> 
> Right now I'm messing with the VFE power play tables on the V64. The VFE tables seem a lot better behaved than the 64's


Yeah, pretty much the same for me. I can achieve 1105MHz max on HBM. Oh, so the VFE power table is "better" than the V64 one? Can you tell me what you're going to change? I want to compare it with my power table (if possible and if you want).


----------



## gupsterg

Quote:


> Originally Posted by *erase*
> 
> Is there any hardware limitations to decouple the HBM2 voltage and overclocking while being able to power limit the core? Would this be something that could be software or BIOS modded in future, or are we stuck with it like this forever?


AMD PowerLimit applies to GPU AFAIK, not VRAM/HBM.


----------



## GroupB

I'm about to put an EK block on my Vega 64; received it an hour ago... I have to mod my loop and bleed it, so maybe in an hour I will do the install.

Here's my question: should I use the EK pad on the VRM (the 1mm one) or some Fujipoly Extreme I have left?

Are the VRM temps high on the EK pad? I'm aiming for OK VRM temps, but I don't want to put more heat into the core/HBM by using the Extreme one unless the VRM heat output is bad or not that great with the EK pad.

Fujipoly Extreme is 11W/mK; the EK pad ???

I know on my previous card, an R9 290, you had to get Fujipoly Extreme on the VRM because the stock pads were not doing the job well enough.

So what do you guys think?


----------



## pillowsack

Quote:


> Originally Posted by *gupsterg*
> 
> AMD PowerLimit applies to GPU AFAIK, not VRAM/HBM.


So when is someone gonna voltmod it and put out a guide?

Quote:


> Originally Posted by *GroupB*
> 
> I'm about to put an EK block on my Vega 64; received it an hour ago... I have to mod my loop and bleed it, so maybe in an hour I will do the install.
> 
> Here's my question: should I use the EK pad on the VRM (the 1mm one) or some Fujipoly Extreme I have left?
> 
> Are the VRM temps high on the EK pad? I'm aiming for OK VRM temps, but I don't want to put more heat into the core/HBM by using the Extreme one unless the VRM heat output is bad or not that great with the EK pad.
> 
> Fujipoly Extreme is 11W/mK; the EK pad ???
> 
> I know on my previous card, an R9 290, you had to get Fujipoly Extreme on the VRM because the stock pads were not doing the job well enough.
> 
> So what do you guys think?


I think the ones EK provided are great.


----------



## GroupB

Just looked it up: the EK pads are between 3-5W/mK, which is not so great vs 11W/mK.

I think I'll go Extreme since I have them lying around... I can probably deal with an extra 2-3C on the core/HBM.

Trying to find monitoring software that reports VRM temp, and nothing so far.
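For a rough sense of why the W/mK rating matters: steady-state conduction through a pad is Q = k·A·ΔT/t, so the heat a pad can pass at a given temperature drop scales linearly with its conductivity. A quick Python sketch (area, thickness, and ΔT are made-up illustration numbers, not measurements from the card):

```python
# One-dimensional conduction through a thermal pad: Q = k * A * dT / t
def pad_watts(k_w_per_mk, area_mm2, thickness_mm, delta_t_c):
    # convert mm^2 -> m^2 and mm -> m so the result comes out in watts
    return k_w_per_mk * (area_mm2 * 1e-6) * delta_t_c / (thickness_mm * 1e-3)

ek = pad_watts(4.0, 600, 1.0, 20)     # ~4W/mK EK-style pad (assumed rating)
fuji = pad_watts(11.0, 600, 1.0, 20)  # 11W/mK Fujipoly Extreme
assert abs(fuji / ek - 11.0 / 4.0) < 1e-9  # heat moved scales linearly with k
```

All else equal, the 11W/mK pad moves ~2.75x the heat of a 4W/mK pad at the same temperature difference, which is why the pad swap shows up in VRM temps.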


----------



## abe_joker

I need HELP guys. I got the RX Vega 64 AC. Driver 17.8.1, a Ryzen [email protected] and 16GB running at 2933MHz. When I open a game the video card stutters every now and then for a second or two. Also, when it stutters, I can hear a stuttering sound inside the case. I can't identify where it is coming from.


----------



## pillowsack

Quote:


> Originally Posted by *abe_joker*
> 
> I need HELP guys. I got the RX Vega 64 AC. Driver 17.8.1, a Ryzen [email protected] and 16GB running at 2933MHz. When I open a game the video card stutters every now and then for a second or two. Also, when it stutters, I can hear a stuttering sound inside the case. I can't identify where it is coming from.


I think the sound coming from your case is coil whine, when the load shifts from high and low the GPU will make cool sounds.

The stuttering sounds like it might be throttling. Are your temps checking out alright? Have you set up the states or anything?


----------



## PontiacGTX

Quote:


> Originally Posted by *buildzoid*
> 
> Ok I can now confirm that the V64 BIOS on a V56 will improve HBM OC. A friend with a V56 that topped out at 960 on the stock BIOS is now pushing 1100 on the V64 BIOS from my V64.


Then it is more a limitation on changing the HBM2 voltage on Vega, hence you don't get the same results as with the VEGA 64 BIOS, no?


----------



## abe_joker

Quote:


> Originally Posted by *pillowsack*
> 
> I think the sound coming from your case is coil whine, when the load shifts from high and low the GPU will make cool sounds.
> 
> The stuttering sounds like it might be throttling. Are your temps checking out alright? Have you set up the states or anything?


It happens in Balanced, Turbo or custom. The power limit % doesn't matter. The whine starts as soon as I open a game. The GPU temp is below 60C while playing RL, and so are the HBM temps.


----------



## abe_joker

The switch on the card is on the left side, towards the back of the case. Game goes from 144fps to 90fps stuttering and then back up. It happens in Rocket League, PUBG or any other.


----------



## pillowsack

Quote:


> Originally Posted by *abe_joker*
> 
> The switch on the card is on the left side, towards the back of the case. Game goes from 144fps to 90fps stuttering and then back up. It happens in Rocket League, PUBG or any other.


Like I said, it sounds like it's throttling. Sometimes it's throttling from heat, sometimes TDP. You should monitor the frequencies and see if they're jumping around. That's why I flashed the AIO 64 BIOS: more TDP.

EDIT: Honestly, that or the drivers are absolute garbage right now, considering my card didn't throttle at all on the stock BIOS and other drivers with the same overclock.


----------



## abe_joker

It can't be thermal since temps are below 60C. I will take the card out and put some thermal paste of my own on it. TDP throttling... that would be weird, no? I got a TX650M from Corsair, which should be enough.


----------



## gupsterg

Quote:


> Originally Posted by *pillowsack*
> 
> So when is someone gonna voltmod it and put out a guide


Buildzoid would have that info, I'm more of a BIOS mod person.

Quote:


> Originally Posted by *PontiacGTX*
> 
> then it is more a limitation to change HBM2 voltage on Vega hence you dont get same results as VEGA 64 bios,no?


This is why I originally believed the 'Security Feature' for locking out BIOS mods is basically there to maintain 'artificial segmentation'.

Fury was so close to Fury X in most reviews IIRC. I unlocked 3840SP on a Fury Tri-X and, clock for clock, it matched a genuine Fury X in the benches I did. The only reason I kept the Fury X was that at the time I got a deal on one, meaning it cost the same as a Fury but had the AIO vs air, and the full SP count etc. would mean better resale IMO.

We just need someone to try out a modded registry to see if the voltage to the HBM changes. It is changeable with VBIOS, and VBIOS sets it using PowerPlay; via the registry mod 'we' are doing the same and it should apply (hopefully).


----------



## PontiacGTX

Quote:


> Originally Posted by *gupsterg*
> 
> Buildzoid would have that info, I'm more of a BIOS mod person.
> 
> This is why I believed originally the 'Security Feature' for locking out bios mod is basically to keep 'Artificial segmentation'.
> 
> Fury was so close to Fury X in most reviews IIRC. I unlocked 3840SP on a Fury Tri-X and clock for clock matched a genuine Fury X in benches I did. The only reason I kept Fury X was at the time I got a deal on one meaning it cost the same as Fury but had AIO vs AIR and the full SP, etc would mean better resale IMO.
> 
> We just need some one to try out a modded registry to see if voltage to HBM changes. It is changing with VBIOS and VBIOS sets it using PowerPlay, via registry mod 'we' are doing the same and should apply.


The small or nonexistent performance difference with a higher CU count only means there is an architecture bottleneck, and AMD keeps working on an architecture which has bottlenecks at the same core count.


----------



## pmc25

Little disappointed that there was no driver update this week, given what a total mess WattMan remains and all the clocking / voltage control issues that 17.8.2 introduced or exacerbated.

Would also be great to be able to re-enable hardware acceleration in browsers if they fixed the freezing (surely they have to know the cause).

Enabling all the hardware features I can wait for, but making WattMan fit for purpose and fixing dynamic clocks, LLC etc really needs to happen sooner rather than later.

I'm waiting for some better drivers before I try the AIO BIOS - possibly some better (updated) vBIOS would be nice too.


----------



## mtrai

Quote:


> Originally Posted by *pmc25*
> 
> Little disappointed that there was no driver update this week, given what a total mess WattMan remains and all the clocking / voltage control issues that 17.8.2 introduced or exacerbated.
> 
> Enabling all the hardware features I can wait for, but making WattMan fit for purpose and fixing dynamic clocks, LLC etc really needs to happen sooner rather than later.
> .


Seriously man... you do realize that WattMan has been a hot mess since it was released with the Polaris GPUs. I gave up on WattMan a long time ago.


----------



## DrZine

Quote:


> Originally Posted by *gupsterg*
> 
> We just need some one to try out a modded registry to see if voltage to HBM changes. It is changing with VBIOS and VBIOS sets it using PowerPlay, via registry mod 'we' are doing the same and it should apply (hopefully
> 
> 
> 
> 
> 
> 
> 
> ).


I just gave it a try. I set the HBM voltage to 1300mV (14 05) from the stock 1350mV (46 05) and applied the reg mod the same way I did to get the increased power limits, which work. However, the HBM voltage is still reported as 1.356V in the new HWiNFO beta.
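For anyone following along, the "46 05" / "14 05" byte pairs quoted above are simply the millivolt values stored as little-endian 16-bit integers, which is how these community PowerPlay-table edits are usually expressed. A quick Python sanity check (the helper names here are mine, not from any tool):

```python
def mv_to_bytes(mv: int) -> bytes:
    # PowerPlay-style tables store voltages as little-endian uint16 millivolts
    return mv.to_bytes(2, "little")

def bytes_to_mv(raw: bytes) -> int:
    return int.from_bytes(raw, "little")

assert mv_to_bytes(1350).hex(" ") == "46 05"  # stock HBM2 voltage quoted above
assert mv_to_bytes(1300).hex(" ") == "14 05"  # the modded value
assert bytes_to_mv(bytes.fromhex("4605")) == 1350
```

Handy when double-checking a hex edit before writing it back into the registry blob.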


----------



## pmc25

Quote:


> Originally Posted by *mtrai*
> 
> Seriously man...you do realize that Wattman has been a hot mess since it was released with the Polaris GPUs. I gave up on wattman long time ago.


But launching with "HBM Memory Voltage" being adjustable (it isn't what is stated but is some other unknown parameter that influences GPU core voltage), then not doing anything about it in 2 further revisions?

That's an absolute joke.

I never used WattMan before, because Afterburner / WattTool worked properly before. Neither do anymore. In fact in my experience Afterburner being opened even once borks the driver to an extent that it needs to be uninstalled, cleaned and reinstalled.


----------



## erase

Quote:


> Originally Posted by *gupsterg*
> 
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> Is there any hardware limitations to decouple the HBM2 voltage and overclocking while being able to power limit the core? Would this be something that could be software or BIOS modded in future, or are we stuck with it like this forever?
> 
> 
> 
> AMD PowerLimit applies to GPU AFAIK, not VRAM/HBM.
Click to expand...

Really? Well, try dropping your power limit and see if you can overclock the memory, because it won't work. If anything the memory will go slower than stock speeds.


----------



## abe_joker

I flashed the LC Vega BIOS and still getting the same stutters. So, BIOS didn't solve the issue. What the heck should I do?


----------



## springs113

Quote:


> Originally Posted by *abe_joker*
> 
> It can't be thermal since temps are below 60c. I will get the card out and put some thermal paste of my own. TDP throttling...would be weird, no? I got a TX650M from Corsair which should be enough.


what motherboard do you have?


----------



## roybotnik

Been having some serious issues with 17.8.2. Is powerplay registry editing the only way to adjust settings for VFE on 17.8.2?

I tried using afterburner to set HBM clocks and power limit, but now my card can't complete basically any compute task. And out of curiosity I tried mining...which resulted in the display being turned off and my card's fans locking at max RPM. Even a reboot didn't work, had to power down completely... Guess I need to DDU


----------



## gupsterg

Quote:


> Originally Posted by *DrZine*
> 
> I just gave it a try. I set the HBM voltage to 1300mV (14 05) from the stock 1350mV (46 05) and applied the reg mod the same way I did to get the increased power limits, which work. However, the HBM voltage is still reported as 1.356V in the new HWiNFO beta.


Cheers, shame it didn't work, +rep for info share.

Just to confirm restart done after applying registry mod?
Quote:


> Originally Posted by *erase*
> 
> Really? Well, try dropping your power limit and see if you can overclock the memory, because it won't work. If anything the memory will go slower than stock speeds.


I don't have VEGA.

I stated PowerLimit apply to GPU AFAIK as that is what it was on all past AMD cards.

I do believe it would be the same for VEGA and some of the results are 'up in the air' as driver is pretty crap IMO.


----------



## abe_joker

Quote:


> Originally Posted by *springs113*
> 
> what motherboard do you have?


X370 Gaming K3 from Gigabyte. It worked perfectly this morning with the 290X I had connected before the Vega. I connected the Vega and, lo and behold, it stutters.


----------



## Ragsters

Someone quick help me. There are two brands of Vega 56 in stock now. Should I get Sapphire or Powercolor?


----------



## Mandarb

Quote:


> Originally Posted by *erase*
> 
> Is there any hardware limitations to decouple the HBM2 voltage and overclocking while being able to power limit the core? Would this be something that could be software or BIOS modded in future, or are we stuck with it like this forever?


I seem to remember someone mentioning that HBM memory was entirely supplied by the PCIe slot.
I think it was Buildzoid since he did the VRM analysis video.


----------



## Mandarb

Quote:


> Originally Posted by *Ragsters*
> 
> Someone quick help me. There are two brands of Vega 56 in stock now. Should I get Sapphire or Powercolor?


They are entirely the same, as they are just reference cards that the "manufacturer" puts their stickers on. If there's a difference it will be in the quality of the customer support.
Someone correct me if I'm wrong.


----------



## Ragsters

Quote:


> Originally Posted by *Mandarb*
> 
> They are entirely the same, as they are just reference cards that the "manufacturer" puts their stickers on. If there's a difference it will be in the quality of the customer support.
> Someone correct me if I'm wrong.


Same price too. I understand about the same card. I just need to know who is the better company to deal with.


----------



## Mandarb

Quote:


> Originally Posted by *abe_joker*
> 
> I flashed the LC Vega BIOS and still getting the same stutters. So, BIOS didn't solve the issue. What the heck should I do?


What are the CPU and HBM temperatures? If the HBM gets too hot it gets downclocked. I seem to remember that you quoted pretty high frequencies (1700MHz+) in your other post. If that's reference air cooled vega I guess that's your issue.


----------



## abe_joker

Quote:


> Originally Posted by *Mandarb*
> 
> What are the CPU and HBM temperatures? If the HBM gets too hot it gets downclocked. I seem to remember that you quoted pretty high frequencies (1700MHz+) in your other post. If that's reference air cooled vega I guess that's your issue.


HBM is below 60. Apparently, my problem was Enhanced Sync? I just switched to Vertical Sync and I am not experiencing the stuttering. Can you guys try Enhanced Sync?


----------



## ashman95

FE /EK water block

OK, so the best way to avoid crashes is to stair-step to the desired frequency via voltage control: S0 800mV, S1 800mV, S2 980mV, S3 980mV, S4 1110mV, S5 1110mV, S6/S7 1160mV. The power-state-choosing algorithm appears to work best, and without complications, when it has choices. I made it easy by choosing power states next to each other that varied only by frequency; it was interesting to see the decision tree at work.
I used the original drivers and set HBM to 1095. I could've used 1100 but I didn't, wanting to keep everything simple. Note that I used 1697MHz as the target frequency; that's because over 1700MHz will work for some programs but not for others, and sometimes crashes would occur. Basically, either it's the drivers or AMD locked the cards under 1700MHz. There really shouldn't be any reason for these cards (water cooled) not to get up there and beyond at reasonable voltages. I could've gone lower with my voltage settings but I just wanted to see the card work.

BEST SPECviewperf scores yet!! Haven't played any games with these settings... I'll wait for the new drivers. The beta drivers are stable but you lose HBM control. OUT
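The stair-stepped ladder described above can be sanity-checked in a few lines of Python (the voltages are the ones quoted in the post; this is only an illustration, not something that talks to the driver):

```python
# Voltage (mV) per power state S0..S7 as listed above; S6/S7 share 1160mV.
voltages = [800, 800, 980, 980, 1110, 1110, 1160, 1160]

# The "stair step": voltage never drops as the state index rises, and states
# pair up so the arbiter mostly switches frequency rather than voltage.
assert all(lo <= hi for lo, hi in zip(voltages, voltages[1:]))
flat_steps = sum(lo == hi for lo, hi in zip(voltages, voltages[1:]))
assert flat_steps == 4  # four flat steps: S0/S1, S2/S3, S4/S5, S6/S7
```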


----------



## Mandarb

Wasn't it established that p5 and below were not editable/do nothing when changed?


----------



## ashman95

Mandarb
false
Quote:


> Originally Posted by *Mandarb*
> 
> Wasn't it established that p5 and below were not editable/do nothing when changed?


Not that I find; I've been watching frequencies all day. SPECview Energy basically runs in states 4-5/6; Vega tries to keep the card at max load, switching to the best state.


----------



## Mandarb

Hmmm... maybe that was falsely established because people were using WattTool, since there everything below P5 was set back to default when edited, after the Set button was pushed.

Never tried to adjust it in WattMan after I noticed that. Plus, when in %-age mode in WattMan, the lower states remain untouched.


----------



## ashman95

Quote:


> Originally Posted by *Mandarb*
> 
> Hmmm... maybe that was false established because people were using WattTool. Because there everything below P5 was set back to default when edited after the set button was pushed.
> 
> Never tried to adjust it in WattMan after I noticed that. Plus when in %-age mode in WattMan the lower states remain untouched.


Definitely, strange things were happening, but for me that wasn't one of them. In fact I used WattTool to set WattMan so I wouldn't have to re-copy the settings: save a profile in WattTool, load it, press Set, and WattMan would have those settings. Cool. The issue I had with WattTool was not being able to set the mem frequency; for that I used WattMan after applying the settings from WattTool.


----------



## Roboyto

Got my PowerColor Vega 64 blocked now and in my relatively small cooling loop.

(2) XSPC EX 240 Rads

Top EX240 has (2) Yate Loon 20mm 'thin' medium fans

Front EX240 has (4) Corsair SP120's push/pull

XSPC dual-bay res with MCP655B pump

XSPC Raystorm & EKFC Vega plexi/nickel

3/8" ID tubing

All the fans are on a single splitter on one motherboard header and set to silent mode in ASUS AISuite...loudest thing is the slight hum of the pump running when you have your head next to the case.

I decided to run CLU on the die and HBM. I used thermal tape to protect around the GPU die. Most interesting thing about this is how the CLU only wanted to stick to where the GPU/HBM were. It looks like a near perfect application, but even when I was brushing over the whole area, it didn't want to stay anywhere but on the GPU/HBM. Pretty cool









I also opted for upgraded thermal pads for all the VRMs. Fujipoly Ultra Extreme to the rescue. I know for sure this stuff gives massive temp drops from testing on my R9 290 a ways back; it was a 23% VRM1 temp reduction. http://www.overclock.net/t/1468593/r9-290-x-thermal-pad-upgrade-vrm-temperatures/0_20



Spoiler: Fuji















I stuck the thermal tape down first and left the protective covering on the top for any additional CLU that was brushed over it.



There was a pinch of CLU left over on the very edge of the tape after removing the film, but I highly doubt it's going to weasel its way anywhere to do damage.



Initial testing of my Vega was done in my HTPC, which is rocking a mildly OC'd 1700 at 3.7GHz with 3200MHz DDR4.

That being said, the benchmark scores aren't a direct comparison, and some of the GS gains could be attributed to my WC'd rig running Z87 with a 4770K at 4.5GHz and 2400MHz DDR3.

Previous best run air cooled at 1742/945 scored 7719 overall with Graphics at 7651. Just installing the block and running TimeSpy with auto/Balanced settings netted a 7272 graphics score.

It is a fairly decent boost overall in GS though, so it is hard to say how much the i7 is contributing. Presently I have the #1 overall TimeSpy score for a Vega 64 card, and my GS is better than nearly all of the top 25-30 listings when searching by GPU only.

Have only benched TimeSpy up to this point. Best GS run with the following settings in WattMan:

Core 1732

HBM 1070

Voltage - Only bumped P6 to 1200mV matching P7

HBM Voltage - Stock 1050mV

50% Power Target

No BIOS flash yet

No tweaking with WattTool

17.8.1 Drivers

7377 Overall Score | 8107 Graphics Score

https://www.3dmark.com/3dm/21912159?



Temps are pretty good thus far, I think... however there has been no extended load due to some crashes and lockups while tinkering. Plus this PC hasn't been powered up since mid-March due to moving and purchasing a house. I'm fairly certain Windows Update was the cause of a couple 'crashes' and random reboots while benching. At most there have been 3 TimeSpy benches back to back, with HWiNFO showing the core peaking at 45C and HBM peaking at 52C; it's 25.5C in my house.

1732/1090 dropped GS by 10 points, but due to CPU score fluctuation it boosted the overall score to 7396, which puts me at #1 currently for any CPU with a Vega 64. I'm ahead of numerous Ryzens and a couple beefier i7's... time to move this rig over to Ryzen as well









https://www.3dmark.com/spy/2317729

Will be playing around with all this more throughout the weekend, but need to catch some







so I can be coherent enough to drive the wife to Midway airport in the AM


----------



## GroupB

I'm done installing my water block, so put me in the club...

Vega 64 Limited with the EK block, extreme pads on the VRMs and IC7 on core/HBM.

If I can figure out that PowerPlay table I will be OCing soon, but it's not clear what to change to boost the power target from, say, +50% to +150%.

Did a run of Fire Strike: graphics score 24820.

1632 @ 1050mV
HBM 1075
Power +50%

During the test the voltage hovered at 1 volt, giving 26C core and 35C HBM with 22C ambient. Temps are good, so I will start pushing, I guess.
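For reference, the percentage power target is just a multiplier on the card's stock board-power limit. A minimal sketch of the arithmetic, assuming the reference Vega 64's roughly 295 W board power (the function name and the 295 W figure are my own, not from this thread):

```python
def power_limit_watts(stock_board_power_w, power_target_pct):
    """Effective power limit for a given power-target percentage.

    A +50% target multiplies the stock limit by 1.5, +150% by 2.5, etc.
    """
    return stock_board_power_w * (1.0 + power_target_pct / 100.0)

# Assuming ~295 W stock board power on a reference Vega 64:
print(power_limit_watts(295, 50))   # +50% target
print(power_limit_watts(295, 150))  # +150% target
```

So going from +50% to +150% roughly doubles the allowed draw, which is why the PowerPlay table edits matter so much on water.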


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Little disappointed that there was no driver update this week, given what a total mess WattMan remains and all the clocking / voltage control issues that 17.8.2 introduced or exacerbated.
> 
> Would also be great to be able to re-enable hardware acceleration in browsers if they fixed the freezing (surely they have to know the cause).


That crap has been broken since the 280x









I disabled HW acceleration in Wallpaper Engine as well, as I would get oddball errors from time to time in video playback... after watching videos in a browser...








Quote:


> Enabling all the hardware features I can wait for, but making WattMan fit for purpose and fixing dynamic clocks, LLC etc really needs to happen sooner rather than later.
> 
> I'm waiting for some better drivers before I try the AIO BIOS - possibly some better (updated) vBIOS would be nice too.


Raja put out some tweaks, yammerin' about perf/watt; I would speculate that the .2 drivers were just that: an attempt to get the temps down by throttling more aggressively while allowing the core frequency to float up just a tad. Had to back my core down a hair, as these drivers were unstable in longer game sessions... benches OK. I suspect the lag "now" in drivers is going to be game-specific tweaks so there's something 'meaty' to say in the build notes: +4% in game X, +8% in game Y... so on and so forth.

To realize any significant across-the-board bump we need a nudge on the HBM side... else, for me, it's 1105 and done. Which seems to be the middle of the road; 1100-1110 is the range for most cards on high fan settings or water.


----------



## Soggysilicon

Quote:


> Originally Posted by *roybotnik*
> 
> Been having some serious issues with 17.8.2. Is powerplay registry editing the only way to adjust settings for VFE on 17.8.2?
> 
> I tried using afterburner to set HBM clocks and power limit, but now my card can't complete basically any compute task. And out of curiosity I tried mining...which resulted in the display being turned off and my card's fans locking at max RPM. Even a reboot didn't work, had to power down completely... Guess I need to DDU


As I understood it, afterburner was a complete lost cause atm. Anyone else?


----------



## Soggysilicon

Quote:


> Originally Posted by *Mandarb*
> 
> They are entirely the same, as they are just reference cards that the "manufacturer" puts their stickers on. If there's a difference it will be in the quality of the customer support.
> Someone correct me if I'm wrong.


Nothing to correct in this statement. Just to add: my Sapphire card is the most bare-bones vidya' card I have ever purchased. Card in a low-rent ESD bag with a spot of ESD tape... a cheap pamphlet for a manual, as generic as can be.

May as well have arrived in a brown paper bag. Margins must be tiiiiigggghhhhttt on these things.


----------



## Soggysilicon

Quote:


> Originally Posted by *abe_joker*
> 
> HBM is below 60. Apparently, my problem was enhanced sync? I just put vertical sync and i am not experiencing the stuttering. Can you guys try enhanced sync?


I run "Ultimate Engine" with no problems. Enhanced Sync, as I understand it, is very new to the drivers... like... 17.18.x new... no telling what the issue is there.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> Got my PowerColor Vega 64 blocked now and in my relatively small cooling loop.
> 
> (2) XSPC EX 240 Rads
> Top EX240 has (2) Yate Loon 20mm 'thin' medium fans
> Front EX240 has (4) Corsair SP120's push/pull
> XSPC dual-bay res with MCP655B pump
> XSPC Raystorm & EKFC Vega plexi/nickel
> 3/8" ID tubing
> 
> Will be playing around with all this more throughout the weekend, but need to catch some
> 
> 
> 
> 
> 
> 
> 
> so I can be coherent enough to drive the wife to Midway airport in the AM


Nice setup, looks to be in line with my own results...



Really need a lill' bit more from that HBM.


----------



## kundica

Quote:


> Originally Posted by *Skinnered*
> 
> 1702 and 1752 mhz, see below
> 
> Wattman1.jpg 954k .jpg file
> 
> 
> Yes, since the 17.8.2's
> 
> To reach near or above 1700 speeds need a lot of juice with the 17.8.2's
> I guess?


So I upped p6 and p7 by 50mv on 17.8.2 and I've been stable all night. Thanks a ton man. I haven't tested the limits yet but I have a long weekend.


----------



## Energylite

Quote:


> Originally Posted by *kundica*
> 
> So I upped p6 and p7 by 50mv on 17.8.2 and I've been stable all night. Thanks a ton man. I haven't tested the limits yet but I have a long weekend.


AC Vega or LC Vega?


----------



## pmc25

Quote:


> Originally Posted by *roybotnik*
> 
> Been having some serious issues with 17.8.2. Is powerplay registry editing the only way to adjust settings for VFE on 17.8.2?
> 
> I tried using afterburner to set HBM clocks and power limit, but now my card can't complete basically any compute task. And out of curiosity I tried mining...which resulted in the display being turned off and my card's fans locking at max RPM. Even a reboot didn't work, had to power down completely... Guess I need to DDU


As has been repeatedly reported, Afterburner merely running (not trying to overclock) breaks the drivers. Uninstall it.

The drivers are a mess at the moment, anyway, particularly 17.8.2 with regard to clocking and voltages. We just need to be patient.


----------



## Chaoz

Quote:


> Originally Posted by *pmc25*
> 
> As has been repeatedly reported, Afterburner merely running (not trying to overclock) breaks the drivers. Uninstall it.
> 
> The drivers are a mess at the moment, anyway, particularly 17.8.2 with regard to clocking and voltages. We just need to be patient.


I have no issues with afterburner, just using it for FPS and usage and such, no OC'ing.

17.8.2 is running fine on my system, can game hours on end without any issues.


----------



## pmc25

Quote:


> Originally Posted by *Soggysilicon*
> 
> Raja put out some tweaks yammerin' about perf/watt, I would speculate that the .2 drivers where just that, an attempt to get the temps down by throttling more aggressively and allowing the frequency to float up on the core just a tad. Had to back my core down just a hair as these drivers were unstable in longer game sessions... bench OK. I suspect the lag "now" in drivers is going to be game specific tweaks to have something 'meaty' to say in the build notes. +4% in game X, +8% in game Y... so on and so forth.


Thing is, in 17.8.2 for me and I think most others, unless you're using WattMan or PowerPlay tables, you have to use more voltage and nominally higher clocks (which it will hit under partial load but not full load - increasing 'idle' power consumption).

So it's resulted in a regression of performance per watt.

Also, the way HBM2 will drop its clock at the slightest lull when under load is just ridiculous (setting P3 state as min and max somewhat ameliorates it). It causes so much stutter and frame dropping. To my knowledge, first card ever to do something like this, and the resulting behaviour is not good.


----------



## Gdourado

How is the 56 vs the 64?
The 64 here is just 70 euros more.
Is it worth it?
With some undervolt and tuning, how far ahead is the 64 vs the 56 in 1440p?

Cheers


----------



## pmc25

Quote:


> Originally Posted by *Gdourado*
> 
> How is the 56 vs the 64?
> The 64 here is just 70 euros more.
> Is it worth it?
> With some undervolt and tuning, how far ahead is the 64 vs the 56 in 1440p?
> 
> Cheers


We can't say anything definitive because the drivers are so prototypical.

All we can say is that adequate cooling really, really helps both, and that neither are anything like as 'slow' as most reviews and the herd of redditors appeared to think at first.


----------



## Skinnered

Quote:


> Originally Posted by *kundica*
> 
> So I upped p6 and p7 by 50mv on 17.8.2 and I've been stable all night. Thanks a ton man. I haven't tested the limits yet but I have a long weekend.


Nice to hear. I think 17.8.2 may utilise the GPU a bit better, hence needing more power, or AMD is still working on optimal power settings in the driver/BIOS.


----------



## Skinnered

Btw, is anyone playing GTA5 with Redux? I get black textures with ENB enabled. Googling finds nothing so far.


----------



## kundica

Quote:


> Originally Posted by *Energylite*
> 
> AC Vega or LC Vega?


LC. I was having issues with my replacement LC card crashing with 17.8.2 driver.


----------



## Energylite

So Buildzoid, I played with Heaven for 2-3 hours this morning and these are the scores I got:
Heaven basic settings:


Heaven extreme settings:


This is so weird, dude. I mean, I get big fps drops at the beginning and at the end of the extreme bench and it messes up the score.
BTW in-game there are no fps drops (tested in 4 games), so I blame the 17.8 drivers.

and this is my GPU settings:


BTW, even if I tune the power table I can't go higher than 1702MHz, so I think there is a mess in WattMan with the core voltage; no matter what I did, it didn't work (tested with 1.25V, 1.275V, 1.3V, 1.325V, 1.35V, 1.375V and 1.4V).


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> So I upped p6 and p7 by 50mv on 17.8.2 and I've been stable all night. Thanks a ton man. I haven't tested the limits yet but I have a long weekend.


Good to hear you are having better luck. I knew something wasn't right if you could run fine on 17.8.1. They really borked the .2 drivers.

I may try upping my voltage as well, as the AIO BIOS installed on my card was unstable unless I played around with the settings in WattMan, mainly clock speed. So maybe there is hope yet to get some more MHz out of it.


----------



## Gdourado

My EU retailer has a price difference of 90 euros between the Vega 56 and the Vega 64.
If I get either one, my plan is to undervolt, tune for constant speed and use it in an ITX build.
Is the 64 worth the 90-euro premium?
Is the performance difference noticeable after both are undervolted and tuned?

What's the verdict?

Cheers!


----------



## Roboyto

Quote:


> Originally Posted by *Gdourado*
> 
> My eu retailer has a price difference of 90 euros between the Vega 56 and the Vega 64.
> If I get either one, my plan is to undervolt, tune for constant speed and use in an ITX build.
> Is the 64 worth the 90 euros premium?
> Is he performance noticiable after both are undervolted and tuned?
> 
> What's the veredict?
> 
> Cheers!


http://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core

Gamers Nexus has been doing some excellent work with Vega. They tested undervolting the 56 and also just released a video and results pushing the 56 to its absolute limits. It was drawing 400W on its own, but performing at GTX 1080 FE levels.

In stock fashion the 56 is already a much more efficient, cooler and quieter card than the 64. It is definitely the better choice for an ITX system. Add undervolting into the mix and you will get good results, as can be seen in Gamers Nexus' numbers.


----------



## springs113

Quote:


> Originally Posted by *Roboyto*
> 
> http://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core
> 
> Gamers Nexus has been doing some excellent work with Vega. They tested undervolting of 56 and also just released a video and results pushing 56 to the absolute limits. It was drawing 400W on it's own, but performing at GTX 1080 FE levels of performance.
> 
> In stock fashion the 56 is already a much more efficient, cooler and quieter card than 64. It is definitely the better choice for an ITX system. Add undervolting into the mix and you will get good results as can be seen in Gamers Nexus' results.


That video was showing the max they wanted to push the card to, not the optimal. I would choose the 56, like you said. I chose the 64; granted, I knew I couldn't wait for the 56, not to mention I wouldn't get a chance to purchase one due to work. So I went for the 64.


----------



## Roboyto

Quote:


> Originally Posted by *springs113*
> 
> That video was showing the max they wanted to push to the card not the optimal. I would choose 56 like you said. I chose the 64, granted i know i couldn't wait for the 56 not to mention i wouldn't be able to get a chance to purchase one due to work. I went for the 64.


The link I posted is for undervolting etc. They have more than one video for Vega 56


----------



## springs113

Sorry, I knew that; I just may have responded incorrectly. A little tired, my apologies.


----------



## Gdourado

I am looking at a build strictly for gaming.
Display is 1440p 144hz.
I will stick to aircooling and the reference is preferable due to itx constraints.
Is the 56 able to be much quieter under gaming than the 64?
Is the 64 more than 10% quicker?
Like 56 runs a game at ultra at 90 fps and the 64 can push 100+?

Cheers!


----------



## pmc25

Quote:


> Originally Posted by *Gdourado*
> 
> I am looking at a build strictly for gaming.
> Display is 1440p 144hz.
> I will stick to aircooling and the reference is preferable due to itx constraints.
> Is the 56 able to be much quieter under gaming than the 64?
> Is the 64 more than 10% quicker?
> Like 56 runs a game at ultra at 90 fps and the 64 can push 100+?
> 
> Cheers!


If it's ITX, wait for the Nano.

Cooler, quieter, less power.


----------



## geriatricpollywog

Has anybody tried Vega and Fiji in crossfire? I am getting RX Vega 64 and I want to know if it's worthwhile to keep my Fury X in a slot.


----------



## looncraz

Quote:


> Originally Posted by *ashman95*
> 
> FE /EK water block
> 
> Ok so the best way to avoid crashes is to stair step to the desired frequency ala voltage control S0-800mv S1- 800mv S2-980 S3-980 S4-1110 S5-1110 S6/S7 1160mv . Appears the Power state choosing algorithm works Best, and without complications when it has choices- I made it easy choising power states next to each other varied only by frequency- it was interesting to see the decision tree at work.
> I used the original drivers I set HBM to 1095, could've used 1100 but I didnt want to keep everything simple- note that I used 1697MHZ as target frequency, that's because over 1700mhz will work for some programs but not for others and sometimes crashes would occur- basically its either the drivers or AMD locked the cards under 1700mhz- there really shouldnt be any reason for these cards (water cooled) not to get up there and beyond, at reasonable voltages- I could've gone lower with my voltage settings but I just wanted to see the card work.
> 
> BEST SPECviewperf scores yet!! havent played any games -with these settings...I'll wait for the new drivers , Bata drivers are stable but you lose HBM control. OUT


Sorry if this has been asked to death - but how did you enable all of the P states?

I am just trying to get the card to keep the clocks I set and ONLY downclock if the power limit is reached.

And, of course, the stupid driver downclocks the HBM too soon, despite it running at 42C (EK FC waterblock)... and, even worse, the card downclocks just because the GPU usage is light. I want full clocks until GPU usage is < 10% for a few seconds, or until a power or temperature limit is reached. PERIOD, AMD!
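For what it's worth, the stair-stepped voltage ladder ashman95 quoted above can be written out explicitly. A minimal sketch with the per-state voltages from that post; the helper name is mine, and only voltages are listed since the intermediate state frequencies weren't given:

```python
# Voltage ladder as described: adjacent P-states paired at the same voltage,
# so the state arbiter always has a neighbouring state that costs nothing extra.
P_STATE_MV = {
    0: 800, 1: 800,
    2: 980, 3: 980,
    4: 1110, 5: 1110,
    6: 1160, 7: 1160,  # P7 target: 1697 MHz core, HBM at 1095
}

def is_stair_stepped(ladder):
    """True if voltage never decreases from one P-state to the next."""
    mv = [ladder[s] for s in sorted(ladder)]
    return all(a <= b for a, b in zip(mv, mv[1:]))

print(is_stair_stepped(P_STATE_MV))
```

The pairing rationale (giving the decision tree easy adjacent choices) is my reading of the post, not something AMD documents.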


----------



## dagget3450

Quote:


> Originally Posted by *0451*
> 
> Has anybody tried Vega and Fiji in crossfire? I am getting RX Vega 64 and I want to know if it's worthwhile to keep my Fury X in a slot.


I tried with a Vega FE and a Fury X and, needless to say, it didn't work. My issue would be different than with a Vega 64/56, though: the drivers would only load the Fury X or the Vega FE, so only one was active at a time. Couldn't get both active at the same time to even attempt it.

Would be curious about a Fury X and Vega 64/56, though chances are next to none since they are different generations.


----------



## Rootax

Stupid question, but are the Vega FE drivers (like 17.8.2) the same as the RX edition, gaming-wise? Like, if I can get a good price on an FE, same price as an RX 64, is it OK even for gaming? Thx!


----------



## ashman95

Quote:


> Originally Posted by *looncraz*
> 
> Sorry if this has been asked to death - but how did you enable all of the P states?
> 
> I am just trying to get the card to keep the clocks I set and ONLY downclock if the power limit is reached.
> 
> And, of course, the stupid driver down clocks the HBM too soon, despite it running at 42C (EK FC waterblock)... and, even worse, the card down clocks just because the GPU usage is light... I want full clocks until the GPU usage is < 10% for a few seconds or when a power or temperature limit is reached. PERIOD, AMD!


I hear of others having the same issue. I have the Frontier FE; different drivers, I suppose.

Vega downclocks if there's no work. When running full, its decision tree looks for the best power/frequency state to keep the card at max load. When I benched SPECviewperf, I watched as it chose the power/freq states.


----------



## Roboyto

Quote:


> Originally Posted by *springs113*
> 
> Sorry I knew that, I just may have responded incorrectly. A little tired my apologies.


No worries 

Quote:


> Originally Posted by *Gdourado*
> 
> I am looking at a build strictly for gaming.
> Display is 1440p 144hz.
> I will stick to aircooling and the reference is preferable due to itx constraints.
> Is the 56 able to be much quieter under gaming than the 64?
> Is the 64 more than 10% quicker?
> Like 56 runs a game at ultra at 90 fps and the 64 can push 100+?
> 
> Cheers!


It would be quite beneficial for you to view the link I posted from Gamers Nexus regarding undervolting the card while still being able to OC it, with minimal additional power draw. The TDP of the 56 is 30% less than the 64's, 210W vs ~300W, yet it still yields a good portion of the performance. I don't remember exact numbers, but you should check that link.


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> Nice setup, looks to be in line with my own results...
> 
> 
> 
> Really need a lill' bit more from that HBM.


Very nice results and setup.

I have found that the dual 'standard sized' 240mm rads have been extremely effective. They've now cooled 3 generations of hot and power-hungry AMD GPUs: R9 290, R9 Fury X and now Vega 64. My MCP655 pump has been rock solid for 6 years and 5 months now LOL, that's crazy. The only major maintenance for the water loop since its inception in 2011 was replacing bargain barbs... never again lol... and the XSPC dual-bay reservoir. The acrylic must get brittle over time; the first one lasted nearly 5 years, and by the end it was splitting and cracking quite badly all over the place... NEVER leaked though.

If you clean all the parts before assembling and take the necessary precautions, draining/filling the loop isn't even necessary in my experience. I use 90% isopropyl and then distilled water to clean/rinse everything before assembly, and have biocide/a silver kill coil to keep the bad stuff from growing. I ran my R9 290 on the same distilled water for ~2.5 years until I purchased my Fury X and had to drain the system to install the new card. No floaties, chunks, or nasty stuff came out... just distilled water.



Not fully assembled as the Z87/Haswell is coming out in favor of Ryzen...but this gets the point across. Lots of stuff crammed in this compact mATX tower...Dual bay res w/ pump, pair of 240 rads w/ 6 fans, (3) 2.5" SSDs and a 6TB WD Black.



My card is actually capable of clocking the HBM all the way to 1100, but it adversely affects the TimeSpy score, so I guess I'll have to test other benchmarks to see what happens. I'll probably settle on 1050 or so, if I had to guess, for the sake of overall stability... but I won't know until more benching/testing is done.


----------



## ashman95

Quote:


> Originally Posted by *ashman95*
> 
> I hear others having same issue- I have Frontier FE, different drivers I suppose.
> 
> Vega down clocks if there's no work , when running full, its decision tree looks for best power/frequency state to keep the card at max load. When I benched Specviewperf, I watched as it chose the power/freq states.


Just thinking: AMD is perhaps controlling every BIOS aspect of these cards. It makes sense, too; if the 64s were open, they would be the best pick for AI and workstation workloads because of price. The only difference between the cards, besides perhaps the amount of HBM2, is that the 56 uses 1.6 Gbps HBM2 vs the 2.0 Gbps in the higher cards.

http://wccftech.com/amd-lower-priced-gaming-optimized-radeon-rx-vega-coming/
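That pin-speed difference translates directly into peak memory bandwidth. A quick sketch of the arithmetic, assuming a 2048-bit bus (two 1024-bit HBM2 stacks, which is my assumption, not stated in the post); the function name is mine:

```python
def hbm2_bandwidth_gbs(bus_width_bits, pin_speed_gbps):
    """Peak bandwidth in GB/s: bus width times per-pin rate, over 8 bits/byte."""
    return bus_width_bits * pin_speed_gbps / 8

# Assuming 2 stacks x 1024-bit = 2048-bit bus on both cards:
print(hbm2_bandwidth_gbs(2048, 1.6))  # 1.6 Gbps HBM2
print(hbm2_bandwidth_gbs(2048, 2.0))  # 2.0 Gbps HBM2
```

That works out to roughly 410 GB/s vs 512 GB/s at those two pin speeds, which is why a nudge in HBM clock moves benchmarks so much.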


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> Very nice results and setup.
> 
> I have found that the dual 'standard sized' 240mm rads have been extremely effective. They've now cooled 3 generations of hot and power hungry AMD GPUs...R9 290, R9 Fury X and now Vega 64. My MCP655 pump has been rock solid for 6 years and 5 months now LOL, that's crazy. Only major maintenance for the water loop since it's inception in 2011 was replacing bargain barbs...never again lol...and the XSPC dual-bay reservoir. The acrylic must get brittle over time, nearly 5 years the first one lasted, as it was splitting and cracking quite badly all over the place...NEVER leaked though. If you clean all the parts before assembling and take necessary precautions, draining/filling the loop isn't even necessary in my experience. I Use 90% isopropyl and then distilled water to clean/rinse everything before assembly and havie biocide/silver kill to keep the bad stuff from growing. I ran my R9 290 on the same distilled water for ~2.5 years until I purchased my Fury X and had to drain the system to install new card. No floaties, chunks, or nasty stuff came out...just distilled water.
> 
> 
> 
> 
> Not fully assembled as the Z87/Haswell is coming out in favor of Ryzen...but this gets the point across. Lots of stuff crammed in this compact mATX tower...Dual bay res w/ pump, pair of 240 rads w/ 6 fans, (3) 2.5" SSDs and a 6TB WD Black.
> 
> 
> 
> 
> My card is capable of clocking HBM all the way to 1100 actually, but it adversely effects the TimeSpy score so...I guess I'll have to test in other benchmarks to see what happens. I'll probably settle on 1050 or so if I had to guess for the sake of overall stability...but won't know until more benching/testing is done.


Nice looking setup! Very efficient use of the space, definitely 10 lbs. of ^%&*% in a 5 lbs. bag!







Your thermal efficiency is remarkable. The MCP655 has been an ole' workhorse for ages; did you modify it? MCP355 (Swiftech Laing re-brand) myself, still rockin' the XSPC res top and blue centrifugal pump head.

I wasn't quite as lucky as you: after one of my moves the original Phobya 360 was damaged and developed a leak, which over time and with my neglect allowed the pump head to cavitate... which resulted in one of the FETs burning up... so I lost a pump and a rad... whoops... never had any problems with barbs (Bitspower). Ahh well, I kept the pump around for spares in case its replacement ever needs some donor bits. On the topic of fittings, though, I have never "not" used plumber's tape on all my taps... so I can't really speak to fitment. Machining on this stuff is pretty hit or miss.

Your 240s are dual plenum? Your runs look nice and clean so your flow rate stays peppy? Corsair fans with rubber on the edges, push pull on that front one and push on the top? Looks like you would have the space to run a decoupler / spacer on the front rad, at least one half of it.

The crazing in the acetal/plexi is fairly common on that xspc stuff, there is some on my pump res but I haven't had any issues with lossy pressure. The 5.25 dual bay res is a relatively new addition to mine and it hasn't developed any issues so far. Never used a biocide myself, just distilled and a coil... recently added a silver bullet cap which I'll monitor for corrosion as time goes on (very little fluid flow near the silver). The only corrosion I have seen is a little weld greening in the 360 rad which is to be expected and on the jet plate on the cpu... again its an old block and not that big of a deal, as it came with a couple.

My cleaning routine is usually just some low-concentration white vinegar with distilled (swish swish), and distilled to rinse; that may not be as necessary anymore as it was back in the day, when rads shipped with lots of loose flux tossed in by the manufacturers. Good luck with your Vega... I'll be lurking here, seeing if anyone manages to dial that HBM up any more.


----------



## 113802

For anyone using the Vega 64 AIO variant. Keep the stock fan. It's a Gentle Typhoon 3000 RPM fan







If you don't like the noise, drop it to 2150RPM

Modded my card for no reason


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> Nice looking setup! Very efficient use of the space, definitely 10 lbs. of ^%&*% in a 5 lbs. bag!
> 
> 
> 
> 
> 
> 
> 
> Your thermal efficiency is remarkable. The MCP655 has been a ole' workhorse for ages, did you modify it? MCP355 (Swiftech Liang re-brand) myself, still rockin' the XSPC res top and blue centrifugal pump head.
> 
> I wasn't quite as lucky as you, after one of my moves the original Phobya 360 was damaged and developed a leak which over time and with my neglect allowed the the pump head to cavitate... which resulted in one of the fets burning up... so lost a pump and rad... whoops...never had any problems with barbs... (bitspower). Ahh well, I kept the pump around for spares in case it's replacement ever needs some donor bits. On the topic of bits though I have never "not" used plumbers tape on all my taps... so I can't really speak to fitment. Machining on this stuff is pretty hit or miss.
> 
> Your 240s are dual plenum? Your runs look nice and clean so your flow rate stays peppy? Corsair fans with rubber on the edges, push pull on that front one and push on the top? Looks like you would have the space to run a decoupler / spacer on the front rad, at least one half of it.
> 
> The crazing in the acetal/plexi is fairly common on that xspc stuff, there is some on my pump res but I haven't had any issues with lossy pressure. The 5.25 dual bay res is a relatively new addition to mine and it hasn't developed any issues so far. Never used a biocide myself, just distilled and a coil... recently added a silver bullet cap which I'll monitor for corrosion as time goes on (very little fluid flow near the silver). The only corrosion I have seen is a little weld greening in the 360 rad which is to be expected and on the jet plate on the cpu... again its an old block and not that big of a deal, as it came with a couple.
> 
> My clean usually is just some white vinegar at a low concentration with distilled swish swish, and distilled to rinse; that may not be necessary as much anymore as it was back in the day when the rads shipped with lots of loose flux tossed in by the manufacturers. Good luck with your Vega... I'll be lurking here seeing if anyone manages to get that HBM dialed up any more.


Thank you... those pics are awful and honestly do it no justice









The initial project to get the PC set up like you see it now literally took me a couple of months... it was irritating and exhausting... but it has now been functioning in a similar state for about 3.5 years. The PSU got an upgrade from a Rosewill Capstone 650W when I received the RM1000X you see in there for a Newegg EggXpert review. The GPU has now changed twice. The storage drive was originally a 2TB, and then I went to the 6TB. And the 3 SSDs, housed stealthily on the rear side of the chassis, have been in use for quite some time now... a 128GB Vertex 3 is still kicking, along with a 256GB Crucial M500 and a 240GB Intel 530.

If you're bored you can see the build log: http://www.overclock.net/t/1456279/honey-i-shrunk-the-ultra-tower-beast-my-journey-to-creating-a-more-compact-pc-with-an-r9-290/0_20

The EX 240s are just the standard variants: dual ports at one end of the radiator. The tubing run is highly efficient and probably as short as I could possibly make it. CPU -> Rad -> GPU -> Rad -> Res/Pump just made the most sense to me: water heated by the CPU gets cooled on its way to the GPU, etc.

The fans on the top radiator are not Corsair. They are some Yate Loon "slim" 20mm fans; standard thickness are 25mm. They don't have the best airflow statistics compared to other fans...but they did have the best airflow for the 20mm of space that I required to fit everything in there







They do a pretty decent job honestly...when CPU/GPU are working hard, there is a fair amount of heat expelled out the top of the case.

For the $8 or whatever the biocide costs, I feel it's cheap insurance. A bottle will likely last you a lifetime, as it only requires a couple of drops. I have a couple of kill coils in the reservoir, and they have worked marvelously for years now.

I'm sure there will be more beneficial tweaks to come collectively and for the HBM specifically.


----------



## GroupB

I confirm Vega heat is nowhere near an OC 290. So far, running [email protected] 1175 and HBM at 1075 with an OC'd i7 6700k, the Vega core hovers in the 26C range and HBM in the 35C range, vs my old 290 that was going 55 on core and 65 on VRM; Vega heat is nothing. Of course I have not pushed Vega yet, but so far temps are very good.

Using EX 240 and EX 420 rads, an MCP35X (since 2010, still going strong), an XSPC Raystorm, and an EK Vega block.


----------



## Gdourado

From what we can see so far with tweaking and undervolting, is it worth waiting for AIB cards?
Or is a reference Vega 56 a good buy?
I know the VRM is great.
But how about the cooler?
Can an AIB card bring big improvements to temps and noise?


----------



## Roboyto

Quote:


> Originally Posted by *GroupB*
> 
> I comfirm vega heat is nowhere near a OC 290 , So far running [email protected] 1175 and hmb at 1075 with a i7 6700k OC vega core hover in the 26C and hmb in the 35C vs my old 290 that was going 55 on core and 65 on vrm, vega heat is nothing. off course I did not push vega yet but so far temps are very good.
> 
> Using ex 240 and ex 420 rad , mcp35x ( since 2010, stil going strong), xspc raystrom and ek vega block


Your VRMs on your 290 could likely have benefited greatly from better thermal pads. VRM1 for me was within (above OR below) a couple C of core temp once using Fuji Ultra Extreme pads. My 290 was beast mode, running ~1300/1700 depending on the bench in question, with +200mV and 50% power.

You are correct, temps are definitely lower on Vega compared to Hawaii. I noticed this same thing with my Fury X once fully under water; LOW temps. I'm curious to see what the VRM temps are once that is being monitored correctly. I feel like they should be pretty low since there are so many phases compared to Hawaii.

It will be interesting to see what happens with these cards once AB, Trixx, GPU Tweak are functioning and the masses can tweak voltage/power with little hassle.

Quote:


> Originally Posted by *Gdourado*
> 
> From what we can see so far with tweaking and undervolting, is it worth to wait for aib Cards?
> Or is a reference Vega 56 a good buy?
> I know the VRM is great.
> But how about the cooler?
> Can an aib card bring big improvements to Temps and noise?


AIB cards always bring big improvements to temps and noise. They're likely going to be able to bring some solid performance boosts as well if they allow more than the reference power limit.

Vega 56 I think is still a decent buy if you're willing to play around with it and find optimal settings. Gamers Nexus was comparing the reference 56 to AIB (EVGA SC?) 1070 and the 56 was winning more than losing.

The only issue you may have is the size of the AIB 56 cards in your ITX chassis...if they want to push the performance of the 56 with higher power limits and voltages, it will take a beefy cooler to tame the heat.


----------



## GroupB

Quote:


> Originally Posted by *Roboyto*
> 
> Your VRMs for your 290 could have likely benefited greatly from better thermal pads. VRM1 for me was within (above OR below) a couple C of core temp once using Fuji Ultra Extreme pads. My 290 was beast mode running ~1300/1700, pending the bench in question, with +200mV and 50% power.
> 
> You are correct, temps are definitely lower on Vega compared to Hawaii. I noticed this same thing with my Fury X once fully under water; LOW temps. I'm curious to see what the VRM temps are once that is being monitored correctly. I feel like they should be pretty low since there are so many phases compared to Hawaii.
> 
> It will be interesting to see what happens with these cards once AB, Trixx, GPU Tweak are functioning and the masses can tweak voltage/power with little hassle.
> 
> AIB cards always bring big improvements to temps and noise. They're likely going to be able to bring some solid performance boosts as well if they allow more than the reference power limit.
> 
> Vega 56 I think is still a decent buy if you're willing to play around with it and find optimal settings. Gamers Nexus was comparing the reference 56 to AIB (EVGA SC?) 1070 and the 56 was winning more than losing.
> 
> The only issue you may have is the size of the AIB 56 cards in your ITX chassis...if they want to push the performance of the 56 with higher power limits and voltages, it will take a beefy cooler to tame the heat.


Why do you think I have Extreme pads left over?







Of course my R9 290s had Extreme pads on the VRMs. But I had 2 in the loop at the temps I gave in the previous post; they were at 1.3V, 1245/1350, while doing ETH at 35MH/s. Max 10C from core, and more often closer to 6C, was not bad at all. While gaming, with a little less OC on one and the other idle, it was more in the 50C range for core and 55 for VRM.

Now they're both doing ETH at 35MH/s at the same frequencies I just gave, both on a 360 rad with a brand new MCP55X and a T-line, so there's no water mass (did not want to spend too much). They're now doing 45C core and 52C VRM even after 36+ hours; they are closer to the window and have no CPU in the loop now, but less rad.


----------



## Soggysilicon

Quote:


> Originally Posted by *Gdourado*
> 
> From what we can see so far with tweaking and undervolting, is it worth to wait for aib Cards?
> Or is a reference Vega 56 a good buy?
> I know the VRM is great.
> But how about the cooler?
> Can an aib card bring big improvements to Temps and noise?


Pricing being what it is, Vega is an enthusiast and miner's buy... or if you have a snazzy FreeSync monitor that is in desperate need of a vidya' card.









Enthusiast... get a dopamine hit...

Miners... money in money out...

Freesync / Ultimate Engine... if you have a widescreen, Vega is the only real choice... finally...

Everything else... not sure it would make much sense... of course this assumes you can find one to buy.









In your case I would maybe wait for the Nano or a board partner card and see if they have a slim cooler or an interesting WC edition...


----------



## yeayea911

Hi everyone, recent addition to the forums and a long-time lurker. So I have a question. I recently purchased a brand new build (AMD TR 1950X, ASUS Zenith Extreme, and two Vega FEs). I built the system, and it seems the only driver I can get to work is the 17.6 driver that was originally released. I tried the 17.8.2 beta, but that contains only the "PRO" profile and no "GAME" mode to switch to. Gaming performance is not bad, but could def be better. Has anyone figured out a driver to install to unlock the full potential of the FE cards, or will we FE owners have to wait until a new driver is released?

Thanks for the help,


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> For anyone using the Vega 64 AIO variant. Keep the stock fan. It's a Gentle Typhoon 3000 RPM fan
> 
> 
> 
> 
> 
> 
> 
> If you don't like the noise, drop it to 2150RPM
> 
> Modded my card for no reason


Yeah, I found this out today when I swapped out my fan for Be Quiet! Silent Wings 3 and saw the label on the back of the stock fan. The SW3's PWM responded strangely with the card so I ended up going back to stock.


----------



## dagget3450

Quote:


> Originally Posted by *yeayea911*
> 
> Hi everyone, recent addition to the forums and a long time lurker. So I have a question. I recently purchased a brand new build (AMD TR1950x, ASUS Zenith extreme and two Vega FE's) I built the system and it seems that the only driver I can get to work is the 17.6 driver that was originally released. I tried the 17.8.2 beta but that contains only the "PRO" profile and no "GAME" mode to switch. Gaming performance is not bad, but could def be better. Has anyone figured out a driver to install to unlock the full potential of the FE cards or will we FE owners have to wait until a new driver is released?
> 
> Thanks for the help,


I have this same issue, but on a Ryzen 1700X and dual Vega FEs. It looks like for now the drivers are holding us up; hopefully they soon make it all universal and enable all functions. I have been stuck on the 17.6 launch drivers due to this same issue. Crossfire doesn't work either on the 17.8.2 drivers without the Game mode options.


----------



## ontariotl

*kundica* can you post your secondary bios from the AIO? I'm thinking my card will run at default with this version instead of the High power version of the bios. At least I would like something to play around with.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> *kundica* can you post your secondary bios from the AIO? I'm thinking my card will run at default with this version instead of the High power version of the bios. At least I would like something to play around with.


I posted them both a while ago, they were in the same zip. When I get back to my computer I can upload again.


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> I posted them both a while ago, they were in the same zip. When I get back to my computer I can upload again.


Thanks. The .zip file I grabbed from here only had the high power version. I also looked back in this thread and couldn't find your zip file anymore.


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> Thanks. The .zip file I grabbed from here only had the high power version. I also looked back in this thread and couldn't find your zip file anymore.


How about this? http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/280#post_26306552


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> How about this? http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/280#post_26306552


I don't know why that didn't come up in my search but have it now. Thanks!


----------



## SuperZan

Sapphire RX 64 incoming. I'll be putting it under water soon thereafter. Yeah, I already have a 1080, but I'm bored with it.


----------



## elderblaze

My Vega 56 came in.

I've run the Heaven benchmark full screen for 2 hours at 1440p, 8x AA, extreme settings.

I was able to get these settings stable thus far:

Core P6:1020 P7: 1025
Power Target +50%
HBM 900 MHZ
Default Fan and Temp profile.

This results in a ~2350 RPM fan and 75-79C temps. The fan is quite fine at this speed, not bad at all.

I have not found the ceiling on HBM, but 950mhz is not stable, not instant crash, but crash within 15-30 minutes on heaven.

Firestrike GPU score at those settings : 22,208

I'm testing P6: 1010 and P7: 1015 now. Don't know if it will be stable, but temps and fan speed are down even more... 74C load with a 2200 RPM fan.

The 1025mV setting results in a clock speed of around 1520MHz in the Heaven benchmark. A frequency slider bump of even 2.5% will hard lock the system, so I gave up on that. I suspected I was getting close to the limit at 1025 if a 2.5% overclock would hard lock, but stability testing at 1015 is actually going pretty well, thus far.
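The approach described here, stepping the voltage down a notch at a time and keeping the lowest setting that survives a long stress run, can be sketched as a simple loop. `is_stable_at` is a hypothetical stand-in for "two hours of Heaven without a crash":

```python
# Sketch of an undervolt search: step the P-state voltage down in small
# increments and keep the last setting that passes a stability test.
# is_stable_at() is a hypothetical stand-in for a long stress run.
def find_lowest_stable_mv(start_mv, floor_mv, step, is_stable_at):
    best = None
    mv = start_mv
    while mv >= floor_mv:
        if not is_stable_at(mv):
            break  # first failure: lower voltages will very likely fail too
        best = mv
        mv -= step
    return best

# Toy example: pretend the card happens to be stable down to 1015 mV.
print(find_lowest_stable_mv(1025, 950, 5, lambda mv: mv >= 1015))  # -> 1015
```

In practice each `is_stable_at` check is hours of benching, which is why people creep down 5-10mV at a time rather than bisecting.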

I have a 1070 in this system as well, but I bought it for mining. I already owned a 144Hz 1440p FreeSync screen (BenQ XL2730Z). I briefly tried to move to the Dell SG2716D (G-Sync) but it sucked pretty bad, so I returned it and bought Vega to pair with the BenQ. It's actually rather hard to find a high-quality 144Hz 1440p G-Sync TN screen. There are more options, of higher quality, on the FreeSync side.


----------



## Irev

Anyone know where I can get my hands on one of these?

https://techgage.com/wp-content/uploads/2017/08/AMD-Radeon-RX-Vega-64-Cube-Close-up-1.jpg


----------



## mohiuddin

Anybody with a Vega 56 able to give an idea of how much of a performance boost you get by just flashing to the Vega 64 BIOS?


----------



## LocoDiceGR

Quote:


> Originally Posted by *Irev*
> 
> anyone know were I can get my hands on one of these?
> 
> https://techgage.com/wp-content/uploads/2017/08/AMD-Radeon-RX-Vega-64-Cube-Close-up-1.jpg


If you find out, let me know


----------



## Newbie2009

Some Vega 64 scores at stock core clocks, OC on the memory, 1000mV; max power draw of the whole system from the wall was 470W. On the power save BIOS, as I had to flash the normal one with the watercooled BIOS.

https://www.3dmark.com/fs/13530199



https://www.3dmark.com/fs/13530233



https://www.3dmark.com/fs/13530267



Some games which my 290X crossfire struggled with run much better, like The Division for example.


----------



## FlanK3r

ROG Strix Vega 64


----------



## Tyrael

Quote:


> Originally Posted by *mohiuddin*
> 
> Anybody with vega56, can give an idea of how much performance boost you guys get by just flashing to vega 64 bios?


Up to 20%; see http://wccftech.com/amd-rx-vega-56-serious-performance-upgrade-with-vega-64-bios/


----------



## Gdourado

These are my current fire strike and fire strike extreme scores:





I am still running my 1080p G-Sync display.
If I go to a Vega 56 to power a 1440p display, can I expect improved gaming performance compared to my current setup?

What kind of scores is an undervolted Vega 56 getting?
How about a 56 with a flashed 64 BIOS and an overclock?

Cheers


----------



## ontariotl

Quote:


> Originally Posted by *Gdourado*
> 
> These are my current fire strike and fire strike extreme scores:
> 
> 
> 
> 
> 
> I am still running my 1080p g-sync display.
> If I go to a Vega 56 to power a 1440p display, can I expect improved gaming performance in regard to my current setup?
> 
> What kind of scores is an undervolting Vega 56 getting?
> How about a 56 with flashed 64 bios and overclock?
> 
> Cheers


Take a look at this link; it should give you some of the answers you seek about flashing and results with Vega 56.

https://videocardz.com/72299/amd-radeon-rx-vega-56-gets-faster-with-vega-64-bios


----------



## Roboyto

Quote:



> Originally Posted by *Newbie2009*
> 
> Some vega 64 scores at stock clocks on core, oc on the memory. 1000mv, max power draw of whole system from the wall 470w. On the power save bios as had to flash normal one with the watercooled bios.


3DMark reporting correctly for 1200 on the HBM?


----------



## Newbie2009

Quote:


> Originally Posted by *Roboyto*
> 
> 3DMark reporting correctly for 1200 on the HBM?


nope, 1100mhz


----------



## Gdourado

I am browsing several reviews to try and get a feel for how loud the reference Vega 56 is.
But then I get confused.
For example...
TechPowerUp says the Vega is 44 dB under load. And for comparison, a 1080 Ti Lightning is just 33 under load.
That makes me think the Vega is seriously loud.
But then I go to Guru3D and it also says Vega is 44 dB. But it measures the Lightning at 39.
So not a huge difference from a reference blower to a top-of-the-line triple-fan cooler...
So, which is it...
Is this reference AMD cooler loud or not?
I have experience with a reference 290X. When I had it, I had to game with headphones, as I needed a custom fan curve to prevent throttling.
Now I have a Gigabyte G1 Gaming 980 Ti.
While not quiet, it is perfectly acceptable for me and I don't find it loud at all.
So in comparison, how is Vega?

Cheers


----------



## Roboyto

Quote:


> Originally Posted by *Newbie2009*
> 
> nope, 1100mhz










I was giddy for a little bit there that someone smashed past the low 1100 barrier that seems to be uniform thus far.

Looks like these smaller fabs cut down on the variation between chips. Once a waterblock is in place it seems like everyone is quite close in clock speeds. Hawaii seemed to have considerably higher variation where there were clear silicon lottery winners.

Anyone tried the Superposition bench yet? I did a couple runs last night on the 4K Optimized DX default, and my card was power throttling hard with overclocked settings. Wasn't able to breach 1100 HBM here; 1070 was the best I could manage to complete the bench, but throttling hard.

Check out the Core/HBM clock fluctuation when trying to run 1732/1070 with 1.15V for HBM and 50% power





I then decided to just try running the default Turbo settings, and the score was only 112 points behind.



Then used Turbo defaults with 50% power and overtook the OC score by 166 points:



The next test was to bump the HBM to 1045 and leave the core at stock settings with 50% power... this just netted a score decrease again:


----------



## kundica

Quote:


> Originally Posted by *Roboyto*
> 
> 
> 
> 
> 
> 
> 
> 
> I was giddy for a little bit there that someone smashed past the low 1100 barrier that seems to be uniform thus far.
> 
> Looks like these smaller fabs cut down on the variation between chips. Once a waterblock is in place it seems like everyone is quite close in clock speeds. Hawaii seemed to have considerably higher variation where there were clear silicon lottery winners.
> 
> Anyone tried Super Position bench yet? I did a couple runs last night on the 4K Optimized DX default, and my card was power throttling hard with overclocked settings. Wasn't able to breach 1100 HBM here, 1070 was the best I could manage to complete the bench, but throttling hard.
> 
> Check out the Core/HBM clock fluctuation when trying to run 1732/1070 with 1.15V for HBM and 50% power
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I then decided to just try running the default turbo settings and the score was only behind 112 points.
> 
> 
> 
> 
> Then used Turbo defaults with 50% power and overtook the OC score by 166 points:
> 
> 
> 
> 
> Next test was to bump the HBM to 1045 and leave the core at stock settings and 50% power...this just netted a score decrease again:


This is my 64 LC at stock core, +50% power limit with HBM at 1050 which is what I game at. I'll test later at 1100.


----------



## Roboyto

Quote:


> Originally Posted by *Gdourado*
> 
> I am browsing several reviews to try and get a feeling of how loud the reference Vega 56 is.
> But then I get confused.
> For example...
> Techpowerup says the Vega is 44 dB under load. And for comparison a 1080ti lightning is just 33 under load.
> That makes me think the Vega is seriously loud.
> But then I go to guru3d and it also says Vega is 44 dB. But then measures the lightning at 39.
> So not a huge difference from a reference blower to a top of the line triple fan cooler...
> So, what is it...
> Is this reference amd cooler loud or not?
> I have experience with a 290x reference. When I had it, I had to game with headphones as I needed a custom fan curve to prevent throttle.
> Now I have a gigabyte g1 gaming 980ti.
> While not quite, it is perfectly acceptable for me and I don't find it loud at all.
> So in comparison how is Vega?
> 
> Cheers


Vega 56 should most certainly be quieter than a reference 290X. It has significantly lower TDP; Vega 56 is 210W TDP while 290X is 290W.

With a little tweaking by undervolting and using the 50% power target, performance can be drastically improved while consuming somewhere in the neighborhood of 70W less. This is confirmed by Gamers Nexus results and this German site:

https://translate.google.de/translate?sl=de&tl=en&js=y&prev=_t&hl=de&ie=UTF-8&u=https%3A%2F%2Fwww.hardwareluxx.de%2Findex.php%2Fartikel%2Fhardware%2Fgrafikkarten%2F44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html&edit-text=

Noise level tolerance is subjective, so if you're happy now with the noise emitted by your G1 980 Ti, going to the reference Vega is most likely going to be a step in the wrong direction.

Neither site gives noise levels at the undervolted settings, however. One would assume that if the card is drawing ~70W less, then the noise would likely come down... but without proof it is hard to say.
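One thing worth remembering when comparing those reviewer numbers: dB is a logarithmic scale, so the gaps are bigger than they look. Sound power ratio between two readings is 10^(ΔdB/10); as a general psychoacoustic rule of thumb, +10 dB is perceived as roughly twice as loud. A quick check using the figures quoted above:

```python
# Decibels are logarithmic: the power ratio between two readings is
# 10 ** (difference / 10). Figures are the reviewer numbers quoted above.
def power_ratio(db_a: float, db_b: float) -> float:
    """Ratio of sound power between two dB readings (db_a vs db_b)."""
    return 10 ** ((db_a - db_b) / 10)

# Guru3D: Vega reference 44 dB vs 1080 Ti Lightning 39 dB
print(round(power_ratio(44, 39), 2))  # ~3.16x the sound power
# TechPowerUp: 44 dB vs 33 dB is a far larger gap
print(round(power_ratio(44, 33), 1))  # ~12.6x
```

So the "not a huge difference" 5 dB gap is still over 3x the sound power; the sites likely just measure at different distances.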


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> This is my 64 LC at stock core, +50% power limit with HBM at 1050 which is what I game at. I'll test later at 1100.


Thanks for posting.

I wonder how much extra threads are helping in Superposition. One would assume Unigine's latest would leverage additional cores/threads to some extent.

*edit* Need to sell off this 4770k and move to Ryzen post haste


----------



## BeetleatWar1977

Quote:


> Originally Posted by *kundica*
> 
> This is my 64 LC at stock core, +50% power limit with HBM at 1050 which is what I game at. I'll test later at 1100.


thats - a bit slow^^



Vega [email protected] clocks, UV and +50% PT, HBM 900MHz

not so much difference......


----------



## Roboyto

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> thats - a bit slow^^
> 
> 
> 
> Vega [email protected] clocks, UV and+50%PT, HBM 900Mhz
> 
> not so much difference......


It's a 14% difference, which is rather significant and to be expected between the two cards. We know from testing done by a couple of websites that an undervolt on the 56 with 50% power gets it close to stock 64 levels of performance.

Since you're running an FX chip, it would appear the CPU likely doesn't make too much of a difference in Superposition, as my 4770K at 4.5 should outpace it.

Seems like the additional TDP allowed by the LC BIOS is netting the gains in performance in this bench.


----------



## Newbie2009

For comparison, stock core undervolted vega 64, 1050 HBM


----------



## Roboyto

Quote:


> Originally Posted by *Newbie2009*
> 
> For comparison, stock core undervolted vega 64, 1050 HBM
> 
> 
> 
> Spoiler: Warning: Spoiler!


Was having some trouble getting scores similar to yours, and then PC crashed when attempting an UV OC. After that reboot scores are now up 400+ points









Why do computers have to be so fickle and unpredictable at times?







Anyway...

Stock Core, HBM @ 1050, 50% Power - Undervolt P6 1100 & P7 1125

Scored 6697, making it my best run yet.











*Edit* What's your 3770 and RAM running at?


----------



## kundica

Not sure if you guys are interested in this, but I ran the Blenchmark Blender test today and got a very good result for a single GPU. My LC 64 completed the test in 31 seconds. Someone posted on reddit today that the 1080 Ti takes 54 seconds; maybe one of you with the card can confirm.


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> Not sure of you guys are interested in this s but I ran the Blenchmark Blender test today and got a very good result for a single GPU. My LC 64 completed the test in 31 seconds. Someone posted on reddit today that the 1080Ti takes 54 seconds, maybe one of you with the card can confirm.


Is your LC still running fine after the voltage adjustments?


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> Not sure of you guys are interested in this s but I ran the Blenchmark Blender test today and got a very good result for a single GPU. My LC 64 completed the test in 31 seconds. Someone posted on reddit today that the 1080Ti takes 54 seconds, maybe one of you with the card can confirm.


Trying to...downloaded blender and blenchmark, but when I open up the blenchmark file it doesn't give me the benchmark option like I see you have?

Edit...got the benchmark to be available, but I only have GPU options as GFX900...whatever that is...going to reboot.

Still can't get it to display RX Vega as the GPU, but I ran it under OpenCL anyway. The GPU Tach goes full throttle and HWiNFO shows 100% GPU usage, but the render took like 2.5 minutes, so something is wrong, but I don't know what. Uninstalled and reinstalled Blender a few times, rebooted, ran CCleaner, and even manually deleted the Blender folders/files in AppData to make sure everything was gone.


----------



## Caldeio

XFX Vega 56 comes stock with clocks at 1590?

I'm testing core clocks now. HBM is in the 950-965 area.
Core clock: 1622 so far, working on it!









Should I bios flash?


----------



## kundica

Quote:


> Originally Posted by *Roboyto*
> 
> Trying to...downloaded blender and blenchmark, but when I open up the blenchmark file it doesn't give me the benchmark option like I see you have?
> 
> Edit...got the benchmark to be available, but I only have GPU options as GFX900...whatever that is...going to reboot.
> 
> Still can't get it to display RX Vega as the GPU, but ran it under OpenCL anyway. GPU Tach goes full throttle and HWInfo shows 100% GPU usage, but the render took like 2.5 minutes, so something is wrong; But don't know what. Uninstalled and reinstalled Blender a few times, rebooted, ran CCleaner, and even manually deleted blender folders/files in AppData to make sure everything was gone.


Download the nightly or one of the beta builds.

I was also suggesting anyone with a 1080Ti test as well.
Quote:


> Originally Posted by *ontariotl*
> 
> Is your LC still running fine after the voltage adjustments?


Yes, running p6 and p7 with +50mv seems to have stabilized the card with 17.8.2. 17.8.1 runs fine without it.


----------



## yeayea911

I too am hoping for a unified driver in the near future. I use my single card mainly for my Adobe stuff and to game, but the 17.6 driver is very preliminary for Vega.


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> Download the nightly or one of the beta builds.
> 
> I was also suggesting anyone with a 1080Ti test as well.


That was with 2.78C

After a little googling, GFX900 was a code name for the Vega architecture or something along those lines. GFX900 is actually in the benchmark list on the Blenchmark website. It shows a render time of 146 seconds, which is right where mine is at presently.

2.79 Release Candidate 2 recognizes Vega correctly: 33.48 seconds at stock settings. Overclocking doesn't seem to have much effect... shaves <2 seconds at best.


----------



## surfinchina

I'm not sure if anyone can help me, but is there anyone out there who is familiar with Apple macOS?
In the kext that runs the Vega, the AMD10000Controller.kext, there looks to be a way to overclock my Vega FE, given that there isn't really any other way to do it on my hackintosh...



Any clues?


----------



## Soggysilicon

Quote:


> Originally Posted by *Irev*
> 
> anyone know were I can get my hands on one of these?
> 
> https://techgage.com/wp-content/uploads/2017/08/AMD-Radeon-RX-Vega-64-Cube-Close-up-1.jpg


Heck man, I would have been content with a dinky little sticker!









As far as a cube... see if one of these YouTuber folks will part with theirs... or just have one made...


----------



## kundica

Here's the LC 64 with stock core, +50% power limit, and HBM at 1100. Tomorrow I might work on pushing the core a bit.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> 
> 
> 
> 
> 
> 
> 
> I was giddy for a little bit there that someone smashed past the low 1100 barrier that seems to be uniform thus far.
> 
> Looks like these smaller fabs cut down on the variation between chips. Once a waterblock is in place it seems like everyone is quite close in clock speeds. Hawaii seemed to have considerably higher variation where there were clear silicon lottery winners.
> 
> Anyone tried Super Position bench yet? I did a couple runs last night on the 4K Optimized DX default, and my card was power throttling hard with overclocked settings. Wasn't able to breach 1100 HBM here, 1070 was the best I could manage to complete the bench, but throttling hard.
> 
> Check out the Core/HBM clock fluctuation when trying to run 1732/1070 with 1.15V for HBM and 50% power




Not a particularly impressive score... cpu is barely even running... hrmm...


----------



## Emmett

Quote:


> Originally Posted by *kundica*
> 
> Download the nightly or one of the beta builds.
> 
> I was also suggesting anyone with a 1080Ti test as well.
> 
> Yes, running p6 and p7 with +50mv seems to have stabilized the card with 17.8.2. 17.8.1 runs fine without it.


I get 32 seconds with Vega 64

47 Seconds with 1080 ti at 2025 core 6000 mem
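Since render times are lower-is-better, the relative speed between the two cards is just the inverse ratio of the times. Using the numbers from this post:

```python
# Render-time comparisons invert: lower time is better, so relative
# speed is the ratio of times. Numbers are from the post above.
vega_64_s = 32
gtx_1080_ti_s = 47

speedup = gtx_1080_ti_s / vega_64_s
print(round(speedup, 2))  # Vega 64 is ~1.47x faster in this test
```

That lines up with OpenCL/Cycles being a historically strong case for GCN cards relative to their gaming position.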


----------



## punchmonster

Here is my Superposition run. Here are the card settings:

*Core freq:* 1630Mhz
*Voltage:* 1070mV
*HBM freq:* 1095Mhz
*Powerlimit:* 0%
*Cooling:* air cooled
*Driver:* 17.8.1

Maintained core and HBM2 clocks 100% of the time with no throttling. I'd have to check the HBM2 temperature next time to know if it's loosening the straps on the HBM2.


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> Here's the LC 64 with stock core, +50% power limit, and HBM at 1100. Tomorrow I might work on pushing the core a bit.


I'm right there, you'll probably edge me out.

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Not a particularly impressive score... cpu is barely even running... hrmm...


Try and reboot...mine was acting weird for TimeSpy with low scores like that...computer crashed from bad settings and scores came back up



1682-1100 50% UV of P6 1100 P7 1125

FS Extreme also ran better with those undervolted settings, netting a GS of 12366. Nearly 300 above my previous best and 7 points lower than my best Tessellation OFF run.



TimeSpy, on the other hand, doesn't seem to benefit from the undervolting... it appears to prefer the extra ~50MHz core clock.

Quote:


> Originally Posted by *punchmonster*
> 
> Here is my superposition run. Here's card settings:
> 
> *Core freq:* 1630Mhz
> *Voltage:* 1070mV
> *HBM freq:* 1095Mhz
> *Powerlimit:* 0%
> *Cooling:* air cooled
> *Driver:* 17.8.1
> 
> maintained core and HBM2 100% of the time with no throttling. I'd have to check the HBM2 temperature next time to know if it's loosening the straps on the HBM2.


Pretty solid for air cooled. Mine will do stock core clocks with P6 1050 / P7 1075 and OC the HBM to 1100.


----------



## punchmonster

kinda sus how the Morpheus II is getting lower temps than the LC edition. If you can spare the space just get the Morpheus II. It's cheaper.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> I'm right there, you'll probably edge me out.


Wattman was screwin me... it loaded windows with custom, but clocked me down to stock/stock +50 pwr... neat...



Better, but I am liking your scores, may need to "investigate" your settings!


----------



## Roboyto

Quote:



> Originally Posted by *punchmonster*
> 
> kinda sus how the Morpheus II is getting lower temps than the LC edition. If you can spare the space just get the Morpheus II. It's cheaper.


I'm wondering if RTG will take note of the CPU division and maybe give a heatsink that is worthwhile the next time around. The bundled Ryzen coolers are quite impressive.


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> Wattman was screwin me... it loaded windows with custom, but clocked me down to stock/stock +50 pwr... neat...
> 
> 
> 
> Better, but I am liking your scores, may need to "investigate" your settings!


If you have a driver crash and Radeon Settings turns into just a blurry/fuzzy window, I've found that killing the whole host application/process with Task Manager solves the problem. It will then open and function normally without a reboot.

If there's a driver crash or a lockup, I've made it a habit to hit reset in Wattman every time to make sure settings are applied correctly.

Strange times these are...where subtracting voltage yields gains. Used to just







V's at the cards as long as they were cooled appropriately









Please do investigate.


----------



## Roboyto

Quote:


> Originally Posted by *Roboyto*
> 
> I'm wondering if RTG will take note of the CPU division and maybe give a heatsink that is worthwhile the next time around. The bundled Ryzen coolers are quite impressive.


Quote:


> Originally Posted by *punchmonster*
> 
> kinda sus how the Morpheus II is getting lower temps than the LC edition. If you can spare the space just get the Morpheus II. It's cheaper.


They should do something like this...Had one of these cards and the performance was pretty solid.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> If you have a driver crash and Radeon Settings turns into just a blurry/fuzzy window, I've found that killing the whole host application/process with Task Manager solves the problem. It will then open and function normally without a reboot.
> 
> If there's a driver crash or a lockup, I've made it a habit to hit reset in Wattman every time to make sure settings are applied correctly.
> 
> Strange times these are...where subtracting voltage yields gains. Used to just throw V's at the cards as long as they were cooled appropriately.
> 
> Please do investigate.


It was what you said; a couple tweaks later, a crash and a reboot...

Little closer to what I have been getting in the past... so janky... (drivers)...


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> It was what you said; a couple tweaks later, a crash and a reboot...
> 
> Little closer to what I have been getting in the past... so janky... (drivers)...


17.8.1 or 17.8.2?


----------



## mikeybycrikey

Quote:


> Originally Posted by *punchmonster*
> 
> kinda sus how the Morpheus II is getting lower temps than the LC edition. If you can spare the space just get the Morpheus II. It's cheaper.


I haven't even received my Vega 64 yet and I've already ordered one. Thanks for making me aware of the product.


----------



## Newbie2009

Quote:


> Originally Posted by *Roboyto*
> 
> Was having some trouble getting scores similar to yours, and then the PC crashed when attempting an UV OC. After that reboot, scores are now up 400+ points.
> 
> Why do computers have to be so fickle and unpredictable at times?
> 
> Anyway...
> 
> Stock Core, HBM @ 1050, 50% Power - Undervolt P6 1100 & P7 1125
> 
> Scores 6697, making it my best run yet.
> 
> *Edit* What's your 3770 and RAM running at?


2400MHz


----------



## erase

Now own 2x Vega 56 cards. Using driver 17.8.2, there's no toggle for CrossFire in settings? Am I missing something, or is CrossFire not supported all of a sudden with Vega?


----------



## Newbie2009

Quote:


> Originally Posted by *erase*
> 
> Now own 2x Vega 56 cards. Using driver 17.8.2, there's no toggle for CrossFire in settings? Am I missing something, or is CrossFire not supported all of a sudden with Vega?


Not yet.


----------



## erase

Quote:


> Originally Posted by *Newbie2009*
> 
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> Now own 2x Vega 56 cards. Using driver 17.8.2, there's no toggle for CrossFire in settings? Am I missing something, or is CrossFire not supported all of a sudden with Vega?
> 
> 
> 
> Not yet.
Click to expand...

lol is this some kind of joke?

I've been buying AMD cards off and on for years and all of them supported CrossFire. I expected this would be something I wouldn't have to think twice about. So I go out and buy 2x Vega 56 cards for the purpose of CrossFire, and it doesn't work. Nice one, AMD!


----------



## Newbie2009

Quote:


> Originally Posted by *erase*
> 
> lol is this some kind of joke?
> 
> I've been buying AMD cards off and on for years and all of them supported CrossFire. I expected this would be something I wouldn't have to think twice about. So I go out and buy 2x Vega 56 cards for the purpose of CrossFire, and it doesn't work. Nice one, AMD!


It's a bad joke. First time I've skipped CrossFire in a long time. I'd return a card if I were you.


----------



## erase

I can wait it out I guess; they're hard to get, so I'll hold on to them and use them to mine instead.

Both 2x Vega 56 cards are Sapphire cards. The Vega 64 card I have in sig is Gigabyte.

How do I go about getting the Sapphire version of the Vega 64 BIOS, and how do I flash it to these 2x Vega 56?


----------



## Mandarb

Quote:


> Originally Posted by *erase*
> 
> I can wait it out I guess, hard to get I hold on to them and make them mine instead.
> 
> Both 2x Vega 56 cards are Sapphire cards. The Vega 64 card I have in sig is Gigabyte.
> 
> How do I go about getting the Sapphire version of the Vega 64 BIOS, and how do I flash it to these 2x Vega 56?


The reference cards are all the same, there's no Sapphire or Gigabyte version.

Go to techpowerup.com, download ATIWinflash and the Vega 64 BIOS (check the power envelope in the details to tell the standard BIOS from the power-save one).

Save your current BIOS, flash the Vega 64 BIOS, reboot, check memory voltage with the newest HWiNFO beta, then OC the memory.

Also, you can't touch the power-save BIOS; you can only flash the standard BIOS (switch towards the faceplate).
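For reference, the save-then-flash sequence can be sketched as a dry-run shell script. This assumes the command-line build of ATIFlash (often distributed as `amdvbflash`); the adapter index and ROM file names are placeholders, not values from this thread, so verify them against your own card before running anything for real:

```shell
#!/bin/sh
# Dry-run sketch of the BIOS flashing steps above; nothing here executes
# amdvbflash for real -- each command is only printed.
set -eu

ADAPTER=0                 # GPU index, confirm with `amdvbflash -i`
BACKUP=vega56_backup.rom  # placeholder name for the saved stock BIOS
TARGET=vega64_air.rom     # placeholder name for the downloaded Vega 64 BIOS

# Helper that prints each command instead of running it.
run() { echo "would run: $*"; }

run amdvbflash -i                          # list adapters, confirm the index
run amdvbflash -s "$ADAPTER" "$BACKUP"     # save the current BIOS first
run amdvbflash -f -p "$ADAPTER" "$TARGET"  # force-program the new BIOS
```

Since only the standard-BIOS switch position can be flashed, the untouched power-save position remains a fallback if the flash goes wrong.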


----------



## andreyb

Hi, please add me as an owner of a Vega 64 AC.
Has anyone experienced an issue where lowering the core voltage below 1050mV in Wattman doesn't work?
I was decreasing the voltage from 1200mV to 1050mV in 10mV steps with the fan fixed. I see temperatures drop on each step, but once I reach 1050mV the temperature doesn't get any lower.
I use 17.8.2 driver on Windows 10.


----------



## ashman95

Quote:


> Originally Posted by *Roboyto*
> 
> If you have a driver crash and Radeon Settings turns into just a blurry/fuzzy window, I've found that killing the whole host application/process with Task Manager solves the problem. It will then open and function normally without a reboot.
> 
> If there's a driver crash or a lockup, I've made it a habit to hit reset in Wattman every time to make sure settings are applied correctly.
> 
> Strange times these are...where subtracting voltage yields gains. Used to just throw V's at the cards as long as they were cooled appropriately.
> 
> Please do investigate.


Yep, I found the same - hit reset after every crash using Wattman!!


----------



## ontariotl

Quote:


> Originally Posted by *erase*
> 
> lol is this some kind of joke?
> 
> I've been buying AMD cards off and on for years and all of them supported CrossFire. I expected this would be something I wouldn't have to think twice about. So I go out and buy 2x Vega 56 cards for the purpose of CrossFire, and it doesn't work. Nice one, AMD!


It's not the first time AMD/ATI brought out a card without CrossFire support. Try buying an HD 5970 dual-GPU card at launch: it had very limited CrossFire support on day one. It wasn't until a month and a half later that proper drivers started to come out for it, and it was still a mess.

I've read that it will be about a month (near the end of Sept) before proper drivers for crossfire come out. You wouldn't want them both running right now as 17.8.2 is kind of a mess.

Although, couldn't you still use them in DX12 for games that have mGPU support built in, like Hitman? I thought a CrossFire implementation isn't required under DX12.


----------



## Sufferage

Quote:


> Originally Posted by *andreyb*
> 
> Hi, please add me as an owner of a Vega 64 AC.
> Has anyone experienced an issue where lowering the core voltage below 1050mV in Wattman doesn't work?
> I was decreasing the voltage from 1200mV to 1050mV in 10mV steps with the fan fixed. I see temperatures drop on each step, but once I reach 1050mV the temperature doesn't get any lower.
> I use 17.8.2 driver on Windows 10.


It does work for me using 17.8.2 as well; currently running 1020mV. You can use the latest beta HWiNFO to check voltages on the card.


----------



## Newbie2009

Quote:


> Originally Posted by *andreyb*
> 
> Hi, please add me as an owner of a Vega 64 AC.
> Has anyone experienced an issue where lowering the core voltage below 1050mV in Wattman doesn't work?
> I was decreasing the voltage from 1200mV to 1050mV in 10mV steps with the fan fixed. I see temperatures drop on each step, but once I reach 1050mV the temperature doesn't get any lower.
> I use 17.8.2 driver on Windows 10.


Using WattTool here; I don't trust Wattman to function properly, although some have said a lot of things work now.


----------



## dagget3450

Quote:


> Originally Posted by *erase*
> 
> lol is this some kind of joke?
> 
> I been buying AMD cards off and on for years and all of them supported crossfire. Expected this is something wouldn't have to think twice about. So I go out and buy 2x Vega 56 cards for the purpose of crossfire and it doesn't work. Nice one AMD!


Funny thing is, Vega FE does have CrossFire in the launch drivers, which makes it weird that RX Vega doesn't... Something going on driver-side, or maybe they knew there would be limited quantities of cards... I don't know, but it's weird.


----------



## AlphaC

EK claims +17% performance with +5% power when cooled with water
https://videocardz.com/72368/amd-radeon-vega-bios-flashing-report


----------



## theBee2112

Apparently it's OpenCL that's sandbagging this card. From what I understand, there's a bottleneck in the driver.

Can anyone make some sense of this thread?
https://github.com/RadeonOpenCompute/ROCm/issues/147

It's for the ROCm 1.6.1 kernel driver, for Linux. *It seems to allow voltage and clock control*, as well as introduce a 2MB pagefile? These people seem to be reporting improvements.
https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/

I'm not that good with Linux, but I just wanted to share and see if someone else could make sense of it.


----------



## kundica

Quote:


> Originally Posted by *AlphaC*
> 
> 
> EK claims +17% performance with +5% power when cooled with water
> https://videocardz.com/72368/amd-radeon-vega-bios-flashing-report


https://www.ekwb.com/blog/can-water-block-really-boost-gpu-performance/


----------



## ashman95

FE, EK waterblock: 1645MHz core / 1095MHz HBM, High preset, 1080p


----------



## kundica

It seems my score increases when I lower p7 clock/voltage and lower voltage on p6. The card sustains higher clocks throughout the bench.

p7 set to 1727 at 1150mv, p6 set to 1667 at 1100mv. Still a work in progress. Thinking about rolling back to 17.8.1 to see if I can sustain higher clocks since it's a more stable driver for me.


----------



## ashman95

FE, EK waterblock - just ran 4K; let me try to catch you with my 1698MHz - lol


----------



## kundica

Quote:


> Originally Posted by *kundica*
> 
> It seems my score increases when I lower p7 clock/voltage and lower voltage on p6. The card sustains higher clocks throughout the bench.
> 
> p7 set to 1727 at 1150mv, p6 set to 1667 at 1100mv. Still a work in progress. Thinking about rolling back to 17.8.1 to see if I can sustain higher clocks since it's a more stable driver for me.
> 
> 
> Spoiler: Warning: Spoiler!


Stock core on 17.8.1 with +50% power limit and HBM at 1100. Card crashes trying to undervolt 25mv on p6 and p7. Perhaps my card is at its limits.



Downclocking plus undervolting seems to help. P6 1667 at 1100mV, P7 1727 at 1150mV, HBM at 1100. Probably not stable while gaming though.


----------



## ashman95

I was running at 1648MHz @ 1130mV; it crashed when I pushed the clock to 1697MHz. Good to know 1150mV corresponds to roughly 1700MHz. I'm working on stuff at the moment and can't afford a crash; I'll play around with these settings when I'm finished... P6 @ 1697MHz/1059mV... later


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> It seems my score increases when I lower p7 clock/voltage and lower voltage on p6. The card sustains higher clocks throughout the bench.
> 
> p7 set to 1727 at 1150mv, p6 set to 1667 at 1100mv. Still a work in progress. Thinking about rolling back to 17.8.1 to see if I can sustain higher clocks since it's a more stable driver for me.


I'm on 17.8.1

Haven't even tried adjusting the P6 clock setting yet. I'll have to try that next. Working on setting up mining at the moment...I figure it's not going to go away anytime soon...If I can turn a profit on a monthly basis for just letting the computer run...then so be it.

WCCFTech just posted an article and they have their air cooled Vega 64 undervolted and underpowered drawing ~250W for 43.5 MH/s. From my understanding that is pretty good.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> .1 or .2 ?


.2


----------



## erase

Quote:


> Originally Posted by *Roboyto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *kundica*
> 
> It seems my score increases when I lower p7 clock/voltage and lower voltage on p6. The card sustains higher clocks throughout the bench.
> 
> p7 set to 1727 at 1150mv, p6 set to 1667 at 1100mv. Still a work in progress. Thinking about rolling back to 17.8.1 to see if I can sustain higher clocks since it's a more stable driver for me.
> 
> 
> 
> 
> 
> 
> I'm on 17.8.1
> 
> Haven't even tried adjusting the P6 clock setting yet. I'll have to try that next. Working on setting up mining at the moment...I figure it's not going to go away anytime soon...If I can turn a profit on a monthly basis for just letting the computer run...then so be it.
> 
> WCCFTech just posted an article and they have their air cooled Vega 64 undervolted and underpowered drawing ~250W for 43.5 MH/s. From my understanding that is pretty good.
Click to expand...

Not sure if that is accurate from them. They only showed the first couple of ticks at 43. Let Vega run for about 10 minutes and the hashrate will be much lower; after an hour or so I'm thinking it will be well into the 30s.


----------



## ashman95

Quote:


> Originally Posted by *erase*
> 
> Not sure if that is accurate from them. They only showed the first couple of ticks at 43. Let Vega run for about 10 minutes and the hashrate will be much lower; after an hour or so I'm thinking it will be well into the 30s.


I was getting 37MH/s consistently when I was on air; rarely did it drop to 35. A miner on YouTube got 41 in the first week of the FE water cards. Will be running ETH later; have to reinstall some stuff.


----------



## ashman95

Top bench is with frequency @ 1647MHz / voltage @ 1140mV (raised from 1130mV).

Bottom bench: voltage @ 1130mV, same frequency.

After these scores I increased the voltage to 1150mV, and interestingly the frequency was jumping back and forth between states 6 & 7; I got a lower score on that run.


----------



## andreyb

Quote:


> Originally Posted by *Sufferage*
> 
> Does work for me using 17.8.2 as well, currently running 1020mV. You can use latest beta HWInfo to check voltages on the card.


At 1050mV in WattTool/Wattman, HWiNFO64 v5.57-3235 shows 1.000V for GPU Core Voltage (VDDC), and it doesn't change at all if I try to set anything lower than 1050mV. Higher - yes, it follows the change.
Quote:


> Originally Posted by *pmc25*
> 
> My guess is that WattMan either doesn't apply voltages lower than 1050mV or 1000mV. Difficult to tell as I think modern graphics cards have LLC.


That's my story


----------



## Roboyto

Quote:


> Originally Posted by *ashman95*
> 
> I was getting 37MH/s consistently when I was on air; rarely did it drop to 35. A miner on YouTube got 41 in the first week of the FE water cards. Will be running ETH later; have to reinstall some stuff.


I'm bouncing between 37.7 and 38.7 with 17.8.1. Wondering if the blockchain driver has specific optimizations in it, or if they carried over to later releases? If that boost is to be had, then low 40s should be quite plausible?

DCR also going at a steady 1150.

Need to figure out WattTool to adjust all P-states and get all the voltages/frequencies down so power can be dropped more.

Steady now for about half an hour. P6 1050mV @ 1537, P7 1075mV @ 1552, 1100 HBM and +20% power.


----------



## ashman95

Quote:


> Originally Posted by *Roboyto*
> 
> I'm bouncing between 37.7 and 38.7 with 17.8.1. Wondering if the blockchain driver has specific optimizations in it, or if they carried over to later releases? If that boost is to be had, then low 40s should be quite plausible?
> 
> DCR also going at a steady 1150.
> 
> Need to figure out WattTool to adjust all P-states and get all the voltages/frequencies down so power can be dropped more.
> 
> Steady now for about half an hour. P6 1050mV @ 1537, P7 1075mV @ 1552, 1100 HBM and +20% power.


I'm sure I can get way over 40MH/s; the only issue was that I was on air then, and now I've got an EK block ; ) These cards should be able to get to 1700MHz and above with no issues. I'm thinking AMD intentionally limited these cards (BIOS or drivers), even the FE.

BTW, ETH mining is sensitive to memory speed - the higher the better. Voltage is NOT a major knob to turn, so yes, you're right about a 'sweet spot'.


----------



## ashman95

Quote:


> Originally Posted by *Roboyto*
> 
> I'm bouncing between 37.7 and 38.7 with 17.8.1. Wondering if the blockchain driver has specific optimizations in it, or if they carried over to later releases? If that boost is to be had, then low 40s should be quite plausible?
> 
> DCR also going at a steady 1150.
> 
> Need to figure out WattTool to adjust all P-states and get all the voltages/frequencies down so power can be dropped more.
> 
> Steady now for about half an hour. P6 1050mV @ 1537, P7 1075mV @ 1552, 1100 HBM and +20% power.


Robo, are you on Air??


----------



## Roboyto

Quote:


> Originally Posted by *ashman95*
> 
> I'm sure I can get way over 40MH/s; the only issue was that I was on air then, and now I've got an EK block ; ) These cards should be able to get to 1700MHz and above with no issues. I'm thinking AMD intentionally limited these cards (BIOS or drivers), even the FE.
> 
> BTW, ETH mining is sensitive to memory speed - the higher the better. Voltage is NOT a major knob to turn, so yes, you're right about a 'sweet spot'.


Yeah, I know memory speed, and timing as well, are most important for ETH. I believe I read this is why GDDR5X doesn't work so well...you get high clock speeds/bandwidth but must sacrifice timing for that crazy 11GHz speed.

Yeah, 1700 is no problem once you're under water...but much higher than that and, depending on the bench, you won't likely have stability. 1682-1732 I've found to be the best, depending on the benchmark.

Superposition seems to work the cards harder, hitting the power threshold much faster. An undervolt is necessary to reduce power draw and maintain steady clock speeds. My OC/OV settings that were good for TimeSpy and Firestrike flopped hard in Superposition. By undervolting and reducing clock speed by 50 MHz, I increased my score by 9.5%.

Those undervolt settings also increased my FS Extreme score by a couple hundred points...figuring this was a good trend I ran TimeSpy again...BUT the TimeSpy score went down. It seems TimeSpy would just rather have the extra 50 MHz of clock speed.

Quote:


> Originally Posted by *ashman95*
> 
> Robo, are you on Air??


No sir....Full Cover EK has been on for a couple days now.


----------



## Formula383

So I have a Vega 64 air. I wanted 16GB and liquid; I got neither lol. Question is, does anyone know if they will be coming out with a 16GB gamer card soon-ish? Or should I just pony up for the Vega FE Liquid? I really want the 16GB and the higher 350W BIOS.


----------



## Roboyto

Quote:



> Originally Posted by *Formula383*
> 
> So I have a Vega 64 air. I wanted 16GB and liquid; I got neither lol. Question is, does anyone know if they will be coming out with a 16GB gamer card soon-ish? Or should I just pony up for the Vega FE Liquid? I really want the 16GB and the higher 350W BIOS.


If there's going to be a 16GB card, it won't be until next-gen Navi, I reckon...

It is possible to flash the LC BIOS to get that power limit raised. What do you want/need the 16GB for?


----------



## Formula383

Quote:


> Originally Posted by *Roboyto*
> 
> If there's going to be a 16GB card, it won't be until next gen Navi I reckon...
> 
> It is possible to flash the LC BIOS to get that power limit raised. What you want/need the 16GB for?


Ever since I got into high-resolution gaming, VRAM limits have been real, and extra VRAM just ensures smoother gameplay.

That, and mods, and I just like having the extra VRAM. With new games, VRAM will be more and more needed; IMHO 32GB would not be that far out of whack. Just look at games like ARK - 120GB on the drive. Of course not all of that data has to be in VRAM, but most of it is textures, so the less swapping the better. And if you're at 4K or above, the memory limitations can be real!


----------



## Formula383

Quote:


> Originally Posted by *Roboyto*
> 
> It is possible to flash the LC BIOS to get that power limit raised. What you want/need the 16GB for?


So would the FE Liquid be available if I got the air-cooled unit? I really like the idea of the AIO simply because it won't affect which machine I want to put it in and out of for testing and whatever else I'm doing. $500 for a cooler is just plain stupid! But I mean, it's what I would like.


----------



## dagget3450

Right now Vega performance isn't strong enough for the high-resolution VRAM limits to be a factor, unless you're doing CrossFire, which apparently only works on Vega FE right now. I tested some games at 10K resolution; while the VRAM buffer was fine, the FPS wasn't so much.


----------



## Roboyto

Quote:


> Originally Posted by *Formula383*
> 
> Ever since I got into high-resolution gaming, VRAM limits have been real, and extra VRAM just ensures smoother gameplay.
> 
> That, and mods, and I just like having the extra VRAM. With new games, VRAM will be more and more needed; IMHO 32GB would not be that far out of whack. Just look at games like ARK - 120GB on the drive. Of course not all of that data has to be in VRAM, but most of it is textures, so the less swapping the better. And if you're at 4K or above, the memory limitations can be real!


HBCC is supposed to alleviate these problems if you were to run out of physical VRAM, though, right? Why not test with Superposition at 8K to see how much VRAM that requires.


----------



## ashman95

Quote:


> Originally Posted by *Formula383*
> 
> So would the FE liquid be available if i got the air cooled unit? I really like the idea of the AIO simply because then it wont affect what machine i want to put it in and out of for testing and w/e else i'm doing. 500$ for a cooler is just plain stupid! but i mean its what i would like.


$999 - FE air (bought knowing I was going water)
$239 - EKWB EK-KIT L360 Complete Triple 120mm Water/Liquid Cooling Kit 360mm (Rev 2.0)
$150 - EK waterblock

with savings
$129 Thermaltake Core P5 Black Edition ATX Open Frame Panoramic Viewing Tt LCS Certified Gaming Computer Case CA-1E7-00M1WN-00


----------



## Formula383

Quote:


> Originally Posted by *Roboyto*
> 
> HBCC is supposed to alleviate these problems if you were to run out of physical VRAM though right? Why not test with Super Position at 8K to see what kind of VRAM that is requiring.


It can help! But nothing like the real thing. Most games don't need it at all; some do benefit though. And generally it's playability, not FPS.


----------



## Formula383

Quote:


> Originally Posted by *ashman95*
> 
> $999 I bought the FE air knowing I was going water ,
> $239 EKWB EK-KIT L360 Complete Triple 120mm Water / Liquid Cooling Kit 360mm (Rev 2.0)
> $150 EK waterblock
> 
> with savings
> $129 Thermaltake Core P5 Black Edition ATX Open Frame Panoramic Viewing Tt LCS Certified Gaming Computer Case CA-1E7-00M1WN-00


Yes, I know, I have many WC PCs, but I don't like losing the ability to move the GPU around as I see fit, so it's a convenience more than anything. I will likely add a waterblock to my air Vega 64; I already have a small GPU block for it, I just haven't decided where its home will be yet, or whether I want a full-cover block or not.

The real question I have is: can I buy a good AIO for GPUs?


----------



## Formula383

Also, I am not here to tell anyone they need 16GB of VRAM; it's just something I would rather have. In my case, my old Titan X with 12GB has been well used, so for you I really can't say how much you need. For me, I use it, so it makes sense to buy it. Could I live without it? Sure. But if I have the money to buy the extra, even if it is a total rip-off for the price, well, that's my choice lol.


----------



## ashman95

Quote:


> Originally Posted by *Formula383*
> 
> It can help! but nothing like the real thing. Most games it is not needed at all. some do benefit tho. And generally Its play ability not fps.


Prey: try getting to the 'reactor control room' to reboot the computer without HBCC - HBCC makes a big difference there. I tried it with and without; night and day.


----------



## jehovah3003

Hey guys, I received my Sapphire RX Vega 64 LC 3 days ago. Apart from the current freezing / Overwatch-crashing driver issues, I'm happy with it, except:

The fan is making a horrible "brr" noise, even at low RPM (it's quieter then), but when the card heats up the noise is very annoying. I can also clearly hear it when the PC boots or shuts down. Is that normal? Isn't the fan supposed to be completely silent? Is anyone having this issue too? Should I return it?

I tried to record a video, but it doesn't catch the sound.


----------



## Formula383

Quote:


> Originally Posted by *ashman95*
> 
> Prey: try getting to the 'reactor control room' to reboot the computer without HBCC - HBCC makes a big difference there. I tried it with and without; night and day.


Nice, glad to hear that it is working in at least some games.

It really is a great idea, and I'm assuming it was pretty easy to implement. But yes, video RAM and even system memory are underrated, and in the not-so-far future we will see games requiring 32GB of system memory or very large SSD caches to make them playable. I have already run into a few games that required me to increase my page file just to load lol....


----------



## Formula383

Quote:


> Originally Posted by *jehovah3003*
> 
> Hey guys, I received my Sapphire RX Vega 64 LC 3 days ago. Apart from the current freezing / Overwatch-crashing driver issues, I'm happy with it, except:
> 
> The fan is making a horrible "brr" noise, even at low RPM (it's quieter then), but when the card heats up the noise is very annoying. I can also clearly hear it when the PC boots or shuts down. Is that normal? Isn't the fan supposed to be completely silent? Is anyone having this issue too? Should I return it?
> 
> I tried to record a video, but it doesn't catch the sound.


Mine is quiet. Might just do a visual inspection; if it is the fan itself, then yes, RMA time.


----------



## punchmonster

This isn't true. If you adequately cool your HBM it won't loosen timings.
Quote:


> Originally Posted by *erase*
> 
> Not sure if that is accurate from them. They only showed the first couple of ticks at 43. Let Vega run for about 10 minutes and the hashrate will be much lower; after an hour or so I'm thinking it will be well into the 30s.


----------



## erase

Yeah, so the only way to do this is to crank the fan speed up; then you are faced with extra noise and early fan failure due to the fan running fast 24/7. No one hashes for a couple of hours and then stops.


----------



## Ghoxt

Quote: [Reddit]

https://www.reddit.com/r/6xqpot/vega_64_435_mhs_130w/



> So, I just managed to tweak my Vega to 1000 MHz Core and 1100 MHz HBM with a power target of -24%.
> 
> Getting 43.5 MH/s in ethereum. Pretty sure I can get similar with Vega 56. Things can only get better from here.
> 
> - 1st Edit. Here is an album with my screenshots and a picture of my desktop:
> 
> 
> http://imgur.com/1GgvR
> 
> [404]
> 
> - 2nd Edit. To clear up any confusion, 130w is total card power, not core power. Core power is around 104w.
> 
> - 3rd Edit. Thanks to Mumak for clearing some information up. My total power looks to be closer to 141.052w. Taking data from the current HWiNFO beta for core power of 0.449w * 256 + HBM power of 26.108w.
> 
> - 4th Edit. Thanks again to Mumak for the help and corrections. Looks like my original estimate of 130w was pretty good.
> 
> - 5th Edit. Holy cow! So much crap on the net about this post already.


Interesting that within a couple of weeks the enthusiast communities seem to squeeze out more performance despite roadblocks in their way. So while it's not 70 MH/s, it will be interesting to see how much further things can go.
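As a side note, the power arithmetic in the quoted post's 3rd edit does check out; a quick sketch with the figures copied verbatim from that post (the 0.449 W x 256 interpretation is taken at face value from the post itself):

```shell
#!/bin/sh
# Check the quoted total-power estimate: reported core figure of 0.449 W
# scaled by 256, plus 26.108 W for the HBM, should come to 141.052 W.
total=$(awk 'BEGIN { printf "%.3f", 0.449 * 256 + 26.108 }')
echo "estimated board power: ${total} W"   # 0.449*256 = 114.944, +26.108 = 141.052
```

So the "141.052w" figure in the post is just that sum, later reconciled with the original ~130 W estimate.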


----------



## Luftdruck

Hey guys. I currently own a R9 Fury X and already ordered my RX Vega 56 to add it to my EKWB CPU-only loop.

Already read 30+ pages, but couldn't find any information on this;
- Is it possible to flash a Vega FE BIOS to an RX Vega to gain Radeon Pro driver support?


----------



## punchmonster

Or you simply buy the liquid cooled edition or a Morpheus II.
Quote:


> Originally Posted by *erase*
> 
> Yea so the only way to do this is to crank the fan speed up, then you are faced with extra noise and early fan failure due to the fan running fast 24/7, no one hashs for a couple of hours and then stops.


----------



## erase

Quote:


> Originally Posted by *Mandarb*
> 
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> I can wait it out I guess, hard to get I hold on to them and make them mine instead.
> 
> Both 2x Vega 56 cards are Sapphire cards. The Vega 64 card I have in sig is Gigabyte.
> 
> How do I go about getting the Sapphire version of the Vega 64 BIOS, and how do I flash it to these 2x Vega 56?
> 
> 
> 
> The reference cards are all the same, there's no Sapphire or Gigabyte version.
> 
> Go to techpowerup.com, download ATIWinflash and the Vega 64 BIOS (check the power envelope in the details to tell the standard BIOS from the power-save one).
> 
> Save your current BIOS, flash the Vega 64 BIOS, reboot, check memory voltage with the newest HWiNFO beta, then OC the memory.
> 
> Also, you can't touch the power-save BIOS; you can only flash the standard BIOS (switch towards the faceplate).
Click to expand...

Thanks for this. Couple of questions?

When I checked the two Vega 64 BIOSes at TechPowerUp, they both have different wattages inside; which one is the correct one to use? Also, both BIOSes there have liquid temps in the details, so is this actually the liquid-cooled BIOS and not the regular air-cooled BIOS?

Does "towards the faceplate" mean the switch towards the I/O display connectors, or towards the 2x power connectors?


----------



## punchmonster

mistake


----------



## gupsterg

Quote:


> Originally Posted by *erase*
> 
> Thanks for this. Couple of questions?
> 
> When I checked the two Vega 64 BIOS at techpowerup they both have different wattages inside, which one is the correct on the use? Also both BIOS there have liquid temps in the details, so is this actually the liquid cooled BIOS and not the regular air cooled BIOS?
> 
> Does towards the faceplate mean switch towards the IO display connectors or towards the 2x power connectors


All Vega cards, regardless of RX/FE, have differing wattages on either BIOS switch position.


----------



## Gdourado

I am curious about something.
How is the overclocking and tuning potential of the Vega 64 Liquid?
I read the PCPer review, and the overclock was only 15MHz above stock.
But how is it doing in the real world?


----------



## gupsterg

AMD Matt is an AMD rep on OCuk (he posts here as ltmatt). He posts there more, and has got close to 1800MHz for game use, with some bench runs at 1812MHz IIRC.

*But* a lot of OCuk AIO owners have had to RMA cards as well. AMD Matt has had 3 so far, and 2 are RMA'd and coming back. Most have issues of black screening, not being able to use a higher power limit, etc. One card had coolant leakage in the box. IIRC there are ~15 owners of the AIO in the OCuk thread, and all have had at least 1 RMA IIRC.


----------



## dagget3450

Quote:


> Originally Posted by *gupsterg*
> 
> AMD Matt is an AMD rep, is on OCuk and as ltmatt here. He posts there more and has got close to 1800MHz for game use and some bench runs of 1812 IIRC.
> 
> *But* a lot of OCuk AIO owners have had to RMA cards as well. AMD Matt has had 3 so far and 2 are RMA'd and coming back. Most have issues of black screening, not being able to use higher PL, etc. One card had coolant leakage in the box. IIRC there are ~15 owners of AIO in OCuk thread and all have had 1 RMA at least IIRC.


I RMA'd one of my Vega FEs due to black screens under load; I haven't had any issues since I got the replacement card.


----------



## erase

Quote:


> Originally Posted by *punchmonster*
> 
> Or you simply buy the liquid cooled edition or a Morpheus II.
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> Yea so the only way to do this is to crank the fan speed up, then you are faced with extra noise and early fan failure due to the fan running fast 24/7, no one hashs for a couple of hours and then stops.
Click to expand...

It's about using what you have, not selling it and spending on other products. Most people will be buying the air-cooled model, so this applies to everyone in that category (not just me).


----------



## gupsterg

Quote:


> Originally Posted by *dagget3450*
> 
> I RMA'd one of my Vega FE's due to backscreen under load , haven't had any issues since i got the return card.


Yeah, I noted that. On OCuk the AIO cards have been dropping like flies.

Of the air-cooled 56/64 owners, only a few have had issues requiring an RMA.

Some have had PSU issues. Some have had install issues on W10 that went away when they flipped to W7. Some on Sandy Bridge have had issues with motherboard firmware not liking Vega, IIRC. Several that went custom watercooling love the cards and have good results.

Even though I got the MSI GTX 1080 Sea Hawk EK X, I nearly pulled the trigger tonight on a Sapphire RX Vega 64 AIR. OCuk have it in stock at £480, which, if I convert the SEP from USD to GBP and add 20% VAT, is launch price. But as Sapphire pretty much voids the warranty when you fit a waterblock, I backed off. Scan did have some MSI RX Vega 64 AIR earlier today; by the time I'd made up my mind to order, they were gone.

Currently only OCuk have the V56/V64 AIR at SEP for a standalone card.


----------



## Mandarb

Quote:


> Originally Posted by *erase*
> 
> Thanks for this. Couple of questions?
> 
> When I checked the two Vega 64 BIOS at techpowerup they both have different wattages inside, which one is the correct on the use? Also both BIOS there have liquid temps in the details, so is this actually the liquid cooled BIOS and not the regular air cooled BIOS?
> 
> Does towards the faceplate mean switch towards the IO display connectors or towards the 2x power connectors


Faceplate means towards the IO connector plate. You can't overwrite the secondary BIOS, so you'll notice which is which.









https://www.techpowerup.com/vgabios/194441/amd-rxvega64-8176-170719 is the one. I see it does say liquid, but it's not. The liquid BIOS has a 264 W power envelope and a different fan profile; it doesn't work well with the air-cooled cards.

P.S.: the third line says Vega 10 XT; that's the Vega 64. XTX is the Vega 64 LC, XL is the Vega 56.


----------



## dagget3450

Quote:


> Originally Posted by *gupsterg*
> 
> Yeah I noted that. On OCuk the AIO cards have been dropping like flies
> 
> 
> 
> 
> 
> 
> 
> .
> 
> AIR 56/64 guys only a few have had issues to RMA.
> 
> Some have had PSU issues. Some have had install issues on W10, flip to W7 all well. Some on Sandy bridge have had issues with mobo FW not liking VEGA IIRC. Several that went custom WC luv the cards and have good results.
> 
> Even though I got the MSI GTX 1080 Sea Hawk EK X I nearly pulled the trigger tonight on Sapphire RX Vega 64 AIR. OCuk have in stock at £480, which if I do SEP USD to GBP and add 20% VAT is launch price. But as Sapphire is pretty much warranty void when go WB I backed off. Scan did have some MSI RX Vega 64 AIR earlier today, by the time I made mind to order they were gone.
> 
> Currently only OCuk have V56/V64 AIR at SEP for standalone card.


Don't worry, I have a 1080 Ti I just got from a fellow OCN member... it's only an FE edition, but at $600 I took the bait. It was brand new, so I may get an AIO for it later, but for now I'm just going to play with it.
I do have some criticism of Nvidia, though, after not using them since the GTX 680: the Nvidia control panel is as archaic as ever, just like I remember from years back... I feel AMD is miles ahead on this point.


----------



## ontariotl

That sucks that more people are RMA'ing their AIO cards. I guess I dodged a bullet by backing out of a pre-order.

I'll throw in my test with the Superposition benchmark as well. This was run with the GPU core [email protected] [email protected] and HBM2 @ 1105, under the high-power AIO BIOS. My card won't run at default settings, unfortunately. I'm going to try flashing to the low-power AIO BIOS and see what I get.


----------



## dagget3450

Here is a quick run on Vega FE air CF: core stock / HBM 1050 / slightly undervolted / +50% PL



Not exactly great, but it's something, I guess.


----------



## Ark-07

Hi all,

So I'm trying to figure out how much power is needed for the Vega 64. I have choices in the links below, but I must say I'm not happy going over $200 for a good power supply; or am I dreaming?

The choices
http://www.msy.com.au/nsw/ultimo/24-power-supply#/

Choice 1: would really prefer not to get this one; plus it's out of stock.
http://www.msy.com.au/nsw/ultimo/pc-components/15845--corsair-rm1000i-cp-9020084-au-1000watt-80plus-gold-full-modular-atx-power-supply-unit.html

choice 2
http://www.msy.com.au/nsw/ultimo/power-supply/15844--corsair-rm850i-cp-9020083-au-850watt-80plus-gold-full-modular-atx-power-supply-unit.html

choice 3
http://www.msy.com.au/nsw/ultimo/800w-1000w/18260-corsair-tx850m-cp-9020130-au-850watt-80plus-gold-atx-semi-modular-power-supply-unit.html

choice 4
http://www.msy.com.au/nsw/ultimo/pc-components/12215-antec-hcg-850m-850w-high-current-gamer-modular-gaming-psu.html


----------



## ashman95

Quote:


> Originally Posted by *dagget3450*
> 
> Here is a quick run on vega fe air CF - core stock/hbm 1050/slightly undervolted/50+pl
> 
> 
> 
> Not exactly great but something i guess


Dagget, what are your full FE settings? I want to duplicate your results.


----------



## dagget3450

Quote:


> Originally Posted by *ashman95*
> 
> Dagget what are your full FE settings ? I want to duplacate your results


Crossfire profile added to the Radeon UI, selected 1x1 mode in the Crossfire option.

core clocks left stock
p6 1100mV, p7 1150mV
HBM 1050, stock volts
PL +50%
fan curve maxed

I think that's it? lol, so many steps


----------



## ashman95

Quote:


> Originally Posted by *dagget3450*
> 
> Crossfire profile added to Radeon ui, selected 1x1 mode in crossfire option
> 
> core clocks left stock
> p6 1100mv p7 1150mv
> hbm 1050 stock volts
> pl+50%
> fan curve maxed
> 
> i think thats it? lol so many steps


Crossfire? 2 FE's?


----------



## punchmonster

So wait for AIBs.

Fact is, the drop you see isn't inevitable; you can mitigate it even on the reference cooler.
Quote:


> Originally Posted by *erase*
> 
> Its about using what you have, not selling, spending, and buy other products. Most people will be buying the air cooled model, for all those that fit (not just me) in the category.


----------



## Caldeio

Quote:


> Originally Posted by *ashman95*
> 
> Crossfire? 2 FE's?


It's enabled for FE, not RX yet


----------



## dagget3450

Quote:


> Originally Posted by *Caldeio*
> 
> It's enabled for FE, not RX yet


Correct; sorry, I thought he had 2x FE?

My cards don't OC the core for crap. Not sure if that's BIOS/power limiting/thermals, but I guess I could try messing with BIOS flashing, although I haven't paid enough attention to know whether the FE can have its BIOS flashed successfully.


----------



## Nuke33

Quote:


> Originally Posted by *Luftdruck*
> 
> Hey guys. I currently own a R9 Fury X and already ordered my RX Vega 56 to add it to my EKWB CPU-only loop.
> 
> Already read 30+ pages, but couldn't find any information on this;
> - Is it possible to flash a Vega FE BIOS to a RX Vega to gain Radeon Pro driver support?


+1
I would like to know that as well.

Quote:


> Originally Posted by *theBee2112*
> 
> Apparently it's OpenCL that's sandbagging this card. From what I understand, there's a bottleneck in the driver.
> 
> Can anyone make some sense of this thread?
> https://github.com/RadeonOpenCompute/ROCm/issues/147
> 
> It's for the ROCm 1.6.1 kernel driver, for Linux. *It seems to allow voltage and clock control*, as well as introduce a 2Mb pagefile? These people seem to be reporting improvements.
> https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/
> 
> I'm not that good with Linux, but I just wanted to share and see if someone else could make sense of it.


It is mainly dealing with ROCm kernel driver issues. One of them is OpenCL memory throughput efficiency, which seems to be related to faulty power management in the kernel driver and the Linux OpenCL stack. It is not about allowing manual control of voltages, but about fixing the kernel driver so it can set voltages correctly to reach normal clock rates.
Manual voltage control is still broken on Linux, I think. The only way would be kernel modifications, just like the PowerPlay registry mods on Windows.

The 2 MB page-size update will improve performance on Linux by roughly 10%. Linux uses small VRAM page sizes, so 2 MB pages reduce overhead. It will not have a big impact on Windows, since Windows already uses bigger VRAM pages.
What does affect Windows are the large-file-allocation fixes.
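To put a rough number on the page-size point, here is a back-of-the-envelope sketch (my own illustrative figures, not taken from the driver code): mapping the same VRAM with 2 MiB pages instead of 4 KiB pages shrinks the number of page entries the driver has to track and fault on.

```python
# Illustrative only: page-table entries needed to map 8 GiB of VRAM
# at two page sizes. Bigger pages mean far fewer entries to manage.

GIB = 1024 ** 3

def page_entries(vram_bytes, page_bytes):
    """Number of fixed-size pages needed to cover vram_bytes."""
    return vram_bytes // page_bytes

small = page_entries(8 * GIB, 4 * 1024)       # 4 KiB pages
large = page_entries(8 * GIB, 2 * 1024 ** 2)  # 2 MiB pages
print(small, large, small // large)  # 512x fewer entries with 2 MiB pages
```

The real-world gain is smaller than this ratio suggests, of course, since page management is only one part of the memory path.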


----------



## cooljaguar

Does anyone here have their hands on a Vega 56 yet? If so, could you please do some gaming benchmarks while overclocked and undervolted as far as it'll go? I'd greatly appreciate it!

I'd like to see how far Vega 56 can be pushed while staying at or under 300 W, but if you go over that because you're using a flashed V64 BIOS or the power-tables mod, that's fine too.


----------



## deadman3000

I want Arctic to make an Accelero for RX VEGA!


----------



## Caldeio

Quote:


> Originally Posted by *cooljaguar*
> 
> Does anyone here have their hands on Vega 56 yet? If so could you please do some gaming benchmarks while overclocked & undervolted as far it'll go? I'd greatly appreciate it!
> 
> I'd like to see how far Vega 56 can be pushed while staying at or under 300W, but if you go over that because you're using a flashed V64 BIOS or the power tables mod that's fine too.


Any certain game? No multiplayer ones please.

So far I'm at 1642 core and 950 mem. Had some problems with the game I was using to test; my save got corrupted and would crash on loading.
950 memory seems fine and stable mining; above 960-965 the artifacts started in gaming/benchmarks (not mining).
I need better cooling!

At 1642/950 it averages 1362 MHz core. Also, according to HWiNFO it uses about 165 W, mining at:
31.3 MH/s (Ethereum) and 246 MH/s (LBRY)

Downloaded the Superposition benchmark.
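As a side note, the figures above give a simple efficiency number. A quick sketch (using the quoted 31.3 MH/s and ~165 W; note the wattage is a software reading, so true wall draw is higher):

```python
# Hashrate-per-watt from the numbers quoted above. The 165 W figure is
# a HWiNFO reading, so actual at-the-wall power will be somewhat higher.

def efficiency_mh_per_w(hashrate_mh, power_w):
    return hashrate_mh / power_w

eth = efficiency_mh_per_w(31.3, 165)
print(f"{eth:.3f} MH/s per watt")  # roughly 0.19 MH/s per watt
```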


----------



## cooljaguar

Quote:


> Originally Posted by *Caldeio*
> 
> Any certain game? No multiplayer ones please.
> 
> So far I'm at 1642 core and 950 mem. Had some problems with the game I was using to test, my save got corrupted and would crash on loading.
> 950 memory seems fine and stable mining, above 960-965 started the artifacts.
> I need better cooling!
> 
> 1642, 950 mem for me, it averages 1362 clocks. Also according to HWInfo uses about 165watts, mining at the rate of:
> 31.3mh/s and 246 mh/s
> ETHER and then LBRY^^
> 
> Downloaded superposition benchmark


GTA V, Metro: Last Light, Rise of the Tomb Raider, and TW: Warhammer all have built-in benchmarks, I think, so any of those would be perfect.









Doom 2016 and Battlefield 1 (singleplayer) would also be nice if you have the time.

Thanks for doing this, it'll be interesting to see the results!


----------



## buildzoid

Can people here run a test to see if they lose performance at higher voltage?

I noticed while messing with 1.25 V on the V64L BIOS that at 1.25 V I can't match the scores I got at 1.2 V. However, I am on the stock cooler, so it could be a thermal issue with the card. It's not the power limit, since I'm using a 350 A, +200% PowerPlay table.

For me the difference in Firestrike Extreme was ~12050 points at 1702/1100 at 1.25 V vs. 12250 points at 1702/1100 at 1.2 V. I repeated the test several times to make sure it wasn't some random variance. I was using the 17.8.1 driver.


----------



## Tgrove

My Sapphire Vega 64 WC should be arriving this Thursday (already received both promotional codes). Hopefully AMD gets a driver update out by then. Can't wait to join the club and get in on the action! I'll have pics and will use my sig rig (USA).

The PS4 Pro + PSVR + 55KS8000 held me over well, but I've been itching since I sold both Fury Xs in June.


----------



## kundica

Quote:


> Originally Posted by *buildzoid*
> 
> can people here run a test to see if they lose performance at higher voltage?
> 
> I noticed while messing with 1.25V on the V64L BIOS that at 1.25V I can't match the scores I did on 1.2V however I am on the stock cooler so it could maybe be a thermal issue with the card. It's not the power limit since I'm using a 350A +200% power play table.
> 
> for me difference in Firestrike Extreme was ~12050 points at 1702/1100 1.25V VS 12250 points at 1702/1100 1.2V. I repeated the test several times to make sure it wasn't some random variance. I was using the 17.8.1 driver.


I noticed something like this on my LC 64. On 17.8.2 I needed to raise the voltage at stock clocks for the card to even be stable, but if I dropped both the voltage and the clock, my score would go up. I rolled back to 17.8.1 and noticed something similar. I don't think I have a clock-for-clock comparison to share, but I posted this earlier today: http://www.overclock.net/t/1634018/official-vega-frontier-rx-vega-owners-thread/1440#post_26322229


----------



## buildzoid

I don't care about a clock-for-clock comparison. I'm more worried that Vega seems to have the same problem Fiji did, where more Vcore leads to less performance once you pass a certain voltage level. That's ridiculously annoying if you have enough cooling and want to bench, say, 1800 MHz at 1.3 V, but can't because your performance ends up worse than ~1700 at 1.2 V.


----------



## kundica

Quote:


> Originally Posted by *buildzoid*
> 
> I don't care about clock for clock comparison I'm more worried that VEGA seems to have the same problem that Fiji did where more Vcore leads to less performance once you pass a certain voltage level. Which is ridiculously annoying if you have enough cooling and want to bench say 1800MHz at 1.3V but can't because your performance ends up worse than 1700ish on 1.2V.


I understand that. I mentioned the clock because I was trying to say that I don't have a comparison where the clock is a controlled variable. I often change both at the same time, but I came to a similar conclusion: lowering voltage almost always increases my performance (at least within a certain range).


----------



## punchmonster

Quote:


> Originally Posted by *buildzoid*
> 
> can people here run a test to see if they lose performance at higher voltage?
> 
> I noticed while messing with 1.25V on the V64L BIOS that at 1.25V I can't match the scores I did on 1.2V however I am on the stock cooler so it could maybe be a thermal issue with the card. It's not the power limit since I'm using a 350A +200% power play table.
> 
> for me difference in Firestrike Extreme was ~12050 points at 1702/1100 1.25V VS 12250 points at 1702/1100 1.2V. I repeated the test several times to make sure it wasn't some random variance. I was using the 17.8.1 driver.


Tried this. As long as I keep my clock stable and the HBM2 below 72°C, scores keep going up. Note that beyond that temp your HBM timings will slide and lower your performance by a chunk, so watch HBM temps.


----------



## buildzoid

Quote:


> Originally Posted by *punchmonster*
> 
> Tried this. As long as I keep my clock stable and HBM2 below 72°C scores keep going up. Note that beyond that temp your HBM timings will slide and lower your performance by a chunk. So watch HBM temps.


Well, 4.9K RPM isn't enough, apparently. I guess I'm chopping up the base plate from my FE to use for VRM cooling (not that I can use it anyway, since I routed the wiring for the Vcore hard mod through one of the screw holes on the end of the card).


----------



## punchmonster

On the stock air cooler I couldn't prevent the HBM2 timings from slipping even at max RPM. At 1050 mV core I was able to prevent the timings from slipping if I didn't fully load the core, but as soon as I put the core under max load too, the timings would slip.
Quote:


> Originally Posted by *buildzoid*
> 
> Well 4.9K RPM isn't enough apparently. I guess I'm chopping up the base plate from my FE to use for VRM cooling(not like I can use it anyway since I routed the wiring for the Vcore hard mod through one of the screw holes on the end of the card).


----------



## Irev

Quote:


> Originally Posted by *dagget3450*
> 
> I RMA'd one of my Vega FE's due to backscreen under load , haven't had any issues since i got the return card.


----------



## alecmg

Good news!

I connected three monitors to Vega, and it stays at lowest power setting!
And it is not the simplest setup to drive:
2560x1440 144Hz DP
1920x1200 60Hz HDMI
1920x1080 60Hz DP

Triple monitor power consumption is finally in check.


----------



## alanthecelt

Boy, is it a shock going back to AMD.
I got 3 Vega 56s (2 Sapphire, 1 MSI) to replace 2 1080s in my miner.

Firstly, the one 56 I put on a tested PCIe riser would not work at all.

So I've got them mounted closely together, which is way less than ideal, at least until I work out a solution.

I'm running a temperature target of 68 degrees, on the premise that it improves the memory timings.
Memory sat at 950 so far.

p6 1050mV and p7 1150mV

power limit +50
Currently mining Dagger/Pascal at:
30.8 / 36 / 33.4
983 / 1082 / 1003

The fans are running fast on the restricted cards (GPU 2 being the card exposed to fresh air).

So obviously I need to get the middle card out on a riser (one proven to work with my 1080 or 1080 Ti).
Any thoughts?


----------



## roybotnik

Quote:


> Originally Posted by *pmc25*
> 
> As has been repeatedly reported, Afterburner merely running (not trying to overclock) breaks the drivers. Uninstall it.
> 
> The drivers are a mess at the moment, anyway, particularly 17.8.2 with regard to clocking and voltages. We just need to be patient.


Hmm, even after DDU and with no attempt to use Afterburner, I'm getting some weird issues with anything OpenCL. Either the software crashes or the system does. Bleh. Might need to RMA.

My card (FE) is over 2 months old, and I've been pretty patient... but all I have is launch-day drivers or terribly buggy beta drivers.


----------



## ducegt

Quote:


> Originally Posted by *buildzoid*
> 
> I don't care about clock for clock comparison I'm more worried that VEGA seems to have the same problem that Fiji did where more Vcore leads to less performance once you pass a certain voltage level. Which is ridiculously annoying if you have enough cooling and want to bench say 1800MHz at 1.3V but can't because your performance ends up worse than 1700ish on 1.2V.


Same with Fiji and Tonga. I flashed a 2 GB R9 285 with a 380X BIOS whose VRAM table I had swapped. Overvolting on the 285 BIOS doesn't let the card reach what the 380X BIOS does without any Vcore adjustments.


----------



## Tgrove

Quote:


> Originally Posted by *alecmg*
> 
> Good news!
> 
> I connected three monitors to Vega, and it stays at lowest power setting!
> And it is not a simplest setup to drive:
> 2560x1440 144Hz DP
> 1920x1200 60Hz HDMI
> 1920x1080 60Hz DP
> 
> Triple monitor power consumption is finally in check.


That screen setup isn't even possible on Nvidia.


----------



## alecmg

Why would it be a problem? It is not Eyefinity; I'm only gaming on one screen.
The Windows desktop takes care of joining the screens.

Fiji and Polaris could drive two monitors at low power; now Vega can do three. I'm amazed it is not advertised more.


----------



## beatfried

So, a first little summary after the first two weeks with my Vega 64:

Before, I had an RX 480, which wasn't fast, but it was stable: no framerate dips, no whining, nothing.

Now I have the Vega and am pretty disappointed. I mostly play CS:GO, and I know it's not GPU-limited, but I get stutters and framerate dips to the point where it's annoying and losing me games. I really don't know what to do...

I can play Doom just fine, but I can't play CS:GO at a stable framerate.


----------



## dagget3450

Quote:


> Originally Posted by *Irev*


Tried 3 different power supplies on mine. I also had 2 of these GPUs; one did the black screen on load, the other didn't. My issue looked just like it does when overclocked, but it happened at stock, under load.

Haven't had the issue since I got the replacement GPU.


----------



## pmc25

Quote:


> Originally Posted by *beatfried*
> 
> so a first little summary after the first two weeks with my Vega64:
> 
> Before I had a RX480 which wasn't fast, but it was stable... no framerate dips, no whining nothing.
> 
> Now I have that Vega and am pretty dissappointed. I mostly play CS:GO and I know its not GPU limited but I get stutters and framerate dips to a point at which is annoying and losing me games. I really don't know what to do...
> 
> I can play Doom just fine but it can't play CS:GO at a stable framerate.


Most probably to do with the broken dynamic clocking in the very alpha-state drivers.

HBM2 in particular will downclock at even the slightest lull. It doesn't affect average frame rates that much, but it does cause micro-stutter and drops.

Try setting your HBM P3 state as both min and max when you game. It alleviates it somewhat, in my experience.


----------



## beatfried

Quote:


> Originally Posted by *pmc25*
> 
> Most probably to do with the broken dynamic clocking in the very alpha-state drivers.
> 
> HBM2 particularly will down clock at even the slightest lull. It doesn't effect average frame rates that much, but does cause micro stutter and drops.
> 
> Try setting your HBM P3 state to both min and max when you game. It alleviates it somewhat, in my experience.


thank you!

I'm gonna try that this evening


----------



## alanthecelt

I think I've got it more stable. I pulled the center card and put it on a riser cable to an x4 slot.
Had to disable Crossfire yet again, as it seems to default to on.

Set temp targets again, undervolted to 1025/1100,
chose the "Power Save" performance profile (what are these supposed to do? Just a profile to save settings against?),
and left speeds stock for the time being, as I was getting crashes even with the memory overclocked.
If I get an hour or so completed, I'll see where I can go from there.

Right now I'm debating a PSU, as I have 1 card running off a random 650 W PSU that is also running the motherboard;
the second PSU is an EVGA 750 powering the other 2 Vegas.

I'm thinking of getting a 1 kW unit to power 2 Vegas and my Ti.


----------



## deadman3000

SPARKLIES!

I can't get my memory over 900 MHz on my RX 56 in certain games, it would seem. I can run benchmarks at 950 MHz no problemo (Unigine Heaven, Time Spy, Firestrike, etc.), but in games like Tomb Raider: The Dagger of Xian (Unreal Engine 4) I get sparklies. Even at 900 MHz I have seen the odd sparkly. Clocked at 1562-1612, 1030-1070 mV; fan speed/target makes no difference to the memory clock.


----------



## dagget3450

I will update the owners list tonight; being in Florida, I am going to be busy prepping for possible hurricane action the rest of this week.


----------



## Newbie2009

So the new GPU-Z supports Vega.


----------



## deadman3000

The new version of GPU-Z is giving me problems. When it's open, I get pauses (including mouse pauses) in D3D applications every 7-8 seconds.


----------



## NI6HTHAWK

Hello all, just joined the forum, and count me in as a Vega 64 Liquid owner. I have been lurking on this site for a few years now, whenever I needed help with overclocking on previous builds.

I just picked up the Sapphire Vega 64 Liquid, running the GPU core at P6 1687 MHz / 1100 mV and P7 1697 MHz / 1150 mV, +15% power limit, and HBM2 at 1000 MHz / 950 mV.

I also have a 1080 Ti FE (running ~2025 MHz) which was on the same rig (Ryzen 7 1700 @ 3760 MHz on all cores, 3432 MHz DDR4). I can say that the Vega 64 really does seem to run smoother (better minimum FPS?) and handles multi-display setups very well compared with the 1080 Ti (I have 2x 27" 1440p FreeSync MG278Qs and a Sony 43" 750D 4K TV).

I've also found a few instances where the Vega 64 appears to beat the 1080 Ti (Rising Storm 2: Vietnam and World of Warships). I was having terrible problems with RS2 dropping below 60 FPS at 1440p with detail settings turned up on the 1080 Ti; the Vega 64, however, seems to stay close to the top of the FreeSync range at 120-144 Hz. World of Warships may just "feel" smoother because they switched to DX11, so I am going to go back and try it on DX9 (the 1080 Ti would drop into the 35-40 range at times on DX9; the Vega 64 never went below 71 FPS on DX11). That said, it's about 20% slower than the 1080 Ti in most benchmarks at the moment, although I think there may be more left in the tank with driver improvements, as these drivers are RAW!!! The experience with those FreeSync displays really makes up for the lower average FPS in most titles. The key here is to get as much out of that HBM2 as you can; it really seems to help with these unoptimized drivers.

That said, I'm in Florida too, and I'm not too happy that Irma is trying to take away my game time.







Stay safe OP!


----------



## chris89

I found that installing the driver alone via Device Manager, without the suite, speeds things up a lot: DDU first without restarting, then install the .INF via Device Manager.

Use Sapphire TriXX 6.4 or BIOS mods....

Any BIOS mods yet?


----------



## pmc25

Quote:


> Originally Posted by *chris89*
> 
> Installing the driver alone via Device Manager alone without the suite speeds things up a lot I found... DDU first without restart then install the .INF via device manager.
> 
> Use Sapphire Trixx 6.4 or BIOS MODS....
> 
> Any bios mods yet?


Does TriXX actually work for clocks/volts/power limit?


----------



## Tyrael

Quote:


> Originally Posted by *dagget3450*
> 
> I will update owners list tonight, being in florida i am going to be busy prepping for possible hurricane action rest of this week.


Come back healthy, and good luck.


----------



## Kyle Ragnador

*Is this wrong? Or am I on the way to catching the Nvidia 1080?*

XFX Vega 64 AIR

Click the blue "original" on the right side.


----------



## sugarhell

Quote:


> Originally Posted by *Kyle Ragnador*
> 
> *
> Is this wrong? Or am i an the way to catch the nvidia 1080?*
> 
> Vega 64 click the blue "original" on the right side


It is GPU-only, without the rest of the PCB.

Also, FurMark is not that accurate. Nowadays there are safety mechanisms in the drivers that will start throttling the GPU with FurMark and the like.

Another thing is that FurMark only stresses the shaders. Fire up something that uses the rasterizers, and maybe this undervolt won't hold.


----------



## Kyle Ragnador

I had Unigine Heaven running for 1 hour. Do you know a better benchmark for me to test?

I will use the 3DMark Fire Strike stress test tonight.


----------



## 113802

Quote:


> Originally Posted by *Kyle Ragnador*
> 
> I had unigine heaven running for 1h. do you know a better benchmark for me to test?


Play Overwatch; if your HBM overheats, your screen turns black.


----------



## Tyrael

Quote:


> Originally Posted by *Ark-07*
> 
> Hi all
> 
> So im trying to figure out how much power is needed for the vega 64? I have choices from the below links but I must say im not happy going over $200 for a good power supply or am I dreaming?
> 
> The choices
> http://www.msy.com.au/nsw/ultimo/24-power-supply#/
> 
> Choice 1 would really prefer not to get this one plus its out of stock.
> http://www.msy.com.au/nsw/ultimo/pc-components/15845--corsair-rm1000i-cp-9020084-au-1000watt-80plus-gold-full-modular-atx-power-supply-unit.html
> 
> choice 2
> http://www.msy.com.au/nsw/ultimo/power-supply/15844--corsair-rm850i-cp-9020083-au-850watt-80plus-gold-full-modular-atx-power-supply-unit.html
> 
> choice 3
> http://www.msy.com.au/nsw/ultimo/800w-1000w/18260-corsair-tx850m-cp-9020130-au-850watt-80plus-gold-atx-semi-modular-power-supply-unit.html
> 
> choice 4
> http://www.msy.com.au/nsw/ultimo/pc-components/12215-antec-hcg-850m-850w-high-current-gamer-modular-gaming-psu.html


For a gaming system with one GPU you usually need 250-350 W under load. PSUs generally work most efficiently at around 50% load, so a 750 W PSU with a Gold/Platinum/Titanium certificate is fine.

You can look here for some models: http://www.tomshardware.com/reviews/best-psus,4229.html

I am using the Seasonic 750 W PSU because of its premium features and quality; it also doesn't need to run its fan to cool itself, it just stays cool.
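The 50%-load rule of thumb above can be turned into a quick sizing sketch (the 350 W peak figure comes from the post; the helper name is mine):

```python
# PSU sizing sketch: pick a rating that puts peak system draw near the
# ~50% load point where 80 Plus units tend to be most efficient.

def recommended_psu_watts(peak_system_w, target_load=0.5):
    """PSU rating so that peak_system_w lands at target_load of capacity."""
    return peak_system_w / target_load

print(recommended_psu_watts(350))  # 700.0 -> so a 750 W unit has headroom
```

A Vega 64 under a raised power limit can spike well past its board-power rating, so erring toward the larger choices listed above is reasonable.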


----------



## LionS7

Quote:


> Originally Posted by *buildzoid*
> 
> I don't care about clock for clock comparison I'm more worried that VEGA seems to have the same problem that Fiji did where more Vcore leads to less performance once you pass a certain voltage level. Which is ridiculously annoying if you have enough cooling and want to bench say 1800MHz at 1.3V but can't because your performance ends up worse than 1700ish on 1.2V.


From my testing, this only happens in Firestrike with the R9 Fury X. Up to 1.23 V it's fine at 1080p, 1440p, and 4K. Starting at 1.26 V it drops scores in every Firestrike test.


----------



## ValiumMm

Quote:


> Originally Posted by *Kyle Ragnador*
> 
> I had unigine heaven running for 1h. do you know a better benchmark for me to test?
> 
> Will use the 3d mark fire strike stresstest tonight.


Open up the PUBG lobby screen. That's the only time I see my HBM temps go through the roof.
It's my new stress test, lmao.


----------



## Kyle Ragnador

Quote:


> Originally Posted by *ValiumMm*
> 
> Open up PUBG lobby screen. This is the only time I see my HBM temps go through the roof.
> Its my new stress test lmao.


You should undervolt the HBM.


----------



## Dolk

Is there any way to have boot-up profiles for the RX Vega yet? I'd like it if Wattman didn't reset its OC whenever it felt like it.


----------



## kundica

Quote:


> Originally Posted by *ValiumMm*
> 
> Open up PUBG lobby screen. This is the only time I see my HBM temps go through the roof.
> Its my new stress test lmao.


The lobby is a beast; I've seen the sparkles in the lobby with the HBM clocked too high. The game itself is too: if my card is unstable, PUBG will crash it quickly.


----------



## chris89

*511 MHz still gives 320 GB/s. At, say, 800 mV vs 1350 mV, that's roughly half the clock at a much lower voltage, so far less power and heat, and therefore that much more thermal headroom for the core clock...

Anyone say 2 GHz?

128 billion pixels per second and 512 billion texels per second.

Maybe the person who created Hawaii BIOS Reader will build an equally awesome Vega BIOS reader with HBM voltage control, so we can see what a 2 GHz Vega is truly all about.*
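For what it's worth, the usual CMOS rule of thumb says dynamic power scales as P ∝ f·V², so the voltage cut matters even more than the clock cut. A sketch with the figures above (the 1600 MHz full clock is my assumption, and static leakage is ignored):

```python
# Dynamic power ratio using P ~ f * V^2 (CMOS rule of thumb; leakage ignored).

def dynamic_power_ratio(f1_mhz, v1, f2_mhz, v2):
    """Dynamic power at (f2, v2) as a fraction of that at (f1, v1)."""
    return (f2_mhz * v2 ** 2) / (f1_mhz * v1 ** 2)

# 511 MHz at 0.80 V vs an assumed 1600 MHz at 1.35 V:
r = dynamic_power_ratio(1600, 1.35, 511, 0.80)
print(f"{r:.2f}")  # about 0.11 -> roughly a ninth of the dynamic power
```

So the halved clock plus the undervolt cuts dynamic power far more than 50%, which is where the extra core headroom would come from.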


----------



## Caldeio

Quote:


> Originally Posted by *cooljaguar*
> 
> GTA V, Metro: Last Light, Rise of the Tomb Raider, and TW: Warhammer all have built in benchmarks I think, so any of those would be perfect.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Doom 2016 and Battlefield 1 (singleplayer) would also be nice if you have the time.
> 
> Thanks for doing this, it'll be interesting to see the results!





Metro 2033 Redux and Tomb Raider.
1632 core and 945 mem.
PT +50%

Each with AA on and then with it off. Tomb Raider is DX12; I'll test DX11 soon. Also, for some reason, GTA V would not tell me the FPS after I ran the benchmark; it just jumped back to story mode. I'd like to know what I got. Way better than my RX 470, for sure.



A bit more testing:
1632 core clock
1140 memory clock. It was stable for benchmarks, but then I loaded up CS:GO to play and got a grey screen, lol. Working on this some more!!

It needs better cooling; at 1150 the HBM black-screens from heat if I don't undervolt/lower clocks.









Still only get 6200 in the 4K Optimized Superposition benchmark. Way too hot!


----------



## kundica

Some observations with drivers 17.8.2 and 17.8.1.

I'm one of the people having issues running 17.8.2 with the stock core clock and +50% power limit. My card passes benches but will crash while gaming on that driver. My workaround has been to raise the p7 voltage to 1250 mV, or to lower the p7 clock to 1727 while leaving the voltage at the stock 1200 mV. With 17.8.1 my card runs fine at stock core and +50% power limit, but the clock peaks at 1752 (which is also p7). On 17.8.2, when p7 is set to the stock 1752, the card often peaks around 1780 and even as high as 1802. When p7 is set to 1727 on 17.8.2, it typically peaks at 1752.

If the clocks are reading correctly on both drivers, it's reasonable to think my card is boosting too high on 17.8.2 and that's why it's crashing. The max boost clock for p7 on the AIO Vega 64 is 1750, according to Sapphire's Vega launch announcement. 1802 is a lot higher than that, and I don't think all cards could reach it and remain stable.

As a side note, lowering p7 to 1727 helps my card sustain higher clocks on 17.8.2 than leaving p7 at the stock 1752.


----------



## PontiacGTX

Quote:


> Originally Posted by *chris89*
> 
> 511mhz is 320gb/s id say 800mv vs 1350mv, which is about 50% less clock therefore 50% less voltage therefore 50% less power & watts therefore 50% cooler therefore resulting in 50% greater core clock overhead...
> 
> anyone say 2ghz?
> 
> 128 billion pixels per second & 512 billion texels per second
> 
> maybe the person who created hawaii bios reader, may build vega bios reader equally as awesome with hbm voltage so we can see what 2ghz vega is truly all about


That will make the bandwidth bottleneck even worse.
Quote:


> Originally Posted by *Caldeio*
> 
> 
> 
> 
> metro 2033 redux, and tomb raider.
> 1632 and 945 mem.
> PT+50%
> 
> Each with AA on and then with it off. Tomb raider is dx12, I'll test dx11 soon. Also for some reason GTA5 would not tell me the fps after I ran the benchmark it just jumped back to story mode, i'd like to know what i go too? Way better than my rx470 for sure.
> 
> 
> 
> A bit more testing,
> 1632 core clock
> 1140 memory clock
> Needs better cooling for HBM 1150 black screens from heat if I dont undervolt/lower clocks.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Still only get 6200 in 4k optimized SUPERPOSITION benchmark. Way too hot!


What did you use to change the memory voltage, and what was the HBM temp?


----------



## cooljaguar

Quote:


> Originally Posted by *Caldeio*
> 
> metro 2033 redux, and tomb raider.
> 1632 and 945 mem.
> PT+50%
> 
> Each with AA on and then with it off. Tomb raider is dx12, I'll test dx11 soon. Also for some reason GTA5 would not tell me the fps after I ran the benchmark it just jumped back to story mode, i'd like to know what i go too? Way better than my rx470 for sure.


Nice! Thanks again for running those benchmarks.


----------



## Caldeio

Quote:


> Originally Posted by *PontiacGTX*
> 
> That will make even worse the Bandwidth bottleneck
> what did you use to change the memory voltage and what was the HBM temp


Still working on everything, so those HBM clocks were so far only benchmark-stable, plus a few games. Loaded up CS:GO and it grey-screened.

Side note: 167MHz HBM clocks still gave me 275fps in CS:GO.









BIOS mod, and the HBM temp in Superposition was 80C. Max fan and undervolting don't help either; I tried 1100mV at 1632 core. In some mining testing it was 83C, and I think the limit is 85C before black screen.
I think the timings get looser at around 60C, and again at 78C at least.

Oh, I haven't tested it yet, but when I apply 1060mV for the HBM it sticks in Wattman. I don't really want to test this on air.









EDIT: looks like 1060 HBM is the max stable in CS:GO for me on air. So I think I'm done with my testing now.


----------



## hexc0de

In the new GPU-Z, we now have monitoring for "GPU Temperature (Hot Spot)". What part of the card does this refer to? Is it the VRMs? I replaced the reference cooler with a Morpheus II, but this temperature reading still reaches 105C and causes throttling.


----------



## Soggysilicon

Quote:


> Originally Posted by *Tyrael*
> 
> Usually for a gaming system with one gpu you usually need 250 - 350 watt on load. PSU's usually are working most efficient at 50% load. So a PSU with 750 Watt and Gold/Platinum/Titanium certificate is fine.
> 
> You can look here for some models: http://www.tomshardware.com/reviews/best-psus,4229.html
> 
> I am using the Seasonic 750 watt psu because of the premium features and quality and also it does not need to run the fan to cool itself, it just stays cool.


Good chance that Seasonic is the same as the old Thermaltake TPX-775M. I think several manufacturers used the same reference design, or were acquiring the supplies from a single manufacturer with slightly different "as built" specifications.

I have the TPX myself. Great supply, never any trouble; runs Vega and Ryzen just fine.


----------



## Roboyto

Quote:


> Originally Posted by *Ark-07*
> 
> Hi all
> 
> So im trying to figure out how much power is needed for the vega 64? I have choices from the below links but I must say im not happy going over $200 for a good power supply or am I dreaming?
> 
> The choices
> http://www.msy.com.au/nsw/ultimo/24-power-supply#/
> 
> Choice 1 would really prefer not to get this one plus its out of stock.
> http://www.msy.com.au/nsw/ultimo/pc-components/15845--corsair-rm1000i-cp-9020084-au-1000watt-80plus-gold-full-modular-atx-power-supply-unit.html
> 
> choice 2
> http://www.msy.com.au/nsw/ultimo/power-supply/15844--corsair-rm850i-cp-9020083-au-850watt-80plus-gold-full-modular-atx-power-supply-unit.html
> 
> choice 3
> http://www.msy.com.au/nsw/ultimo/800w-1000w/18260-corsair-tx850m-cp-9020130-au-850watt-80plus-gold-atx-semi-modular-power-supply-unit.html
> 
> choice 4
> http://www.msy.com.au/nsw/ultimo/pc-components/12215-antec-hcg-850m-850w-high-current-gamer-modular-gaming-psu.html


Judging by the rest of your system, your current 620W PSU would likely be sufficient even if pushing the Vega pretty hard. Worst case, the GPU alone could be drawing somewhere around 400W, which would still leave you with power for the rest of the system. This isn't the best thing for the PSU, but it could be done... If I had to make a suggestion, it would be a 700-750W PSU with a Silver or better efficiency rating. If you're smart about tweaking the card and undervolt, you can get good performance for minimal extra wattage.

Not that this was a good idea, but once upon a time I ran a (slightly) overvolted/overclocked R9 290 and overclocked 4790k on a 450W gold rated PSU. It was also powering numerous fans, a pair of water pumps, a pair of SSDs, HDD and ODD. Never had a lick of trouble with stability, benching etc.
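The back-of-the-envelope PSU math above can be sketched in a few lines. All the component wattages below are illustrative assumptions (an aggressively pushed Vega 64, a typical CPU, and a lump sum for fans/drives), not measurements from any real system:

```python
# Rough PSU sizing check: sum worst-case component draws and see how
# close you land to the PSU rating and the ~50%-load efficiency sweet spot.

def psu_headroom(psu_watts, draws):
    """Return total draw, percent load, and remaining headroom in watts."""
    total = sum(draws.values())
    return {"total_w": total,
            "load_pct": round(100 * total / psu_watts, 1),
            "headroom_w": psu_watts - total}

# Hypothetical figures only; measure your own system at the wall.
system = {"vega64_pushed_hard": 400, "cpu": 95, "fans_drives_rest": 60}
print(psu_headroom(620, system))  # {'total_w': 555, 'load_pct': 89.5, 'headroom_w': 65}
print(psu_headroom(750, system))  # {'total_w': 555, 'load_pct': 74.0, 'headroom_w': 195}
```

This matches the reasoning in the post: a 620W unit can technically carry the load but sits near 90% of its rating, while a 750W unit keeps the load in a much more comfortable range.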

Quote:


> Originally Posted by *Caldeio*
> 
> 
> 
> A bit more testing,
> 1632 core clock
> 1140 memory clock- Was stable for benchmarks, loaded up csgo just now too play and grey screen lol Working on this some more!!
> 
> Needs better cooling for HBM 1150 black screens from heat if I dont undervolt/lower clocks.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Still only get 6200 in 4k optimized SUPERPOSITION benchmark. Way too hot!


Even with full cover block my Super Position scores were suffering until I went with undervolting the core. It seems to hit power limit well before all other benchmarks I've tried. Running an undervolt on the core, with 50MHz lower clock speed and HBM ~1100, I boosted my Super Position score just shy of 10%.

The HBM runs too hot with the stock cooler. I'm somewhat shocked you can get those HBM speeds. Mine wouldn't clock over 970, but I was only forcing fan to ~3300.


----------



## CaptainTom

21460153_10213794021186129_1636182253_o.jpg 225k .jpg file


Don't know if anyone has reported this yet, but I just noticed that one of the 56s in my mining rig is reporting 3648 SPs (using a Vega 64 BIOS, verified in the attachment). It does in fact mine better than my other 56s as well!

So I have a Vega 58! And you CAN in fact unlock additional shaders with a 64 BIOS, even if it is rare.


----------



## ducegt

Quote:


> Originally Posted by *CaptainTom*
> 
> 21460153_10213794021186129_1636182253_o.jpg 225k .jpg file
> 
> 
> Don't know if anyone has reported this yet, but I just noticed that one of the 56' in my mining rig is reporting 3648 SP's (utilizing a Vega 64 bios, verified in attachment). It does in fact also mine better than my other 56' as well!
> 
> So I have a Vega 58! And in fact you CAN unlock additional shaders with a 64 bios, even if it is rare.


Interesting, but mining and GPU-Z aren't the most accurate measurements. Have you tested 3D benchmarks yet? I hope you're right!


----------



## CaptainTom

Quote:


> Originally Posted by *ducegt*
> 
> Interesting, but mining and GPU-Z not so accurate measurements. 3D benchmarks tested yet? I hope you're right!


Cannot promise when I will get to that, but I intend to eventually. Also, I can't honestly say gaming would be a good test either lol.

After all, it has been shown that Vega (as usual with AMD) is mostly bandwidth limited. Vega 56, even with 3584 cores, nearly matches Vega 64 at the same clocks (but Vega 64 clocks higher).


----------



## Gdourado

The Strix Vega 64 review is up at Guru3d.
Same performance as reference.
Same temps as reference.
Only 2 dB quieter...
Is this first AIB design a total fail?


----------



## kundica

Quote:


> Originally Posted by *Gdourado*
> 
> Strix Vega 64 review is up at Guru3d.
> Same performance as reference.
> Same temps as reference.
> Only 2 DB quieter...
> Is this first AIB design a total fail?


It's probably a bs review. A handful of sites have already had hands-on time with the card and they had much different results. Not only was it a lot quieter, but it sustained higher clocks.


----------



## 113802

Quote:


> Originally Posted by *kundica*
> 
> It's probably a bs review. A handful of sites have already had a hands on with the card and they had much different results. Not only was it a lot more quiet but it sustained higher clocks.


The Vega 64 core can't overclock; all the reviews used 17.8.1, which reported fake frequencies.


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Vega 64 Core can't overclock, all the reviews used 17.8.1 which were fake frequencies.


That's fine. The scores they had were still representative of sustaining higher clocks.


----------



## chris89

Can someone test 511MHz memory?

945MHz divided by 483GB/s = 1.9565

511MHz divided by 320GB/s = 1.596875

945MHz divided by 511MHz is 1.8493 (511 is roughly 54% of 945)

1350mV HBM for 945MHz

1350mV divided by 730mV = 1.8493, so roughly the same factor less power

_*511MHz @ 730mV*_

from *84C* HBM, divided by 1.8493 = *45.4C*

483GB/s divided by 320GB/s = 1.51. If 945MHz HBM needs 1350mV, then 730mV HBM at 511MHz should run much cooler for both core and HBM (since they contact the same copper), meaning 2GHz+ capable?

*2500MHz core x 64 ROPs = 160 billion pixels per second & 2500MHz core x 256 TMUs = 640 billion texels per second.*

_2500MHz divided by 1750MHz core is 42.86% more, so that brings the core from 45C x 1.4286 = 57.9C @ 2.5GHz?!?_
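For anyone following the arithmetic in this post: HBM2 bandwidth and theoretical fill rates do scale linearly with clock, so those parts are easy to sanity-check. The sketch below assumes Vega's 2048-bit HBM2 bus, 64 ROPs, and 256 TMUs; note that voltage and temperature do not scale by these same simple ratios:

```python
# Sanity-check the linear-scaling parts of the claim: memory bandwidth
# and peak fill rates are straight multiples of clock speed.

def hbm2_bandwidth_gbs(clock_mhz, bus_bits=2048):
    """Effective bandwidth in GB/s: double data rate x bus width in bytes."""
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) / 1e9

def fill_rates(core_mhz, rops=64, tmus=256):
    """Theoretical peak (Gpixels/s, Gtexels/s) from clock x unit counts."""
    return core_mhz / 1000 * rops, core_mhz / 1000 * tmus

print(hbm2_bandwidth_gbs(945))   # ~483.8 GB/s at the stock 945 MHz HBM clock
print(hbm2_bandwidth_gbs(511))   # ~261.6 GB/s, somewhat below the 320 quoted
print(fill_rates(2000))          # (128.0, 512.0) at a hypothetical 2 GHz core
```

The fill-rate numbers match the "128 billion pixels / 512 billion texels" figures quoted earlier for a 2GHz core; the voltage and temperature extrapolations in the post are much more speculative, since power scales roughly with voltage squared times frequency, not linearly.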


----------



## Kyle Ragnador

Quote:


> Originally Posted by *sugarhell*
> 
> It is GPU only without the rest of the pcb.
> 
> Also, Furmark is not that much accurate. Nowdays, they have safety mechanisms on the drivers that will start throttling the gpu with Furmark etc.
> 
> Another thing is that furmark only stress the shaders. Fire up something that will use the rasterizers and maybe this undervolt will not hold.


With stock settings + PowerTune +50%, GPU power is at 280W (without PowerTune it's capped at the 220W BIOS setting).
With my settings, GPU power is at 183W.

!!! I have noticed that Wattman's revert-settings option isn't working for HBM power. It stays at the last setting you used; if you go back to custom it's still the old value. !!!


----------



## chris89

*Installing the 17.8.2 INF alone via Device Manager is ideal; it's faster, trust me.*


----------



## 113802

Quote:


> Originally Posted by *kundica*
> 
> That's fine. The scores they had were still representative of sustaining higher clocks.


That is because the HBM was overclocked, and it still performed the same as stock Vega. Asus reached out to TweakTown regarding the similar performance, and they mainly spoke about the cooling. I am still trying to find out whether core overclocking is limited by power or by the driver. Setting the power limit to 100% with the core running at 52C max load, it still hovers around 1668MHz when overclocking. This is what they said about overclocking.

ASUS: "This is totally incorrect. In this generation, AMD made the overclocking a lot easier, where if you raise the "GPU power limit", it will do its job automatically and overclock the GPU clock automatically. If the power limit (TDP) is raised, there is no way a card can underperform another card of the same GPU provided it has not thermally throttled. By the way, AMD stock power limit should be 200W and 220W for VEGA 64, depending on the profile used.".

Read more: https://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html


----------



## chris89

*DDU then ...

C:\AMD\Non-WHQL-Radeon-Software-Crimson-ReLive-17.8.2-Win10-64Bit-Aug24\Packages\Drivers\Display\WT6A_INF\C0317359.inf*


----------



## Gdourado

Quote:


> Originally Posted by *kundica*
> 
> It's probably a bs review. A handful of sites have already had a hands on with the card and they had much different results. Not only was it a lot more quiet but it sustained higher clocks.


Can you link to such results, please?
I usually think Guru3d has a good rep.


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> That is because HBM was overclocked and it still performed the same as stock Vega. Asus reached out to TweakTown regarding the similar performance and they mainly spoke about the cooling. I am still trying to find out if core overclocking is a power issue or driver. Setting the power limit to 100% and the core running at 52c max load it still hovers around 1668Mhz when overclocking. This is what they said about overclocking.
> 
> ASUS: "This is totally incorrect. In this generation, AMD made the overclocking a lot easier, where if you raise the "GPU power limit", it will do its job automatically and overclock the GPU clock automatically. If the power limit (TDP) is raised, there is no way a card can underperform another card of the same GPU provided it has not thermally throttled. By the way, AMD stock power limit should be 200W and 220W for VEGA 64, depending on the profile used.".
> 
> Read more: https://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html


What are you talking about? You can OC the core just fine but it's highly impacted by power limit and thermals. The Asus statements are pretty clear.


----------



## pmc25

Quote:


> Originally Posted by *ValiumMm*
> 
> Open up PUBG lobby screen. This is the only time I see my HBM temps go through the roof.
> Its my new stress test lmao.


It's because of over-tessellation ... there is massive over-tessellation in PUBG with Vega cards.

I find this very suspicious, given NVIDIA-partnered games' past shenanigans (PUBG is now NVIDIA-partnered) ... but I'm willing to give it the benefit of the doubt for now and assume it's just broken drivers on AMD's part ...


----------



## kundica

Quote:


> Originally Posted by *Gdourado*
> 
> Can you link to such results please?
> I usually think Guru3d has good rep.


This article sources a few. https://videocardz.com/72128/asus-rog-strix-radeon-rx-vega-64-finally-gets-tested

Keep in mind that it's not fair to compare the stock Strix card to an OC'd reference.


----------



## 113802

Quote:


> Originally Posted by *Gdourado*
> 
> Can you link to such results please?
> I usually think Guru3d has good rep.


There aren't any; that is why TweakTown wrote this article when the first Strix reviews came out.

Title: ASUS STRIX Vega 64 trades blows with reference Vega 64

Read more: https://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html
Quote:


> Originally Posted by *kundica*
> 
> What are you talking about? You can OC the core just fine but it's highly impacted by power limit and thermals. The Asus statements are pretty clear.


The Vega core doesn't run above 1700MHz sustained no matter what is tried. Even eliminating power limits and thermals, the cards still hover between 1668-1700MHz. The drivers are still immature, so I am waiting for them to mature. Please have a Vega card before commenting.


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> There aren't any, that is why TweakTown wrote this article when the first Strix reviews came out.
> 
> Title: ASUS STRIX Vega 64 trades blows with reference Vega 64
> 
> Read more: https://www.tweaktown.com/news/58803/asus-strix-vega-64-trades-blows-reference/index.html
> Vega core doesn't run above 1700Mhz sustained no matter what is tried. Eliminating power limits and thermals the cards still hover between 1668-1700Mhz. The card still has a ton of drivers so I am waiting for them to mature. Please have a Vega card before commenting.


Huh? I've owned the 64 Air and currently have the AIO 64. My card can easily sustain over 1700, typically around 1740ish, but it's very dependent on the task. Running synthetic benches it's usually a bit lower, while rendering Blender tasks or rendering in Resolve it clocks higher.

See my post history.

Also, check out any of AMD Matt's videos using 17.8.2. Depending on the game you can see sustained clocks over 1700. https://www.youtube.com/user/TheMattB81


----------



## asdkj1740

hbm2 contact...


----------



## dsmwookie

What are the VRM temps for you guys on Vega? I typically use a universal GPU block and I wanted to see how hot they are getting.


----------



## CaptainTom

Quote:


> Originally Posted by *Gdourado*
> 
> Strix Vega 64 review is up at Guru3d.
> Same performance as reference.
> Same temps as reference.
> Only 2 DB quieter...
> Is this first AIB design a total fail?


This review actually leaked onto WccfTech like a week ago. Yeah it looked like a total dud.

People need to come to terms with the FACT that the reference coolers on Vega are fantastic - you just need to undervolt!

I wouldn't be surprised if some of these AIBs were actually lowering performance by overvolting their cards.


----------



## PontiacGTX

Quote:


> Originally Posted by *chris89*
> 
> ]Can someone test 511mhz memory?
> 
> 945mhz divided by 483gb = 1.9565
> 
> 511mhz divided by 320gb = 1.596875
> 
> 945mhz divided by 511mhz is 1.8493 (84.93% less is 511 than 945)
> 
> 1350mv hbm for 945mhz
> 
> 945mhz is 1350mv divided by 730mv = 1.8493 84.93% less power requirement
> 
> _*511mhz @ 730mv*_
> 
> from *84c* hbm divided by 1.8493 = *45.42C*
> 
> 483gb divided by 320gb = 50.937% less which means however 1350mv for 945mhz hbm means 730mv hbm at 511mhz is 84.93% cooler Core & hbm since they contact the same copper, meaning 2Ghz+ capable
> 
> *2500mhz core x 64 ROP @ 160,000,000,000 BILLION pixels & 2500mhz core x 256 TMU 640,000,000,000 BILLION Texel's per second.*
> 
> _2500mhz divided by 1750mhz core is 42.86% more so that brings core from 45C x 1.4286 = 57.87C @ 2.5GHZ ?!?_


You realize that at 2GHz you aren't gaining performance, and that underclocking the HBM makes the raster/pixel bottleneck EVEN WORSE?


----------



## 113802

Quote:


> Originally Posted by *kundica*
> 
> Huh? I've owned the 64 Air and currently have the AIO 64. My card can easily sustain over 1700, typically around 1740 ish but it's very dependent on the task. Running synthetic benches it's usually a bit lower while rendering Blender tasks or rendering in Resolve it clocks higher.
> 
> See my post history.
> 
> Also, check out any of AMD Matt's videos using 17.8.2. Depending on the game you can see sustained clocks over 1700. https://www.youtube.com/user/TheMattB81


I am curious whether the monitoring software he is using works properly with Vega. I am using WattMan to monitor frequencies and temperatures. I can bench at 1852MHz (anything over that black-screens), but the highest the core hits is 1720 with a 100% power limit, with temperatures at 52C, using 17.8.2.

My 3DMark Fire Strike GPU score never goes above 25700 comparing stock against HBM @ 1105MHz and a GPU core set to 1852MHz; WattMan never reports any frequency above 1720MHz, while 1668MHz is sustained throughout the entire benchmark.

Edit: Overwatch/Tomb Raider/Battlefield 1 never go above 1668MHz while set to 1852MHz.


----------



## kundica

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I am curious if the monitoring software he is using works properly with Vega. I am using WattMan to monitor frequencies and temperatures. I can bench at 1852Mhz but anything over black screens but the highest the core hits is 1720 with a 100% Power Limit with temperatures at 52C using 17.8.2
> 
> 3DMark FireStrike GPU score never goes above 25700 comparing stock with HBM @ 1105Mhz and GPU core at 1852Mhz/1105Mhz - WattMan never reports any frequencies above 1720Mhz, while 1668Mhz is sustained throughout the entire benchmark.
> 
> Edit: Overwatch/Tomb Raider/BattleField 1 never go above 1668Mhz while running 1852Mhz


He's using HWinfo64 with Rivatuner for the overlay. I use the same and it matches Wattman quite well.


----------



## Newbie2009

I've been playing around with the peak clock on my card.

I can run the card at uber-low voltage at stock on air, but I think my hard max OC limit is between 1715MHz-1735MHz (Vega 64).

At a 1750MHz set clock the card will run around 1710 @ 1150mV; it seems to become unstable if I try to go much higher, even with 1200mV.

So I guess you could say the AIO 64 stock clocks are more or less the limit for what my card can do, regardless of volts.

Also note the hotspot temp GPU-Z reports: my block struggles with that, though HBM and core temps are fine.


----------



## Tgrove

Incoming pictures!


----------



## Nuke33

Quote:


> Originally Posted by *CaptainTom*
> 
> 21460153_10213794021186129_1636182253_o.jpg 225k .jpg file
> 
> 
> Don't know if anyone has reported this yet, but I just noticed that one of the 56' in my mining rig is reporting 3648 SP's (utilizing a Vega 64 bios, verified in attachment). It does in fact also mine better than my other 56' as well!
> 
> So I have a Vega 58! And in fact you CAN unlock additional shaders with a 64 bios, even if it is rare.


Very interesting! Could you try to replicate your results with 17.8.2 driver instead of the mining one ?

Also I second what CaptainTom said, benches would verify better than mining.


----------



## Tgrove

Got here a day early! Time to join the fun


----------



## pmc25

Quote:


> Originally Posted by *Newbie2009*
> 
> I've been playing around with the peak clock on my card.
> 
> I can run the card on uber low voltage @ stock AIR but I think my hard limit max OC is between 1715mhz-1735mhz (vega64)
> 
> @ 1750mhz set clock card will clock around 1710 @ 1150mv, seems to become unstable if I try to go much higher even with 1200mv
> 
> So I guess you could say the aio 64 stock clocks are more or less the limit for what my card can do regardless of volts.
> 
> Also note, the hotspot GPUZ reports, my block struggles with that, HBM and core temps are fine.


We can talk about hard limits 3-6 months from now. Maybe not even then.

Drivers (and BIOS) are so early we have no idea what limits are.

Proper overclocking software will really help, too.


----------



## Newbie2009

Quote:


> Originally Posted by *Tgrove*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Got here a day early! Time to join the fun


I'd be interested to see your MAX clocks, hot spot, power draw with the new GPUz while benching.


----------



## Newbie2009

Quote:


> Originally Posted by *pmc25*
> 
> We can talk about hard limits 3-6 months from now. Maybe not even then.
> 
> Drivers (and BIOS) are so early we have no idea what limits are.
> 
> Proper overclocking software will really help, too.


Not sure what you are expecting, but I bought HD 7970s on launch and 290Xs on launch, and put both under water.

The clocks and volts I found in day-one testing with max fan on air, before even putting them under water, were more or less where everything stayed, even under water. And I had those cards for years.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *kundica*
> 
> Some observations I've noticed with drivers 17.8.2 and 17.8.1.
> 
> I'm one of the people having issues running 17.8.2 with stock core clock and +50% power limit. My card passes benches but will crash while gaming on that driver. My workaround has been to up the p7 voltage to 1250mv or lower the p7 clock to 1727 while leaving the the voltage at stock 1200mv. With 17.8.1 my card runs fine at stock core +50% power limit but the clock peaks at 1752(which is also p7). On 17.8.2, when p7 is set to stock 1752 the card will often peaks around 1780 but even as high as 1802. When p7 is set to 1727 on 17.8.2 it typically peaks at 1752.
> 
> If the clocks are reading correctly on both drivers, given my observations it's reasonable to think my card is boosting too high on 17.8.2 and that's why it's crashing. The max boost clock for p7 on the AIO Vega 64 is 1750 according to Sapphire's Vega launch announcement. 1802 is a lot higher than that and I don't think all cards would be able to reach it and remain stable.
> 
> As a side note, lowering p7 to 1727 helps my card sustain higher clocks on 17.8.2 than leaving p7 at the stock 1752


I have noticed the same on my Vega Liquid; that's why I lowered P7 to 1697. The scenes I had it rendering never showed it boosting up to 1752 anyway. I can maintain around 1680MHz clocks with only a 15% power limit and a 50mV undervolt.


----------



## chris89

*You should try Sapphire Trixx 6.4.0 & DDU Device Manager Display Driver Alone.*

DDUv17.0.5.1.zip 2333k .zip file


If Trixx doesn't work, try Radeon Pro Tools.

Radeon-Pro-Tools.zip 3444k .zip file


TRIXX_installer_6.4.0.zip 2395k .zip file


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> you realize that 2 GHZ you arent gaining performance and THAT underclocking HBM makes the raster/pixel bottleneck EVEN WORSE?


How? You never even dropped memory below 945MHz...


----------



## Kyozon

Quick question, guys. A little off-topic, but I am new to the forum and I am not sure where I can ask this.

I own a Polaris Pro Duo. I have noticed the card seems to be locked; I can't increase or decrease the memory and core clocks/voltages. Is there any way I can unlock those controls on this card?

Thanks.


----------



## PontiacGTX

Quote:


> Originally Posted by *chris89*
> 
> How, you never even dropped memory below 945mhz...?


It is known that the performance gains come from just overclocking the memory, with default GPU clocks. Underclocking the memory and overclocking the GPU (assuming it can go beyond 1.8GHz) will make for a really bad bottleneck.


----------



## milkbreak

Are any modifications necessary to install the Morpheus II on a Vega card? Any more word on whether this cooler can do anything for the VRM temps? I smelled burning a few times when overclocking my 56 and I have to assume it's the VRM. Everything seems to be working fine though.


----------



## Tyrael

Hey guys, I just got my RX Vega 56 delivered, but I cannot install 17.8.2. It always leaves me with Error 1603. I also tried after DDU, but still the same error.
Does anyone have a hint for me?


----------



## Newbie2009

Quote:


> Originally Posted by *Tyrael*
> 
> Hey guys I just got my RX Vega 56 delivered, but I cannot install 17.8.2. It always leaves me with Error 1603. I also tried it after DDU but still same Error.
> Anyone has a hint for me?


reseat the card for a start


----------



## shadowxaero

Vega 64 installed 

Tried flashing liquid bios, but would crash under any gpu load v.v *sniffle*


----------



## kundica

Quote:


> Originally Posted by *shadowxaero*
> 
> Vega 64 installed
> 
> Tried flashing liquid bios, but would crash under any gpu load v.v *sniffle*
> 
> 
> Spoiler: Warning: Spoiler!


Downclock the card to something more reasonable when using the AIO bios. Try 1727 or 1702 to start.


----------



## punchmonster

I slapped on a strip of the thermal tape that came with the Morpheus and attached the thinnest heatsinks. No specific modifications, no.
Quote:


> Originally Posted by *milkbreak*
> 
> Are any modifications necessary to install the Morpheus II on a Vega card? Any more word on whether this cooler can do anything for the VRM temps? I smelled burning a few times when overclocking my 56 and I have to assume it's the VRM. Everything seems to be working fine though.


----------



## Caldeio

Quote:


> Originally Posted by *Nuke33*
> 
> Very interesting! Could you try to replicate your results with 17.8.2 driver instead of the mining one ?
> 
> Also I second what CaptainTom said, benches would verify better than mining.


Same here! 3648 shaders, and it stuck after I flashed the BIOS back to stock.

Too much heat on the HBM! With how much hotter the HBM runs than my core (12-15C delta), I'm thinking I should look at the thermal paste. I have some Gelid Extreme; is that good?


----------



## chris89

Quote:


> Originally Posted by *Kyozon*
> 
> Quick question guys. A little off-topic, but i am new to the Forum and i am not sure where i can ask this question.
> 
> I own a Polaris PRO Duo. I have noticed the Card seems to be Locked, i can't Increase or Decrease the Memory and Core Clocks/Voltages. Is there anyway i can unlock those controls on this Card?
> 
> Thanks.


Send me the BIOS, plus pics of the 32GB beast ... attach both BIOSes as a .zip via the paperclip, dumped with GPU-Z 2.2.0.


----------



## shadowxaero

I did. Anything over 1690 and it crashes; this is on the stock BIOS with a 142% power offset, or on the AIO BIOS.


----------



## punchmonster

The HBM will always run a chunk hotter than the core. That's why good cooling is paramount; you want to avoid the timings slipping.
Quote:


> Originally Posted by *Caldeio*
> 
> Same here! 3648 shaders and it stuck after I flashed the bios back to stock.
> 
> Too much heat on the HBM! With how hot the hbm gets off my core(12-15c delta), I'm thinking I should look at the thermal paste. I have some Gelid Extreme, is that good?


----------



## kundica

Quote:


> Originally Posted by *shadowxaero*
> 
> I did, anything over 1690 and crash, this is on stock bios with 142% power offset or the AIO bios.


Seems like you've found your card's limit then. You could try 17.8.1 if you're not already using it. I find it to be more stable.


----------



## Greenland

Just received Vega 56, any ideas how to flash it with Vega 64 bios? Are the risks worth it for the performance ?

Many thanks,


----------



## shadowxaero

Quote:


> Originally Posted by *kundica*
> 
> Seems like you've found your card's limit then. You could try 17.8.1 if you're not already using it. I find it to be more stable.


Side note: when your card crashes, does the entire system crash? I have a 1700, and when I crash the system goes to error code 00.


----------



## Caldeio

Quote:


> Originally Posted by *Greenland*
> 
> Just received Vega 56, any ideas how to flash it with Vega 64 bios? Are the risks worth it for the performance ?
> 
> Many thanks,


If you have an aftermarket cooling solution sure, lots of heat!


----------



## kundica

Quote:


> Originally Posted by *shadowxaero*
> 
> Sidenote, when your card crashes, does the entire system crash? I have a 1700 and when I crash systems goes to Error Code 00.


Sometimes I'll get a full system reset, other times the screen will just go black and I have to force reset. I think it just depends on how bad the crash is and the task.


----------



## shadowxaero

Quote:


> Originally Posted by *kundica*
> 
> Sometimes I'll get a full system reset, other times the screen will just go black and I have to force reset. I think it just depends on how bad the crash is and the task.


Okay cool, that makes me feel better lol. I am used to driver resets when I crash, haha, not full system crashes.


----------



## Greenland

I will probably get one in the near future. Any ideas how to flash it?


----------



## kundica

Quote:


> Originally Posted by *shadowxaero*
> 
> Okay cool, that makes me feel better lol. I am used to driver resets when I crash haha not full system crashes.


What PSU are you running? I recommend using 2 separate power cables instead of a single daisy chained cable if you aren't already.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *Kyozon*
> 
> Quick question guys. A little off-topic, but i am new to the Forum and i am not sure where i can ask this question.
> 
> I own a Polaris PRO Duo. I have noticed the Card seems to be Locked, i can't Increase or Decrease the Memory and Core Clocks/Voltages. Is there anyway i can unlock those controls on this Card?
> 
> Thanks.


You may need to use a 3rd party program like MSI Afterburner or Sapphire TriXX; that's what I had to do with my Fury X. It should be able to unlock GPU core voltage control and HBM memory speeds (not sure about HBM voltage, though).

Edit: nevermind, noticed you said Polaris Pro DUO, didn't even know that existed!


----------



## Roboyto

Quote:


> Originally Posted by *Gdourado*
> 
> Strix Vega 64 review is up at Guru3d.
> Same performance as reference.
> Same temps as reference.
> Only 2 DB quieter...
> Is this first AIB design a total fail?


Quote:


> Originally Posted by *kundica*
> 
> It's probably a bs review. A handful of sites have already had a hands on with the card and they had much different results. Not only was it a lot more quiet but it sustained higher clocks.


Yeah...they didn't do a good job. They even mention in the overclocking section that you should try undervolting, but didn't actually do that themselves. As we know, it would have had very positive effects.

I'm waiting for Sapphire or XFX to release their air-cooled cards like their Fury series: a shortened PCB with a large section at the end getting unrestricted airflow. That made for extremely cool and quiet cards.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> Vega core doesn't run above 1700Mhz sustained no matter what is tried. Eliminating power limits and thermals the cards still hover between 1668-1700Mhz.


My Vega will run above 1700; 1732MHz is the best stable clock I've managed so far on the stock BIOS. This was possible before I put the waterblock on, too. I've got some of the best Time Spy and Firestrike graphics scores (last I checked on the 3DMark site) with my card running around 1732/1100.

If you said 1800 core isn't possible, then I would agree, as I haven't seen it yet.
Quote:


> Originally Posted by *Newbie2009*
> 
> Not sure what you are expecting but I bought HD7970s on launch, 290Xs on launch and put both under water.
> 
> The tests I did day one with max fan on air before even putting under water was where everything more or less stayed, even under water, in terms of clocks&volts. And I had those cards for years.


Agreed. I owned a half-dozen R9 290s over the last 4 years. I bought 2 brand new in late 2013, and one of those had a waterblock on it within a couple of weeks. Watercooling didn't really change the performance of the card all that much; it did help some thanks to a silicon-lottery jackpot: 1300/1700, with the RAM hitting that speed running the stock 1250 timings. The waterblock gave me about an extra 50MHz on the core clock, which wasn't much of an improvement, as Hawaii didn't benefit greatly past around 1200 core.

Aside from my freakishly awesome 290, all the others were pretty much the same. Max out voltage/power and they would all get pretty close to 1200 core, while RAM overclocking was very spotty. Only the one card I had could get those crazy speeds.
Quote:


> Originally Posted by *shadowxaero*
> 
> Sidenote, when your card crashes, does the entire system crash? I have a 1700 and when I crash systems goes to Error Code 00.


Sometimes. Pushing HBM too far generally results in a hardlock/black screen requiring a physical power reset.

Other crashes it will auto reboot.

Driver crashes will sometimes cause WattMan to glitch, turning the whole WattMan window into a blurry transparent box. I've found that using Task Manager to kill the whole Radeon task/host application fixes this.

Anytime there is a crash or failed bench, I would suggest hitting the reset in WattMan and reapplying your settings.


----------



## Greenland

@Roboyto:

Any ideas how to OC and undervolt the card? I have tried everything with Wattman and it fails to OC my vega 56 every single time. Here's my Wattman settings: http://i.imgur.com/PHh1VjR.png

Thanks.


----------



## Tyrael

Quote:


> Originally Posted by *Newbie2009*
> 
> reseat the card for a start


What do you mean by reseat?
Quote:


> Originally Posted by *Newbie2009*
> 
> reseat the card for a start


I reseated it and also tried 17.8.1, but still the same error. I installed the driver from the control center and have now used Sapphire's TriXX to set the power limit to -50.


----------



## Trender07

Hello! So my Vega 64 just arrived today, YAY!
Do I have to post something to join the club?

I'm leaving it as the air-cooled blower (Limited Edition).

For now I'm running 1538MHz at 1040mV, and it's at least stable in gaming.


----------



## Kyle Ragnador

On my way to low power I hit some interesting situations.

1: GPU Vcore cannot go under 1000mV; every Wattman setting below that has no effect.
2: HBM2 voltage: going under ~1015mV decreases your GPU core clock.
Example:
start: GPU 1560MHz
reduce HBM2 voltage to 955mV
GPU now clocks at 1536MHz max (you need a 1.5% overclock while HBM2 is reduced to get back to 1560).

A problem arises if you have undervolted your GPU and HBM and then try to increase the HBM2 voltage:
start: GPU 1560MHz (1.5% OC included) at 1000mV; HBM2 1000MHz at 955mV
increase HBM2 to 1010mV or 1050mV
GPU clocks jump to over 1600MHz (while still only at 1000mV) and it crashes.

*Now to my low-power test*

Air-cooled Vega 64
GPU-Z 2.3.0 GPU-only power draw:

(custom GPU: 1.5% OC at 1000mV; HBM: 1000MHz at 955mV | Vcore under 1000mV is blocked and has no effect in Wattman)
Fan settings: 74°C-85°C and 700rpm-2900rpm, resulting in a constant 2450rpm after a spin-up.

GPU points (Tests 1 and 2 only; without Physics and Combined):

Vega 64 / balanced, 0% Powertune -> 220W || 3DMark Fire Strike Extreme: 10506 (220W BIOS limit)
Vega 64 / balanced, 50% Powertune -> 280W || 3DMark Fire Strike Extreme: 11101 (Powertune BIOS limit increased to 330W)
Vega 64 / custom, 0% Powertune -> 175W || 3DMark Fire Strike Extreme: 11361
Vega 64 / custom, 50% Powertune -> 175W || 3DMark Fire Strike Extreme: 11357 (PT has no effect since the BIOS limit is 220W)

Balanced, 50% Powertune: !! Attention: the GPU hotspot reaches 105°C and HBM hits 90°C (the temperature limit reduces performance) !!

Conclusion: you can get the same performance with around 100W less.

In other words: -45W consumption for +7.5% performance (compared to the balanced 0% PT reference), or -105W consumption for +2.3% performance (compared to balanced 50% PT).

Beware: there should be a big chip-quality difference between cards out there.
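
As a sanity check, the table above can be recomputed as points-per-watt; a minimal Python sketch (the profile names are just labels for the rows of the table above):

```python
# GPU-only power draw (W) and Fire Strike Extreme GPU score,
# copied from the table above.
results = {
    "balanced 0% PT":  (220, 10506),
    "balanced 50% PT": (280, 11101),
    "custom 0% PT":    (175, 11361),
    "custom 50% PT":   (175, 11357),
}

for name, (watts, score) in results.items():
    print(f"{name:>16}: {score / watts:.1f} pts/W")

# Custom profile vs. the balanced +50% PT baseline:
base_w, base_s = results["balanced 50% PT"]
cust_w, cust_s = results["custom 0% PT"]
print(f"delta: {cust_w - base_w:+d} W, {100 * (cust_s / base_s - 1):+.1f} % score")
```

On these numbers the custom profile lands around 65 pts/W versus roughly 40-48 pts/W for the balanced profiles, which matches the conclusion above.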


----------



## Trender07

What are safe blower fan speeds? I've set it to 3200rpm max; is that safe, so it won't break or come loose? Because at 2400rpm I hit 86°C and throttled.

Also, another question: what's the 2nd BIOS setting? Any difference from the 1st BIOS, or are they the same and we can just try? ;P


----------



## Kyle Ragnador

Quote:


> Originally Posted by *Trender07*
> 
> Hmm whats the blower safe speeds? I've set it at 3200 max speed, is that safe so it won't break or loose? Because at 2400 rpm I had 86º and throttled.
> 
> Also another question, whats the 2nd bios setting? Any difference with 1st bios or theyre the same and we can try ;P


The air-blower Vega has 2 BIOS settings.
For the air-cooled Vega 64:

The left BIOS (left when "Radeon" reads the normal way up) is the normal BIOS; it has a power limit of 220W.
The right BIOS is the power-save BIOS; it has a power limit of 200W.

Vega 56: 165W and 150W.

The 2nd BIOS therefore means:
- less heat
- less energy consumption
- less performance

Have you overclocked?

If not, you can try changing the GPU Vcore from 1200mV at P7 down to 1100-1070mV (set both P7 and P6 to the new value); this should work on most cards and should reduce your temps.

You can check my post for what is possible: http://www.overclock.net/t/1634018/official-vega-frontier-rx-vega-owners-thread/1600#post_26326240

As for your 2nd question: the blower should work without problems at 3000rpm, and it's possible to run it at up to 4900rpm.

BUT it is unhealthy for you personally to hear that noise all the time (starting with headaches if you listen to a vacuum cleaner for hours...).


----------



## pmc25

Quote:


> Originally Posted by *Newbie2009*
> 
> Not sure what you are expecting but I bought HD7970s on launch, 290Xs on launch and put both under water.
> 
> The tests I did day one with max fan on air before even putting under water was where everything more or less stayed, even under water, in terms of clocks&volts. And I had those cards for years.


Are you seriously comparing their launch drivers with Vega's?

Voltage regulation, clock targets, dynamic clocking, LLC, temperature targets and power limits are a gigantic mess.

There may not have been much driver optimisation for the 79xx or 29x at launch, but neither had the litany of basic control issues that Vega does.

Until the drivers get better, there's no way of telling what limits are ... and given that it's now 2 weeks since the last driver release, and AMD seem happy to leave them in this broken state for an extended period of time, I think it's likely to be some while before significant rectification is seen.


----------



## Gdourado

When Vega launched, the 56 was reviewed as a 1070 contender and the 64 a 1080 contender.
Here the 56 is very close in price to some 1080 cards.
With current optimizations, how does the Vega 56 compare to a 2ghz boost 1080?
Is it still far and still only a 1070 contender?


----------



## Soggysilicon

Quote:


> Originally Posted by *dsmwookie*
> 
> What are the VRM temps for you guys on Vega? I typically use a universal GPU block and I wanted to see how hot they are getting.


Steve over at GN did 2 hybrids and noted that VRM temps aren't a big deal... from what I saw, and from what I have done in the past with universal blocks, peppering a card with copper heatsinks would be sufficient in a decently ventilated case.

That said, I went with the EK-FC block because I didn't want to have to get into fabricating a proper bracket for the block... and with the cards being difficult to get a hold of, really didn't want to push luck with a potential "down the road" issue and not have a card to replace my mistake with. If you think the risk is acceptable from the block mounting standpoint, go for it. The VRMs won't be a limiting issue.


----------



## Trender07

I have my FPS locked to 70, but my Vega still gets as hot as it can on the Overwatch hero-selection screen, lol. Anyone else? Then when the game starts it's OK.


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> Whats the safe blower fan speeds? I've set it at 3200 max speed, is that safe so it won't break or loose? Because at 2400 rpm I had 86º and throttled.
> 
> Also another question, whats the 2nd bios setting? Any difference with 1st bios or theyre the same and we can try ;P


That Delta isn't going anywhere... feel free to wind it out if you don't mind the noise; worst case, it may reduce its life some. The only other caveat is that the Delta blower pulls around 23 watts from the mobo...


----------



## Trender07

Quote:


> Originally Posted by *Soggysilicon*
> 
> That delta isn't going anywhere... feel free to wind it out if you don't mind the noise... may reduce it's life some, worse case... only other caveat is that the delta blower pulls from the mobo around 23 watts...


You want my PC to fly out of my house, don't you? Hahah


----------



## rancor

Finally got around to installing the waterblock and overclocking.





Flashed the bios to the 64 LC version and I am using the soft power play mod with 150% power limit and 450A current limit. Temps are ~35C core and ~40C HBM under load.

So far it is running at 1660 MHz -1700 MHz depending on the load with HBM at 1100MHz.

https://www.3dmark.com/fs/13560078

Firestrike extreme: 10860 overall and 12572 graphics score


----------



## OMgoo

So I have a problem...

I put the Morpheus II on my Vega 64 yesterday.

35°C idle,

but under load while playing PUBG it still gets to 75°C core and 82°C HBM.

HBM was at 1020MHz and the core at 1580MHz.


----------



## punchmonster

Make sure to properly tighten it. I had some weird temperature behaviour at first, then I tightened the screws a bit more and it was perfect. Also how much thermal compound did you use and what fans?
Quote:


> Originally Posted by *OMgoo*
> 
> So I have a problem...
> 
> Put the Morpheus II yesterday on my Vega 64
> 
> 35°C Idle
> 
> but under Load while playing PUBG still gets to 75°C Core and 82°C HBM.
> 
> HBM was on 1020MHz and Core 1580MHz.


----------



## OMgoo

Quote:


> Originally Posted by *punchmonster*
> 
> Make sure to properly tighten it. I had some weird temperature behaviour at first, then I tightened the screws a bit more and it was perfect. Also how much thermal compound did you use and what fans?


I used plenty of Thermal Grizzly Kryonaut.

But I used the original brace instead of the Raijintek one, so I could reuse the backplate.

I'll try to modify the backplate so the Raijintek brace will fit.


----------



## gupsterg

Quote:


> Originally Posted by *chris89*
> 
> maybe the person who created hawaii bios reader, may build vega bios reader equally as awesome with hbm voltage so we can see what 2ghz vega is truly all about


Currently no chance. Vega has a security processor which detects a modded VBIOS, so the card will not POST.
Quote:


> Originally Posted by *Gdourado*
> 
> Strix Vega 64 review is up at Guru3d.
> Same performance as reference.
> Same temps as reference.
> Only 2 DB quieter...
> Is this first AIB design a total fail?
> 
> Quote:
> 
> 
> 
> Originally Posted by *kundica*
> 
> It's probably a bs review. A handful of sites have already had a hands on with the card and they had much different results. Not only was it a lot more quiet but it sustained higher clocks.
> Quote:
> 
> 
> 
> Originally Posted by *Gdourado*
> 
> Can you link to such results please?
> I usually think Guru3d has good rep.
> 
> 
> 
> 
> 

I'd agree with kundica.

Recently on OCuk, a member told me to read a 'proper' review, linking a Guru3D one when I had linked a TPU one. When I pointed out how Guru3D recycles data between reviews, and pointed out the finer points of the TPU review, he stopped citing the 'proper' review.


----------



## Gdourado

just saw this:





This goes against what was being said about undervolting...
What's the deal here?


----------



## Newbie2009

Quote:


> Originally Posted by *Gdourado*
> 
> just saw this:
> 
> 
> 
> 
> 
> This goes against what was being said about undervolting...
> What's the deal here?


What does it say, exactly?


----------



## Gdourado

Quote:


> Originally Posted by *Newbie2009*
> 
> What does it say exactly


He undervolted the Vega 56 and applied a bit of overclocking. Benchmarks are tied with the 1070 FE for the most part, power consumption is way up, and he says noise was a big issue!


----------



## SAMiN

@Gdourado Why don't you buy a GTX 1070 or 1080 and get it over with? Obviously, if you're going to get a Vega, either 56 or 64, you should be fine knowing it's going to consume more power and thus create a bit more heat compared to the Nvidia cards. If you can't accept that and have no use for the computational power Vega offers over the 1070/1080, then you'd be better off going with team green.


----------



## Newbie2009

Quote:


> Originally Posted by *Gdourado*
> 
> He undervolted the Vega 56, applied a bit of overclocking, Benchmarks are tied with the 1070 FE for most part, power consumption is way up and he says noise was a big issue!


Well there you have it, overclocking will cause more power draw.

Undervolting will reduce power and give slightly better performance as the card is not hitting the power envelope wall.

You can't expect to be able to undervolt then overclock in any meaningful way.
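
Newbie2009's point can be illustrated with the usual first-order dynamic-power model, P ≈ k·V²·f. The constant and voltages below are made-up illustration values, not Vega measurements: under a fixed power cap, lowering the voltage raises the sustainable frequency, while stacking an overclock on top just spends that headroom again.

```python
# Toy model of a power-capped GPU: P ≈ k * V^2 * f.
# K and the voltages are arbitrary illustration values, not Vega data.
K = 100.0          # fictitious proportionality constant (W / (V^2 * GHz))
POWER_CAP = 220.0  # watts, like the stock air-BIOS limit mentioned in-thread

def sustainable_clock_ghz(core_voltage_v: float) -> float:
    """Highest frequency the power cap allows at a given core voltage."""
    return POWER_CAP / (K * core_voltage_v ** 2)

stock = sustainable_clock_ghz(1.20)        # at 1200 mV
undervolted = sustainable_clock_ghz(1.05)  # at 1050 mV
print(f"stock: {stock:.2f} GHz, undervolted: {undervolted:.2f} GHz")
```

The shape of the curve, not the exact numbers, is the point: a ~12% voltage drop buys disproportionately more clock inside the same power envelope, which is why undervolting alone already "overclocks" a power-limited card.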


----------



## paulc010

Undervolting will generally raise the clock anyway (assuming the card is power or temperature limited). Manually boosting the clock even further will then increase the power draw, of course. I haven't watched the video but I assume that the poster actually measured the voltages and/or power draw rather than assuming that the tools were showing the correct values?

Most of the early reviews were extremely naive.


----------



## pmc25

Quote:


> Originally Posted by *OMgoo*
> 
> So I have a problem...
> 
> Put the Morpheus II yesterday on my Vega 64
> 
> 35°C Idle
> 
> but under Load while playing PUBG still gets to 75°C Core and 82°C HBM.
> 
> HBM was on 1020MHz and Core 1580MHz.


What you said about the bracket and perhaps over-application of TIM may be part of the issue.

However, test other games / benchmarks.

I suspect that PUBG is behaving similarly to Furmark in terms of thermal load. There's huge over-tessellation on Vega cards in PUBG... whether due to funny business between Bluehole / NVIDIA, or AMD's crap drivers, we don't know.


----------



## Trender07

Also, guys, how safe is playing at 82-84°C? I'm playing like that so I have low noise (but of course, if it isn't safe / reduces the card's life or whatever, I don't mind spinning up the fan).


----------



## kundica

Quote:


> Originally Posted by *Gdourado*
> 
> just saw this:
> 
> 
> 
> 
> 
> This goes against what was being said about undervolting...
> What's the deal here?


The deal is Steve doesn't seem to have a clue about undervolting this card. It's a pity because he's generally liked by viewers so now a bunch of uninformed people are going to run with his results as gospel.
Quote:


> Originally Posted by *Newbie2009*
> 
> Well there you have it, overclocking will cause more power draw.
> 
> Undervolting will reduce power and give slightly better performance as the card is not hitting the power envelope wall.
> 
> You can't expect to be able to undervolt then overclock in any meaningful way.


I only skimmed the video, but it also seems he left the power limit at 0. Nothing against Steve, but do these guys understand anything about the gear they're testing? Why do reviewers keep setting P6 and P7 to the same numbers? Do they not understand how the states work?


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> Also guys how much of a safe temp is playing in 82-84º C? Im playing like that so I have low noise (but ofc if it wouldnt be safe reduce life or whatever I don't mind using spinning up the fan)


It's perfectly safe, but you experience constant thermal throttling. Your HBM timings will also be very poor.

Undervolt it and increase max fan speed a little.


----------



## Trender07

Quote:


> Originally Posted by *pmc25*
> 
> It's perfectly safe, but you experience constant thermal throttling. Your HBM timings will also be very poor.
> 
> Undervolt it and increase max fan speed a little.


My undervolting attempt so far (I think it's OK?) is like this:

Core: 1582MHz at 1000mV

HBM2: 1000MHz at 1040mV (I've read we can't undervolt below 1000mV)


----------



## SAMiN

Quote:


> Originally Posted by *kundica*
> 
> The deal is Steve doesn't seem to have a clue about undervolting this card. It's a pity because he's generally liked by viewers so now a bunch of uninformed people are going to run with his results as gospel.
> I only skimmed the video but it also seems he left the power limit at 0. Nothing against Steve, but do these guys understand anything about the gear they're testing? Why is it that reviewers keeping setting p6 and p7 to the same numbers, do they not understand how the states work?


Exactly my thoughts!


----------



## Gdourado

Quote:


> Originally Posted by *SAMiN*
> 
> Exactly my thoughts!


I usually like Steve's videos and for me Techspot is a go to site for reviews.
It's too bad if he didn't get this one right.

How is undervolting properly done in regard to the p-states?

Cheers!


----------



## poisson21

I have a little problem with my card's behavior.

RX Vega 64, watercooled with an EK block and flashed with an LC BIOS.

Tests made with the Unigine benchmark.

In Wattman, up to 1732MHz / 50% power limit I get normal behavior: I see an output of 1710-1720MHz and no problems.

But if I exceed 1732MHz, during the benchmark the output rises to 1780-1800MHz and I crash.

For now the HBM is at only 1015MHz and I didn't touch the current delivery.

Is this normal??


----------



## Nuke33

Quote:


> Originally Posted by *kundica*
> 
> The deal is Steve doesn't seem to have a clue about undervolting this card. It's a pity because he's generally liked by viewers so now a bunch of uninformed people are going to run with his results as gospel.
> I only skimmed the video but it also seems he left the power limit at 0. Nothing against Steve, but do these guys understand anything about the gear they're testing? Why is it that reviewers keeping setting p6 and p7 to the same numbers, do they not understand how the states work?


I suspect they still do undervolting that way because the early press release drivers they had would only allow undervolting if one set both states at the same time. Otherwise the voltages did not apply.


----------



## Nuke33

Quote:


> Originally Posted by *Caldeio*
> 
> Same here! 3648 shaders and it stuck after I flashed the bios back to stock.


That is odd. Can you verify it has higher performance now?


----------



## Newbie2009

Quote:


> Originally Posted by *Nuke33*
> 
> That is odd, can you verify it has higher performance now ?


The GPU-Z devs have said it is a bug; it doesn't unlock shaders.


----------



## Nuke33

Quote:


> Originally Posted by *Newbie2009*
> 
> GPUZ Have said it is a bug, it doesn't unlock shaders.


I suspected as much. Thank you for the confirmation.


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> My trying on undervolting for now (I think its ok?) is like this:
> 
> Core: 1582 MHz and 1000mV
> 
> as for HBM2 1000 mhz and 1040 mV (Ive read we cant undervolt less than 1000mv)


HBM2 voltage cannot be adjusted.

WattMan is a mess and is mislabeled.

HBM2 Voltage adjustment in Wattman adjusts something relating to GPU Core Voltage (seems like it might be some kind of offset). I suggest you set it to the same 1000mV as you are setting the GPU Core Voltage.

If you're on a Vega64, you can almost certainly get much better than 1000Mhz HBM2. Try 1050Mhz if temperatures are still an issue. If not, try something 1085-1100Mhz.


----------



## Trender07

Quote:


> Originally Posted by *pmc25*
> 
> HBM2 voltage cannot be adjusted.
> 
> WattMan is a mess and is mislabeled.
> 
> HBM2 Voltage adjustment in Wattman adjusts something relating to GPU Core Voltage (seems like it might be some kind of offset). I suggest you set it to the same 1000mV as you are setting the GPU Core Voltage.
> 
> If you're on a Vega64, you can almost certainly get much better than 1000Mhz HBM2. Try 1050Mhz if temperatures are still an issue. If not, try something 1085-1100Mhz.


OK, thanks. Then I'm running 1582MHz core and 1050MHz HBM2, both at 1000mV. I guess the core clock affects temps more?

Also, is anybody else having Chrome crash on Twitter videos? Imgur videos on Reddit also crash (but not YouTube videos, for instance).


----------



## CaptainTom

Quote:


> Originally Posted by *Newbie2009*
> 
> GPUZ Have said it is a bug, it doesn't unlock shaders.


Yes it does, pal. I tested these 3 cards in the same rig: Vega 56, Vega 58, Vega 64.

Vega 58 was consistently in the middle of the two in temperature, performance, and power usage. I indeed have a Vega 58.


----------



## PontiacGTX

Quote:


> Originally Posted by *CaptainTom*
> 
> Yes it does pal. I tested these 3 cards in the same rig: Vega 56, Vega 58, Vega 64.
> 
> Vega 58 was consistently in the middle of the two in temperature, performance, and power usage. I indeed have a Vega 58.


The only way to prove they have higher performance is to have them all at the same clock speed: underclock them all to Vega 56 clocks, then measure compute with AIDA64, or test some benchmark that can be shader/ALU bound.
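
PontiacGTX's proposed test boils down to this: at identical clocks, a shader/ALU-bound score should scale roughly with active CU count. A small sketch of the expected gap (the percentage is just the CU ratio, not a measured result):

```python
# At equal clocks, an ALU-bound benchmark should scale roughly with
# active CU count; this computes the expected gap, not real data.
def expected_gain_pct(cus_test: int, cus_base: int) -> float:
    """Expected percentage gain of cus_test over cus_base, clocks equal."""
    return (cus_test / cus_base - 1) * 100

# A genuine 58-CU card at Vega 56 clocks vs. a 56-CU card:
print(f"{expected_gain_pct(58, 56):.1f} %")  # about 3.6 %
```

A gap that small is easily buried in run-to-run noise, which is why a purely ALU-bound test (rather than a full game benchmark) would be needed to show anything conclusive.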


----------



## Newbie2009

Quote:


> Originally Posted by *CaptainTom*
> 
> Yes it does pal. I tested these 3 cards in the same rig: Vega 56, Vega 58, Vega 64.
> 
> Vega 58 was consistently in the middle of the two in temperature, performance, and power usage. I indeed have a Vega 58.


GPU-Z came out today and said it is just a bug, pal









https://www.techpowerup.com/236831/psa-flashing-rx-vega-56-with-rx-vega-64-bios-does-not-unlock-shaders


----------



## milkbreak

Can anyone knowledgeable explain the concern with installing a Morpheus II onto a Vega card? Specifically, the concern regarding shorting of certain components, possibly the VRM? I want to make sure I don't screw things up when my card arrives.


----------



## CaptainTom

Quote:


> Originally Posted by *PontiacGTX*
> 
> the onyl way to prove they have higher performance ,it is having all the same clock speed, underclock all to VEGA 56 clocks then measure compute withn aida64 or test some benchmark that can be shader/alu bound


Maybe I should have clarified - yes they were all at the same clocks when I tested them lol. The test would be pointless otherwise.

Why is this hard for some people to believe? Unlocking shaders on an AMD card is anything but new.


----------



## PontiacGTX

Quote:


> Originally Posted by *CaptainTom*
> 
> Maybe I should have clarified - yes they were all at the same clocks when I tested them lol. The test would be pointless otherwise.
> 
> Why is this hard for some people to believe? Unlocking shaders on an AMD card is anything but new.


Can you show an AIDA64 benchmark of a Vega 56 versus a Vega 56 with 58 CUs?


----------



## Nuke33

Quote:


> Originally Posted by *milkbreak*
> 
> Can anyone knowledgeable explain the concern with installing a Morpheus II onto a Vega card? Specifically, the concern regarding shorting of certain components, possibly the VRM? I want to make sure I don't screw things up when my card arrives.


Shorting your VRM by installing the base is unlikely. You just have to take care when installing the little VRM heatsinks; possibly use thermal glue to be sure.


----------



## laczarus

Latest GPU-Z version 2.4.0 reports 3584 shaders on my Vega 56 now.
Sadly not a Vega 57 anymore
https://www.techpowerup.com/download/techpowerup-gpu-z/


----------



## Kyle Ragnador

*My new power-saving settings*

Vega 64:

P6: 1557MHz at 950mV
P7: 1662MHz at 950mV

HBM: 1000MHz at 950mV
Fan range: 700rpm-2900rpm
Target temp: 73-85°C

Resulting in GPU: 1507MHz, HBM: 1000MHz, and *only 160W in Furmark!* That's 120W less than balanced +50% PT (280W+).

After spinning up for safety margin it settles back to 2250rpm, with temps (Furmark): GPU 73°C, GPU hotspot 76°C, HBM2 84°C.

Fire Strike Extreme GPU score: 11,100


----------



## Nuke33

Quote:


> Originally Posted by *CaptainTom*
> 
> Maybe I should have clarified - yes they were all at the same clocks when I tested them lol. The test would be pointless otherwise.
> 
> Why is this hard for some people to believe? Unlocking shaders on an AMD card is anything but new.


I would be very happy if it works.
But I am sceptical, because AMD locked down the BIOS so much. It would be strange if they allowed unlocking shaders, since it is easy for them to laser-cut the die.


----------



## Trender07

BTW guys, although the GPU starts throttling/downclocking at 85°C, at what temp does the (sensitive, as I've read) HBM2 start throttling?


----------



## laczarus

Quote:


> Originally Posted by *Trender07*
> 
> btw guys althought the gpu start thorttling downclocking at 85º, at which temp does the (sensible as I've read) HBM2 temp starts throttling?


HBM2 should be kept below 80°C, according to Buildzoid's latest video.
That applies when you want to run it at 1100MHz, IIRC.


----------



## pmc25

AFAIK the HBM timings will begin increasing in steps from 40C and up.
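
If that is right, the behavior would look like a step function of temperature. The thresholds and penalty values below are entirely hypothetical placeholders (only the 40°C starting point comes from the post above), just to show the shape of the claim:

```python
# Purely illustrative: temperature-stepped HBM timing penalty.
# Only the 40 C starting point comes from the post; every other
# number here is a made-up placeholder, not a measured Vega value.
HYPOTHETICAL_STEPS = [  # (threshold in C, relative timing penalty)
    (40, 0.02),
    (60, 0.05),
    (75, 0.09),
    (85, 0.14),
]

def timing_penalty(hbm_temp_c: float) -> float:
    """Penalty in effect at a given HBM temperature (0.0 below 40 C)."""
    penalty = 0.0
    for threshold, p in HYPOTHETICAL_STEPS:
        if hbm_temp_c >= threshold:
            penalty = p
    return penalty

print(timing_penalty(35.0), timing_penalty(70.0), timing_penalty(90.0))
```

If timings really do loosen in steps like this, it would explain why water-cooled cards post better memory-bound scores even at identical HBM clocks.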


----------



## chris89

Quote:


> Originally Posted by *Trender07*
> 
> btw guys althought the gpu start thorttling downclocking at 85º, at which temp does the (sensible as I've read) HBM2 temp starts throttling?


You need to undervolt the HBM below 1.000V to get it down to 60°C.

Stock HBM is 1.350V @ 945MHz @ 483GB/s, so 400GB/s would be 782.6mV and 73% cooler than stock.

It's throttling above 75°C.


----------



## Mandarb

Quote:


> Originally Posted by *Trender07*
> 
> Ok thanks, then running 1582 mhz core and 1050 hbm2, both 1000 mv. I guess core clock affects more temps?
> 
> Also anybody else having Chrome crashing when twitter videos? Or on reddit imgur videos also crashes (but not youtube videos i.e)


Bug in the current driver, disable hardware acceleration in your browser settings.


----------



## ookiie

Hi. I have a Vega 56 with the 64 BIOS, which works nicely, but I just slapped the EKWB FC waterblock on it and I'm wondering: can I put the Vega 64 LC BIOS on it? I haven't found any posts or websites reporting that the LC BIOS works on the 56, or anyone who has done it. Is it possible, and are there any benefits? My temperatures are around 40°C under load for both core and HBM, so temps shouldn't be an issue here.


----------



## kundica

Returning my 2nd AIO card for crashing at stock clocks on 17.8.2. Snagged an Air card for the correct price. Ordering my full loop momentarily.


----------



## kundica

Oh ****. http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.9.1-Release-Notes.aspx


----------



## pmc25

Quote:


> Originally Posted by *chris89*
> 
> you need to undervolt hdm below 1.000 volt as to get it down to 60C
> 
> stock hbm is 1.350v @ 945mhz @ 483gb/s, so 400gb/s is 782.6mv & 73% cooler than stock
> 
> its throttling above 75c


You can't undervolt or overvolt HBM; it forever remains at 1.356V. Heat output from the GPU is the main factor determining how hot the HBM gets: not HBM clock, and probably not voltage, were it adjustable.


----------



## CaptainTom

Quote:


> Originally Posted by *Nuke33*
> 
> I would be very happy if it works.
> But I am sceptical because AMD locked down the Bios so much. Would be strange if they allowed unlocking shaders, since it is easy for them to lasercut the die.


It wouldn't be strange at all, lol - *that's the standard method of unlocking shaders*. The BIOS is locked down from being _changed_, but we aren't changing any BIOS; we are simply flashing already-approved BIOSes onto different cards.

The Vega 56 BIOS can read up to 56 CUs (hence why flashing the 56 BIOS onto a 64 breaks the card), but the Vega 64 BIOS can read _up to_ 64. So it simply scans the GPU, identifies that there are 57, and keeps on truckin'.


----------



## pmc25

Quote:


> Originally Posted by *CaptainTom*
> 
> It wouldn't be strange at all lol - *that's the standard method of unlocking shaders*. The bios is locked down from being _changed,_ but we aren't changing any bios; we are simply flashing already approved bios' on different cards.
> 
> The Vega 56 bios can read up to 56 shaders (hence why flashing 56 on 64 breaks the card), but the Vega 64 bios can read _up to_ 64. So it simply scans the GPU and identifies that there are 57, and keeps on truckin'.
> 
> But anyways I believe I have explained why this would work in addition to proving it _does_ work. Temperatures, performance numbers, power draw, and even GPU-Z readings all confirm I have successfully flashed a Vega 56 into a Vega 57. *Any doubt at this point simply means there is nothing anyone could ever do to convince you it worked.*


You haven't proved anything except that you're prone to hyperbole.

Download AMD GPU Profiler and test it. That should tell you how many shaders are actually working or not.


----------



## rancor

Quote:


> Originally Posted by *CaptainTom*
> 
> It wouldn't be strange at all lol - *that's the standard method of unlocking shaders*. The bios is locked down from being _changed,_ but we aren't changing any bios; we are simply flashing already approved bios' on different cards.
> 
> The Vega 56 bios can read up to 56 shaders (hence why flashing 56 on 64 breaks the card), but the Vega 64 bios can read _up to_ 64. So it simply scans the GPU and identifies that there are 57, and keeps on truckin'.


https://www.techpowerup.com/236834/techpowerup-gpu-z-v2-4-0-released

Is it still showing on the new GPU-Z? If it is, testing needs to be done with identical power limits, voltages, and clocks to give solid proof that the shaders are unlocked.


----------



## Caldeio

Quote:


> Originally Posted by *rancor*
> 
> https://www.techpowerup.com/236834/techpowerup-gpu-z-v2-4-0-released
> 
> Is it still showing on the new GPU-Z? If it is, testing needs to be done with identical power limits, voltages, and clocks to give solid proof that the shaders are unlocked.


it is not


----------



## milkbreak

Quote:


> Originally Posted by *Nuke33*
> 
> Shorting your VRM by installing the base is unlikely. You just have to take care when installing the little vrm heatsinks. possibly use thermalglue to be sure.


What exactly is the concern though? Any of the VRM heatsinks touching each other, or touching another component, or what?


----------



## pmc25

Installed 17.9.1 ... the weird need to 'overclock' significantly to reach desired stock(ish) frequencies is still there.

They STILL haven't correctly labelled whatever the Memory Voltage in WattMan actually is ... absolutely no excuse for this.

Game profiles still don't have any effect whatsoever.

Furthermore, the update function on Radeon Settings is broken. It keeps prompting you to install a new driver (17.9.1) when it's already installed.

Can't say much else as I haven't had time to test yet, but given the above points, I'm not exactly optimistic.

Edit: At least, as the release notes hint at, hardware video acceleration in browsers is now fixed.


----------



## Trender07

Well looks like nothing for us on the new drivers :/

EDIT:

Looks like Chrome doesn't crash anymore with videos!


----------



## Tyrael

Nothing changed for me. Still error 1603 and no crimson software.


----------



## Wbroach23

Hey Anyone have any experience with any of the Vega cards and the HTC Vive yet?


----------



## Newbie2009

So apart from the new drivers removing enhanced sync for some reason, the performance is solid.


----------



## kundica

Quote:


> Originally Posted by *Newbie2009*
> 
> So apart from the new drivers removing enhanced sync for some reason, the performance is solid.
> 
> 


Yeah. My AIO 64 also stopped crashing on the new driver. Bad news is, I already have an Air 64 on the way to replace it.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> 
> Yeah. My AIO 64 also stopped crashing on the new driver. Bad news is, I already have an Air 64 on the way to replace it.


I don't understand why you and others are RMA'ing cards for replacement when you know drivers are totally unstable atm. What does it achieve?


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> I don't understand why you and others are RMA'ing cards for replacement when you know drivers are totally unstable atm. What does it achieve?


Because the card is crazy expensive and crashes with no overclocking. I find that completely unacceptable. I don't care what the state of drivers are when I'm not even pushing the card and it fails to work. Also, the AMD rep on ocUK even said the cards are faulty. He's returned 2 already.

Also, I'm not RMA'ing for replacement. I'm returning for a refund. I don't want an AIO card if this is what I'm going to get. I bought an Air and I'll put a block on it.


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> Because the card is crazy expensive and crashes with no overclocking. I find that completely unacceptable. I don't care what the state of drivers are when I'm not even pushing the card and it fails to work. Also, the AMD rep on ocUK even said the cards are faulty. He's returned 2 already.
> 
> Also, I'm not RMA'ing for replacement. I'm returning for a refund. I don't want an AIO card if this is what I'm going to get. I bought an Air and I'll put a block on it.


But it's not the hardware. It's just a lottery as to whether the hardware will play semi-nice with the drivers.

You stand just as good a chance of the same happening with a different SKU. Drivers are the same.

The only reason more AIO SKUs are unstable is because they're fundamentally more unstable due to the BIOS allowing higher GPU voltage and power limits.

... and I'm guessing you intend to flash the vanilla RX64 (which could be a lower bin) with the AIO BIOS ... you're back to square one.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> But it's not the hardware. It's just a lottery as to whether the hardware will play semi-nice with the drivers.
> 
> You stand just as good a chance of the same happening with a different SKU. Drivers are the same.
> 
> The only reason more AIO SKUs are unstable is because they're fundamentally more unstable due to the BIOS allowing higher GPU voltage and power limits.


You're speculating as to why the cards are crashing, but it doesn't really matter. The card works or it doesn't, and as I said before it's BS that cards are shipping in such a state (driver or not). If the AMD rep himself is returning cards and telling others to return them, then there's a very good reason to return the card.

And no, the Air 64 SKUs do not have this issue. Go through the ocUK thread and see how many people are having issues with their AIO cards. Some people can't even run their card on anything but powersave, and that's with good PSUs.


----------



## pmc25

I have. I just told you why they have more issues.

PCB is identical, chips are identical (unless AIOs are a higher bin).

Only thing that differs is the BIOS.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> I have. I just told you why they have more issues.
> 
> PCB is identical, chips are identical (unless AIOs are a higher bin).
> 
> Only thing that differs is the BIOS.


Clearly they are higher binned. I haven't seen any Air cards that can run the clocks the AIO run even those on custom loops running the AIO bios. They also have a completely different stepping, C0 vs C1.

Why don't you stop being a condescending jerk and know your facts before you try to crap on other people. Just like how you complain about the clocks on the new driver and needing to OC much higher to run stock freqs. You clearly don't understand how the stepping and boost clocks on these cards work.


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> The deal is Steve doesn't seem to have a clue about undervolting this card. It's a pity because he's generally liked by viewers so now a bunch of uninformed people are going to run with his results as gospel.
> I only skimmed the video but it also seems he left the power limit at 0. Nothing against Steve, but do these guys understand anything about the gear they're testing? Why is it that reviewers keep setting p6 and p7 to the same numbers, do they not understand how the states work?


Do you even need to ask that!?







HAHAHA! Good 1!


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> Ok thanks, then running 1582 mhz core and 1050 hbm2, both 1000 mv. I guess core clock affects more temps?
> 
> Also, anybody else having Chrome crash on Twitter videos? Reddit/Imgur videos also crash (but not YouTube videos, e.g.)


Go into Chrome advanced settings and disable "hardware acceleration"; the same goes for Wallpaper Engine, if you happen to use that as well. HWA has been an intermittent issue since the 280X days and is certainly an issue here. Do this and your problem will go away.



Quick edit... but looks like you already updated drivers... so time to give that a go!


----------



## punchmonster

Anyone know where I can get the steps for the registry power limit change? I can't seem to find it.

Also is undervolting working on the new drivers?


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> Go into chrome advanced settings and disable "hardware acceleration", the same goes for Wall paper engine, if you happen to use that as well. HWA has been an intermittent issue since the 280x days and is certainly an issue here. Do this and your problem will go away.


Not necessary if he's willing to update to today's driver 17.9.1. It's one of the fixes. I've been using it all evening and it appears to resolve the issue.


----------



## Soggysilicon

Quote:


> Originally Posted by *milkbreak*
> 
> Can anyone knowledgeable explain the concern with installing a Morpheus II onto a Vega card? Specifically, the concern regarding shorting of certain components, possibly the VRM? I want to make sure I don't screw things up when my card arrives.


Caught Buildzoid's video today which mentioned it... the concern is that contacting the "open drain" on a FET would result in a catastrophic short.

The long and the short of it is that a FET, BJT, or similar hybrid transistor works using a three-lead arrangement comprising a "source" current or voltage, a gating current or voltage, and a ground return.

A BJT has a base, collector, and emitter; a metal-oxide field-effect transistor has a gate, drain, and source. In this case it's a FET.

FETs come in common-drain or common-source arrangements, depending on P-channel or N-channel doping of the gate. These devices work in a couple of different configurations depending on what one is trying to achieve. They can be analogous to a water valve or a switch: the gating material is saturated so that current conducts from one side of the device to the other (like mechanically opening a valve).

The amount of "saturation" sets this current flow, so with high-speed switching FETs you want to saturate and then pull the current off very quickly to get a very short rise time and a complete fall time (very important in digital switching, i.e. TTL, transistor-transistor logic).

So, back to "open drain": it implies the source is common, i.e. the FETs' "source" leads all share the same ground plane, and as such are electrically considered the same node (the same electrical potential, or "common ground").

The drains, however, sit at different potentials in this arrangement. This gets into phasing, synonymous with the timing of the waveform(s).

Because they are not shielded, contacting a metal surface to the metal drain leg can let current flow from the leg into the material touching it instead of into the load. If that short goes back to the ground plane, it can cause all sorts of havoc on the electrical equilibrium of the device, resulting in component failure.

Or, if you short the gate and the "hot" drain together, you could easily latch the gate into a permanently conductive state, effectively destroying it as a "switch" and likely destroying the drivers behind it, which are often very sensitive comparator-circuit ICs, along with the sensitive SMT decoupling capacitor packages.

Bottom line: the leads are not coated with an insulating material, and as such they present an electrical hazard to the careless.


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> btw guys, although the gpu starts throttling/downclocking at 85º, at which temp does the (sensitive, as I've read) HBM2 start throttling?


Rumor on the street is that the timings "tighten up" around 60C; not sure about "throttling"... I was only on air for maybe an hour before blocking the card.


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> Not necessary if he's willing to update to today's driver 17.9.1. It's one of the fixes. I've been using it all evening and it appears to resolve the issue.


Yeap just saw that... installing atm myself... see if it changes anything worthwhile...


----------



## pmc25

Quote:


> Originally Posted by *kundica*
> 
> Clearly they are higher binned. I haven't seen any Air cards that can run the clocks the AIO run even those on custom loops running the AIO bios. They also have a completely different stepping, C0 vs C1.
> 
> Why don't you stop being a condescending jerk and know your facts before you try to crap on other people. Just like how you complain about the clocks on the new driver and needing to OC much higher to run stock freqs. You clearly don't understand how the stepping and boost clocks on these cards work.


I've heard no evidence that the stepping is of any significance whatsoever. I wouldn't guarantee they are a higher bin ... all the high-bin chips will be sequestered away for Instinct cards. Highish will go to WX cards. Anything a bit above average is likely reserved for FEs. Among the RX cards, I suspect the only higher bin is one we have not yet seen: the Nano.

I do understand how it is working, however it should not require 'overclocking' in WattMan to hit what should be stock frequencies when voltage, thermals and power limit are not issues.

It makes absolutely no logical sense to have the card behave like this under such circumstances.


----------



## kundica

Quote:


> Originally Posted by *pmc25*
> 
> I've heard no evidence that the stepping is of any significance whatsoever. I wouldn't guarantee they are a higher bin ... all the high-bin chips will be sequestered away for Instinct cards. Highish will go to WX cards. Anything a bit above average is likely reserved for FEs. Among the RX cards, I suspect the only higher bin is one we have not yet seen: the Nano.
> 
> I do understand how it is working, however it should not require 'overclocking' in WattMan to hit what should be stock frequencies when voltage, thermals and power limit are not issues.
> 
> It makes absolutely no logical sense to have the card behave like this under such circumstances.


I'd like to rewind for a second, since this all started with you attacking me (and others) for returning cards for replacement. Again, I'm not; I'm getting my money back. I wouldn't have ordered my second one at all if it weren't for the AMD rep telling us on ocUK that the cards are faulty, that we should return them for replacement, and that he returned one of his and got a working card.

You can pretend you're righteous, but under no circumstances is it acceptable to pay 700-800 for a card that cannot run at stock settings. As a consumer, it's my right to return it, and it's not my loss if I do, it's AMD's. If it's drivers, that's still on them, and this is coming from someone who nearly always defends them (call me a fanboy) and a shareholder with several thousand shares.

As for this bs about cards being equal. There are multiple examples, some people here, whose AIO can sustain 1727 to 1752+ clocks. Please show me a single example of an Air 64 card on a custom loop that can hit those clocks. Also, your statement clearly shows you don't understand the boost clock. https://forums.overclockers.co.uk/threads/the-rx-vega-64-owners-thread.18789713/page-67#post-31110683


----------



## rancor

Broke 400W








Quote:


> Originally Posted by *kundica*
> 
> I'd like to rewind for a second, since this all started with you attacking me (and others) for returning cards for replacement. Again, I'm not; I'm getting my money back. I wouldn't have ordered my second one at all if it weren't for the AMD rep telling us on ocUK that the cards are faulty, that we should return them for replacement, and that he returned one of his and got a working card.
> 
> You can pretend you're righteous but under no circumstances is it acceptable to pay 700-800 for a card that cannot run at stock settings. As a consumer, it's my right to return it and it's not my loss if I do, it's AMD's. If it's drivers, that's still on them, and this is coming from someone who nearly always defends them(call me a fanboy) and a shareholder with several thousand shares.
> 
> As for this bs about cards being equal. There are multiple examples, some people here, whose AIO can sustain 1727 to 1752+ clocks. Please show me a single example of an Air 64 card on a custom loop that can hit those clocks. Also, your statement clearly shows you don't understand the boost clock. https://forums.overclockers.co.uk/threads/the-rx-vega-64-owners-thread.18789713/page-67#post-31110683


I am another air 64 owner that can only sustain 1690-1710 underwater. Have there been recent AIO users on 17.8.2 or 17.9.1 that can sustain those clocks?


----------



## punchmonster

Calm down, fellas. It's a new card and no one knows everything. Also, any paying consumer is reasonable in expecting a product that's fully functional, and to belittle them for returning such a product is absurd, pmc25.

Now someone tell me where to get the registry edit for powerlimit.


----------



## Tgrove

Don't know how people's cards are crashing on stock settings. Not a single crash or black screen so far. Using WattMan with +50% PT, 1100 MHz HBM, adequate fan speed, and p6 and p7 set to the same frequency. It doesn't downclock much like that; haven't tried undervolting so far.

17.9.1 working great for me. This will soon be sorted out. I don't care about power usage; I used to run Fury X CrossFire on this same system.

Btw, Dying Light is a great game to stress your GPU.


----------



## DrZine

17.9.1 drivers seem a lot better for me. No more random crashes so far. Kinda crazy that there was no mention in the release notes or by anyone here, but AMD did some serious retunes to the default power profiles. At least Power Save and Balanced have been gutted. I did a full round of benchmarks on a V64 Air (no PowerPlay mods used) as well as what the newest HWiNFO reports (it's on beta 3240).

Power Save
Vcore 1050 mv
gpu 1150 Mhz
HBM 800 Mhz
Chip Power 170W

Heaven 1712 (1943 on 17.8.2)
Timespy 5608
Firestrike 17649

Balance
Vcore 1100 mv
gpu 1400 Mhz
HBM 945 Mhz
Chip Power 220W

Heaven 1974 (2001 on 17.8.2)
Timespy 6455
Firestrike 20215

Turbo
Vcore 1150 mv
gpu 1500 Mhz
HBM 945 Mhz
Chip Power 255W

Heaven 1982 (1995 on 17.8.2)
Timespy 6833
Firestrike 21844

My best previous 100% stable oc setting have been;
1632 gpu
1100 HBM
1100 mv Vcore
+50% PL
Fan cranked!
70C temp target

17.9.1 (17.8.2)
Heaven 2271 (2274)
Timespy 7632 (7723)
Firestrike 24191 (24163)

HWinfo reports;
Vcore ~1050mv
gpu 1600Mhz
HBM 1100Mhz
chip power 280W

I find it funny AMD doesn't at the very least turn up the fan profile for Turbo. The core was hitting 85 and the HBM was hitting 90. In my case (Fractal XL R2) the stock fan profile is silent. They could have gotten away with at least 3k RPM, and that would likely give a huge performance boost. With the fan allowed to hit max speed, my OC never goes past 75 on the HBM.

Could someone with a waterblock report similar benchmarks? I won't be able to foot the bill for one myself until around Black Friday. I need someone to live vicariously through!


----------



## Soggysilicon

17.9.1

Slight improvement in Heaven benchies as well as more consistency.
Slight regression in Superposition 4K, but more consistent benchies... also noticed some screen tearing...







suspect frames hitting the 48 cutout... additionally DP dropped out twice once the bench started...

Going to play some Gemini Warlords and make a call from there...

Haven't seen sustained 1700+ in some time on my blocked Air 64; suspect I'll need the LC BIOS... maybe this weekend...

Cheers!


----------



## kundica

Quote:


> Originally Posted by *rancor*
> 
> I am another air 64 owner that can only sustain 1690-1710 underwater. Have there been recent AIO users on 17.8.2 or 17.9.1 that can sustain those clocks?


Yes, though I wouldn't count results from any drivers but those two, since the older ones just show the p7 set clock even though synthetic and gaming benchmarks don't reflect it.


----------



## punchmonster

my AC V64 is misbehaving and keeps boosting to 1667MHz despite me having it set to 1630MHz. I'm afraid it's gonna crash due to not enough voltage, but it hasn't yet.
Quote:


> Originally Posted by *kundica*
> 
> Yes, I wouldn't include any bios but those 2 since the others just show p7 clock even though synthetics and gaming benchmarks don't represent it.


On that note, I did manage to get individual WattMan profiles to work properly now. I had to DDU, but I have a separate UC + max-fan profile now and it's applying correctly, netting me ~39 MH/s on 17.9.1. Not bad for 180 watts on the clamp! And yes, it will maintain this for an hour plus.


----------



## Tgrove

Absolutely loving this card! Decided to undervolt it and its amazing.

Current wattman settings 17.9.1 bios

Core
P6 and p7 1682mhz 1100mv

Fan speed
Min 400 max 2400

Temp target
Max 70 target 55

power target +50%

Hbm
1100mhz 950mv

2k-2.2k fan rpm keeps card 50-55c. I could easily keep it under 50c with more fan speed

So happy i got the liquid. Fury x crossfire dictated the purchase.


----------



## poisson21

Rx vega 64 on waterblock with lc bios.

Achieves a 1732MHz setting with a +35% power target (+50% passed in the Unigine benchmark but crashed in every 3DMark bench; still 1720/1725MHz output).

hbm at 1100Mhz.

No under/over volting for now.

Waiting for crossfire to work ^^.


----------



## punchmonster

It's not recommended to set your P6 and P7 to the same number. Set your P7 to your maximum boost, set your P6 to your maximum stable clock.
Quote:


> Originally Posted by *Tgrove*
> 
> Absolutely loving this card! Decided to undervolt it and its amazing.
> 
> Current wattman settings 17.9.1 bios
> 
> Core
> P6 and p7 1687mhz 1100mv
> 
> Fan speed
> Min 400 max 2400
> 
> Temp target
> Max 70 target 55
> 
> power target +50%
> 
> Hbm
> 1100mhz 950mv
> 
> 2k-2.2k fan rpm keeps card 50-55c. I could easily keep it under 50c with more fan speed
> 
> So happy i got the liquid. Fury x crossfire dictated the purchase.


----------



## Tgrove

Quote:


> Originally Posted by *punchmonster*
> 
> It's not recommended to set your P6 and P7 to the same number. Set your P7 to your maximum boost, set your P6 to your maximum stable clock.


EDIT

Where did you see that? It seems to be much more stable this way. Clock only fluctuates 1-10mhz mostly (almost sticks). The p7 is basically the max stable clock for 1100mv. I dont want it downclocking too far or boosting too high and crashing. Im actually pretty happy with these results

Plus i went off this review

https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html

Next step is to see if maintaining under 50c will help even more (i think it will)


----------



## OMgoo

my HBM on my Vega 64 Air only goes to 1020MHz stable, I'm a bit disappointed


----------



## punchmonster

From personal testing and from AMD reps: we only set P6 and P7 to the same values at launch because of broken drivers. If that's still the case for you, just set P7 to your max stable clock. It'll stay there as long as it's within the power limit.
Quote:


> Originally Posted by *Tgrove*
> 
> EDIT
> 
> Where did you see that? It seems to be much more stable this way. Clock only fluctuates 1-10mhz mostly (almost sticks). The p7 is basically the max stable clock for 1100mv. I dont want it downclocking too far or boosting too high and crashing.
> 
> Plus i went off this review
> 
> https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html


It probably goes much further than that, but the stock air cooler is super temperature-limited on the HBM.
Either heavily undervolt your core and up the fan speed, or get a Morpheus II.
Quote:


> Originally Posted by *OMgoo*
> 
> my HBM on my Vega 64 Air only goes to 1020MHz stable, I'm a bit disappointed


----------



## Gdourado

Guru3d published this:
Quote:


> Yesterday we published the ASUS Radeon RX 64 STRIX review. As shown, it performs awfully similar towards the reference Radeon RX 64. This morning I received a phone call from ASUS, asking us if we'd be willing to take down the article for a few days as they have made a mistake.
> 
> The sample we received did not get a final BIOS for its final clock frequencies and fan tweaking. Ergo, the sample we received carries a default reference BIOS.
> 
> It's a colossal mistake, but as such the end-results in the review are not representative enough for the final product. ASUS will get the finalized BIOS over once they have finished (likely a day or two) after which we will re-test the card with that final BIOS and thus republish the review. All this explains why the STRIX card was so incredibly close to Vega 64 performance.
> 
> Apologies for the inconvenience, but this mistake was not one coming from us.


Asus made a mistake? lol


----------



## OMgoo

Quote:


> Originally Posted by *punchmonster*
> 
> It probably goes much further than that but stock air is super temperature limited on the HBM.
> Either heavily undervolt your core and up fanspeed or get a Morpheus II.


I have a Morpheus II, that's why I'm disappointed.


----------



## Tgrove

I'm using the Vega 64 Liquid. It boosts ridiculously high, but 1100mv will only allow a certain clock. Stock p7/boost is 1200mv/1750mhz and p6 is 1150mv/1667mhz. It fluctuates like crazy with such a big difference; the same p6/p7 makes the clocks way more stable.

In the review I linked they got higher fps with lower undervolted clocks because the card held a stable clock throughout. Got the same results in my testing: a 10%+ boost with 1100 HBM, stable core clocks, lower volts and temps.


----------



## Newbie2009

There seems to be some confusion on how high cards can clock. People using the launch driver: you are not hitting 1750mhz; it is reported incorrectly.

An Air 64 with a block can do 1700mhz actual fine, which means having the clock set to 1750. That is about a 6.5% overclock on stock (1747 @ 1050mv). One can go higher with higher volts, but there is no real point: you hit the power limit, clocks fluctuate too much, and you're really past the point of optimum clocks beyond 5%.

I have noticed the higher you push the core, the less room you have on the HBM (with the newest drivers).

There is very little difference in performance pushing the card from a 5% oc to say 6.5%-8%.

My advice is to oc by 5%, tune the voltage, then push the HBM as far as you can.
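That rule of thumb is just arithmetic; here's a minimal sketch of it in Python, assuming the ~35MHz set-vs-actual "droop" under load that owners report in this thread (the stock clock passed in is a placeholder, not a claim about any specific SKU):

```python
# Sketch of the tuning rule above; all numbers are illustrative.
# Assumes the ~35 MHz gap between the WattMan set clock and the
# actual clock under load reported on 17.8.2+.
DROOP_MHZ = 35

def set_clock_for(target_actual_mhz: int, droop: int = DROOP_MHZ) -> int:
    """WattMan set clock needed to sustain a given actual clock."""
    return target_actual_mhz + droop

def oc_target(stock_mhz: int, pct: float = 5.0) -> int:
    """A modest percentage overclock over a given stock clock."""
    return round(stock_mhz * (1 + pct / 100))

# e.g. to actually run 1750 MHz you'd set ~1785 MHz in WattMan:
print(set_clock_for(1750))  # 1785
```

This matches the later posts in the thread: a card "set" to 1785 sustains roughly 1750 actual.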


----------



## Tgrove

1750 is on the box and reported in the driver, and monitoring shows it can hit this as well. It doesn't on its own because it's limited by certain factors we are now overcoming.

With less heat, lower volts, a stable core clock, and a 1100 HBM oc, I'm seeing 10%+ in games.


----------



## Newbie2009

Quote:


> Originally Posted by *Tgrove*
> 
> 1750 is on the box and reported in the driver, monitoring shows it can hit this as well. It doesnt on its own because its limited by certain factors we are now overcoming


GPU-Z: look at the clocks under load. Whatever you set the clock to, it will have about a 35mhz droop; this has been the case since 17.8.2.


----------



## Tgrove

It can hit 1750 mhz, no one said anything about maintaining that clock though lol

https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html

Please read


----------



## Newbie2009

Quote:


> Originally Posted by *Tgrove*
> 
> It can hit 1750 mhz, no one said anything about maintining that clock though lol


yeah, but to hit 1750 you have to clock the card to about 1785, unless you are using 17.8.1, which incorrectly reports the clock.


----------



## Tgrove

Yea, that's why I set both p6/p7 to 1682, and the actual in-game clock shows, like you said, 25-35mhz of droop depending on the game. But it's super stable this way and only fluctuates a little. Like right now I'm playing GTA 5 maxed out at 4k, no AA, and it is always 1650-1660 mhz.

I'm using 17.9.1 and it works great


----------



## Newbie2009

Quote:


> Originally Posted by *Tgrove*
> 
> Yea thats why i set both p6/p7 to 1682, and actual clocks in game is like you said 25-35mhz droop depending on game
> 
> Im using 17.9.1 and it works great


yeah, seems fine to me too; enhanced sync is gone and the voltage seems tighter (or maybe the newer GPU-Z is reporting better)


----------



## punchmonster

What are your HBM temps under load? (test ETH mining to see your full load)
Quote:


> Originally Posted by *OMgoo*
> 
> I have a Morpheus II thats why I'm disappointed


----------



## gupsterg

Quote:


> Originally Posted by *CaptainTom*
> 
> Why is this hard for some people to believe? Unlocking shaders on an AMD card is anything but new.
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nuke33*
> 
> I would be very happy if it works.
> But I am sceptical because AMD locked down the Bios so much. Would be strange if they allowed unlocking shaders, since it is easy for them to lasercut the die.
> Quote:
> 
> 
> 
> Originally Posted by *CaptainTom*
> 
> It wouldn't be strange at all lol - *that's the standard method of unlocking shaders*. The bios is locked down from being _changed,_ but we aren't changing any bios; we are simply flashing already approved bios' on different cards.
> 
> The Vega 56 bios can read up to 56 shaders (hence why flashing 56 on 64 breaks the card), but the Vega 64 bios can read _up to_ 64. So it simply scans the GPU and identifies that there are 57, and keeps on truckin'.
> 
> 
> 
> 
> 

Only my opinion.

VEGA 56 unlocking to VEGA64 is going to be very rare.

For example, on a Fury Tri-X that allowed unlocking, flashing a Fury X ROM would not unlock the SPs, as that ROM lacked the table that sets the SP count. My only option was to use a Fury ROM where the SP count table was modified. Since that type of modification would violate Vega's security feature, you won't get an unlock even if the ASIC has been left writable for that configuration aspect.


----------



## alanthecelt

so, I've flashed my 3 56's and got the same sort of mining rate as I did after spending ages fine-tuning the overclock on the 56 bios

currently tweaking some more; going above the stock 64 HBM speed is detrimental. I've found setting a temp target of 72 seems to work for me

anything above power limit +20 seems to make more heat and noise than anything else

p6 set to 1050 and p7 set to 1100mv ... haven't gone any lower as it doesn't have any measurable effect that I can see

claymore showing 37/36/37 on eth
I have no idea why the middle card is hashing lower... no matter what I did it's the same

it could be its location... it may be breathing through a 3-slot gap, while the other cards are exposed completely to air

so, the nicehash return on 3x Vega 56 is more than I was getting on 1 1080 Ti and 2 1080's, power usage excluded, but the cost of the cards is dramatically different, more like £950 vs £1650
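Pricing that out from the figures in this post alone (a rough back-of-envelope; the NV rig's hashrate isn't given, so only its purchase price is compared):

```python
# Cost-per-hashrate for the 3x Vega 56 rig described above.
# All figures come from the post; nothing else is implied.
vega_hashrates_mhs = [37, 36, 37]  # Claymore ETH rates per card
vega_cost_gbp = 950                # approx cost of the three Vega 56s
nv_cost_gbp = 1650                 # 1080 Ti + 2x 1080, price only

total_mhs = sum(vega_hashrates_mhs)           # 110 MH/s combined
gbp_per_mhs = vega_cost_gbp / total_mhs       # ~8.64 GBP per MH/s
print(total_mhs, round(gbp_per_mhs, 2))
print(round(nv_cost_gbp / vega_cost_gbp, 2))  # NV rig cost ~1.74x as much
```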


----------



## SpaceGorilla47

Did some Firestrike testing on 17.8.2 with a Vega 64 LC:

| Graphics Score | Core | Voltage | HBM | PT | Power (Wall) |
|---|---|---|---|---|---|
| 25133 | 1702 MHz | 1.2 V | 1050 MHz | +50 | 560 W |
| 24755 | 1702 MHz | 1.1 V | 1050 MHz | +50 | 460 W |
| 24747 | 1702 MHz | 1.1 V | 1050 MHz | +0 | 460 W |
| 24165 | | | 945 MHz | Turbo | 560 W |
| 23690 | | | 945 MHz | Balanced | 480 W |
| 22174 | | | 945 MHz | Powersaver | 380 W |

Vega seems to like high Voltage.
More Voltage results in higher Boost-Clockspeeds, while a higher PowerTarget doesn't do anything for me on Firestrike.

For 1.08V I need to lower the Core to 1602MHz, with a Powerdraw of still ~460W. But more testing needs to be done.


----------



## SlushPuppy007

Quote:


> Originally Posted by *alanthecelt*
> 
> so, i've flashed my 3 56's and got the same sort of mining rate as i did after spending ages fine tuning the overclock on the 56 bios
> 
> currently tweaking some more, going above the stock 64 HBM speed is detrimental.. ive found setting a temp target of 72 seems to work for me
> 
> anything above power limit +20 seems to make more heat and noise than anything else
> 
> p6 set to 1050 and p7 set to 1100mv ... haven't gone any lower as it doesn't have any measurable effect that i can see
> 
> claymore showing 37/36/37 on eth
> i have no idea why the middle card is hashing lower.. no matter what i did its the same
> 
> it could be its location.. it may be breathing from a 3 slot gap, while the other cards are exposed completely to air
> 
> so, nicehash return on 3x vega 56 is more than i was getting on 1 1080ti and 2 1080's, power usage excluded but the cost of the cards is dramatically different, more like £950 vs £1650


Can someone please throw some hate on this miner?


----------



## Irev

can anyone confirm if msi AB is working yet for VEGA ?


----------



## Skinnered

It seems 17.9.1 fixed my black screen crashes with 50% powerlimit and default Voltages







Card is mostly running between 1680-1710 MHz and 1085 mem with default P6 and P7 voltages and clocks. So this weekend I'll have to look again to see what's possible for lower voltages or overclocks. But at least it seems stable now at reasonable clocks and voltages.

One question still stands: does anybody play GTA5 and Fallout4 with an ENB? My textures become completely dark.


----------



## kundica

Quote:


> Originally Posted by *SpaceGorilla47*
> 
> Did some Firestrike testing on 17.8.2 with a Vega 64 LC:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> *Graphics Score*___*Core*_____*Voltage*__*HBM*______*PT*________*Power (Wall)*
> 25133___________1702MHz__1,2V_____1050MHz__+50________560W
> 24755___________1702MHz__1,1V_____1050MHz__+50________460W
> 24747___________1702MHz__1,1V_____1050MHz__+0_________460W
> 24165______________________________945MHz__Turbo_______560W
> 23690______________________________945MHz__Balanced____480W
> 22174______________________________945MHz__Powersaver__380W
> 
> 
> 
> Vega seems to like high Voltage.
> More Voltage results in higher Boost-Clockspeeds, while a higher PowerTarget doesnt do anything for me on Firestrike.
> 
> For 1.08V i need to lower the Core to 1602MHz with a Powerdraw of still ~460W. But more testing needs to be done


Many of us have actually noticed the opposite. Set your voltage to 1250mv on p7 and see what happens. You'll receive the highest scores from running the max sustained clocks you can hit with the lowest amount of voltage. For example, try setting p7 on the LC 64 to 1727 and undervolting to 1150.


----------



## kundica

Quote:


> Originally Posted by *Tgrove*
> 
> EDIT
> 
> Where did you see that? It seems to be much more stable this way. Clock only fluctuates 1-10mhz mostly (almost sticks). The p7 is basically the max stable clock for 1100mv. I dont want it downclocking too far or boosting too high and crashing. Im actually pretty happy with these results
> 
> Plus i went off this review
> 
> https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html
> 
> Next step is to see if maintaining under 50c will help even more (i think it will)


----------



## pmc25

Putting the computer to sleep still results in performance degradation in games and benchmarks, after waking, even though the video hardware acceleration is now fixed and no longer freezes.


----------



## Tgrove

Quote:


> Originally Posted by *kundica*


That's amdmatt, he's overclocking, i'm undervolting at stock frequency


----------



## kundica

Quote:


> Originally Posted by *Tgrove*
> 
> Thats amdmatt, hes overclocking, im undervolting stock frequency


I know. The point is you can achieve sustained clocks even at higher frequencies, but you do it by setting p7 to your max and p6 to your sustained clock.


----------



## SpaceGorilla47

Quote:


> Originally Posted by *kundica*
> 
> Many of us have actually noticed the opposite. Set your voltage to 1250mv on p7 and see what happens. You'll receive the highest scores from running the max sustained clocks you can hit with the lowest amount of voltage. For example, try setting p7 on the LC 64 to 1727 and undervolting to 1150.


For the Air card this is correct.

But you are not limited by Power or Temp on the LC version (at least in Firestrike).

The only way to achieve higher boost clocks is to ramp up the voltage


----------



## kundica

Quote:


> Originally Posted by *SpaceGorilla47*
> 
> For the Air card this is correct.
> 
> But you are not limited by Power or Temp on the LC version (at least in Firestrike).
> 
> The only way to achieve higher boost clocks is to ramp up the voltage


I'm on the LC 64 and this is applicable to me. I posted some bench results several days ago.


----------



## SpaceGorilla47

Very strange behavior.
Will do a couple of Firestrike runs later today and post the results.


----------



## OMgoo

Might I have a power problem?

I undervolted the card and gave it +50% power limit, yet GPU-Z shows a max draw of 220W, and the meter at the wall shows about 230W under load?


----------



## Ragsters

Does anyone know why my Firestrike benchmark will not validate with RX Vega 56?


----------



## Newbie2009

Quote:


> Originally Posted by *Ragsters*
> 
> Does anyone know why my Firestrike benchmark will not validate with RX Vega 56?


beta drivers? It is the newer of the 2 cards also.


----------



## Gdourado

Is the Vega 64 limited edition just an air 64 with a silver shroud instead?


----------



## Newbie2009

Quote:


> Originally Posted by *Gdourado*
> 
> Is the Vega 64 limited edition just an air 64 with a silver shroud instead?


think so


----------



## pillowsack

Well I'm on the latest driver. I can still run my overclock of 1680/1100. I'm still upset I can't crank the voltage up, because I don't give a heck about watts used or temperature since I have it under water


----------



## Ragsters

Quote:


> Originally Posted by *Newbie2009*
> 
> beta drivers? It is the newer of the 2 cards also.


Yes, beta driver. But it mentions the bad timing thing. Also, how do I know if it's the newer card?


----------



## hyp36rmax

Is crossfire working yet?


----------



## Newbie2009

Quote:


> Originally Posted by *hyp36rmax*
> 
> Is crossfire working yet?


If I was a betting man I would say no.


----------



## Tgrove

Quote:


> Originally Posted by *kundica*
> 
> I know. The point is you can achieve sustained clocks even at higher frequencies, but you do it by setting p7 to your max and p6 to your sustained clock.


Again, this would apply to me if i was looking to overclock the core with more than 1100mv. I am using the highest sustained clock for 1100mv, an undervolt


----------



## kundica

Quote:


> Originally Posted by *Tgrove*
> 
> Again, this would apply to me if i was looking to overclock the core with more than 1100mv. I am using the highest sustained clock for 1100mv, an undervolt


The same principle can be applied to underclocking.


----------



## Tgrove

Yes and that's what i'm doing lol. If i want to go higher than 1682 core i need to increase voltage.....i want 1100mv p7 volts

I found a nice sweet spot, my gpu stays roughly 10c cooler than the one in that video. He has his p7 voltage at 1230mv, i'm good on that (he straight up overclocked it, voltage + frequency). Since hbm is cooled with the core i don't want core temps over 55c. 1.7k rpm fan speed generally keeps the card between 50-55c

So i'm gonna stick to my guns on this one. No one has shown anything so far to downplay the undervolting methodology in this review. And it has worked the best for me so far

https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html


----------



## kundica

Quote:


> Originally Posted by *Tgrove*
> 
> Yes and thats what im doing lol. If i want to go higher than 1682 core i need to increase voltage.....i want 1100mv p7 volts
> 
> I found a nice sweet spot, my gpu stays roughly 10c cooler than the one in that video. He has his p7 voltage at 1230mv, i'm good on that. Since hbm is cooled with the core i don't want core temps over 55c
> 
> So im gonna stick to my guns on this one. No one has shown anything so far to downplay the undervolting methodology in this review. And it has worked the best for me so far
> 
> https://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/44084-amd-radeon-rx-vega-56-und-vega-64-im-undervolting-test.html


Undervolting is not an issue, it's actually a good thing if your card can do it. This whole debate revolves around you setting p6 and p7 to the same clock, which is not the best way of handling clocks on this card.


----------



## Tgrove

So i guess what i'm asking is for some proof of that claim? At least i came with a review backing me up; that youtube video proved nothing

I could easily set p6 -5mv/mhz lower than p7 and achieve exactly what you're saying, but what is the point?


----------



## LionS7

Does the Vega Frontier Edition benefit from new gaming drivers like 17.9.1? I'm not talking about Wattman support or the gaming part of Crimson, just whether the drivers run stably on the Frontier Edition, when there's nothing newer than the 17.8.2 beta on the official FE page...


----------



## hyp36rmax

Quote:


> Originally Posted by *Newbie2009*
> 
> If I was a betting man I would say no.


I wonder what they could be working on to enable this that requires such a delay? Heard they demoed crossfire VEGA at events; not sure what could really be different from crossfire with previous gen gpus either. I guess we'll see soon enough.... or maybe never at all lol.


----------



## dagget3450

Quote:


> Originally Posted by *LionS7*
> 
> Does the Vega Frontier Edition benefit from new gaming drivers like 17.9.1? I'm not talking about Wattman support or the gaming part of Crimson, just whether the drivers run stably on the Frontier Edition, when there's nothing newer than the 17.8.2 beta on the official FE page...


Quote:


> Originally Posted by *hyp36rmax*
> 
> I wonder what they could be working on to enable this requiring a delay? Heard they demoed crossfire VEGA at events, not sure what could really be different from crossfire with previous gen gpus too. i guess we'll see soon enough.... or may never at all lol.


I cannot get 17.9.1 to work on vega fe at all. Still stuck on the 17.6 launch drivers. I don't know why crossfire is enabled on vega fe and not rx. They are the same thing.... If someone had dual rx vega maybe they could try the 17.6 vega drivers and see if cf works... I doubt it would work but who knows.


----------



## LionS7

Quote:


> Originally Posted by *dagget3450*
> 
> I cannot get 17.9.1 to work on vega fe at all. Still stuck on 17.6 launch drivers. I dont know why crossfire is enabled on vega fe and not rx. They are the same thing.... If someone had dual rx vega maybe they could try the 17.6 vega drivers and see if cf works... I doubt it would work but who knows.


Thank you for the info! So, no Frontier Edition for gaming, right... It's not a sure and consistent thing. Hope someone launches a 16GB HBM2 RX Vega 64 card.


----------



## kundica

Quote:


> Originally Posted by *Tgrove*
> 
> So i guess what im asking is for some proof of that claim? At least i came with a review backing me up, that youtube video proved nothing
> 
> I could easily set p6 -5mv/mhz lower than p7 and achieve exactly what your saying, but what is the point?


I shared one with you. I also posted some of my test results a while back. There's also this from a user on ocUK. Lastly, test for yourself, you'll be surprised at the results.


----------



## Trender07

Well for me AIR 64, Fire Strike crash at 1637/1000 mV but doesn't crash at 1600


----------



## Tgrove

Quote:


> Originally Posted by *kundica*
> 
> I shared one with you. I also posted some of my test results a while back. There's also this from a user on ocUK. Lastly, test for yourself, you'll be surprised at the results.


I'm gonna try to underclock p6 even further. I guess i just need to determine how high i want it to boost

Anyone know at what clock and voltage the diminishing returns start?


----------



## Tgrove

1100mv boost roughly 1640mhz during benchmark with sig rig


----------



## Caldeio

Quote:


> Originally Posted by *Tgrove*
> 
> 
> 
> 1100mv boost roughly 1640mhz during benchmark with sig rig


I only get 6200, too much heat on my 56!


----------



## Trender07

More tries, at 1110 MHz HBM2 I get flickering on Fire Strike and the test is cancelled automatically, at 1070 works tho


----------



## pmc25

Quote:


> Originally Posted by *pmc25*
> 
> Putting the computer to sleep still results in performance degradation in games and benchmarks, after waking, even though the video hardware acceleration is now fixed and no longer freezes.


More strange behaviour.

As many people experienced, after a Radeon Settings / driver crash whilst testing stability, the card would throttle and stutter horribly unless you process-killed Radeon Settings, reopened a new instance and reapplied settings.

However, on 17.9.1, it seems ANY kind of stability problem (even a game crash that doesn't result in Radeon Settings completely s***** its pants) will cause this behaviour. The kicker is that killing the process and reapplying settings, or reverting to stock settings, has no effect.

You have to reboot for the card to go back to behaving normally.

Never had a driver / card that behaved like this before. Obviously very frustrating when testing stability and changing settings around.

Like the prior driver, I don't think I'll be wasting much time on testing limits and trying a bunch of different settings. The drivers just aren't developed enough or stable enough yet.
Quote:


> Originally Posted by *Trender07*
> 
> More tries, at 1110 MHz HBM2 I get flickering on Fire Strike and the test is cancelled automatically, at 1070 works tho


You know the 5MHz increments for HBM2 are real? It's not like Fiji('s HBM), where there were staggered increments and the clock you set would snap to the nearest valid number (500/545/600).

1100MHz is the limit for most Vega 64 cards. A few will do 1105MHz (mine, when it's in a good mood). Almost none do 1110; they'll crash even at idle.

Virtually all, with the right setup and low enough temperatures, will do 1095MHz.

Suggest you try 1095MHz (though if temperatures are high it may need to be lower).
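The Fiji-vs-Vega behaviour described above can be sketched as a toy function (`fiji_snap` is a made-up name; 500/545/600 are the example strap values from the post, not a full list):

```python
def fiji_snap(requested_mhz, straps=(500, 545, 600)):
    """Snap a requested HBM clock to the nearest valid Fiji strap.

    On Fiji, setting e.g. 520 MHz actually ran the memory at the closest
    supported step; Vega instead honours real 5 MHz increments.
    """
    return min(straps, key=lambda s: abs(s - requested_mhz))

print(fiji_snap(520))  # -> 500 (20 away, vs 25 from 545)
print(fiji_snap(580))  # -> 600 (20 away, vs 35 from 545)
```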


----------



## BeetleatWar1977

Using the PP tables I'm way down on my voltages - and power....



1080p, max settings, Vsync on, 60 frames constant.....
World of Tanks: chip power 55W
Starcraft 2: chip power 52W
with the latest HWinfo


----------



## CaptainTom

Quote:


> Originally Posted by *gupsterg*
> 
> Only my opinion.
> 
> VEGA 56 unlocking to VEGA64 is going to be very rare.
> 
> For example, on a Fury Tri-X which allowed unlocking, flashing a Fury X ROM would not unlock the SPs, as that ROM lacked a table that sets the SP count. My only option was a Fury ROM with a modified SP count table. Since that type of modification would violate Vega's security features, you won't get an unlock even if the ASIC has been left writable for that configuration aspect.


Yeah I can confirm that I was wrong and it was a software bug. *My bad everyone*


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> More tries, at 1110 MHz HBM2 I get flickering on Fire Strike and the test is cancelled automatically, at 1070 works tho


Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Using the PP tables i´m way down with my voltages - and power....
> 
> 
> 
> 1080p, max Settings, Vsync on, 60Frames constant.....
> World of Tanks: Chippower 55W
> Starcraft 2: Chippower 52W
> with the last HWinfo


Those power readings are faulty .. as is fairly obvious from them being so incredibly low. Even with FRTC and Chill, there's no way it's that low.

Latest HWiNFO64 is reading correctly for me and others. Prior versions were semi-accurate.

What version do you have installed? PP tables might break it.


----------



## CaptainTom

So I looked all over the internet and cannot find this all in one place: What is the stock HBM voltage for each version of Vega?

I believe Vega 56 is 1.2, 64 is 1.25, and Vega Liquid = 1.3???

Same questions with core clocks...


----------



## punchmonster

1.35v for all V64
Quote:


> Originally Posted by *SlushPuppy007*
> 
> Can someone please throw some hate on this miner?


Quote:


> Originally Posted by *CaptainTom*
> 
> So I looked all over the internet and cannot find this all in one place: What is the stock HBM voltage for each version of Vega?
> 
> I believe Vega 56 is 1.2, 64 is 1.25, and Vega Liquid = 1.3???
> 
> Same questions with core clocks...


----------



## shadowxaero

I am pretty happy with my card. Sure it is a power hog lol but decent upgrade from my Fury.


----------



## dsmwookie

What kind of VRM temps are you guys seeing under load?


----------



## Tgrove

Sapphire and powercolor liquid versions in stock at newegg

https://m.newegg.com/products/N82E16814131726?ignorebbr=1



----------



## Tgrove

Quote:


> Originally Posted by *shadowxaero*
> 
> I am pretty happy with my card. Sure it is a power hog lol but decent upgrade from my Fury.


What volts/clocks are you using?


----------



## shadowxaero

Quote:


> Originally Posted by *Tgrove*
> 
> What volts/clocks are you using?


1.25V @ 1702MHz (benchmark runs it around 1668 to 1675, with spikes to 1690 and 1700+ occasionally)

HBM is at 1105MHz.

Flashed the AIO bios on my 64 Air (though I have an ekwb on it).


----------



## DMatthewStewart

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Using the PP tables i´m way down with my voltages - and power....
> 
> 
> 
> 1080p, max Settings, Vsync on, 60Frames constant.....
> World of Tanks: Chippower 55W
> Starcraft 2: Chippower 52W
> with the last HWinfo


How did you get the temp target to stick at 80? Mine just defaults back to 70, whether it's done in the GUI or via a profile. Almost everything else is adjustable though. It's the one thing I really need to be adjustable.


----------



## CaptainTom

Quote:


> Originally Posted by *Tgrove*
> 
> What volts/clocks are you using?


If there's one thing this forum is proving, it's that Vega really _isn't_ a power hog.

AMD should have shipped all of these cards at ~150mV lower voltage, increased memory speed by 100MHz, and then they would have truly had:

Vega 56 ~=1080 for $400 with about the same power draw

Vega 64 >> 1080 for $500 with less efficiency

Vega LQ =< 1080 Ti for $600 with less efficiency

It's completely puzzling to me how terrible the stock settings are on these cards. Is ANYONE here unable to undervolt by at least 100-150mV? Is ANYONE here not able to at least hit 1050MHz on the memory?

I am not joking when I suggest we should seriously do a tally of all of the members here, just to get some kind of idea of what the undervolting + HBM overclock success rate is. It seems like, at a minimum, one can expect a 15% performance increase on any card, with reduced power usage.
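CaptainTom's proposed tally could be collected with a few lines like the following (the entries are placeholder data, not real member reports; the 100mV undervolt and 1050MHz HBM thresholds come from his post):

```python
# Hypothetical tally of undervolt / HBM-overclock results.
# These entries are placeholders, NOT real member reports.
reports = [
    {"member": "userA", "undervolt_mv": 150, "hbm_mhz": 1050},
    {"member": "userB", "undervolt_mv": 100, "hbm_mhz": 1100},
    {"member": "userC", "undervolt_mv": 50,  "hbm_mhz": 945},
]

# Thresholds from the post: a 100-150mV undervolt and 1050MHz on the memory.
undervolt_ok = sum(r["undervolt_mv"] >= 100 for r in reports)
hbm_ok = sum(r["hbm_mhz"] >= 1050 for r in reports)

print(f"{undervolt_ok}/{len(reports)} hit a 100mV+ undervolt, "
      f"{hbm_ok}/{len(reports)} hit 1050MHz+ HBM")
```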


----------



## surfinchina

Quote:


> Originally Posted by *CaptainTom*
> 
> If there's one thing this forum is proving, it's that Vega really _isn't_ a power hog.
> 
> AMD should have clocked all of these cards at ~150mV less, increased memory speed by 100MHz, and then they would have truly had:
> 
> Vega 56 ~=1080 for $400 with about the same power draw
> 
> Vega 64 >> 1080 for $500 with less efficiency
> 
> Vega LQ =< 1080 Ti for $600 with less efficiency
> 
> It's completely puzzling to me how terrible the stock settings are on these cards. Is ANYONE here unable to undervolt by at least 100-150mV? Is ANYONE here not able to at least hit 1050MHz on the memory?
> 
> I am not joking when I suggest we should seriously do a tally of all of the members here just to get some kind of idea of what the undervolting + HBM Overclock success rate is here. It seems like at a minimum one can expect a 15% performance increase on any cards with reduced power usage.


It's also proving that they rushed it to release with rubbish settings. With the hardware it has, it should be hammering the opposition. At the moment it's about the same as my old Nano, which has inferior hardware.
Hopefully at some point they'll come up with a new bios to help it realise its potential...

ps I have zero success with undervolting + HBM overclocking because I'm running it in macOS and there isn't any software for that.
On the plus side it's the first serious GPU that's ever worked natively on a Mac.


----------



## Tgrove

Quote:


> Originally Posted by *CaptainTom*
> 
> If there's one thing this forum is proving, it's that Vega really _isn't_ a power hog.
> 
> AMD should have clocked all of these cards at ~150mV less, increased memory speed by 100MHz, and then they would have truly had:
> 
> Vega 56 ~=1080 for $400 with about the same power draw
> 
> Vega 64 >> 1080 for $500 with less efficiency
> 
> Vega LQ =< 1080 Ti for $600 with less efficiency
> 
> It's completely puzzling to me how terrible the stock settings are on these cards. Is ANYONE here unable to undervolt by at least 100-150mV? Is ANYONE here not able to at least hit 1050MHz on the memory?
> 
> I am not joking when I suggest we should seriously do a tally of all of the members here just to get some kind of idea of what the undervolting + HBM Overclock success rate is here. It seems like at a minimum one can expect a 15% performance increase on any cards with reduced power usage.


I agree 100%, it really is a great card when you fine-tune it.


----------



## shadowxaero

Quote:


> Originally Posted by *CaptainTom*
> 
> If there's one thing this forum is proving, it's that Vega really _isn't_ a power hog.
> 
> AMD should have clocked all of these cards at ~150mV less, increased memory speed by 100MHz, and then they would have truly had:
> 
> Vega 56 ~=1080 for $400 with about the same power draw
> 
> Vega 64 >> 1080 for $500 with less efficiency
> 
> Vega LQ =< 1080 Ti for $600 with less efficiency
> 
> It's completely puzzling to me how terrible the stock settings are on these cards. Is ANYONE here unable to undervolt by at least 100-150mV? Is ANYONE here not able to at least hit 1050MHz on the memory?
> 
> I am not joking when I suggest we should seriously do a tally of all of the members here just to get some kind of idea of what the undervolting + HBM Overclock success rate is here. It seems like at a minimum one can expect a 15% performance increase on any cards with reduced power usage.


I can't sustain my 1670+ clocks if I undervolt v.v....I need the 1.25v lol


----------



## Tgrove

I think the liquids are definitely binned higher


----------



## zimm16

Quote:


> Originally Posted by *Tgrove*
> 
> Absolutely loving this card! Decided to undervolt it and its amazing.
> 
> Current wattman settings 17.9.1 bios
> 
> Core
> P6 and p7 1682mhz 1100mv
> 
> Fan speed
> Min 400 max 2400
> 
> Temp target
> Max 70 target 55
> 
> power target +50%
> 
> Hbm
> 1100mhz 950mv
> 
> 2k-2.2k fan rpm keeps card 50-55c. I could easily keep it under 50c with more fan speed
> 
> So happy i got the liquid. Fury x crossfire dictated the purchase.


sapphire 64 liquid here - loving these settings, thanks! max temp is 55C

edit - HBM is dropping to 500 infrequently, i'm assuming this is some power saving feature, but that was on PUBG, not a real good test


----------



## SpaceGorilla47

Retested Firestrike with 17.9.1 but with a different PSU:

*Graphics Score*___*Core*_____*Voltage*___*Actual Core Speed*__*HBM*______*PT*________*Power (Wall)*
24761___________1702MHz__1,10V______1652-1669MHz_____1050MHz__+50________450W
24949___________1702MHz__1,15V______1668-1685MHz_____1050MHz__+50________500W
24965___________1702MHz__1,20V______1666-1687MHz_____1050MHz__+50________550W

Superposition 4K:
*Score*___________*Core*_____*Voltage*___*Actual Core Speed*__*HBM*______*PT*________*Power (Wall)*
6788____________1702MHz__1,10V______1644-1650MHz_____1050MHz__+50________500W
6824____________1702MHz__1,15V______1662-1666MHz_____1050MHz__+50________540W
6809____________1702MHz__1,20V______1655-1666MHz_____1050MHz__+50________590W
6813*___________1702MHz*_1,15V______1661-1667MHz_____1050MHz__+50________540W
Crashed__________1752MHz__1,25V______1680-1700MHz_____1050MHz__+50________650W (725W Peak)

*P7: [email protected] P6: [email protected]

My conclusion:
-My card needs 1.15V to reach max potential at a given clockspeed. 1.2V just draws more power without a benefit
-1.1V vs 1.15V: minimal speed/clock difference, but 40-50W more power consumption
-Setting P6 lower than P7 doesn't change max boost clock or performance
-Vega can boost higher just by adding more voltage, until it reaches a plateau. Further voltage will just draw more power
-Playing around with the voltage can kill your PSU
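The voltage/power pattern in these tables roughly matches the usual dynamic-power rule of thumb, P ≈ C·f·V². A quick sketch (wall readings include the rest of the system, so the predicted scale factor is only indicative):

```python
def relative_power(v1, f1, v2, f2):
    """Dynamic power scales roughly with frequency times voltage squared."""
    return (f2 / f1) * (v2 / v1) ** 2

# 1.10V/1652MHz -> 1.15V/1668MHz (the first two Firestrike rows above):
scale = relative_power(1.10, 1652, 1.15, 1668)
print(f"predicted power scale factor: {scale:.3f}")  # prints ~1.104
# The measured wall delta, 450W -> 500W, is a ~1.11x increase, so the
# rule of thumb lines up reasonably well despite the system overhead.
```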


----------



## Roboyto

Quote:


> Originally Posted by *Greenland*
> 
> @Roboyto:
> 
> Any ideas how to OC and undervolt the card? I have tried everything with Wattman and it fails to OC my vega 56 every single time. Here's my Wattman settings: http://i.imgur.com/PHh1VjR.png
> 
> Thanks.


Sorry I didn't see your post as the @Roboyto didn't flag me properly.

Looking at your screenshot I noticed that it says only 1% power target with the slider maxed out. Not sure why it is doing that as I don't have a 56. Maybe try reinstalling drivers, or use 17.8.1 drivers as that is what I have had the best luck with. Use DDU to uninstall (in safe mode) and CCleaner to clear any remnant registry bits that might end up hanging around, and then reinstall drivers.

Also, 1672 may be asking a bit too much of the Vega 56 due to its inherent power constraints compared to the 64. When undervolting I generally set one voltage on P6 and a slightly higher voltage on P7, 20-25mV more. I would try undervolting at 1075 P6/1100 P7 and start with a more modest clock, considering base clock for the 56 is 1151 and (best case scenario) boost clock is 1471, which the card would struggle to maintain without boosting the power limit and forcing the fan to obnoxious levels. I don't think the 56, with stock BIOS limits, will be able to sustain high 1600s, from what I've seen on Gamers Nexus.

I've said this many times before and I'll say it again...Overclocking is a tedious process if you want to find out how your hardware reacts to various settings...it is even more so a PITA as this is brand new hardware that hasn't even been out for a month yet...so we're still learning things.

Do yourself a favor and use Google Docs/Sheets and make a spreadsheet with the core/HBM clocks, voltages, power limits and whatever benchmark scores you want to record. Once you start logging all of this information it will be exponentially easier to see what benefits performance and what does not. I've got around 80 benchmark runs that I've recorded...and I know I'm missing a few here or there due to crashes and what not.
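Roboyto's spreadsheet suggestion is easy to automate; here is a minimal sketch of such a benchmark log (field names are my own invention, and the two rows reuse numbers posted earlier in the thread purely as sample data):

```python
import csv
import io

# Minimal benchmark log: one row per run, then sort to find the best result.
FIELDS = ["core_mhz", "hbm_mhz", "p7_mv", "power_pct", "graphics_score"]

log = io.StringIO()  # stand-in for a real file on disk
writer = csv.DictWriter(log, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({"core_mhz": 1702, "hbm_mhz": 1050, "p7_mv": 1200,
                 "power_pct": 50, "graphics_score": 24965})
writer.writerow({"core_mhz": 1702, "hbm_mhz": 1050, "p7_mv": 1100,
                 "power_pct": 50, "graphics_score": 24761})

log.seek(0)
runs = list(csv.DictReader(log))
best = max(runs, key=lambda r: int(r["graphics_score"]))
print(f"Best run so far: {best['core_mhz']}MHz @ {best['p7_mv']}mV "
      f"-> {best['graphics_score']}")
```

Swap `io.StringIO` for `open("vega_runs.csv", "a", newline="")` to keep a persistent log across sessions.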


----------



## CaptainTom

Quote:


> Originally Posted by *shadowxaero*
> 
> I can't sustain my 1670+ clocks if I undervolt v.v....I need the 1.25v lol


But that's not the point. Vega shows almost no benefit from overclocking the core; it really doesn't need it at all. It is much better to simply undervolt the core as much as possible: I did so (1000mV @P6, 1060mV @P7) and I reduced core power from 320W to 240W! That is absolutely insane!

On the other side of the coin, Vega is incredibly bandwidth starved. In fact I have read that ideally Vega would have ~650 GB/s to match the core's performance, and this actually lines up with the initial projections AMD gave ~2 years ago on what HBM2 would be capable of. HBM2 is clearly behind its goals, and it has hamstrung Vega in more than a few ways.

Quote:


> Originally Posted by *Tgrove*
> 
> I agree 100%, really is a great card when you fine tune it.


It is abnormally better lol. It goes from slightly losing to the 1080 while using more energy than the 1080 Ti, to trading blows with the 1080 Ti while using around the same energy.

Without a doubt I think AMD should do a Vega relaunch as soon as possible - A card with 1) Aftermarket coolers, 2) Faster HBM @ 1150MHz, 3) a refined bios with reduced voltage, and 4) Mature Drivers! They could then also release bios updates to current owners (or at least let us mod the bios!).

I bet some kind of "Vega 64 Black Edition" could be ready as soon as December.

Quote:


> Originally Posted by *Tgrove*
> 
> I think the liquids are definitely binned higher


In my experience this seems to be true as well (same with the 56, they can't reach clocks as high). It's funny too, because every review reports Vega Liquid as insanely inefficient, but it's actually the opposite, and its stock settings are even dumber than the 64's.


----------



## shadowxaero

Quote:


> Originally Posted by *CaptainTom*
> 
> But that's not the point. Vega shows almost no benefit from overclocking the core, it really doesn't need it at all. It is much better to simply undervolt the core as much as possible: I did so (1000mV @P6, 1060 @P7) and I reduced core power from 320w to 240w! That is absolutely insane!


I definitely hear you lol, I didn't buy a water block for this card to undervolt it though lol. In the words of Jeremy Clarkson....POWERRRRR!!! I send as many watts as the card can take/pull lol.


----------



## Soggysilicon

Quote:


> Originally Posted by *Gdourado*
> 
> Guru3d published this:
> Asus made a mistake? lol


HAHA Say it ain't so! At any rate, the fact that it runs with some "reference bios" is promising... perhaps the strix bios can be looted for a reference card?


----------



## Tgrove

Quote:


> Originally Posted by *zimm16*
> 
> sapphire 64 liquid here - loving these settings, thanks! max temp is 55C
> 
> edit - HBM is dropping to 500 infrequently, i'm assumng this is some power saving feature, but that was on PUBG not a real good test


Awesome, i'm glad those settings work for you too! My hbm will bounce down to 500 with a super light load, like the splash screen of a game


----------



## Trender07

My HBM2 V64 is kind of meh, it flickers on fire strike and then cancels the benchmark on 1100 MHz :/


----------



## zimm16

Quote:


> Originally Posted by *Tgrove*
> 
> Awesome im glad those settings work for you too! My hbm will bounce down to 500 with a super light load like the splash screen of a game


yea, this is in game, but PUBG is not a good test. will try BF1 and 4


----------



## Caldeio

Quote:


> Originally Posted by *Trender07*
> 
> My HBM2 V64 is kind of meh, it flickers on fire strike and then cancels the benchmark on 1100 MHz :/


hbm temps?


----------



## Tgrove

Quote:


> Originally Posted by *SpaceGorilla47*
> 
> Retested Firestrike with 17.9.1 but with a different PSU:
> 
> | Graphics Score | Core | Voltage | Actual Core Speed | HBM | PT | Power (Wall) |
> | --- | --- | --- | --- | --- | --- | --- |
> | 24761 | 1702MHz | 1.10V | 1652-1669MHz | 1050MHz | +50 | 450W |
> | 24949 | 1702MHz | 1.15V | 1668-1685MHz | 1050MHz | +50 | 500W |
> | 24965 | 1702MHz | 1.20V | 1666-1687MHz | 1050MHz | +50 | 550W |
> 
> Superposition 4K:
> 
> | Score | Core | Voltage | Actual Core Speed | HBM | PT | Power (Wall) |
> | --- | --- | --- | --- | --- | --- | --- |
> | 6788 | 1702MHz | 1.10V | 1644-1650MHz | 1050MHz | +50 | 500W |
> | 6824 | 1702MHz | 1.15V | 1662-1666MHz | 1050MHz | +50 | 540W |
> | 6809 | 1702MHz | 1.20V | 1655-1666MHz | 1050MHz | +50 | 590W |
> | 6813* | 1702MHz* | 1.15V | 1661-1667MHz | 1050MHz | +50 | 540W |
> | Crashed | 1752MHz | 1.25V | 1680-1700MHz | 1050MHz | +50 | 650W (725W Peak) |
> 
> *P7: [email protected] P6: [email protected]
> 
> My conclusion:
> - My card needs 1.15V to reach its max potential at a given clock speed; 1.2V just draws more power with no benefit
> - 1.1V vs 1.15V: minimal speed/clock difference, but 40-50W more power consumption
> - Setting P6 lower than P7 doesn't change the max boost clock or performance
> - Vega can boost higher just by adding more voltage until it reaches a plateau; further voltage will just draw more power
> - Playing around with the voltage can kill your PSU


+rep, great info right here, thanks for sharing


----------



## punchmonster

Stop spreading misinformation. None of this is true. You keep coming into the thread posting blatantly stupid crap, clearly having no clue what you're talking about. It's time to stop, my dude.

EDIT: to elaborate, if it were bandwidth starved, your memory OC would scale linearly. It doesn't.
It is also under no circumstances "competitive" with the 1080 Ti. People will read your posts and go and spread this total misinformation as fact.
Quote:


> Originally Posted by *CaptainTom*
> 
> On the other side of the coin, *Vega is incredibly bandwidth starved*. In fact, I have read that ideally Vega would have ~650 GB/s to match the core's performance, and this actually lines up with the initial projections AMD gave ~2 years ago on what HBM2 would be capable of. HBM2 is clearly behind its goals, and it has hamstrung Vega in more than a few ways.
> It is abnormally better lol. It goes from slightly losing to the 1080 while using more energy than the 1080 Ti, *to trading blows with the 1080 Ti* while using around the same energy.


----------



## CaptainTom

Quote:


> Originally Posted by *punchmonster*
> 
> Stop spreading misinformation. None of this is true. You keep coming into the thread posting blatantly stupid crap, clearly having no clue what you're talking about. It's time to stop, my dude.
> 
> EDIT: to elaborate, if it was bandwidth starved your memory OC would scale linearly. It doesn't.
> It also under no circumstances is ever "competitive" with the 1080Ti. People will read your posts and go and spread this total misinformation as fact.


Um, what are you questioning?

1) I have benchmarked my 64 Air against a 1080 Ti. It matches it in BF1, Metro: LL, and Deus Ex. I get 97.5 FPS in Metro: LL at 1080p; a stock 1080 Ti gets 97.6 FPS (a measly 0.1% difference, and I haven't even increased the core clock lol).

Furthermore, there are several games where a stock Vega 64 matches or beats the 1080 Ti anyway (Warhammer, Civ, Dirt 4), so it really isn't AT ALL crazy to think a Vega 64 could match a 1080 Ti if it stopped throttling and you fixed the memory bottleneck.

2) The memory does scale near-linearly, genius:

http://www.guru3d.com/news-story/flashing-radeon-vega-64-bios-into-vega-56-does-increase-performance.html

Why do you think Vega 56 performs so much better on a 64 BIOS? Flashing the 64 BIOS onto a 56 raises the HBM voltage and allows massively higher memory clocks. Hence Vega 64 and 56 perform almost identically if you can get their memory to the same frequencies - those extra 512 shaders are doing almost nothing!

Stop attacking me and go attack half of the other people here, because they are all getting big performance boosts from memory overclocking. Oh, and then stop spreading FUD in here. Someone might read it.


----------



## pmc25

Quote:


> Originally Posted by *Soggysilicon*
> 
> HAHA Say it ain't so! At any rate, the fact that it runs with some "reference bios" is promising... perhaps the strix bios can be looted for a reference card?


Judging from photos, it uses the stock PCB with exactly the same components on it... the only things that differ are the heatsink and fans.

If they come out with a better BIOS, I don't see why it wouldn't work on other RX 64s - unless of course they inject some kind of microcode into the GPUs as an identifier that only works with certain BIOSes.

Given their disarray with that review, and sending a card out with the wrong vBIOS, I suspect their initial effort at improving AMD's (likely lacking) vBIOS might not be stellar.


----------



## punchmonster

Quote:


> Originally Posted by *CaptainTom*
> 
> Um what are you questioning?
> 
> 1) I have benchmarked my 64 air against a 1080 Ti. It matches it in BF1, Metro: LL, and Deus Ex. I get 97.5 FPS in Metro: LL 1080p, 1080 Ti at stock gets 95.6FPS (A measly 0.001% difference, and I haven't increase core clock lol).
> 
> Furthermore there are several games where a stock Vega 64 matches or beats the 1080 Ti anyways (Warhammer, Civ, Dirt 4), so it really isn't AT ALL crazy to think a Vega 64 could match a 1080 Ti if it stopped throttling and you fixed the memory bottleneck.


a few games does not a trend make. By this logic the 1060 is now competitive with Vega64 because the 1060 is faster in PUBG.

Quote:


> Originally Posted by *CaptainTom*
> 
> 2) The memory does scale near linearly genius:
> 
> http://www.guru3d.com/news-story/flashing-radeon-vega-64-bios-into-vega-56-does-increase-performance.html


800MHz to 945MHz is an 18% increase. In your link, the synthetic score only goes up 9.7%, from 9429 to 10340. That's not even close to linear. And the 16.4% jump to 1100MHz on a Vega 64 only nets you ~5% extra performance. Still nowhere near linear.
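The arithmetic above can be checked with a quick sketch (my own, in Python; the clocks and scores are the ones cited from the linked Guru3D run):

```python
# Sanity-check the clock-vs-score scaling cited above.
def pct_gain(before, after):
    """Percent increase going from `before` to `after`."""
    return (after - before) / before * 100.0

clock_gain = pct_gain(800, 945)      # Vega 56 HBM2 clock after the 64 BIOS flash
score_gain = pct_gain(9429, 10340)   # synthetic score before/after

print(f"clock +{clock_gain:.1f}%, score +{score_gain:.1f}%")
# Linear (fully bandwidth-bound) scaling would put these two numbers
# on top of each other; roughly half means partially bandwidth-bound at best.
print(f"scaling efficiency: {score_gain / clock_gain:.2f}")
```

The efficiency ratio is the quick way to settle "linear or not": 1.0 would mean every percent of bandwidth shows up as performance.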
Quote:


> Originally Posted by *CaptainTom*
> 
> Why do you think Vega 56 performs so much better on a 64 bios? Flashing 64 on to 56 increases the HBM voltage, and allows for massively higher memory clocks. Hence why Vega 64 and 56 perform almost identically if you can get their memory to the same frequencies - those extra 512 shaders are doing almost nothing!


If you say so, my dude. Just ignore the 3% performance difference at identical clocks, and the fact the Vega 64 can clock way higher. Also, that still has nothing to do with HBM.
Stop posting crap.
Quote:


> Originally Posted by *CaptainTom*
> 
> Stop attacking me, and go attack half of the other people here because they are all getting big performance boosts from memory overclocking. Oh, and then stop spreading FUD in here. Someone might read it


No FUD here. Simple math, my dude. Just because performance goes up does not mean there's a "massive" memory bottleneck.


----------



## Trender07

Quote:


> Originally Posted by *Caldeio*
> 
> hbm temps?




81ºC

EDIT:

OK, changing the target to 75°C, cranking up the fan speed, and giving it +50 mV more, and it doesn't crash anymore - but I don't know whether it crashed because of temps or because of voltage.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> Judging from photos, it uses a stock PCB and exactly the same components on said PCB ... only thing that differs is the heatsink and fans.
> 
> If they come out with a better BIOS, then I don't see why it wouldn't work on other RX 64s. Unless of course they have some kind of micro-code injected into the GPUs as an identifier which will only work with certain BIOS.
> 
> Given their disarray with that review, and sending a card out with the wrong vBIOS, I suspect their initial effort at improving AMD's (likely lacking) vBIOS might not be stellar ..


The sad thing here is... I agree with you! Trying out the LC BIOS on the 64, and it's a mixed bag of nuts... the extra headroom is helpful, but I am starting to wonder whether these processors were binned.









Quick edit: the 64 LC BIOS, with a little tinkering, is shaping up to be a repeatably better performer than the OC'd air BIOS. Most of my nitpicks with Superposition are gone. Results are +/- 1 point so far... very solid. Think it's a keeper!















TW: Warhammer DvO, 3440x1440, DX12, Ultra/FXAA: 58.5 fps, picking up a solid 2-3 fps, which is nontrivial when trying to stay above 48 in the lows.


----------



## rancor

Quote:


> Originally Posted by *CaptainTom*
> 
> Snip


I did the testing for you, on a Vega 64 Air under water. Same core clock (1690MHz) in all tests, 150% power limit, 450A current limit. Performed on 17.9.1 drivers. Core temp maxed at 35C, HBM at 42C.

Superposition benchmark

800 HBM - 6570
945 HBM - 6733
1100 HBM - 7035
Overall 37.5% clock increase for a 7.1% performance increase.

Firestrike Extreme (graphics score)

800 HBM - 11635
945 HBM - 11968
1100 HBM - 12533
Overall 37.5% clock increase for a 7.7% performance increase

Don't get me wrong - HBM overclocks should be done, as they're basically free performance - but Vega is *not* incredibly bandwidth starved
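The scaling efficiency implied by those runs can be worked out directly (my own sketch in Python; clocks and scores are the ones reported in this post):

```python
# Scaling efficiency = (% score gain) / (% clock gain) for the 800 -> 1100 MHz HBM runs.
def scaling_efficiency(clk0, clk1, score0, score1):
    clock_gain = (clk1 - clk0) / clk0
    score_gain = (score1 - score0) / score0
    return score_gain / clock_gain

superposition = scaling_efficiency(800, 1100, 6570, 7035)
firestrike    = scaling_efficiency(800, 1100, 11635, 12533)

# Both land around 0.2, i.e. only ~20% of the extra bandwidth shows up
# as performance - free performance, but far from bandwidth-starved.
print(f"Superposition: {superposition:.2f}, Firestrike: {firestrike:.2f}")
```

An efficiency near 1.0 would indicate a hard bandwidth wall; ~0.2 says the core is the limiter in these synthetics.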


----------



## CaptainTom

Quote:


> Originally Posted by *punchmonster*
> 
> a few games does not a trend make. By this logic the 1060 is now competitive with Vega64 because the 1060 is faster in PUBG.
> 800Mhz to 945Mhz is an 18% increase. In your link the performance in synthetics only go up 9.5%, 9429 to 10340. That's not even close to linear. While the 16.5% jump to 1100Mhz on a Vega64 only nets you 5~% extra performance. Still nowhere near linear.
> If you say so my dude. Just ignore the 3% performance difference at identical clocks and the fact Vega64 can clock way higher. Also that still has nothing to do with HBM.
> Stop posting crap.
> No FUD here. Simple math my dude. Just because performance goes up does not make it a "massive" memory bottleneck at all.


Good lord, you _are_ spouting FUD lol. Yeah, PUBG (a single early-access game) is a comparably valid counterpoint to results from the SEVERAL established games I mentioned. /s

Then you bring up canned synthetic benchmarks that have ALWAYS favored compute far more than the games people actually play.

But my favorite part of your clown post is acting like a 3% performance difference matters when *Vega 64 has 14% more SPs*. "Simple math", huh? Simple math you yourself cannot interpret correctly.


----------



## CaptainTom

Quote:


> Originally Posted by *rancor*
> 
> I did testing for you on a Vega 64 air under water. With the same core clocks(1690) in all tests 150% power limit and 450A limit. Performed on 17.9.1 drivers. Core temp max 35C and HBM max temp 42C.
> 
> Superposition benchmark
> 
> 800 HBM - 6570
> 945 HBM - 6733
> 1100 HBM - 7035
> Overall 37.5% clock increase for a 7.1% performance increase.
> 
> Firestrike Extreme (graphics score)
> 
> 800 HBM - 11635
> 945 HBM - 11968
> 1100 HBM - 12533
> Overall 37.5% clock increase for a 7.7% performance increase
> 
> Don't get me wrong HBM overclocks should be done as they are basicly free performance but Vega is *not* incredibly bandwidth starved


Please run this in an actual game. 3DMark is a horrible indicator of bandwidth gains. I went from 85 FPS to 95.6 in Metro: LL, buddy. BF1 netted a 15%+ boost.


----------



## punchmonster

Quote:


> Originally Posted by *CaptainTom*
> 
> Good lord you _are_ spouting FUD lol. Yeah PUBG (A single early access game) is a comparably valid counterpoint to results of SEVERAL established games I mentioned. /s
> 
> Then you bring up canned synthetic benchmarks that have ALWAYS favored compute far more than games people actually can play.
> 
> But my favorite part of your clown post is acting like a 3% performance increase matters when *Vega 64 has 14% more SP's*. "simple math" huh? Simple math you yourself cannot interpret correctly.


The Vega64 is worse than the 1070 because it gets lower FPS in GTA V. Or is GTA V not an established game?









Just because shaders don't scale doesn't mean it's memory starved. Fury X shaders also don't scale, and it's not bandwidth starved at all.
Quote:


> Originally Posted by *CaptainTom*
> 
> Please perform this on an actual game. 3DMark is a horrible indicator of bandwidth gains. I went from 85 FPS to 95.6 in Metro LL buddy. BF1 netted a 15%+ boost.


Wow, you got within a few percent of a 9.5% increase? What a surprise. Nice "18%". Stop claiming crap.
You are the pinnacle of Dunning-Kruger.
Quote:


> Originally Posted by *CaptainTom*
> 
> Maybe I should have clarified - yes they were all at the same clocks when I tested them lol. The test would be pointless otherwise.
> 
> Why is this hard for some people to believe? Unlocking shaders on an AMD card is anything but new.


Hey Tom, why don't you tell us how your "Vega 58" is doing? Because you're so good at benchmarking and performance comparisons.









But apparently I'm the one spreading bull lmao


----------



## Trender07

Quote:


> Originally Posted by *pmc25*
> 
> More strange behaviour.
> 
> As many people experienced, after a Radeon Settings / driver crash whilst testing stability, the card would throttle and stutter horribly unless you process killed Radeon Settings, reopened a new iteration (and reapplied settings).
> 
> However, on 17.9.1, it seems ANY kind of stability problem (even a game crash that doesn't result in Radeon Settings (completely) s***** its pants) will cause this behaviour. The kicker is that killing the process and reapplying settings, or reverting to stock settings, has no effect.
> 
> You have to reboot for the card to go back to behaving normally.
> 
> Never had a driver / card that behaved like this before. Obviously very frustrating when testing stability and changing settings around.
> 
> Like the prior driver, I don't think I'll be wasting much time on testing limits and trying a bunch of different settings. The drivers just aren't developed enough or stable enough yet.
> You know the 5Mhz increments for HBM2 are real? It's not like Fiji('s HBM) where there were staggered increments and the clock you set would cling to the nearest valid number (500/545/600).
> 
> 1100Mhz is the limit for most Vega 64 cards. A few will do 1105Mhz (mine when it's in a good mood). Almost none do 1110 and will crash at idle.
> 
> Virtually all, with the right setup and low enough temperatures, will do 1095Mhz.
> 
> Suggest you try 1095Mhz (though if temperatures are high it may need to be lower).


Well yeah - cranking the fan to keep it at 75°C (it was 80-83°C before) and upping the voltage (from 1000mV to 1050mV), and now I can run the bench at 1100MHz HBM2. But I don't know whether it crashed before because of temps or lack of voltage.


----------



## rancor

Quote:


> Originally Posted by *CaptainTom*
> 
> Please perform this on an actual game. 3DMark is a horrible indicator of bandwidth gains. I went from 85 FPS to 95.6 in Metro LL buddy. BF1 netted a 15%+ boost.




Metro LL benchmark

800 HBM - 98.2 fps
945 HBM - 102.18 fps
1100 HBM - 109.17 fps

37.5% clock increase for an 11.2% performance increase


----------



## punchmonster

Vega64 is so bandwidth starved that it should be easy to tell which of these screenshots was with HBM2 @ 945Mhz and which was with HBM2 @ 1095Mhz.

Both HBM2 settings were confirmed with superposition benchmarks before and after to make sure it was actually applying clocks.





settings: pictured + 2X MSAA and depth of field off @ 2560x1440


Superposition benchmarks proving the HBM2 clocks were applied:


----------



## chris89

Here's a comparison suggesting the memory clock doesn't matter much... downclock the RAM and clock the core sky high, toward 2GHz...

390X for comparison... memory makes almost no difference, a couple of fps... and Vega is way faster than the 390X and should eat it alive...

Fire Strike 1080p results? Compare 500MHz vs 945MHz vs 1000MHz memory against temperature and power... plus clock the core sky high on the undervolt and overclock the HBM2.


----------



## Tgrove

The best part to me is that we all know this card will see some major future gains via driver updates. From 17.8.2 to 17.9.1 I saw dramatic positive changes already.

This is my first single-GPU setup in years, and it's going better than expected. All I need is 33+ fps and I'm good (FreeSync). I do feel bad for my CrossFire bros; it's not looking good for SLI or CrossFire.


----------



## Tgrove

Quote:


> Originally Posted by *punchmonster*
> 
> Vega64 is so bandwidth starved that it should be easy to tell which of these screenshots was with HBM2 @ 945Mhz and which was with HBM2 @ 1095Mhz.
> 
> Both HBM2 settings were confirmed with superposition benchmarks before and after to make sure it was actually applying clocks.
> 
> 
> 
> 
> 
> settings: pictured + 2X MSAA and depth of field off @ 2560x1440
> 
> 
> super position benchmarks proving the HBM2 clocks were working:


Is your 1700 overclocked? What are your GPU core clocks? Your 4K Superposition score should be around 6.9k with a Vega 64 and 16 threads.

I'm sorry - I've just personally noted that everyone with a Vega 64 and 16 threads gets about 6.9k points vs my 12 threads (I scored 6768). I attributed the difference to the core/thread count. It's an ongoing comparison to decide when I'll get Ryzen. Definitely will at some point.


----------



## punchmonster

Stock 1700 during that benchmark. Core set to stock air clocks with an undervolt.
Quote:


> Originally Posted by *Tgrove*
> 
> Is your 1700 overclocked? Whats gpu core clocks? Your 4k superposition should be in the 6.9k with vega 64 and 16 threads.
> 
> Im sorry i just personally noted that everyone with vega 64 and 16 threads gets about 6.9k points vs my 12 threads (i scored 6768). I attributed the difference to the core/thread count. An ongoing comparison to decide when ill get ryzen. Definitely will at some point


----------



## gupsterg

Quote:


> Originally Posted by *Soggysilicon*


Not meaning to upset Vega owners with my post. I read this thread with interest to keep abreast of what Vega owners are experiencing, and I hope to grab one when I see a decent/viable promo.

Only got my rig fired up a day or so ago, as I was waiting on an EK TR block.

- 1950X stock, 2x 8GB 2133MHz stock.
- MSI GTX 1080 Sea Hawk EK X stock.

I'm using a WD 2TB HDD that still has a Windows 10 Anniversary install from when it was last used on an i5/Z97. It booted fine to the OS; I removed the AMD GPU drivers, ran DDU, and installed the latest AMD chipset and GeForce drivers. Got this in Superposition (haven't even checked the GPU driver settings - just whatever is set after install).



I also have zero experience with NVIDIA, but I reckon, given how their boost tech works, the GPU is boosting to 1974MHz. Superposition showed this in the OSD, as did the runs of Heaven/Valley I did yesterday.


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> 81ºC
> 
> EDIT:
> 
> Ok changin objective to 75º and cranking up fan speed and giving it +50 mv more and it doesn't crash anymore, but I don't know why it crashed because of temps or because of voltage


You can't change the HBM voltage. As has been repeated ad nauseam, the HBM voltage setting in WattMan has nothing to do with the HBM... it's related to GPU core voltage somehow (perhaps some kind of offset?).

HBM, from what I've read and from my own experience, gets progressively more unstable with temperature, and its timings loosen - far more so than the core, which will just throttle. When my card was on air, I generally had to keep HBM temperatures below ~63C or it would sometimes crash at 1075-1100MHz (depending on game/benchmark). On water, no problems, and obviously tighter timings to boot.


----------



## pmc25

Quote:


> Originally Posted by *gupsterg*
> 
> Not meaning to upset VEGA owners by my post. I read this thread with interest as to keep abreast with what VEGA owners are experiencing and hope when I see one on a decent/viable promo will grab one
> 
> Only got my rig fired up a day or so ago as was waiting on a EK TR block.
> 
> - 1950X stock, 2x 8GB 2133MHz stock.
> - MSI GTX 1080 Sea Hawk EK X stock.
> 
> I'm using a WD 2TB HDD, has W10A install on when last used on i5/Z97. Booted fine to OS, removed AMD GPU drivers, ran DDU, installed latest AMD Chipset and GeForce drivers. Got this in SuperPosition (not even checked GPU drivers for settings just whatever is set as is after install).
> 
> 
> 
> I have also zero experience on nVidia, but I reckon due to how their boost tech is, the GPU is boosting to 1974MHz. SuperPosition showed this in OSD and runs of Heaven/Valley I did yesterday.


NVIDIA tends to win heavily in most synthetic game benchmarks.

There are certainly a host of games where Vega, once undervolted and generally tuned, will pull far away from a 1080 and, as a previous poster hinted, trade blows with a 1080 Ti.


----------



## Trender07

Quote:


> Originally Posted by *punchmonster*
> 
> Vega64 is so bandwidth starved that it should be easy to tell which of these screenshots was with HBM2 @ 945Mhz and which was with HBM2 @ 1095Mhz.
> 
> Both HBM2 settings were confirmed with superposition benchmarks before and after to make sure it was actually applying clocks.
> 
> 
> 
> 
> 
> settings: pictured + 2X MSAA and depth of field off @ 2560x1440
> 
> 
> super position benchmarks proving the HBM2 clocks were working:


Man, how do you get those scores in Superposition - do you really have a stock Ryzen? I'm getting 6100-6500 points in Superposition with Ryzen at 3.6GHz, so I thought it was because of my low clock.
I also run stock clocks on Vega with an undervolt, but I don't see the core above 1550MHz while running Superposition.


----------



## PontiacGTX

Quote:


> Originally Posted by *punchmonster*
> 
> Stop spreading misinformation. None of this is true. You keep coming into the thread posting blatantly stupid crap, clearly having no clue what you're talking about. It's time to stop, my dude.
> 
> EDIT: to elaborate, if it was bandwidth starved your memory OC would scale linearly. It doesn't.
> It also under no circumstances is ever "competitive" with the 1080Ti. People will read your posts and go and spread this total misinformation as fact.


The fact that it takes a huge performance hit at high levels of AA is a sign it is bandwidth limited, as is the big performance gain in games when overclocking the memory.
Also the gains from memory overclocking in mining...
Quote:


> AA Performance Issue?
> 
> An issue that we weren't expecting, is traditional Multi-Sample or Super Sample Anti-Aliasing performance.
> 
> Based on our testing there is indication that MSAA is detrimental to AMD Radeon RX Vega 64 performance in a big way. In three separate games, enabling MSAA drastically reduced performance on AMD Radeon RX Vega 64 and the GTX 1080 was faster with MSAA enabled. In Deus EX: Mankind Divided we enabled 2X MSAA at 1440p with the highest in-game settings. The GeForce GTX 1080 was faster with 2X MSAA enabled. However, without MSAA, the AMD Radeon RX Vega 64 was faster. It seems MSAA drastically affected performance on AMD Radeon RX Vega 64.
> 
> In Rise of the Tomb Raider we enabled 2X SSAA at 1440p. Once again, we see the AMD Radeon RX Vega 64 drop in performance. The GeForce GTX 1080 was faster with 2X SSAA than the Radeon RX Vega 64 with SSAA. Finally, Dirt 4, which is playable at 8X MSAA, was faster on the GTX 1080.
> 
> This is combined evidence enough that enabling forms of anti-aliasing like MSAA or SSAA are for some reason performance-impacting on AMD Radeon RX Vega 64. We need to do more testing on this for sure.
> 
> The conclusion so far is thus: when using shader-based AA methods like SMAA, FXAA, Temporal AA, or CMAA, the AMD Radeon RX Vega 64 performs much better and can compete with the GTX 1080. However, if enabling any level of MSAA or SSAA, performance decreases more on the AMD Radeon RX Vega 64 and the GTX 1080 gives more performance in that scenario. Therefore, currently, the AMD Radeon RX Vega 64 is best played with shader-based AA methods rather than traditional MSAA or SSAA for now. It will be interesting to see if this can be addressed in a driver update.


Quote:


> Originally Posted by *rancor*
> 
> Don't get me wrong HBM overclocks should be done as they are basicly free performance but Vega is *not* incredibly bandwidth starved


Funny how a Fiji HBM OC didn't increase performance the way it does on Vega


----------



## Trender07

This is weird.
Stock core clocks and 1050MHz HBM2:

Stock core clocks and 1100MHz HBM2:


I think I should get more performance with 1050MHz HBM2...
Also, it looks like my card wants more mV, because at 1100MHz HBM2 / 1050mV I get flickering, and at 1000mV it flickers and crashes the bench


----------



## rancor

Quote:


> Originally Posted by *Trender07*
> 
> This is weird
> Stock core clocks and 1050 HBM2
> 
> Stock core clocks and 1100 HBM2
> 
> 
> I think I should get more performance with 1050 HBM2...
> Also looks like my card wants more mV, because at 1100 hbm2/1050mV I get flickering , and at 1000 mV it flickers and crashes bench


Your 1050MHz score and 1100MHz score should be much closer.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *pmc25*
> 
> You can't change HBM voltage. As has been repeated ad nauseum, the HBM voltage setting in WattMan has nothing to do with the HBM .. it's something to do with GPU core voltage (perhaps some kind of offset?).
> .


It seems to be the core voltage that must be present to run the HBM at full speed.


----------



## pmc25

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> It seems to be the core voltage, which must be present, to run HBM at Fullspeed.


You can run the HBM at 1100MHz with both the P-state voltage and the 'HBM Memory Voltage' (which isn't actually HBM voltage) in WattMan set to 950mV. In fact, it makes sustaining stable high HBM clocks much easier, because the core doesn't get as hot on the lower voltage and clocks, and so neither does the HBM. That voltage has absolutely nothing to do with the HBM.


----------



## sternheim

Hi guys,
I just installed my Morpheus II on my RX Vega 56. The temps on the core and HBM2 never go above 60C, so that's fine (right?).
The GPU hotspot temp (GPU-Z), however, reaches around 80C (max 85C). I have not been able to figure out what the "Hotspot Temp" reading actually measures (has anyone?). If it's only the VRMs, that would be OK - but I don't know whether that's the case. Do I need to make more modifications?
What temps are you getting?
Thx!


----------



## Trender07

Quote:


> Originally Posted by *rancor*
> 
> Your 1050MHz score and 1100MHz score should be much closer.


Yeah, I'm thinking it was that bug where, after it half-crashes, you have to restart or it throttles.


----------



## Tgrove

Quote:


> Originally Posted by *Trender07*
> 
> This is weird
> Stock core clocks and 1050 HBM2
> 
> Stock core clocks and 1100 HBM2
> 
> 
> I think I should get more performance with 1050 HBM2...
> Also looks like my card wants more mV, because at 1100 hbm2/1050mV I get flickering , and at 1000 mV it flickers and crashes bench


I'm gonna say it's because of your GPU temps. That's the biggest difference I see with your score; otherwise something is wrong. I make sure mine doesn't go above 55C, because the HBM sits right next to the core.



Vega has some weird heat-based limitations. I saw increased performance with lower heat and voltage at the same GPU clocks.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *pmc25*
> 
> You can run HBM at 1100Mhz with both the P-state voltage and the 'HBM Memory Voltage' (not HBM Memory Voltage) in Wattman set to 950Mhz. In fact, it makes sustaining stable high HBM clocks much easier, because the core doesn't get so hot due to lower voltage and clocks, and so neither does the HBM. That voltage has absolutely nothing to do with the HBM.


Try it - if you set the HBM voltage slider above your boost voltage, it only goes to the second-to-last boost state - 700MHz on my RX 56


----------



## pmc25

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Try it - if you set the HBM voltage slider above your boost voltage, it only goes to the second-to-last boost state - 700MHz on my RX 56


It has absolutely nothing to do with the HBM though. Zero. Increasing that voltage will only make the HBM more unstable, due to the extra heat dissipated by the adjacent GPU core.


----------



## Trender07

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> try it - if you set the HBM voltage slider above your boost voltage - it only goes to the secondlast booststate - 700Mhz at my R56


lol, I set my HBM2 voltage from 1000mV to 950mV (I had 985mV on the core) and now I can run 1100MHz HBM2 perfectly, without my crash and flickering issue

EDIT:

That fixed Superposition, but Fire Strike crashed; changed volts to 1002mV core and 1000mV HBM

BTW, do any of you get black artifacts in PUBG?


----------



## BeetleatWar1977

Quote:


> Originally Posted by *Trender07*
> 
> lol I have set my HBM2 voltage from 1000 mV to 950 mV (I had 985 mV on core) and now I can run 1100 MHz HBM2 perfectly withouth my crash and flickering issue
> 
> EDIT:
> 
> Was fixed on Superposition, but Fire Strike crashed , changed volts to 1002 and 1000mv hbm
> 
> BTW any of you gets black artifacts in PUBG?


Try the voltage from P6 - or slightly under...


----------



## Nuke33

Quote:


> Originally Posted by *milkbreak*
> 
> What exactly is the concern though? Any of the VRM heatsinks touching each other, or touching another component, or what?


Quote:


> Originally Posted by *Soggysilicon*
> 
> Caught buildzoids video today which mentioned it... concerning contacting the "open drain" on the fet would result in a catastrophic short.
> 
> So, the short of the long is that a fet, bjt, or similar hybrid transistor works utilizing a three lead arrangement which comprises a "source" current or voltage, a gating current or voltage, and a ground return.
> 
> BJT is a base, collector, emitter; and a metal oxide field effect transistor is a gate, drain, source. In this case its a fet.
> 
> In the case of a fet there are common drain or common source, dependent on a P-channel or N-channel doping of the gate. These devices work in a couple different arrangements dependent on what one is trying to achieve. They can be analogous to a water valve or a switch, that is that the gating material is saturated such that current will conduct from one side of the device to the other side of the device (like mechanically turning a valve).
> 
> The amount of "saturation" accentuates this current flow, so in the case of high speed switching fets you want to saturate and then pull off the current very quickly to get a very short rise time and complete fall time (very important in digital switching TTL - transistor - transistor logic.
> 
> So back to "open drain", implies the source is common, or that the fets "source" indicated leads all share the same ground plane, and as such are electrically considered on the same node (the same electrical potential or "common ground").
> 
> The drain, however, in this arrangement implies that the different fets will be at different potentials. This gets into phasing, synonymous with timing of a waveform(s).
> 
> Because they are not shielded, contacting a metal surface to the metal drain "leg, lead" could result in current flowing from the leg and to the material that is in contact with it as opposed to the "load". If this short was back to the ground plane this can cause all sorts of havoc on the electrical equilibrium of the device resulting in component failure.
> 
> Or you short the gate and the "hot" drain together, could easily latch the gate into a permanent conductive state; effectively destroying it as a "switch", additionally destroying the drives behind it which are often very sensitive comparator circuit ICs, as well as sensitive decoupling capacitor packages in smt.
> 
> Bottom line, the leads are not doped with a non-conductive material and as such present an electrical hazard to the careless.


+1

Also, these little VRM sinks with pre-applied thermal pads tend to fall off after some time. If they short anything out, it's probably not very good for the card, depending on what they actually short.


----------



## Nuke33

Quote:


> Originally Posted by *CaptainTom*
> 
> Yeah I can confirm that I was wrong and it was a software bug. *My bad everyone*


No problem, everyone draws inaccurate conclusions from time to time. We are all just human after all


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> Only my opinion.
> 
> VEGA 56 unlocking to VEGA64 is going to be very rare.
> 
> For example a Fury Tri-X which allowed unlock, if I flashed a Fury X ROM it would not unlock SP, as it lacked a table that sets SP. So my only option was to use Fury ROM where the SP count table was modified. As this type of modification would violate VEGA security feature you won't get unlock even if ASIC has been left writable for that configuration aspect.


That's probably the reason why AMD locked down their BIOS so tight, since all their cards use almost the same silicon except for HBM sizes on FE and Instinct.
A logical move in my opinion.


----------



## elderblaze

I've had my Vega 56 for a week or so; been tweaking it and came up with 2 profiles.

Max Perf/watt cool and quiet.

P6 950, P7 955, Power Limit +50%, *-1% underclock*, HBM 950MHz. This produces a ~1465MHz 3D load clock, steady and stable, 1800 RPM fan @ 75C.
The -1% underclock was required to get stability at this very low voltage.

I know the MHz is low, but it makes very little difference. I benchmarked it: Firestrike @ 22,031 (graphics).

I know it's not comparable to you water folks running 1700MHz, but that's a very respectable score for cool and quiet operation out of the reference cooler.

My second profile is what I'd call my "sane" max clocks.

P6 1030, P7 1040, power limit +50%, HBM 950MHz, clock speed set to 1612. This produces a 1550MHz 3D load clock, steady and stable, 2600 RPM fan @ 75C.

This profile is what I'd call max sane clocks for a reference Vega 56 on air. Performance is only slightly better, by maybe 3%; power use and noise are definitely much higher, but still tolerable.

3DMark Firestrike scores of 22,528 (graphics) and 17,940 (total system) respectively.

I've tweaked and benchmarked and stability tested; I'm getting a bit tired and ready to just enjoy this card now.

Edit: I know people have claimed there is no difference under 1000mV; that has not been my experience on Vega 56. I saw marked improvements going down to 955 from 1000, mostly in temps and fan speed.
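For what it's worth, the uplift between the two profiles above works out like this (just arithmetic on the quoted Firestrike graphics scores):

```python
# Firestrike graphics scores from the two profiles above.
quiet_profile = 22031   # ~1465MHz load clock, 1800 RPM fan
loud_profile = 22528    # ~1550MHz load clock, 2600 RPM fan

uplift_pct = (loud_profile - quiet_profile) / quiet_profile * 100
print(f"{uplift_pct:.1f}% graphics uplift")  # ~2.3% for ~800 RPM more fan
```

So roughly 2-3% more performance for a big jump in noise and power, which matches the "makes very little difference" observation.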


----------



## Soggysilicon

Quote:


> Originally Posted by *gupsterg*
> 
> Not meaning to upset VEGA owners by my post. I read this thread with interest as to keep abreast with what VEGA owners are experiencing and hope when I see one on a decent/viable promo will grab one
> 
> Only got my rig fired up a day or so ago as was waiting on a EK TR block.
> 
> - 1950X stock, 2x 8GB 2133MHz stock.
> - MSI GTX 1080 Sea Hawk EK X stock.
> 
> I'm using a WD 2TB HDD, has W10A install on when last used on i5/Z97. Booted fine to OS, removed AMD GPU drivers, ran DDU, installed latest AMD Chipset and GeForce drivers. Got this in SuperPosition (not even checked GPU drivers for settings just whatever is set as is after install).
> 
> 
> 
> I have also zero experience on nVidia, but I reckon due to how their boost tech is, the GPU is boosting to 1974MHz. SuperPosition showed this in OSD and runs of Heaven/Valley I did yesterday.


No salt here friend. I shopped that very card some months ago, but I really like my Sammy CF791 monitor, so I was stuck doing the "waiting on Vega" dance. Was just showing some of my results, having swapped the BIOS on my 64 reference to the 64 LC, with an EK-FC block and a custom ole' school water rig. The air card is hamstrung by its power settings, so it's either reprogram the flash or mod the registry.

The performance that I have observed so far on the card is somewhere between a 1080 and a 1080 Ti, generally on the lower side of the 1080; OC'd it's fairly consistent with high-performance 1080 setups like yours. I would really like to see another 5% uplift (especially in the "average lows") with this card so that the "ultimate engine" FreeSync doesn't crap out when running games with all the bells 'n' whistles turned on max, and/or modded games with large textures.

TW: Warhammer at 3440 maxed out benches 58.4-58.5, which is perfectly acceptable considering that game doesn't punish a lack of twitch responsiveness. For something like CS:GO a different monitor and setup would maybe be preferable.

In practice the frame rate is usually much higher, but as I have mentioned it can dip to the 48-49 mark, which causes that anti-screen-tearing magic buffer to cut out.









With these higher-res monitors the screen tearing is real... and very noticeable.







A couple percent uplift makes a pretty big difference. I would claim that anyone saying any different has not gone beyond 1080 or 2560, which in g-sync terms is a fairly non-trivial expense.

I suppose my other take away is that once Vega is dialed in, at least in my sample size of 1, it is very obedient and not prone to crashing. Getting it there may be a different story... expect many all expense paid trips to POST.

Youtubers like "adoredTV" set the stage for the expectations some months ago. Vega is about where I expect it to be. Not particularly competitive (game wise), but an interesting _alternative_ especially if you have some mixed workloads.

But yeah, playing games... on this monitor, with my current setup, I really like it. The prices right now, well... they are what they are... once you're in the high end, we always pay more for less.


----------



## theor14

Anyone managed to fit a universal waterblock?
Want to fit a block and keep the base plate, but not sure which blocks this will work with.


----------



## Trender07

lel why is this happening to me in Overwatch? Am I the only one? Time Spy runs without problems on these settings.


Anyone else having this problem?


----------



## Caldeio

Just wanted to add, my 56 does about 818-845k PPD in Folding@home, depending on what else I'm doing in the background.
P6 / P7
1626 / 1632 core
1080 / 1120 mV
core averages 1580

HBM doesn't matter here but it's at 945.

The lower my temps the better my score, since my clock speeds go up. Sitting at 53.5 core, 62.7 hot spot, and 56.2 HBM for temps.

172W average for the GPU
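Rough folding efficiency from the numbers above (GPU-reported power only, not at the wall):

```python
# Points-per-day range and average GPU power reported above.
ppd_low, ppd_high = 818_000, 845_000
gpu_watts = 172

print(f"{ppd_low / gpu_watts:.0f}-{ppd_high / gpu_watts:.0f} PPD per watt")
# roughly 4756-4913 PPD per watt
```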


----------



## Roboyto

Quote:


> Originally Posted by *Trender07*
> 
> lel why is this happening to me in Overwatch? Am I the only one? Time Spy runs without problems on these settings.
> 
> 
> Anyone else having this problem?


Stability in one bench/game is never a guarantee of holistic, problem-free computing once overclocked. Dial clocks down, or turn voltage/power up if possible.


----------



## JunXaos

Quote:


> Originally Posted by *Trender07*
> 
> lel why is this happening to me in Overwatch? Am I the only one? Time Spy runs without problems on these settings.
> 
> 
> Anyone else having this problem?


I'm also having this problem if I change the FPS cap from 60 to anything higher. I thought it was from overheating, since I have my Vega 64 in a small mITX case. No overclock, default balanced setting, and I live in a country with super hot weather.


----------



## Trender07

Quote:


> Originally Posted by *JunXaos*
> 
> I'm also having this problem if I change the FPS cap from 60 to anything higher. I thought it was from overheating, since I have my Vega 64 in a small mITX case. No overclock, default balanced setting, and I live in a country with super hot weather.


Glad I'm not alone, I was getting worried. I also have the FPS capped, but to 70. About the temp, I'm not sure if that could be the problem, as it was at about 73-76, and at 73 when the glitch occurred.


----------



## punchmonster

Having a weird issue where my hashrate drops after a few hours of mining. Weirder: if I turn it off, let the GPU idle for a while, then turn it back on, it's still stuck at the low hashrate. I can only get it back by restarting my PC. I've tested the last 3 driver versions. Any ideas?

Temps are well within parameters: 65°C on HBM2 and 48°C on core. I also don't see any shift in temperatures when the hashrate drops.


----------



## diabetes

Hello, I have my Vega 56 (non-molded package) on an EKWB block. I am hitting 46C on the core and 49C on the HBM while gaming with an increased power target. Is the difference between core and HBM normal under water? The delta never gets bigger than 4C.
Also, my card is having some weird HBM temperature sensor readouts from time to time. The HBM min temperature sometimes shows 1C and the max temperature sometimes shows 1230C. This happens in HWiNFO64 and GPU-Z. Has anyone else encountered this?


----------



## ookiie

Yup, that's normal and it's pretty much the same as my temperatures with the same V56 and EK-FC block; HBM is always a few degrees hotter, same on air cooling.

The readings aren't correct for those weird maximums and minimums; HWiNFO sometimes shows 1.7kW of consumption for the core and about the same for the HBM. I believe my 650W PSU would explode with that kind of power draw, and at 1230C the whole case would melt, so don't worry.


----------



## diabetes

Ok, thanks for your reply. The strange readings actually got me worried more than the temp delta. I was thinking that I damaged something when mounting the block.


----------



## Chaoz

Gave the mod a try as well. Turned out great, if I say so myself.


----------



## ontariotl

Deleted, as it's a repeat.


----------



## ontariotl

Quote:


> Originally Posted by *Chaoz*
> 
> Gave the mod a try aswell. Turned out great if I say so myself.


That's awesome! Great job! Do you have pics of the internals of your work? I'm curious to see what LEDs you used and how you got it done. Also, did you use a plain piece of white paper to diffuse the hotspots?


----------



## Chaoz

Quote:


> Originally Posted by *ontariotl*
> 
> That's Awesome! Great job! Do you have pics of the internals of your work? I'm curious to see what LED's you used and what you accomplished to get it done. Also did you use a plain piece of white paper to diffuse the hotspots?


Thanks!! Unfortunately I forgot to take pics and super-glued it together. I used a GeForce LED block, stripped the tape off, and then Dremel'd it smaller to make it fit. I couldn't put a piece of paper in between; it made it slightly too thick. I might try it again with RGB LEDs.

This is what I used. It's just a small piece of plexi with a small LED strip at the bottom.
https://nl.aliexpress.com/item/FREEZEMOD-special-graphics-card-LED-lighting-with-multi-color-FR-GEFORCE-GTX/32740829360.html


----------



## ontariotl

Quote:


> Originally Posted by *Chaoz*
> 
> Thanks!! unfortunately I forgot to take pics and super glued it together. I used a GeForce LED block and stripped the tape off and then Dremel'd it smaller to make it fit. I couldn't put a piece of paper inbetween it, made it slightly too thick. I might try it again with RGB LED's.
> 
> This is what I used. It's just a small piece of plexi with a small LEDstrip at the bottom.
> https://nl.aliexpress.com/item/FREEZEMOD-special-graphics-card-LED-lighting-with-multi-color-FR-GEFORCE-GTX/32740829360.html


Well, that module would have been easier to deal with than soldering those RGB LEDs with very fine wire. If you cut the backplate off the EK bracket it will give you more room, and a thin piece of paper won't add too much thickness either. I just placed it in front of the lettering on the decal, and the adhesive on the decal held it in place.


----------



## Chaoz

Quote:


> Originally Posted by *ontariotl*
> 
> Well that module would have been easier to deal with than to solder those RGB LED's with very fine wire. If you cut the backplate off the EK bracket it will give you more room. And a thin piece of paper wont add too much thickness either. I just placed it in front of the lettering on the decal and the adhesive on the decal just held it in place.


I probably needed to file it a bit more, but I'm happy with it. Don't really wanna **** up my bracket too much.


----------



## ontariotl

Quote:


> Originally Posted by *Chaoz*
> 
> I probably needed to file it a bit more, but I'm happy with it. Don't really wanna **** up my bracket too much.


Fair enough.


----------



## Papa Emeritus

Quote:


> Originally Posted by *Chaoz*
> 
> Gave the mod a try aswell. Turned out great if I say so myself.


Awesome mod!


----------



## Chaoz

Quote:


> Originally Posted by *Papa Emeritus*
> 
> Awesome mod!


Thanks


----------



## jeshuastarr

Can anyone give me an idea what a Vega 56 can do with the 64 BIOS and overclocked HBM, with the new driver from yesterday, in PUBG at 1080p? (Got enough prepositional phrases?)


----------



## punchmonster

Quote:


> Originally Posted by *jeshuastarr*
> 
> can anyone give me an idea what VEGA 56 can do with the 64 bios and overclocked HBM with the new driver from yesterday on pubg at 1080p? (got enough prepositional phrases?)


Enough FPS at anything up to and including 1440p.


----------



## Rootax

Quote:


> Originally Posted by *LionS7*
> 
> Thank you for the info ! So, no Frontier Edition for gaming, right... It's not a sure and consistent thing. Hope someone launch 16GB HBM2 RX Vega 64 card.


No gaming mode under 17.8.2 beta for Vega FE, but performance is the same as the "gaming" driver anyway, so it's OK. And even if there is no Wattman, WattTool works with 17.8.2 (I have a Vega FE).


----------



## Whatisthisfor

I like 17.9.1. My card (AIO) runs much quieter with it, looks like they changed the fan curve.


----------



## steadly2004

Got my EK blocks installed and added a rad down below.

My first go was a fail. Had a leak at the bridge. Then I tried to just tighten the Allen screws... put a small crack in the plastic of the bottom block, argh. So I went to the auto parts store and siliconed around the screws, and it is holding water, it seems. Highest temp so far on the core is 37, nice. That's with just a couple of Firestrike runs, and now benching NiceHash to see where it lands. I'm sure it'll end up being warmer, but so far so good!


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> No gaming mode under 17.8.2 beta for Vega FE, but the performances are the same as "gaming" driver anyway, so it's ok. And even if their is no wattman, wattool is working with 17.8.2 (I've a Vega FE).


In a very rare case (my case) the 17.8.2 driver not only doesn't have game mode, but also no Crossfire. So I stick with 17.6.


----------



## Trender07

How do you guys have HBCC set? Should I enable it? (for games)
I've read that it doesn't even matter, because with 8GB there's plenty of VRAM, so it does nothing (idk if this is right).


----------



## bir86

Hi guys,
Am I the only one having problems with manually setting the core and HBM clocks?
My HBM clock goes haywire and downclocks to 500 or 800MHz the second I make manual adjustments to the core clocks.

It works fine when I just increase the clocks with the frequency slider, and I can even OC the HBM all the way up to 1105MHz.

Overclocking with Wattman; also tried WattTool.
V64 on the latest drivers.

Quote:


> Originally Posted by *Trender07*
> 
> How do you guys have the HBCC set? Should I enable it? (games)
> I've read that it doesn't even matter because as we have 8 gb theres plenty of vram so it does nothing (idk if this is right)


Stock


Stock + HBCC 11GB


Stock + HBCC 16GB


----------



## Tgrove

Wow, I forgot all about HBCC. Where is the actual setting for that in Crimson?


----------



## majestynl

Quote:


> Originally Posted by *bir86*
> 
> Hi guys,
> Am I the only one having problems with manually setting the core and HBM clocks?
> My HBM clocks goes haywire and down clocks to 500 or 800Mhz the second I make manual adjustments on the core clocks.
> 
> It works fine when I just increase the clocks with the frequency slider and I can even OC the HBM all the way up to 1105Mhz.
> 
> Overclocking with Wattman and even tried Wattool.
> V64 on the latest drivers.
> Stock
> 
> 
> Stock + HBCC 11GB
> 
> 
> Stock + HBCC 16GB


There is a bug in Wattman if you set the exact same clock speeds on P6 and P7!
It will run around P5 speeds for core and HBM. Probably that's what you did, right?

Just set P6 slightly lower than P7, so NOT the exact same clock speeds.

See more details in my post: link
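A trivial sketch of that workaround as a check (the rule is just what's described above, nothing official from AMD):

```python
# Sanity check for the Wattman P6/P7 bug described above: setting P6 equal
# to (or above) P7 makes the card fall back to roughly P5 speeds.

def check_pstates(p6_mhz, p7_mhz):
    """Return 'ok' only if P6 sits strictly below P7."""
    if p6_mhz >= p7_mhz:
        return "bad: set P6 slightly lower than P7"
    return "ok"

print(check_pstates(1632, 1632))  # bad: set P6 slightly lower than P7
print(check_pstates(1622, 1632))  # ok
```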


----------



## Tgrove

I don't think I have that bug. I did some testing with P6/P7 at different speeds vs. the same, and the clock speed was the same.


----------



## Soggysilicon

Quote:


> Originally Posted by *Chaoz*
> 
> Gave the mod a try aswell. Turned out great if I say so myself.


Lookin' good, I really need to do this mod!!


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> How do you guys have the HBCC set? Should I enable it? (games)
> I've read that it doesn't even matter because as we have 8 gb theres plenty of vram so it does nothing (idk if this is right)


All my testing was done at my native 3440x1440 res, at whatever maximum allowable settings I could toss at it, and it resulted in one of three outcomes: 1) a very "slight" improvement, talking 1 fps, repeatable; 2) no change; and 3) lower fps... at about 20/50/30 odds respectively.

I did notice that load times could be adversely affected in my setup with it on... potentially because some applications which are able to hog video RAM will now hog a large piece of your system RAM.

Maybe it doesn't play nice with RAMCache II. End of the day, it has "potential", but it's a feature that looks like it either needs to be coded for directly, or the game/application has to be calling for a tremendous amount of texture/RAM-encapsulated data to be on tap. If you run 4k-8k modded textures in some games there is potential for some improvement, maybe under a modded Unity?

Later-generation cards may have an NVMe socket on the card that could utilize this as well, as a buffer or cache for the SSD. Additionally, the read-in/read-out strategy employed could have a large impact on how this performs... hard to say.

_*Short version: turn it off and forget that it's there... when it does a thing, AMD will let us know.*_


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Lookin' good, I really need to do this mod!!


Thanks. Yeah, you should try it. It's quite easy, just takes some time to get it right. But it really pays off.


----------



## bir86

Quote:


> Originally Posted by *majestynl*
> 
> There is a bug in wattman if you set exact same clockspeeds on P6 and P7!
> It will run around p5 speeds for clocks and HBM. Probably you did that right?
> 
> Just set p6 slightly lower then p7. So NOT exact same clockspeeds.
> 
> See more details in my post: link


Hmm...
I've set them both to the same.

Will try this after work.

Thanks for the help!


----------



## Newbie2009

Quote:


> Originally Posted by *gupsterg*
> 
> Not meaning to upset VEGA owners by my post. I read this thread with interest as to keep abreast with what VEGA owners are experiencing and hope when I see one on a decent/viable promo will grab one
> 
> Only got my rig fired up a day or so ago as was waiting on a EK TR block.
> 
> - 1950X stock, 2x 8GB 2133MHz stock.
> - MSI GTX 1080 Sea Hawk EK X stock.
> 
> I'm using a WD 2TB HDD, has W10A install on when last used on i5/Z97. Booted fine to OS, removed AMD GPU drivers, ran DDU, installed latest AMD Chipset and GeForce drivers. Got this in SuperPosition (not even checked GPU drivers for settings just whatever is set as is after install).
> 
> 
> 
> I have also zero experience on nVidia, but I reckon due to how their boost tech is, the GPU is boosting to 1974MHz. SuperPosition showed this in OSD and runs of Heaven/Valley I did yesterday.


It appears that benchmarks like high-core-count CPUs.


----------



## erase

Flashed 2x Vega 56 cards with the Vega 64 BIOS, and found that at the same clock speed the Vega 56 was now slower clock for clock.

One of the Vega 56 cards' memory overclocked way higher, to around 1050 mem, but the other Vega 56 card was rubbish and stuck at 935 mem.

The reason I have two Vega 56s was for Crossfire, but there's no support yet, so I'm using them for Ethereum. Basically the hashrate goes down with the Vega 64 BIOS due to the limited memory overclock, and performance is slower clock for clock.

With that said, I am wondering if it is possible to flash a Vega 64 card back with a Vega 56 BIOS, as I think the Vega 56 has a tighter timing set?


----------



## Gdourado

Is there such a thing as official FreeSync monitors?
I was looking at the product page for the Acer XR382, and in the specs it just says "Tear preventing technology: Adaptive Sync".
But the reviews I read of the monitor say it has FreeSync, with a FreeSync range of 48-75Hz.

Is there any difference between monitors that say they have Adaptive Sync and monitors that say they have FreeSync?

Cheers!


----------



## gupsterg

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> No salt here friend. I shopped that very card some months ago but I really like my Sammy CF791 monitor so I was stuck doing the "waiting on Vega" dance. Was just showing some of my results having of swapped the bios on my 64 reference to the 64LC with an EK-FC block and custom ole' school water rig. The air card is hamstrung with its power settings, so its either reprogg' the flash or mod the registry.
> 
> The performance that I have observed so far on the card is somewhere between a 1080 and 1080 Ti, generally on the lower side of the 1080, OC'd its fairly consistent with high performance 1080 setups like yours. I would really like to see another 5% uplift (especially in the "average lows") with this card so that the "ultimate engine" freesnyc doesn't crap out when running games with all the bells' n' whistles turned on max and or modded games with large textures.
> 
> TW: Warhammer at 3440 maxed out it benches 58.4~58.5 which is perfectly acceptable considering that game doesn't punish "a lack of twitch responsive for a good experience. Something like CS:GO a different monitor and setup would maybe be preferable.
> 
> In practice the frame rate is usually much higher, but as I have mentioned it can dip to the 48-49 mark which causes that anti screen tearing magic buffer to kick off.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> With these higher px monitors the screen tearing is real... and very noticeable.
> 
> 
> 
> 
> 
> 
> 
> A couple percent uplift makes a pretty big difference. I would claim that anyone saying any different has not gone beyond 1080 or 2560, which in g-sync terms is a fairly non-trivial expense.
> 
> I suppose my other take away is that once Vega is dialed in, at least in my sample size of 1, it is very obedient and not prone to crashing. Getting it there may be a different story... expect many all expense paid trips to POST.
> 
> Youtubers like "adoredTV" set the stage for the expectations some months ago. Vega is about where I expect it to be. Not particularly competitive (game wise), but an interesting _alternative_ especially if you have some mixed workloads.
> 
> But yeah, playing games... on this monitor, with my current setup, I really like it. The prices right now, well.. they are what they are... once your in the high end... we always pay more for less.


Yeah, FreeSync is all I really miss, and what makes me want to get a Vega.

Hopefully I'll find one on a good deal near Crimbo/Black Friday.
Quote:


> Originally Posted by *Newbie2009*
> 
> It appears that benchmarks like high core cpu.


The C6H allowed you to knock out cores on Ryzen. I would assume the ZE has the same feature for Threadripper; I will check.

For now, view this link from a member who has an R5 1500X and knows how to OC an nVidia card (which I have not yet had time to do).


----------



## alanthecelt

Quote:


> Originally Posted by *erase*
> 
> Flashed 2x Vega 56 cards with Vega 64 BIOS, found at the same clock speed Vega 56 was now slower clock for clock.
> 
> One of the Vega 56 cards the memory overclocked way higher to around 1050 mem, but the other Vega 56 card was rubbish and stuck at 935 mem
> 
> Reason I have two Vega 56 was for crossfire but no support yet, so using them for ethereum. Basically the hashrate goes down with Vega 64 BIOS due to limited memory overclock and performance is slower clock for clock.
> 
> With that said I am wonder if it is possible (backwards flash) to flash a Vega 64 card with a Vega 56 BIOS, as I thinking the Vega 56 has a tighter timing set?


How are you finding the hashing? As of my last benchmark my 3 56s flashed as 64s hashed at 36-37.
Currently dual mining @ 102MH / 3050MH Sia,
all cards at 945MHz RAM, +20 power limit, stock frequencies in Wattman.


----------



## Tgrove

Quote:


> Originally Posted by *Gdourado*
> 
> Is there something like Official Freesync monitors?
> I was looking at the product page for the Acer XR382 and on the specs it just says Tear preventing technology Adaptive sync.
> But then the reviews I read of the monitor say it has Freesync and has a Freesync Range of 48-75hz.
> 
> Is there any difference between monitors that say they have adaptive sync and monitors that say they have freesync?
> 
> Cheers!


There's no difference. My monitor is a Korean brand with FreeSync, the only choice you have for 4k FreeSync over 43 inches. The FreeSync range is 40-60Hz like all the other 4k FreeSync monitors, but I lowered it to 33-60Hz.

If you get the one with a 48-75Hz range, you can lower it to 37-75Hz (a guarantee). That would activate LFC; then you're covered at any framerate.

The official FreeSync tag just means it was certified by AMD, like if you want the FreeSync tag on your product. The actual FreeSync tech is the same in all monitors. It's baked into either the DisplayPort 1.2a or HDMI 2.1 spec.
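The LFC math behind widening the range: AMD enables LFC (Low Framerate Compensation) when the max refresh is at least double the min, so the driver can frame-double below the FreeSync floor. A quick check of the ranges quoted above:

```python
# LFC needs max refresh >= 2x min refresh so frames can be doubled
# when the game's FPS drops below the FreeSync floor.

def lfc_available(min_hz, max_hz):
    return max_hz >= 2 * min_hz

print(lfc_available(48, 75))  # False -> stock 48-75Hz range, no LFC
print(lfc_available(37, 75))  # True  -> widened to 37-75Hz, LFC kicks in
```

Which is exactly why dropping the floor from 48Hz to 37Hz on a 75Hz panel covers any framerate.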


----------



## futr_vision

Is there any real benefit to 16GB of VRAM on some of these Vega 64s? I am thinking of dabbling in some video production and even some machine learning, but I would think I'd need to be dealing with either super-high-resolution video (video production) or very, very large datasets (machine learning) to even come close to utilizing that much VRAM. Or am I missing something?


----------



## Pholostan

Quote:


> Originally Posted by *Skinnered*
> 
> One question still stand, does somebody play GTA5 en Fallout4 with an enb? My textures become completely dark.


I play Fallout 4 with an ENB, and those black textures you're talking about are a familiar thing. I also have flashing textures; light and shadows seem to bug out, etc. Was in a rad storm the other day and got an almost complete white-out. See screenshot below:



Something with shadows and fog seems to be really problematic. Funny thing though, moving around in that white soup eventually got me out of it. But the flickering and the light and shadow problems persisted. I haven't had time to test enough to find the root cause; ofc the game runs fine if I disable the ENB.


----------



## Trender07

Dear god, I can run Firestrike, Superposition and Time Spy, but Overwatch lags and crashes!


----------



## rancor

Quote:


> Originally Posted by *Trender07*
> 
> Dear god I ran firestrike superposition and time spy but Overwatch lags and crashes!


It might not be instability. From the 17.9.1 driver notes: "Overwatch™ may experience a random or intermittent hang on some system configurations."

I think this problem also existed in 17.8.2 and 17.8.1.


----------



## Energylite

Quote:


> Originally Posted by *rancor*
> 
> It might not be instability. From the 17.9.1 driver notes. Overwatch™ may experience a random or intermittent hang on some system configurations.
> 
> I think this problem also existed in 17.8.2 and 17.8.1.


Yeah, it was already there in 17.8.1 and 17.8.2.


----------



## laczarus

Can't seem to break the 4800 mark here.
Any P7 clock set past 1682 leads to a crash in Superposition, no matter the voltage.
Think I've reached the limit of my Vega 56 (64 BIOS), and I'm using the Morpheus II cooler on it.

P7 1682MHz @ 1120mV | HBM 1100MHz @ 950mV | Power limit +50%


----------



## SpaceGorilla47

Has anyone else noticed a new task from AMD since 17.9.1?
It's hogging 6% CPU and doing some background magic:


----------



## SAN-NAS

Well, I sold my GTX 1070 for $4 more than what I paid for it (thanks, miners). Bought the RX 56, and it was $18 more than what I sold the GTX 1070 for. Basically, I'm doing another side-grade so I can play with new stuff. The RX 56 has been great even with crap drivers. Most things were about the same performance as my OC'd 1070, while running stock on the 56.

I decided to go ahead and flash the card, as I was not able to get over 1500MHz on the core and the HBM2 RAM didn't like anything over 800MHz. I used the air version of the RX 64 BIOS, not the liquid. It took without any issues. Now when I run turbo mode it clocks around 1560MHz and the RAM is at 945MHz. I've been able to OC into the 1600MHz range, but I'm not sure of its limits; I do know that 1700MHz is a no-go. If I can figure out the limits, I might try flashing to the liquid version, as it makes even more watts available, but I need to test current heat and the fan profiles to keep it under 80C.

Really a neat card, and I can't believe I have an all-AMD system again... Super happy about that!


----------



## PontiacGTX

I wonder what the performance of Vega 56 could be once Primitive Shaders/DSBR/Rapid Packed Math/HBCC are enabled and used in games?


----------



## pmc25

Quote:


> Originally Posted by *laczarus*
> 
> Can't seem to break the 4800 mark here
> Any p7 clock set past 1682 leading to a crash in Superposition, no matter the voltage
> Think I've reached the limit of my Vega 56 (64 bios), and I'm using the Morpheus II cooler on it.
> 
> P7 1682MHz @ 1120mV | HBM 1100MHz @ 950mV | Power limit +50%


The HBM voltage is not actually HBM voltage. It's something to do with the GPU core voltage.

If you're setting it at that, in my experience, the P7 core voltage will reach nothing even close to 1120mV.

Therefore your clocks/throttling will also be much lower.

Your score reflects that.
Quote:


> Originally Posted by *PontiacGTX*
> 
> I wonder what could be the performance of VEGA 56 once Primitive Shader/DSBR/RapidPacked Math/HBCC is being enabled and used in games?


You may need to check back in 6-12 months.

The drivers are so basic, developer work is required too, and I get the feeling half these features and support for them are a dry run for NAVI.


----------



## rancor

Quote:


> Originally Posted by *SAN-NAS*
> 
> Well I sold my GTX 1070 for $4 more than what I paid for it (thanks miners). Bought the RX 56 and it was $18 more than what I sold the GTX 1070. Basically, Im doing another side upgrade so I can play with new stuff. RX 56 has been great even with crap drivers. Most things were about the same performance as my OC 1070, while running stock on the 56.
> 
> I decided to go ahead and flash the card as I was not able to get over 1500mhz on the core and the HBM2 ram didn't like anything over 800mhz. I used the air version bios of RX 64 not the liquid. It took without any issues. Now when I run turbo mode it clock around 1560mhz and the ram is 945mhz. I've been able to OC into the 1600mhz range but not sure its limits but know that 1700mhz is a no go. If I can figure out the limits, I might try to flash to the liquid version as it gives even more watts available but need to test for current heat and the fan profiles to keep it under 80c.
> 
> Really a neat card and cant believe I have an all AMD system again... Super happy about that!


I'd recommend just using the soft PowerPlay table mods for power limits. The LC BIOS changes temp limits in ways that are not great for air cooling, and the max 1.25V core voltage is not really useful on air.


----------



## PontiacGTX

Quote:


> Originally Posted by *pmc25*
> 
> You may need to check back in 6-12 months.
> 
> The drivers are so basic, developer work is required too, and I get the feeling half these features and support for them are a dry run for NAVI.


Let's see how well Vega does in Wolfenstein.


----------



## erase

Quote:


> Originally Posted by *alanthecelt*
> 
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> Flashed 2x Vega 56 cards with the Vega 64 BIOS, and found that at the same clock speed the Vega 56 was now slower clock for clock.
> 
> One of the Vega 56 cards overclocked its memory way higher, to around 1050 mem, but the other Vega 56 card was rubbish and stuck at 935 mem.
> 
> The reason I have two Vega 56s was for CrossFire, but there's no support yet, so I'm using them for Ethereum. Basically the hashrate goes down with the Vega 64 BIOS due to the limited memory overclock, and performance is slower clock for clock.
> 
> With that said, I wonder if it is possible to flash a Vega 64 card with a Vega 56 BIOS (a backwards flash), as I think the Vega 56 has a tighter timing set?
> 
> 
> 
> How are you finding the hashing? As of my last benchmark, my 3 56s flashed as 64's hashed at 36-37,
> currently dual mining @ 102MH / 3050MH Sia,
> all cards at 945MHz RAM, +20mhz power, stock frequencies in WattMan
Click to expand...

I was already getting 36-37 with the stock Vega 56 BIOS with settings of 852 core, -25 power, 2200 fan, 935 mem. The trick was to start the miner, apply the settings so the memory runs at full speed, then reboot and run the miner again; after that everything works.
That gets 2x Vega 56 to 69 MH/s with wall power at 420W on an old X79 platform.

The Vega 64 BIOS only allows one of my cards to go slightly faster, due to a better memory overclock; the other card can't overclock its memory any better. Clock for clock, the Vega 56 BIOS is faster.
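As a quick sanity check on those numbers, the wall-power efficiency works out as simple arithmetic (figures taken from the post above):

```python
def efficiency_mh_per_watt(hashrate_mh: float, wall_watts: float) -> float:
    """Mining efficiency in MH/s per watt, measured at the wall."""
    return hashrate_mh / wall_watts

# 2x Vega 56 at 69 MH/s total, 420 W at the wall on an X79 platform:
eff = efficiency_mh_per_watt(69, 420)
print(round(eff, 3))       # -> 0.164 MH/s per watt
print(round(420 / 69, 1))  # -> 6.1 W per MH/s
```

Note this is wall power, so it includes the X79 platform overhead, not just the two cards.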


----------



## Disharmonic

So, I just got my Vega 56 card on Friday. Spent some time playing around, and so far I've ended up with the following:
P6 V:1020 Cl:1612
P7 V:1040 Cl:1656
HBM: V:960 Cl:920


I've been unable to replicate that 4K score though. It might have something to do with my W10 install being completely unoptimized.


----------



## Trender07

Quote:


> Originally Posted by *rancor*
> 
> It might not be instability. From the 17.9.1 driver notes. Overwatch™ may experience a random or intermittent hang on some system configurations.
> 
> I think this problem also existed in 17.8.2 and 17.8.1.


Mass Effect Andromeda also crashed (temps were ok)


----------



## Caldeio

Quote:


> Originally Posted by *Disharmonic*
> 
> So, i just got my Vega56 card on Friday. Spent some time playing around and so far i've ended up with the following:
> P6 V:1020 Cl:1612
> P7 V:1040 Cl:1656
> HBM: V:960 Cl: 920
> 
> 
> I've been unable to replicate that 4K score though. Might have something to do with my W10 install being completely unoptimized


I get about 6200 as well. What does your core run at during the test?


----------



## Newbie2009

My best undervolted score, 6.5% OC @ 1150mV core.


----------



## laczarus

Quote:


> Originally Posted by *pmc25*
> 
> The HBM voltage is not HBM voltage. It's something to do with GPU core voltage.
> 
> If you're setting it at that, in my experience, then the P7 core voltage will reach nothing even close to 1120mV.
> 
> Therefore your clocks / throttling will also be much lower.
> 
> Your score reflects that.


I am aware of that since buildzoid's video.
However, applying even 1100mV on that "HBM voltage" (vcore floor voltage according to buildzoid) doesn't help.
4778 is the maximum I could get in Superposition 1080p extreme. Maybe some beefier VRM cooling will help


----------



## pmc25

Quote:


> Originally Posted by *laczarus*
> 
> I am aware of that since buildzoid's video.
> However, applying even 1100mV on that "HBM voltage" (vcore floor voltage according to buildzoid) doesn't help.
> 4778 is the maximum I could get in Superposition 1080p extreme. Maybe some beefier VRM cooling will help


It's definitely not vCore floor voltage. But I don't know what it is, either.


----------



## Newbie2009

So these are my best scores with Vega 64. I don't think I'll play around with overclocking anymore, as I've had a good play with it and think I know the sweet spot. Only drivers will bring the card (my card, anyway) any further in performance.

Overall I'm happy with it, though I think the cards really were released at clocks that are at the limit of what they can do. I'm pretty happy that my scores come with an undervolt and can be used for 24/7 gaming.


----------



## kundica

So.... My new Air 64 card arrived about an hour ago. Opened the shipping box from Newegg to find they sent me a used card. The Sapphire box seal was broken and the card covered in fingerprints. ***.


----------



## ontariotl

Quote:


> Originally Posted by *kundica*
> 
> So.... My new Air 64 card arrived about an hour ago. Opened the shipping box from Newegg to find they sent me a used card. The Sapphire box seal was broken and the card covered in fingerprints. ***.


That really sucks. Man you are not having much luck.


----------



## majestynl

Quote:


> Originally Posted by *Newbie2009*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> So these are my best scores with Vega 64 , don't think I will play around with overclocking anymore as have had a good play with it and think I know the sweet spot. Only drivers will bring the card (my card anyway) any further in performance.
> 
> Overall I'm happy with, though I think the cards really were released at clocks which are at the limit of what they can do. Pretty happy my scores are with undervolt and can be used for 24/7 gaming.


Nice scores. What clocks have you achieved while running Superposition and Firestrike?

Below are my results so far while playing with WattMan.




Achieved GPU max clocks with Superposition: 1692MHz
Achieved GPU max clocks with 3DMark: 1713MHz

Next step, I'm going to test UV and see the results for scores/clocks and temps...


----------



## futr_vision

Quote:


> Originally Posted by *kundica*
> 
> So.... My new Air 64 card arrived about an hour ago. Opened the shipping box from Newegg to find they sent me a used card. The Sapphire box seal was broken and the card covered in fingerprints. ***.


I've had Newegg do that crap before. Actually, I've had Amazon do it too. I don't know how they get away with it. Post a negative review.


----------



## futr_vision

Is there a preferred vendor for the Vega 64s? They all seem to be roughly the same price. I've had Sapphire cards in the past and they seem to be pretty good, but I'm wondering what the other companies bring to the table that would make one better than another.


----------



## springs113

Quote:


> Originally Posted by *kundica*
> 
> So.... My new Air 64 card arrived about an hour ago. Opened the shipping box from Newegg to find they sent me a used card. The Sapphire box seal was broken and the card covered in fingerprints. ***.


Was Newegg the seller? Make sure you take photos of it; I do unboxing videos because of crap like that. That sucks. I believe that's how one of my Zenith boards arrived too.


----------



## Newbie2009

Quote:


> Originally Posted by *majestynl*
> 
> Nice scores. What clocks have you achieved while running Superposition and Firestrike ?
> 
> Below my results so far while playing with WattMan
> 
> 
> 
> 
> 
> Achieved GPU max. clocks with SuperPosition: 1692Mhz
> Achieved GPU max. clocks with 3D Mark: 1713Mhz
> 
> Next step im going to test UV and see results for scores/clocks and temps...


I'd have to run FS again, but it hangs around 1700MHz ±5MHz for most of Superposition. Similar for FS, I think.

As requested majestynl. Good Luck!


----------



## kundica

Quote:


> Originally Posted by *ontariotl*
> 
> That really sucks. Man you are not having much luck.


Quote:


> Originally Posted by *futr_vision*
> 
> I've had Newegg do that crap before. Actually, I've had Amazon do it too. I don't know how they get away with it. Post a negative review.


Yeah. It gets worse. I also inquired about the refund on my first defective LC 64, and they said they issued me a gift-card credit. One, they haven't, and two, that wasn't part of the deal. When I initially sent my first LC 64 back for RMA, they told me that because they couldn't replace it (which is all I wanted at the time), they would do a full refund, but I would have to ship it back (along with the bundled items) and reorder if I wanted a new one.

My GF was convinced the faulty issues were because of Mercury retrograde. Well, we're past that now, lol!

Just spent the past hour on the phone. Hopefully everything is straight now, minus waiting on a new card to replace the opened one today.
Quote:


> Originally Posted by *springs113*
> 
> was newegg the seller? Make sure you take photos of it, I do unboxing videos because of crap like that. That sucks, I believe that's how one of my zenith boards were too.


Yeah, Newegg was the seller. I took photos. Not sure they help much.


----------



## springs113

Quote:


> Originally Posted by *kundica*
> 
> Yeah. It gets worse. I also inquired about the refund on my first defective LC 64 and they said they said they issued me a gift card credit. One, they haven't and two, wasn't part of the deal. When I initially sent my first LC 64 back for RMA they told me that because they couldn't replace it, which is all I wanted at the time, they would do a full refund but I would have to ship it(along with the bundled items) back and reorder if I wanted a new one.
> 
> My GF was convinced the faulty issues were because of Mercury Retrograde. Well, we're past that now, lol!
> 
> Just spent the past hour on the phone. Hopefully everything is straight now, minus waiting on a new card to replace the opened one today.
> Yeah, Newegg was the seller. I took photos. Not sure they help much.


That's why I mainly do videos; they show the item from sealed to unsealed, especially with retailers such as Newegg.


----------



## punchmonster

My Vega does 38MH/s with your settings, so it's not faster clock for clock. And I can push them to ~39.5 MH/s on gaming drivers.
Quote:


> Originally Posted by *erase*
> 
> I was already getting 36-37 with the stock Vega 56 BIOS with settings 852 core, -25 power, 2200 fan, 935 mem. Trick was to start miner, set settings the memory will run full speed, finally reboot and run miner, then everything works.
> This would get 2x Vega 56 at 69 Mh with wall power at 420w on old X79 platform.
> 
> Vega 64 only allow one card to go slight faster due to better mem overclock. The other card cannot overclock mem any better. Clock for clock the Vega 56 BIOS is faster.


----------



## erase

Quote:


> Originally Posted by *punchmonster*
> 
> My Vega does 38MH/s with your settings so it's not faster clock for clock. And I can push em to 39.5~ MH/s on gaming drivers.
> Quote:
> 
> 
> 
> Originally Posted by *erase*
> 
> I was already getting 36-37 with the stock Vega 56 BIOS with settings 852 core, -25 power, 2200 fan, 935 mem. Trick was to start miner, set settings the memory will run full speed, finally reboot and run miner, then everything works.
> This would get 2x Vega 56 at 69 Mh with wall power at 420w on old X79 platform.
> 
> Vega 64 only allow one card to go slight faster due to better mem overclock. The other card cannot overclock mem any better. Clock for clock the Vega 56 BIOS is faster.
Click to expand...

What card type/model, settings, BIOS and driver versions, and Claymore version? And did you run the miner for more than just a few minutes, like an hour or more?


----------



## Soggysilicon

Quote:


> Originally Posted by *Chaoz*
> 
> Thanks. Yeah, you should try it. It's quite easy, just takes some time to get it right. But it really pays off.


Kinda thinkin' I may pick up some RGB compatible with ASUS unicorn lighting setup and mod in with that... make it matchy' matchy' with the lighting on the mobo, definitely before the end of the year.

If that doesn't work out I'll come up with a circuit, I'll be sure to post schems' and pics here just in case its something others would like to try!


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Kinda thinkin' I may pick up some RGB compatible with ASUS unicorn lighting setup and mod in with that... make it matchy' matchy' with the lighting on the mobo, definitely before the end of the year.
> 
> If that doesn't work out I'll come up with a circuit, I'll be sure to post schems' and pics here just in case its something others would like to try!


Sounds cool. I mainly use white, but I might change it eventually to RGB if I get tired of white. Was just seeing how well it would look in person, before I take the big jump and make something more difficult for it.


----------



## Soggysilicon

Quote:


> Originally Posted by *Newbie2009*
> 
> It appears that benchmarks like high core cpu.


Well, the Sea Hawk is a strong card out of the box... tuned pretty hot for a custom loop. I can regularly hit 6.9k on my 64 ref with the LC BIOS on my 1800X; the only OC it has is 3.2k on the RAM, which also OCs the fabric. High-res tests/gaming generally show a dependence on the graphics card, with low res (1080p) and below showing the advantages of higher CPU clocks and lower core counts (plenty of old discussion to be had here from Ryzen's release).

Suffice to say, this being a 4K test, a high-clocked CPU is doing more yawning than anything. Maybe it gives somethin'? But it's diminishing returns.

Now, this 1080 is going to boost its GPU core significantly, and therein it's going to make some difference. A good number of 1080s and Tis are capable of sustaining a 2k clock under a proper loop. I'm pretty sure that with some more registry tweaking the Vega can boost up to the upper 1700s without crashing, which would bring the scores more in line with this one... but at a certain point I think we're drag racing. The wattage and heat make it sort of a 'meh' proposition, and there's no getting around the HBM heat bump you get with core bumps.









I do think Vega, like a lot of AMD stuff, will take LN2 or a direct phase change very well; so there is that.


----------



## Soggysilicon

Quote:


> Originally Posted by *gupsterg*
> 
> Yeah FreeSync is all I really miss or makes me want to get VEGA.
> 
> Hopefully I'll find one on good deal near Crimbo/Black Friday.
> The C6H allowed you to knock out cores on Ryzen. I would assume ZE has same feature for ThreadRipper, I will check.
> 
> For now view this link with a member who has R5 1500X and knows how to OC nVidia card (which I yet have not had time to do).


On monitors, I really like the CF791: 100Hz, "Ultimate Engine" 48-100Hz FreeSync. Some folks have squawked about flicker, but on the tuned 64 it's "rare" and actually predictable (a single image, or images being upscaled to res on a static screen, usually in a menu, normally accompanied by the tell-tale coil whine)... I strongly suspect it's a phasing issue internal to the monitor's FPGA. The next-gen Sammies look really promising as well. My screen had 0 dead pixels. Quality is top notch, colors are excellent. It has speakers... but why?

I'm pretty sure I could lock the 1800X at 4.2... I just don't predict it would make much of a difference. I just wish the P-states for XFR were a little more generous out of the box "with proper cooling"... not sure if AMD and I are on the same page here!


----------



## Soggysilicon

Quote:


> Originally Posted by *laczarus*
> 
> Can't seem to break the 4800 mark here
> Any p7 clock set past 1682 leading to a crash in Superposition, no matter the voltage
> Think I've reached the limit of my Vega 56 (64 bios), and I'm using the Morpheus II cooler on it.
> 
> P7 1682MHz @ 1120mV | HBM 1100MHz @ 950mV | Power limit +50%


Mind posting your 4k optimized result? The 1080 doesn't tell the whole story. Thx!


----------



## Soggysilicon

Quote:


> Originally Posted by *SpaceGorilla47*
> 
> someone else noticed a new Task from AMD since 17.9.1?
> Clogging 6% CPU-Power and doing some background magic:


I suspect it's related to this...

https://en.wikipedia.org/wiki/PowerPC

Many of the processors in my work use this for a myriad of tasks, and Vega is set to launch in a new Apple productivity platform sometime in the future... so... it wouldn't surprise me?


----------



## Soggysilicon

Quote:


> Originally Posted by *SAN-NAS*
> 
> Well, I sold my GTX 1070 for $4 more than what I paid for it (thanks, miners). Bought the RX 56 and it was $18 more than what I sold the GTX 1070 for. Basically, I'm doing another side upgrade so I can play with new stuff. The RX 56 has been great even with crap drivers. Most things were about the same performance as my OC'd 1070, while running stock on the 56.
> 
> I decided to go ahead and flash the card, as I was not able to get over 1500MHz on the core and the HBM2 RAM didn't like anything over 800MHz. I used the air version BIOS of the RX 64, not the liquid. It took without any issues. Now when I run turbo mode it clocks around 1560MHz and the RAM is at 945MHz. I've been able to OC into the 1600MHz range, but I'm not sure of its limits; I do know that 1700MHz is a no-go. If I can figure out the limits, I might try to flash to the liquid version, as it makes even more watts available, but I'd need to test the current heat and the fan profiles to keep it under 80°C.
> 
> Really a neat card, and I can't believe I have an all-AMD system again... Super happy about that!


PSA: if I recall, Buildzoid said the LC BIOS has a "shutdown" at or around 70-75°C on the core... just so you're aware.


----------



## Soggysilicon

Quote:


> Originally Posted by *pmc25*
> 
> The HBM voltage is not HBM voltage. It's something to do with GPU core voltage.
> 
> If you're setting it at that, in my experience, then the P7 core voltage will reach nothing even close to 1120mV.
> 
> Therefore your clocks / throttling will also be much lower.
> 
> Your score reflects that.
> You may need to check back in 6-12 months.
> 
> The drivers are so basic, developer work is required too, and I get the feeling half these features and support for them are a dry run for NAVI.


I think the same thing; Vega may very well be the last commercial-level monolithic chip from AMD/Radeon graphics.









"Gluing chips together" is going to be the future... if for no other reason than better yields...


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Well the Sea Hawk is a strong card out of the box... tuned pretty hot for a custom loop. I can regularly hit 6.9k on my 64 ref w/ LC bios on my 1800X, the only OC it has is 3.2k for the ram which does OC the fabric... high res test / gaming generally show a dependence on the gfx card, with low res (1080) and below showing the advantages of higher cpu clocks and lower core counts (plenty of old discussion to be had here with Ryzens release).
> 
> Suffice to say this being a 4k test, a high clocked cpu is doing more yawning than anything. Maybe it gives sumthin'? but its diminishing returns.
> 
> Now this 1080 is going to boost its gpu core significantly, and there in is going to make some difference. A good number of 1080s and Tis are capable of sustaining a 2k clock under a proper loop. I am pretty sure with some more tweaking with the registry the Vega can boost up to upper 1700s without crashing, which would bring the scores more in line with what this score is... but at a certain point I think we are drag racing. The wattage n' heat makes it sorta a 'meh' proposition, and there is no getting around the HBM heat bump one gets with core bumps.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I do think Vega, like a lot of AMD stuff, will take LN2 or a direct phase change very well; so there is that.


Quote:


> Originally Posted by *Soggysilicon*
> 
> On monitors, I really like the CF791, 100 Hz, "Ultimate Engine" 48-100hz Freesync... some folks have squawked for flicker, but on the 64 tuned its "rare" and actually predictable (single image or images being upscaled to res on a static screen, usually in a menu, normally accompanied by the tale-tale coil whine)... I strongly suspect its a phasing issue internal to the monitor FPGA. The next gen. Sammies look really promising as well. Screen was 0 dead px. Quality is top notch, colors are excellent. It has speakers... but why?
> 
> I am pretty sure I could lock the 1800x at 4.2... I just don't predict it would make much of a difference, I just wish the P states for XFR where a little more generous out of the box "with proper cooling"... not sure if AMD and I are on the same page here!


Quote:


> Originally Posted by *Soggysilicon*
> 
> Mind posting your 4k optimized result? The 1080 doesn't tell the whole story. Thx!


Quote:


> Originally Posted by *Soggysilicon*
> 
> I suspect its related to this...
> 
> https://en.wikipedia.org/wiki/PowerPC
> 
> Many of the processors in my work utilize this for a myriad of various task, Vega is set to launch in a new Apple productivity platform sometime in the future... so... wouldn't surprise me?


Quote:


> Originally Posted by *Soggysilicon*
> 
> PSA, If I recall Bzoid said the LC bios had a "shutdown" at or around 70~75c on the core... just so your aware.


Quote:


> Originally Posted by *Soggysilicon*
> 
> I think the same thing, Vega may very well be the last commercial level monolithic chip set from AMD/Radeon GFX.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> "Gluing Chips Together" is going to be the future... if for any reason, better yields...


Multi-quoting exists for this reason, though, so you don't have to make six posts, each with one quote. Just click the multi-quote button on the posts you want to quote, then click Quote on the last one. It makes everything much easier to read, so people don't have to scroll an entire page.


----------



## shadowxaero

Buildzoid discovered that the "memory voltage" has a direct effect on vcore. Even if you have vcore set at 1.25V, it won't run at that voltage if the memory voltage is too low. He called the memory voltage the "vcore floor" or something like that. And sure enough, I tested it: bumping up that memory voltage, I can sustain 1720+ clocks. The problem for me is that whenever I set the memory voltage higher than 1.050V, HBM gets stuck at 800MHz, but it runs fine at 1105MHz when the voltage is <= 1.050V.

I need someone to confirm whether this is a problem with WattMan on the 17.9.1 drivers.


----------



## Soggysilicon

Quote:


> Originally Posted by *Chaoz*
> 
> Multi-quoting exists for this reason, tho. So you don't have to make 6 posts with each a quote. Just click the multi button of the posts you want to quote and click quote of the last one. Makes everything much easier to read, so people don't have to scroll an entire page.


Ah, Neat! Thanks for sharing! +1


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Ah, Neat! Thanks for sharing! +1


Np. Makes everything much easier.


----------



## punchmonster

You're pumping extra heat into your HBM; what do you expect?
Quote:


> Originally Posted by *shadowxaero*
> 
> Buildzoid discovered that "memory voltage" has a direct effect on vcore voltage. Even if you have vcore set at 1.25v, it wont run at the voltage if the memory voltage is to low. He called the memory voltage "vcore floor' or something like that. And sure enough I tested, bumping up that memory voltage I can sustain 1720+ clocks...problem for me is, when ever I set the memory voltage high than 1.050v, HBM gets stuck at 800Mhz but runs fine a 1105Mhs </= 1.050v.
> 
> Need someone to confirm is this is a problem with wattman on the 17.9.1 drivers.


----------



## Newbie2009

Quote:


> Originally Posted by *punchmonster*
> 
> You're pumping extra heat to your HBM what do you expect?


Yeah, I noticed this issue.


----------



## Ne01 OnnA




----------



## shadowxaero

Quote:


> Originally Posted by *punchmonster*
> 
> You're pumping extra heat to your HBM what do you expect?


See, the thing is, my HBM and core both stay under 50°C... perks of EK and whatnot.


----------



## punchmonster

Quote:


> Originally Posted by *shadowxaero*
> 
> See the thing is, my HBM and Core both stay under 50C...perks of EK and what not.


You're also pushing more voltage to the memory controller itself, and it might not like it. There's no reason to raise the vcore floor anyhow.


----------



## laczarus

Quote:


> Originally Posted by *Soggysilicon*
> 
> Mind posting your 4k optimized result? The 1080 doesn't tell the whole story. Thx!


Here it is, 4K Optimized at the same settings:
P7 1682MHz @ 1120mV | HBM 1100MHz @ 950mV | Power limit +50%


----------



## pmc25

Quote:


> Originally Posted by *shadowxaero*
> 
> Buildzoid discovered that "memory voltage" has a direct effect on vcore voltage. Even if you have vcore set at 1.25v, it wont run at the voltage if the memory voltage is to low. He called the memory voltage "vcore floor' or something like that. And sure enough I tested, bumping up that memory voltage I can sustain 1720+ clocks...problem for me is, when ever I set the memory voltage high than 1.050v, HBM gets stuck at 800Mhz but runs fine a 1105Mhs </= 1.050v.
> 
> Need someone to confirm is this is a problem with wattman on the 17.9.1 drivers.


He didn't discover it.

I posted about it 3 weeks ago. A few others then confirmed it.


----------



## rv8000

Quote:


> Originally Posted by *Tgrove*
> 
> There's no difference. My monitor is a Korean brand with FreeSync, the only choice you have for 4K FreeSync over 43 inches. The FreeSync range is 40-60Hz like all the other 4K FreeSync monitors, but I lowered it to 33-60Hz.
> 
> If you get one with a 48-75Hz range, you can lower it to 37-75Hz (a guarantee). That activates LFC, and then you're covered at any framerate.
> 
> The official FreeSync tag just means it was certified by AMD, i.e. if you want the FreeSync tag on your product. The actual FreeSync tech is the same in all monitors; it's baked into either the DisplayPort 1.2a or HDMI 2.1 spec.


I have the new X342CK from Acer (the 48-75Hz one), and while I haven't tried custom resolutions/refresh rates with my Vega card, my monitor would lose signal below 40Hz, and at 38-39Hz would black in and out when I tried playing games with FreeSync enabled (the desktop was fine; I was also using a 290 at the time). I'm not sure if this was related to the 290, but I would like to know how 37Hz is a guarantee? Is there some specific guide for CRU and this monitor where someone details the specifics of making the resolutions?


----------



## 99belle99

Quote:


> Originally Posted by *rv8000*
> 
> I have the new X342CK from acer (48-75hz one), and while I haven't tried custom resolutions/refresh with my Vega card, my monitor would lose signal below 40hz and at 38-39 would black in an out when I tried playing games with freesync enabled (desktop was fine, was also using a 290 at the time). I'm not sure if this was related to the 290, but I would like to know how 37hz is a guarantee? Is there some specific guide for CRU and this monitor where someone details out specifics in making the resolutions?


That's a strange one. What card do you have now and do the same symptoms occur with the new card?


----------



## Tgrove

Quote:


> Originally Posted by *rv8000*
> 
> I have the new X342CK from acer (48-75hz one), and while I haven't tried custom resolutions/refresh with my Vega card, my monitor would lose signal below 40hz and at 38-39 would black in an out when I tried playing games with freesync enabled (desktop was fine, was also using a 290 at the time). I'm not sure if this was related to the 290, but I would like to know how 37hz is a guarantee? Is there some specific guide for CRU and this monitor where someone details out specifics in making the resolutions?


Sounds like your monitor is messed up. All the screens with a 48-75Hz range can be lowered to at least 37-75Hz.

http://www.overclock.net/t/1613642/increasing-freesync-range-advice#post_25585381
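For context on why widening the range to 37Hz matters: LFC only engages when the monitor's maximum refresh is a large enough multiple of its minimum, so lowering the bottom of the range is what unlocks it. A small sketch of the check; the 2.0 ratio is the commonly reported driver threshold, not something stated in this thread (AMD material has also cited 2.5), so treat it as an assumption:

```python
def lfc_capable(min_hz: float, max_hz: float, ratio: float = 2.0) -> bool:
    """True if a FreeSync range is wide enough for Low Framerate
    Compensation, i.e. max refresh >= ratio * min refresh."""
    return max_hz >= ratio * min_hz

print(lfc_capable(48, 75))  # -> False (75/48 ~ 1.56, stock range)
print(lfc_capable(37, 75))  # -> True  (75/37 ~ 2.03, extended range)
```

With LFC active, the driver can multiply frames below the minimum refresh, which is why Tgrove says you're then covered at any framerate.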


----------



## redshoulder

Just ordered a sapphire vega 64, I use dvi at the moment, does it come with a displayport cable?


----------



## Trender07

Quote:


> Originally Posted by *redshoulder*
> 
> Just ordered a sapphire vega 64, I use dvi at the moment, does it displayport cable?


Not really; AFAIK it's 1x HDMI plus DisplayPort.


----------



## rancor

Quote:


> Originally Posted by *redshoulder*
> 
> Just ordered a sapphire vega 64, I use dvi at the moment, does it displayport cable?


If you are asking whether it comes with a DisplayPort cable, it does not.

If you are asking whether it supports DVI: it only supports single-link DVI via HDMI or DP adapters (it does not come with one). You need an active DP to DL-DVI adapter if you want to run 2560x1440 over DVI.


----------



## Newbie2009

Quote:


> Originally Posted by *redshoulder*
> 
> Just ordered a sapphire vega 64, I use dvi at the moment, does it displayport cable?


Pro tip, buy a new monitor or cancel your order.

OR be willing to spend $100 on an active adapter which are expensive and very unreliable.

Speaking from experience, save yourself the pain.


----------



## redshoulder

Quote:


> Originally Posted by *Newbie2009*
> 
> Pro tip, buy a new monitor or cancel your order.
> 
> OR be willing to spend $100 on an active adapter which are expensive and very unreliable.
> 
> Speaking from experience, save yourself the pain.


Typo, will order a displayport cable


----------



## shadowxaero

Quote:


> Originally Posted by *pmc25*
> 
> He didn't discover it.
> 
> I posted about it 3 weeks ago. A few others then confirmed it.


Oh lol, sorry, I was just unaware then.

Quote:


> Originally Posted by *punchmonster*
> 
> you're also pushing more voltage to the memory controller itself and it might not like it. There is no reason to raise vcore floor anyhow.


If I didn't see noticeably higher sustained clock speeds I wouldn't bother raising it, but since I am getting 50 to 60MHz higher, I see a reason to raise it lol.

Also, it appears that voltage doesn't affect the memory controller; it affects vcore. @buildzoid, and I assume @pmc25 (who found this weeks ago, credit where credit is due lol), noted that raising that slider doesn't actually affect the memory voltage at all.


----------



## chris89

Let's compare Vega to, say, the 390X. AIDA64 GPGPU test, CPU & GPU: 1,120 frames per second on Julia with the 390X.

Let's get some AIDA64 GPGPU tests rolling, dudes! Let's check the true output.

You have to rename each file for this to work... this was hecka tricky... Let's see what the Mandel FPS is on your GPUs, bros.

*Rename to reflect ............ .zip.001 & .zip.002 & .zip.003 & .zip.004 & .zip.005 & .zip.006*

aida001.zip 3072k .zip file


aida002.zip 3072k .zip file


aida003.zip 3072k .zip file


aida004.zip 3072k .zip file


aida005.zip 3072k .zip file


aida006.zip 1603k .zip file
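The renaming and re-joining described above can be scripted rather than done by hand. A sketch, assuming the six attachments were saved as aida001.zip through aida006.zip in the current directory:

```python
import glob
import os
import re

# Rename aida001.zip .. aida006.zip to aida.zip.001 .. aida.zip.006,
# then concatenate the parts back into a single archive.
parts = []
for name in sorted(glob.glob("aida0*.zip")):
    num = re.search(r"aida(\d{3})\.zip$", name).group(1)
    part = f"aida.zip.{num}"
    os.rename(name, part)
    parts.append(part)

with open("aida.zip", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            out.write(f.read())
```

Split-archive tools (7-Zip, etc.) can also join `.zip.001`-style parts directly once they're renamed; the concatenation step is only needed if you want a single file.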


----------



## GroupB

Quote:


> Originally Posted by *shadowxaero*
> 
> Oh lol, Sorry I was just unaware then.
> If I didn't see noticeably higher sustained clock speeds I wouldn't bother raising it, but since I am getting 50 to 60MHz higher, I see a reason to raise it lol.
> 
> Also it appears that that voltage doesn't effect the memory controller, it effects vcore. @buildzoid and I assume @pmc25 (who found this weeks ago, credit where credit is do lol) noted raising that slider actually doesn't effect the memory voltage at all.


It's weird how people nowadays assume YouTubers are the best source of info... I have nothing against buildzoid; he does his thing with electronics, and it's way better than a random YouTuber taking info from a source and just pushing it back out for views. It's kind of a refreshing way to talk about GPUs. But it looks to me like the new generation doesn't take the time to read anymore; YouTube is their only source of info...

If you really want to know what can be done with overclocking/undervolting, it's here you should be looking: reading posts from random users testing their CPUs/GPUs, finding weird things, hard-modding the thing, starting a discussion about something, and getting together to create a tool.

Sorry, it's kind of a rant, I guess, but I've seen too much "oh, this YouTuber did this" or "this one discovered that" lately all around forums and Reddit, when in fact they only report what they read on technical forums/websites, or replicate tests already done by many people all around the web. I don't blame the YouTubers themselves so much as their audiences for spreading misinformation.

A note on those IMC voltages, sorry to be bold, but the way it behaves has been a known thing for anyone underclocking since the Polaris cards; talk to any crypto miner (they all underclock) and they've known this for a fact since the Polaris/WattMan release.


----------



## PontiacGTX

https://www.hardocp.com/article/2017/09/12/radeon_rx_vega_64_vs_r9_fury_x_clock_for/5


----------



## shadowxaero

Quote:


> Originally Posted by *GroupB*
> 
> It's weird how people nowadays assume YouTubers are the best source of info... I have nothing against buildzoid; he does his thing with electronics and it's way better than a random YouTuber taking info from a source and just pushing it back out for views. It's kind of a refreshing way to talk about GPUs. To me it looks like the new generation of people don't take the time to read anymore; YouTube is their only source of info...
> 
> If you really want to know what can be done with overclocking/underclocking, it's here you should be looking: reading posts from random users testing their CPUs/GPUs, finding weird things, hard-modding the thing, starting a discussion about something and getting together to create a tool.
> 
> Sorry, it's kind of a rant I guess, but I've seen too much "oh, this YouTuber did this" or "this one discovered that" lately all around forums and reddit, when in fact they only report what they read on technical forums/websites or replicate tests already done by many people all around the web. I don't blame the YouTubers themselves so much as their audience for spreading misinformation.
> 
> A note on that IMC voltage, sorry to be bold, but the way it behaves has been a known thing for anyone underclocking since the Polaris cards; talk to any crypto miner (they all underclock) and they've known this for a fact since the Polaris/WattMan release.


I don't think YouTubers are the best source of information. I just happened to watch a video by buildzoid in which he revealed some information I was unaware of. When I found out a user here had found the same info weeks ago, I corrected myself.

As for my voltage issue, I just wanted to know if anyone had experienced the same issue I am having or if everything was working properly... which no one has yet confirmed v.v

I am not too concerned with underclocking or undervolting in practice (wouldn't have bought a water block if I was lol); I just wanted to know if others have had HBM downclock to 800MHz (which seems to be one of HBM2's states on Vega) when exceeding 1.050v.

Similar to how, if you set P-states 6 and 7 to the same clocks, you get stuck at P-state 5.

My memory clock gets stuck at 800MHz when I exceed 1.050v; even just going to 1.060v gets me stuck at 800MHz.


----------



## GroupB

Did you try locking the state of the HBM? Setting your state 3 as min/max?

Some games I play never reach the high HBM state and get locked in a low state; Arma 3 does that. I have to lock the HBM in state 3 and then it's all good.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *shadowxaero*
> 
> The problem for me is, whenever I set the memory voltage higher than 1.050v, HBM gets stuck at 800MHz, but it runs fine at 1105MHz at </= 1.050v.
> 
> Need someone to confirm if this is a problem with WattMan on the 17.9.1 drivers.


Quote:


> Originally Posted by *shadowxaero*
> 
> As for my voltage issue, I just wanted to know if anyone had experienced the same issue I am having or if everything was working properly... which no one has yet confirmed v.v
> 
> I am not too concerned with underclocking or undervolting in practice (wouldn't have bought a water block if I was lol); I just wanted to know if others have had HBM downclock to 800MHz (which seems to be one of HBM2's states on Vega) when exceeding 1.050v.
> 
> Similar to how, if you set P-states 6 and 7 to the same clocks, you get stuck at P-state 5.
> 
> My memory clock gets stuck at 800MHz when I exceed 1.050v; even just going to 1.060v gets me stuck at 800MHz.


I noticed similar issues (I'm on 17.9.1) when I increased the HBM voltage (or whatever it is). All things considered I could run 1100MHz with it set to 950mV, although it seemed to cause stability problems, likely due to temperature (I think I have it running 1050MHz stable @ the stock 950mV). The memory downclocking seemed to happen automatically when I raised it above 1000mV, but unfortunately I don't recall where exactly I noticed this, and honestly I'm not even sure I was loading the card at the time, so it may have just backed off to a lower P-state. Unfortunately I am at work where I don't have my Vega 64 LC, and at home I don't have internet at the moment thanks to Irma.









Is this downclocking happening even while the card is loaded? I can try to mess with mine and update when I get the internet back.

Edit: I get HBM downclocking under load to 800MHz at anything over 1050mV in WattMan; the 167 and 500MHz P-states are still present. At anything below 1050mV I can hit the target up to 1100MHz HBM speed. Since I get stability issues beyond 1050MHz I just leave it at 1050MHz and 950mV and it works great.


----------



## chris89

You could probably save 20C on the core & HBM if you changed HBM states 2 and 3 (since it's 0,1,2,3, four states) to 626MHz. It'll auto-operate at its ultra-low voltage, with the HBM at just 50-60C under load. The core sees less throttling and can handle more clock, throttle-free. The difference at 4K/5K full tilt is a minimal FPS difference; once you get the core clocked high enough, the higher core clock can make up for the lower memory clock.

Try RADEON PRO TOOLS to adjust the memory down to 626MHz and see if the HBM temperature changes... then we'd know it's using less voltage, which should yield less core clock throttling.

I can't quite find the proper hex offset for the 945MHz memory state yet, but the other locations should be nearby... 800MHz etc.

I can help mod the Vega hex to do this... if anyone is willing to try & show how much cooler it runs?

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios

Just needs this changed to ...

50,000 is 500MHz (the tables store clocks in 10kHz units), which is hex C3 50, byte-swapped to 50 C3 00 00

945MHz is 483.8GB/s, so 945 / 483.8 = 1.95328

500 / 1.95328 = 255.97GB/s, basically 256GB/s

*626 / 1.95328 = 320.48GB/s

626MHz = 62,600 = F4 88, byte-swapped to 88 F4 00 00*

So we just have to change state 2 (#3, 800MHz) and state 3 (#4, 945MHz) to 626MHz, which is 320GB/s: still plenty of performance on nearly half the heat.

500MHZ is 50 C3 00 00
511MHZ is
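If anyone wants to generate these encodings for other clocks, here's a quick sketch. It assumes the `ulMemClk` fields are little-endian ULONGs in 10kHz units (which matches every value quoted in this post) and that bandwidth scales linearly with memory clock on the 2048-bit bus:

```python
import struct

def mclk_bytes(mhz: int) -> str:
    """Encode a memory clock as a PowerPlay ulMemClk field:
    10 kHz units, packed as a little-endian ULONG."""
    return struct.pack("<I", mhz * 100).hex(" ").upper()

def bandwidth_gbs(mhz: int, bus_bits: int = 2048) -> float:
    """HBM2 bandwidth: clock x2 (double data rate) times bus width in bytes."""
    return mhz * 2 * (bus_bits // 8) / 1000

print(mclk_bytes(500))     # 50 C3 00 00, matching the stock 500MHz state
print(mclk_bytes(626))     # 88 F4 00 00, the proposed 626MHz state
print(mclk_bytes(511))     # 9C C7 00 00
print(bandwidth_gbs(945))  # 483.84, i.e. the stock 483.8GB/s
```

That last line also recovers the 1.95328 MHz-per-GB/s ratio used above, so the 626MHz = 320GB/s figure checks out.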

*The issue for the moment is that there's no way to correct the checksum. Anyone know how???*

*Open your .ROM in HxD & search the hex for 01 04 3C 41 00 00 00 00 00 50 C3 00 00 00 00 00 80 38 01 00 02 00 00 24 71 01*
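On the checksum question: a common approach (what the Polaris-era BIOS editors do) is to just recompute the single checksum byte after hex editing. A sketch, assuming the usual AtomBIOS convention (image size in 512-byte blocks at offset 0x02, checksum byte at offset 0x21, all image bytes summing to 0 mod 256); verify those offsets against your own dump before flashing anything:

```python
def fix_atom_checksum(rom: bytearray) -> bytearray:
    """Recompute the AtomBIOS checksum byte so all image bytes sum to 0 mod 256.
    Assumes size-in-512-byte-blocks at offset 0x02, checksum byte at 0x21."""
    size = rom[0x02] * 512                 # image length in bytes
    rom[0x21] = 0                          # clear the old checksum
    rom[0x21] = (-sum(rom[:size])) & 0xFF  # make the byte sum wrap to zero
    return rom

# Demo on a dummy 1 KB "image" (in practice: read your .rom into a bytearray,
# make your hex edits, run this, write it back out).
rom = bytearray(1024)
rom[0x02] = 2        # 2 * 512 = 1024 bytes
rom[0x100] = 0x5A    # pretend this byte was our clock edit
fix_atom_checksum(rom)
print(sum(rom) % 256)  # 0
```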

Sapphire-Vega-64-Air-Black-Stock-Switch-Position1.zip 136k .zip file


Sapphire-Vega-64-Air-Black-STATE-23-626MHZ-320GBs.zip 136k .zip file


typedef struct _ATOM_Vega10_MCLK_Dependency_Table {
01 UCHAR ucRevId;
04 UCHAR ucNumEntries; /* Number of entries. */
ATOM_Vega10_MCLK_Dependency_Record entries[1]; /* Dynamically allocate entries. */
} ATOM_Vega10_MCLK_Dependency_Table;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
3C 41 00 00 (167MHz) ULONG ulMemClk; /* Clock Frequency */
00 UCHAR ucVddInd; /* SOC_VDD index */
00 UCHAR ucVddMemInd; /* MEM_VDD - only non zero for MCLK record */
00 UCHAR ucVddciInd; /* VDDCI = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
*50 C3 00 00* (500MHz) ULONG ulMemClk; /* Clock Frequency */
01 UCHAR ucVddInd; /* SOC_VDD index */
00 UCHAR ucVddMemInd; /* MEM_VDD - only non zero for MCLK record */
00 UCHAR ucVddciInd; /* VDDCI = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
80 38 01 00 (*800MHz*) ULONG ulMemClk; /* Clock Frequency */
02 UCHAR ucVddInd; /* SOC_VDD index */
00 UCHAR ucVddMemInd; /* MEM_VDD - only non zero for MCLK record */
00 UCHAR ucVddciInd; /* VDDCI = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
24 71 01 00 (*945MHz*) ULONG ulMemClk; /* Clock Frequency */
03 UCHAR ucVddInd; /* SOC_VDD index */
00 UCHAR ucVddMemInd; /* MEM_VDD - only non zero for MCLK record */
00 UCHAR ucVddciInd; /* VDDCI = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;


----------



## deadman3000

I need some help. My RX Vega 56 (flashed as a 64 or not) has problems with the VDDC and memory clock getting stuck after using multiple monitors. The only way to reset it is to set the secondary and tertiary monitors to disconnected (a hotkey set to C:\Windows\System32\DisplaySwitch.exe /INTERNAL is the quickest method) and reboot Windows 10. Not only that, but the VDDC keeps changing and getting stuck higher than it should be when idle. The default VDDC is 0.750v with 167MHz HBM in single-monitor mode. In multi-monitor mode HBM stays at 500MHz until boosted in a 3D app, but the VDDC gets stuck in various states from 0.7688 up to 1.1v after exiting a 3D app, while idle! This causes a constant power draw on my system (tested at the wall): my usual power draw is ~90w, but stuck at 1.1v it draws a constant ~120w. Rebooting, or turning off the PC and letting the capacitors drain, sometimes brings it down to around 0.8375. But only setting it to single monitor and then rebooting brings it back down to the lowest VDDC. I don't expect it to be as low when multiple monitors are connected, obviously, but I do expect the power draw when idle to remain stable.

BTW, sometimes in multi-monitor mode HBM will drop to 167MHz after running a 3D app, which is odd in and of itself.


----------



## chris89

I think most of the issue is in the HBM clock and core clock states... It has to be known which clock it runs at 24/7 & which clock it can hold long enough for benchmarks, without throttling.

#0 - 852Mhz Idle Clock? 700-800mv idle?
#1 - 991Mhz @ 836mv
#2 - 1138Mhz @ 960mv
#3 - 1269Mhz @ 1070mv
#4 - 1348Mhz @ 1137mv
#5 - 1440Mhz @ 1215mv
#6 - 1528Mhz @ 1289mv
#7 - 1600Mhz @ 1350mv (18.5185%) or 1.185185

I'd set it like the 390X

0 - 300mhz
1 - 400mhz
2 - 500mhz
3 - 600mhz
4 - 700mhz
5 - *1250mhz* (*80 GPixel/s & 320 GTexel/s*) On 65286. Could run at this continuously at ultra-low temperature, depending on how low you set Max Temp & Hot Spot: 75C for 1250MHz, & 84C for short 1875MHz bursts & long runs @ 1563MHz.
6 - *1563mhz* (*100 GPixel/s & 400 GTexel/s*) On 65287. Runs at this 75% longer than 1875MHz, must eventually throttle down below it, & it'll go back & forth between 1250MHz and 1563MHz very fast.
7 - *1875mhz* (*120 GPixel/s & 480 GTexel/s*) On 65288. Runs at this for benchmarks only & throttles to the clock below when Max Temp, Hot Spot or Power Limit is reached. Remove the power limit & be limited only by when MAX TEMP & HOT SPOT are reached, by setting TDP MAX, TDC, TDP to 999.

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Table {
00 UCHAR ucRevId;
08 UCHAR ucNumEntries; /* Number of entries. */
ATOM_Vega10_GFXCLK_Dependency_Record entries[1]; /* Dynamically allocate entries. */
} ATOM_Vega10_GFXCLK_Dependency_Table;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
D0 4C 01 00 (852MHz) ULONG ulClk; /* Clock Frequency */
00 UCHAR ucVddInd; /* SOC_VDD index */
00 80 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
1C 83 01 00 (991MHz) ULONG ulClk; /* Clock Frequency */
01 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
88 BC 01 00 (1138MHz) ULONG ulClk; /* Clock Frequency */
02 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
B4 EF 01 00 (1269MHz) ULONG ulClk; /* Clock Frequency */
03 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
90 0E 02 00 (1348MHz) ULONG ulClk; /* Clock Frequency */
04 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
80 32 02 00 (1440MHz) ULONG ulClk; /* Clock Frequency */
05 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
E0 54 02 00 (1528MHz) ULONG ulClk; /* Clock Frequency */
06 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;

typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
00 71 02 00 (1600MHz) ULONG ulClk; /* Clock Frequency */
07 UCHAR ucVddInd; /* SOC_VDD index */
00 00 USHORT usCKSVOffsetandDisable; /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
00 00 USHORT usAVFSOffset; /* AVFS Voltage offset */
} ATOM_Vega10_GFXCLK_Dependency_Record;
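For anyone wanting to sanity-check an edit against the record layout quoted above, the records pack/unpack cleanly with Python's struct module. A sketch, assuming the fields sit back-to-back little-endian with no padding and that clocks are in 10kHz units (consistent with all the hex shown):

```python
import struct

# ULONG ulClk, UCHAR ucVddInd, USHORT usCKSVOffsetandDisable, USHORT usAVFSOffset
REC = struct.Struct("<IBHH")

def pack_gfxclk(mhz, vdd_ind, cksv=0, avfs=0):
    """Build one GFXCLK dependency record (clock stored in 10 kHz units)."""
    return REC.pack(mhz * 100, vdd_ind, cksv, avfs)

def unpack_gfxclk(raw):
    """Decode a record back to (MHz, vdd index, cksv, avfs)."""
    clk, vdd, cksv, avfs = REC.unpack(raw)
    return clk // 100, vdd, cksv, avfs

p7 = pack_gfxclk(1600, 7)       # the stock P7 record
print(p7[:4].hex(" ").upper())  # 00 71 02 00, as in the table
# P0 carries usCKSVOffsetandDisable = 00 80 (0x8000) in the table:
print(unpack_gfxclk(pack_gfxclk(852, 0, cksv=0x8000)))  # (852, 0, 32768, 0)
```

The 1440MHz state round-trips to 80 32 02 00 the same way, so a hex edit can be checked byte-for-byte before touching the ROM.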


----------



## rancor

Quote:


> Originally Posted by *chris89*
> 
> You could probably save 20C on the core & HBM if you changed HBM states 2 and 3 (since it's 0,1,2,3, four states) to 626MHz. It'll auto-operate at its ultra-low voltage, with the HBM at just 50-60C under load. The core sees less throttling and can handle more clock, throttle-free. The difference at 4K/5K full tilt is a minimal FPS difference; once you get the core clocked high enough, the higher core clock can make up for the lower memory clock.
> 
> Snip


Why do you keep posting about reducing HBM clocks? HBM power consumption and heat output are tiny (about 20W).

You cannot edit the Vega bios.


----------



## chris89

Quote:


> Originally Posted by *rancor*
> 
> Why do you keep posting about reducing HBM clocks? HBM power consumption and heat output is tiny (about 20W).
> 
> You cannot edit the Vega bios.


85C HBM temperatures with a 115C limit in the BIOS, and you say 20 watts? You must be teasing. We see 15-20% higher HBM temperature than the core. HBM is limiting performance on the big end.

You cannot know this until you try. I ALREADY KNOW THIS. I'll prove it later; until then I'll just sit back and see what y'all come up with in the meantime. I'm just pointing things in the proper direction. Helpful guidance, if you will.


----------



## Reikoji

I'm a happy owner of a Sapphire LC RX Vega 64. I haven't made any serious attempts to overclock, just been playing around with the power target. I'm not too keen on GPU overclocking either.


----------



## Reikoji

Quote:


> Originally Posted by *rancor*
> 
> Why do you keep posting about reducing HBM clocks? HBM power consumption and heat output is tiny (about 20W).
> 
> You cannot edit the Vega bios.


HBM2 power usage is indeed low, but the HBM2 temperature still gets higher than the GPU temp


----------



## rancor

Quote:


> Originally Posted by *chris89*
> 
> 85C HBM temperatures with a 115C limit in the BIOS, and you say 20 watts? You must be teasing. We see 15-20% higher HBM temperature than the core. HBM is limiting performance on the big end.
> 
> You cannot know this until you try. I ALREADY KNOW THIS. I'll prove it later; until then I'll just sit back and see what y'all come up with in the meantime. I'm just pointing things in the proper direction. Helpful guidance, if you will.


You do that


----------



## pmc25

Quote:


> Originally Posted by *chris89*
> 
> 85C HBM temperatures with a 115C limit in the bios, 20 Watts? must be teasing. We see 15-20% more HBM temperature than the Core. HBM is limiting performance on the BIG-END.
> 
> Cannot know this until you try. I ALREADY KNOW THIS. I'll prove it later, until then I'll just sit back and see what yall come up with in the mean time. I'm justing setting in the proper direction. Helpful Guidance, if you will.


By far the biggest effect on HBM temperature is GPU core voltage and clock speed, and therefore heat dissipation.

Given unchanging core clocks and voltages, HBM clocks being progressively raised doesn't significantly increase HBM temperatures (and much less so GPU temperatures).

Underclocking HBM reduces performance in virtually any workload, some hugely.

You want HBM as high as you can possibly get it.

If cooling is inadequate or you can't bear high fan noise, then it may need to be 'only' 1050MHz, as above that clock most RX 64s will begin to get some instability once over 65C.

HBM timings seem to slip from 40C, but badly from 60C.

If your core clock and gpu core voltage are high enough to be making the whole package hot enough to cause HBM2 instability, then dial them back, not the HBM.


----------



## shadowxaero

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> I noticed similar issues (I'm on 17.9.1) when I increased the HBM voltage (or whatever it is). All things considered I could run 1100MHz with it set to 950mV, although it seemed to cause stability problems, likely due to temperature (I think I have it running 1050MHz stable @ the stock 950mV). The memory downclocking seemed to happen automatically when I raised it above 1000mV, but unfortunately I don't recall where exactly I noticed this, and honestly I'm not even sure I was loading the card at the time, so it may have just backed off to a lower P-state. Unfortunately I am at work where I don't have my Vega 64 LC, and at home I don't have internet at the moment thanks to Irma.
> 
> Is this downclocking happening even while the card is loaded? I can try to mess with mine and update when I get the internet back.
> 
> Edit: I get HBM downclocking under load to 800MHz at anything over 1050mV in WattMan; the 167 and 500MHz P-states are still present. At anything below 1050mV I can hit the target up to 1100MHz HBM speed. Since I get stability issues beyond 1050MHz I just leave it at 1050MHz and 950mV and it works great.


Can you test this? I think it has already been confirmed by most (including myself): max out your HBM voltage and check to see if you get higher sustained core clock speeds. I get 1720ish+ running with the HBM slider at 1.25v, but because the memory gets stuck at 800MHz it becomes rather moot.

Guess it is a driver bug after all; thanks for confirming that HBM gets stuck at 800MHz. I am running insider previews, I have the Vega 64 AIO BIOS flashed on my 64 Air, and I'm using the edited PowerPlay tables, so there are a lot of variables at play. Figured it would be easier to rule out potential causes if someone else is having the same problems I am lol; means I didn't eff anything up haha.


----------



## rancor

Quote:


> Originally Posted by *Reikoji*
> 
> HBM2 power usage is indeed low, but the HBM2 temperature still gets higher than the GPU temp


True, and HBM temps will likely always be above core temps as long as there is some power being dissipated in them. The heat has to go somewhere, and it will need a positive gradient to the surrounding materials being heated by the GPU die.

The HBM isn't necessarily hotter than parts of the GPU. While the reported GPU temperature is below the HBM's, the GPU "hotspot" is well above it. For example: GPU temp 32C, HBM 37C, GPU hot spot 55C.
Quote:


> Originally Posted by *shadowxaero*
> 
> Can you test this? I think it has already been confirmed by most (including myself): max out your HBM voltage and check to see if you get higher sustained core clock speeds. I get 1720ish+ running with the HBM slider at 1.25v, but because the memory gets stuck at 800MHz it becomes rather moot.
> 
> Guess it is a driver bug after all; thanks for confirming that HBM gets stuck at 800MHz. I am running insider previews, I have the Vega 64 AIO BIOS flashed on my 64 Air, and I'm using the edited PowerPlay tables, so there are a lot of variables at play. Figured it would be easier to rule out potential causes if someone else is having the same problems I am lol; means I didn't eff anything up haha.


It's also possible you are getting higher clocks because the GPU isn't being stressed as much with the lower HBM clocks.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *shadowxaero*
> 
> Can you test this? I think it has already been confirmed by most (including myself): max out your HBM voltage and check to see if you get higher sustained core clock speeds. I get 1720ish+ running with the HBM slider at 1.25v, but because the memory gets stuck at 800MHz it becomes rather moot.


Yes, I noticed the same; I was hitting higher core speeds at ~1720 with the HBM voltage increased above 1050mV. Normally with it at 950mV I get around 1670-1690. I ended up setting P7 down to 1697MHz because of this, but now it makes me wonder if I can squeeze a bit more sustained core clock by keeping the HBM @ 1050mV. Guess I'll give it a shot since I'm stuck on my phone for internet right now!


----------



## Tgrove

Quote:


> Originally Posted by *pmc25*
> 
> By far the biggest effect on HBM temperature is GPU core voltage and clock speed, and therefore heat dissipation.
> 
> Given unchanging core clocks and voltages, HBM clocks being progressively raised doesn't significantly increase HBM temperatures (and much less so GPU temperatures).
> 
> Underclocking HBM reduces performance in virtually any workload, some hugely.
> 
> You want HBM as high as you can possibly get it.
> 
> If cooling is inadequate or you can't bear high fan noise, then it may need to be 'only' 1050Mhz, as above that most RX64s will begin to get some instability above 65C.
> 
> HBM timings seem to slip from 40C, but badly from 60C.
> 
> If your core clock and gpu core voltage are high enough to be making the whole package hot enough to cause HBM2 instability, then dial them back, not the HBM.


Temp is a huge limiting factor for HBM. That's why I keep my core at 55C max temp. Do you think the timings tighten up under 50C?


----------



## pmc25

Quote:


> Originally Posted by *Tgrove*
> 
> Temp is a huge limiting factor for HBM. That's why I keep my core at 55C max temp. Do you think the timings tighten up under 50C?


I don't think we have a way of detecting timings, unfortunately.

But my experience is that above 40C performance tapers a little, then above 60C it quickly tapers off ... for the poor people on stock profiles hitting well above 75C, it's a major drop.


----------



## Reikoji

Quote:


> Originally Posted by *rancor*
> 
> True, and HBM temps will likely always be above core temps as long as there is some power being dissipated in them. The heat has to go somewhere, and it will need a positive gradient to the surrounding materials being heated by the GPU die.
> 
> The HBM isn't necessarily hotter than parts of the GPU. While the reported GPU temperature is below the HBM's, the GPU "hotspot" is well above it. For example: GPU temp 32C, HBM 37C, GPU hot spot 55C.
> It's also possible you are getting higher clocks because the GPU isn't being stressed as much with the lower HBM clocks.


I wonder if they thought about that when they made the cooling solutions. With some precision, a split in the heatsink between the GPU and memory could help with HBM2 temperature. Though, someone said that Samsung said HBM2's safe temp range goes up to something like 120C. Can't find the source myself though.


----------



## Chaoz

Finally received my DisplayPort cable today. Tried it immediately and it's so smooth; the GPU doesn't even go to 100% usage at 75Hz.


----------



## 99belle99

Quote:


> Originally Posted by *Reikoji*
> 
> I'm a happy owner of a Sapphire LC RX Vega 64. I haven't made any serious attempts to overclock, just been playing around with the power target. I'm not too keen on GPU overclocking either.


Where did you buy it from?


----------



## Reikoji

Quote:


> Originally Posted by *99belle99*
> 
> Where did you buy it from?


Ordered it from Newegg on launch day with the monitor, when the price was 'normal'....


----------



## Chaoz

Is this score any good? I can't find similar TS benches. Got it in Balanced mode with the CPU at 4.3GHz.


----------



## 99belle99

Quote:


> Originally Posted by *Reikoji*
> 
> Ordered it from Newegg on launch day with the monitor, when the price was 'normal'....


I was just wondering as I want one myself but I'm not willing to pay the inflated prices.


----------



## shadowxaero

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> Yes, I noticed the same; I was hitting higher core speeds at ~1720 with the HBM voltage increased above 1050mV. Normally with it at 950mV I get around 1670-1690. I ended up setting P7 down to 1697MHz because of this, but now it makes me wonder if I can squeeze a bit more sustained core clock by keeping the HBM @ 1050mV. Guess I'll give it a shot since I'm stuck on my phone for internet right now!


Gonna report it as a bug to AMD, hopefully they can fix it in the next few driver updates, as it is just extra performance waiting to be used at virtually no cost.


----------



## Reikoji

Quote:


> Originally Posted by *shadowxaero*
> 
> Gonna report it as a bug to AMD, hopefully they can fix it in the next few driver updates, as it is just extra performance waiting to be used at virtually no cost.


Me too actually. If I increase P6 and P7 voltage, the GPU clock runs slower than if I had just left it alone. Depending on the application, P6 and P7 seem to just be ignored, even at +50% power target in a lot of cases. I've been expecting my card to crash with the settings I've put in, but nothing has happened!

Maybe try rolling back to 17.8.x?

Then there is GPU-Z... the render test seems to be capable of pushing the GPU clock past the P7 setting. Still no crashy.


----------



## chris89

Quote:


> Originally Posted by *Reikoji*
> 
> HBM2 power usage is indeed low, but the HBM2 temperature still gets higher than the GPU temp


I'd say pull the GPU, pull the heatsink, and compare the height of the HBM to the GPU core. Take pictures for us. Maybe an extra 1/4mm gap?
Quote:


> Originally Posted by *rancor*
> 
> You do that


Yeah, I'd take a pic of the HBM compared to the height of the silicon GPU core die.


----------



## Soggysilicon

Quote:


> Originally Posted by *laczarus*
> 
> here it is, 4k Optimized at same settings
> P7 1682MHz @ 1120mV | HBM 1100MHz @ 950mV | Power limit +50%


Try rebooting your system; once you're up, go to WattMan and "reset" your settings, tune again, apply, and then run your benches. I am curious if you will see a difference. There seems to be an issue that "de-tunes" the boost clock (or AMD's version of it) if you let your monitor go to sleep or watch videos, and probably some other use cases as well. It is something I am able to reproduce consistently; wondering if it's the same for you... Just curious... for science!








Quote:


> Originally Posted by *PontiacGTX*
> 
> https://www.hardocp.com/article/2017/09/12/radeon_rx_vega_64_vs_r9_fury_x_clock_for/5


Interesting read, thanks for sharing!
Quote:


> Originally Posted by *chris89*
> 
> 85C HBM temperatures with a 115C limit in the BIOS, and you say 20 watts? You must be teasing. We see 15-20% higher HBM temperature than the core. HBM is limiting performance on the big end.
> 
> You cannot know this until you try. I ALREADY KNOW THIS. I'll prove it later; until then I'll just sit back and see what y'all come up with in the meantime. I'm just pointing things in the proper direction. Helpful guidance, if you will.


Not sure I am following this post... watts are a rate; I am not seeing a correlation between 85C and 115C based on "watts"... now, if one were to say that the stock cooler at such-and-such RPM dissipates such-and-such watts, therefore X watts means Y with respect to Z, sure...









Additionally, I'm guessing that the temp diode or comparator for the HBM temps is down on the lowest wafer next to the interposer... so its readings are going to be closer to the junction temperature, which is the way to go, as there is going to be a thermal gradient between the different chips... which is a great segue into Boltzmann... but at any rate, a chain is only as strong as its weakest link.
Quote:


> Originally Posted by *Reikoji*
> 
> I'm a happy owner of a Sapphire LC RX Vega 64. I haven't made any serious attempts to overclock, just been playing around with the power target. I'm not too keen on GPU overclocking either.


The AIO is clocked hot by virtue of it being the AIO... hence why it's popular to nick its BIOS and flash the air card with it. Changing that power target allows more power headroom for the "boost", which is OC'ing... so... you're sorta already OC'ing?
Quote:


> Originally Posted by *Reikoji*
> 
> HBM2 power usage is indeed low, but the HBM2 temperature still gets higher than the GPU temp


As mentioned above with a tweak, I doubt it's any "hotter"; more likely the reading is being taken closer to the interposer. Memory is temp sensitive, where noise can become a factor in cell stability. If there is indeed a feedback loop utilizing temperatures (to loosen timings, as an example), from a production standpoint one would want to take that sample from a convenient "worst case" position.
Quote:


> Originally Posted by *Chaoz*
> 
> Is this score any good? Can't find similar TS benches. Got it on Balanced mode and CPU on 4.3GHz.




I think you've still got some gas in the tank to get a little more out of your card!


----------



## Reikoji

Hey, I finally got it to crash! . . . but only cuz P6+ voltage settings seem to be ignored, hrm...


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> I think you've still got some gas in the tank to get a little more out of your card!


Okay, thanks. Will tinker with it some more later on.


----------



## Reikoji

Hrm... after some tinkering and ignoring what I see on screen, I managed this.


Probably low, idk.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *shadowxaero*
> 
> Gonna report it as a bug to AMD, hopefully they can fix it in the next few driver updates, as it is just extra performance waiting to be used at virtually no cost.


So, no internet made most games unplayable, so I used the Valley bench to do some FPS dyno pulls and see what I could accomplish. I got some interesting results with a lot of different settings; I'll post full results when I have a keyboard to type with. Here is the best I could accomplish stable after 2 runs (1 to warm up and one to bench). Now I'll let this run a few hours to make sure it doesn't crash, and get something to eat before I crash!

P7 1722/1200
P6 1667/1175
HBM 1100/950
FAN 400/3000
TEMP 70/65

Valley @ 1440p Ultra x8AA

56.4 AVG FPS
29.2 MIN FPS
112.6 MAX FPS
2359 SCORE

MAX GPU CORE: 1718 MHz
MAX TEMP: 66 C
MAX FAN: 1936 RPM


----------



## Reikoji

HBCC off:


HBCC on:


I'm pretty sure that didn't have any major effect; much better scores than I got previously, though... measly 7300s...

I think WattMan settings properly apply if you simply close and reopen Radeon Settings, as that's what I did to finally get the core clock running at what I set it to.


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> hrm... Some tinkering and ignoring what I see on screen, I managed this.
> 
> 
> probably low. idk.


You said you had the Sapphire LC; I think 6.9 is doable on 4K Optimized. Your HBM is going to be a hard limit, so it's best to figure out where it will simply just "flat out crash"... for me it's anything greater than 1105; anything beyond that is a reboot. 1105, however, is like a rock. From there you can find the power/frequency combination which is going to work best for you. OC'ing Vega is a little tedious, as its tendency is to simply crash the card, forcing a reboot. Even if it doesn't reboot and the driver stalls, you will need to reboot and reapply settings to ensure the target/registry/driver is set... Wattman is kinda "meh" in this regard. Good luck!
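The "figure out where it flat out crashes" search above is basically a bisection between a known-good and a known-bad clock. A toy sketch of the idea; the `is_stable` callback is hypothetical, since in practice it's you running a benchmark loop and watching for a crash or reboot:

```python
# Bisection search for the highest stable HBM clock, as a toy model of the
# manual tune-and-crash process described above. is_stable() is a stand-in
# for "run a benchmark at this clock and see if the card survives".

def find_hbm_limit(low, high, is_stable, step=5):
    """Return the highest clock (MHz, multiple of `step`) that passes is_stable."""
    while high - low > step:
        mid = (low + high) // 2 // step * step
        if mid <= low:           # avoid stalling on rounding
            mid = low + step
        if is_stable(mid):
            low = mid            # mid works; search higher
        else:
            high = mid           # mid crashes; search lower
    return low

# Simulated card that is rock solid up to 1105 MHz and reboots beyond:
print(find_hbm_limit(945, 1200, lambda mhz: mhz <= 1105))  # prints 1105
```

Far fewer benchmark runs than stepping up 5 MHz at a time, at the cost of a couple of deliberate crashes along the way.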


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> You said you had the Sapphire LC; I think 6.9 is doable on 4K Optimized. Your HBM is going to be a hard limit, so it's best to figure out where it will simply just "flat out crash"... for me it's anything greater than 1105; anything beyond that is a reboot. 1105, however, is like a rock. From there you can find the power/frequency combination which is going to work best for you. OC'ing Vega is a little tedious, as its tendency is to simply crash the card, forcing a reboot. Even if it doesn't reboot and the driver stalls, you will need to reboot and reapply settings to ensure the target/registry/driver is set... Wattman is kinda "meh" in this regard. Good luck!


Does the HBM voltage even need to be increased beyond 950 mV? HWiNFO always shows 1.356 anyway. Which one is right?


----------



## prom

Quote:


> Originally Posted by *Reikoji*
> 
> Does the HBM voltage even need to be increased beyond 950mv? HWinfo always shows 1.356 anyway. which one is right?


The tweakable "memory voltage" setting isn't _actually_ memory voltage.
HBM voltage is fixed unless you're messing with bios mods/swaps.


----------



## Reikoji

Quote:


> Originally Posted by *prom*
> 
> The tweakable "memory voltage" setting isn't _actually_ memory voltage.
> HBM voltage is fixed unless you're messing with bios mods/swaps.


so just leave fake voltage setting at 950, confirmed.


----------



## Reikoji

1110mhz. Nope.

1100 seemed to work tho... cept when leaving core speeds and voltages alone, then I crash. messing with P6 and P7 seems to just lead to performance loss in my case.


----------



## Newbie2009

Quote:


> Originally Posted by *shadowxaero*
> 
> Can you test this, I think it has already been confirmed by most (including myself): max out your HBM voltage and check to see if you get higher sustained core clock speeds. I get 1720ish+ running with the HBM slider at 1.25v, but because the memory gets stuck at 800MHz it becomes rather moot.
> 
> Guess it is a driver bug after all, thanks for confirming that HBM gets stuck at 800Mhz. I am running insider previews, I have a Vega64 AIO bios flashed on my 64 Air, and using the edited powerplay tables, so a lot of variables at play. Figured easier to rule out potential options if someone else is having the same problems I am lol, means I didn't eff anything up haha.


HBM memory is set @ 1.35v as far as I know. Whatever wattman is showing as HBM volts, well, isn't. No point in changing unless you have modded the card/bios.


----------



## laczarus

Quote:


> Originally Posted by *Soggysilicon*
> 
> Try rebooting your system, once your up go to wattman and "reset" your settings, tune again, apply, and then run your benches. I am curious if you will see a difference. There seems to be an issue which affects the boost clock (or AMDs version of it) that "de-tunes" if you let your monitor go to sleep, or watch videos, probably some other use cases as well. It is something I am able to reproduce consistently, wondering if its the same for you... Just curious... for science!


I can confirm that I noticed that issue with the latest 17.9.1 driver.
I thought it was an improvement that the driver just crashes if something goes wrong instead of the system locking up, so I installed 17.9.1.
But when discovering that issue I had to reboot anyway, so I reverted back to 17.8.2.
I haven't tried replicating it on this driver so far; will give it a check.
Now I have the LC BIOS on my V56 and will have to find the proper settings first, though.
The LC BIOS also improved the SP score by about 45 points, but so far stable clocks above 1682MHz were impossible. Probably my 600W PSU holding me back, or have I reached the limit of my chip?


----------



## ontariotl

Quote:


> Originally Posted by *laczarus*
> 
> I can confirm that I noticed that issue with the latest 17.9.1 driver.
> Thought it was an improvement that the driver just crashes if something goes wrong instead of the system locking up, so I installed 17.9.1.
> But when discovering that issue I had to reboot anyways and reverted back to 17.8.2
> I haven't tried replicating it on this driver so far, will give it a check.
> Now I have the LC bios on my V56 and will have to find the proper settings first though
> LC bios also improved the SP score by about 45 points, but so far stable clocks above 1682MHz were impossible. Probably my 600W PSU holding me back or reached the limit of my chip?


You've reached the limit of your chip. Air Vega 64s just barely run at that speed with custom settings using the LC BIOS. Mine does not run default with the LC BIOS. I managed 1727 (which equates to about 1705 in game) and it could go no further.


----------



## kundica

Quote:


> Originally Posted by *laczarus*
> 
> I can confirm that I noticed that issue with the latest 17.9.1 driver.
> Thought it was an improvement that the driver just crashes if something goes wrong instead of the system locking up, so I installed 17.9.1.
> But when discovering that issue I had to reboot anyways and reverted back to 17.8.2
> I haven't tried replicating it on this driver so far, will give it a check.
> Now I have the LC bios on my V56 and will have to find the proper settings first though
> LC bios also improved the SP score by about 45 points, but so far stable clocks above 1682MHz were impossible. Probably my 600W PSU holding me back or reached the limit of my chip?
> 
> 


Is your card on a waterblock? The LC bios has lower temp limits so it's pointless, possibly detrimental, to run it on an Air cooled card if you can't keep it below those limits. It'll give your card more power but it'll also cause it to throttle a lot sooner.
Quote:


> Originally Posted by *Soggysilicon*
> 
> You said you had the Sapphire LC; I think 6.9 is doable on 4K Optimized. Your HBM is going to be a hard limit, so it's best to figure out where it will simply just "flat out crash"... for me it's anything greater than 1105; anything beyond that is a reboot. 1105, however, is like a rock. From there you can find the power/frequency combination which is going to work best for you. OC'ing Vega is a little tedious, as its tendency is to simply crash the card, forcing a reboot. Even if it doesn't reboot and the driver stalls, you will need to reboot and reapply settings to ensure the target/registry/driver is set... Wattman is kinda "meh" in this regard. Good luck!


My LC card did 7k without changing voltages or clocks on the card. Stock clock/voltages with HBM at 1100 and HBCC on.


----------



## laczarus

Quote:


> Originally Posted by *ontariotl*
> 
> You've reached the limit of your chip. Air Vega 64's just barely run at that speed with a custom setting using the LC bios. Mine does not run default with LC bios. I managed 1727 (which equates to about 1705 in game) settings and it could go no further.


I'm worried that the VRM cooling is insufficient to push it to the hard limit. The big VRM heatsink doesn't fit on the PCB, so I had to improvise with all the small ones.
Quote:


> Originally Posted by *kundica*
> 
> Is your card on a waterblock? The LC bios has lower temp limits so it's pointless, possibly detrimental, to run it on an Air cooled card if you can't keep it below those limits. It'll give your card more power but it'll also cause it to throttle a lot sooner.


Not a waterblock, but on the Morpheus II with 2200rpm Silent Wings 3 fans. Highest temp on the GPU was 62°C, HBM in the lower 70s.
I have to tune down the P7 clock from 1752MHz to 1722, along with lowering the voltage. The balanced and turbo profiles of the LC BIOS don't work so well with my Vega 56.


----------



## Ne01 OnnA




----------



## shadowxaero

Quote:


> Originally Posted by *Newbie2009*
> 
> HBM memory is set @ 1.35v as far as I know. Whatever wattman is showing as HBM volts, well, isn't. No point in changing unless you have modded the card/bios.


We know that the slider in Wattman doesn't actually affect memory voltage. It does affect core voltage, however.


----------



## Newbie2009

Quote:


> Originally Posted by *shadowxaero*
> 
> We know that the slider in Wattman doesn't actually affect memory voltage. It does affect core voltage, however.


What slider? Wattman sliders overclock the core clock and the HBM clock. Volts have to be punched in manually.


----------



## shadowxaero

Quote:


> Originally Posted by *Newbie2009*
> 
> What slider? Wattman sliders overclock the core clock and the HBM clock. Volts have to be punched in manually.


Lol, sorry, not actually a slider; I just mean the memory voltage control affects vcore voltage. If you bump that up to 1.2 or 1.25v you will notice higher sustained core clocks. Problem is HBM clocks tank if you do v.v. But yeah, HBM always runs at 1.256v in the case of Vega 56, or 1.356 in the case of Vega 64.


----------



## Newbie2009

Quote:


> Originally Posted by *shadowxaero*
> 
> Lol, sorry, not actually a slider; I just mean the memory voltage control affects vcore voltage. If you bump that up to 1.2 or 1.25v you will notice higher sustained core clocks. Problem is HBM clocks tank if you do v.v. But yeah, HBM always runs at 1.256v in the case of Vega 56, or 1.356 in the case of Vega 64.


Regarding the HBM downclocking, I noticed that at high (unstable) clocks rather than from specifically high volts. 1.2v is stock on a 64.


----------



## PontiacGTX

So, has anyone checked whether the clocks you set are always pegged to the set core clocks, or do they vary depending on workload? What happens if you force the clocks to stay fixed?


----------



## rancor

Quote:


> Originally Posted by *PontiacGTX*
> 
> So, has anyone checked whether the clocks you set are always pegged to the set core clocks, or do they vary depending on workload? What happens if you force the clocks to stay fixed?


They are not locked and are dependent on load. As far as I can tell, the user has no control over this action at this time.

Under high GPU load the core clock will back off 40-50MHz from the P7 state even with massive power limits, 450A limits, and custom watercooling. Under lighter loads (which can still be reported as 100% usage) core clocks will be within 5-10MHz of P7. This just seems to be the default action and can't be changed. Setting P6 higher has no effect.

In GPU-Z, when the card is doing this small downclocking, the core voltage is lower than what is set, in my case 1.875 vs 1.21. My early guess is that the card is dynamically downclocking to keep stability, as the high GPU power draw is causing vdroop. This might be voltage drop in the package or chip, and that would need to be verified with a multimeter.
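A back-of-envelope version of that vdroop guess is just Ohm's law across the power-delivery network. The current and resistance below are made-up illustrative numbers, not measurements from an actual Vega card:

```python
# Vdroop back-of-envelope: sag = I * R across the power-delivery network.
# 200 A and 0.15 mOhm are illustrative assumptions, not measured values.

def vdroop(load_current_a, pdn_resistance_ohm):
    """Voltage sag (V) across the PDN at a given load current."""
    return load_current_a * pdn_resistance_ohm

set_v = 1.210
droop = vdroop(200, 0.00015)       # 200 A through 0.15 mOhm of effective PDN
print(round(set_v - droop, 3))     # core voltage actually seen under load
```

Even a fraction of a milliohm between the regulator and the die costs tens of millivolts at these currents, which is consistent with a card backing clocks off under heavy load to stay stable.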


----------



## shadowxaero

Quote:


> Originally Posted by *Newbie2009*
> 
> Regarding the HBM downclocking, I noticed that at high(unstable) clocks rather than specifically high volts. 1.2v is stock on 64.


I just think the HBM downclocking in my case is a bug in the drivers. If I set 1.050v in Wattman for the memory voltage, the card runs fine at 1105MHz. If I set 1.051v or anything higher, the card will only run at 800MHz.


----------



## PontiacGTX

Quote:


> Originally Posted by *rancor*
> 
> They are not locked and are dependent on load. As far as I can tell, the user has no control over this action at this time.
> 
> Under high GPU load the core clock will back off 40-50MHz from the P7 state even with massive power limits, 450A limits, and custom watercooling. Under lighter loads (which can still be reported as 100% usage) core clocks will be within 5-10MHz of P7. This just seems to be the default action and can't be changed. Setting P6 higher has no effect.
> 
> In GPU-Z, when the card is doing this small downclocking, the core voltage is lower than what is set, in my case 1.875 vs 1.21. My early guess is that the card is dynamically downclocking to keep stability, as the high GPU power draw is causing vdroop. This might be voltage drop in the package or chip, and that would need to be verified with a multimeter.


Have you tried setting at least 3 P-states at the same core clock? Or setting P7 lower than P5/P6?


----------



## Reikoji

Quote:


> Originally Posted by *shadowxaero*
> 
> I just think the HBM downclocking in my case is a bug in the drivers. If I set 1.050v in wattman for the memory voltage, card runs fine at 1105Mhz. If I set 1.051v or anything higher, card will only run at 800Mhz


same. It also results in the core running a bit faster, but still a lower score at the end of the benchmarks.


----------



## rancor

Quote:


> Originally Posted by *PontiacGTX*
> 
> Have you tried setting at least 3 P-states at the same core clock? Or setting P7 lower than P5/P6?


You can't set P5 with software, and I haven't tried with a registry mod (not sure if that is possible). WattTool is broken right now and screws up HBM clocks; I could maybe try that.

Currently there is a bug that forces the GPU to P5 if P6 is set equal to P7.


----------



## PontiacGTX

https://www.computerbase.de/2017-09/radeon-rx-vega-bios-power-test/2/#abschnitt_benchmarks_in_ultra_hd

Has anyone tried undervolt/underclock/increased power target vs. undervolt with increased power target?
Quote:


> Originally Posted by *rancor*
> 
> You can't set P5 with software and I haven't tried with a registry mod( not sure if that is possible). Wattool is broken right now and screws up HBM clocks I could maybe try that.
> 
> Currently there is a bug that forces the GPU to P5 if P6 is set to P7.


Can you check any of these scenarios?
What if P5 > P6 = P7?
What if P5 = P6 = P7?
What if P5 = P6 > P7?

Or what could be done to make the clock speed stick 100% to the one set in Wattman/WattTool? Would it gain more performance? (Also do it with an increased power limit, to avoid any power limitation.)
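All three scenarios boil down to feeding the driver a non-monotonic P-state table. As a purely illustrative model (not actual AMD driver code), the sanity check such a table would fail might look like:

```python
# Purely illustrative model of a P-state table sanity check -- not actual
# AMD driver code. The scenarios above all violate the usual assumption
# that clocks rise monotonically from P5 to P7.

def monotonic_pstates(table):
    """True if each higher P-state has a clock >= the one below it."""
    clocks = [table[s] for s in sorted(table)]   # P5, P6, P7 order
    return all(a <= b for a, b in zip(clocks, clocks[1:]))

print(monotonic_pstates({"P5": 1500, "P6": 1600, "P7": 1700}))  # True
print(monotonic_pstates({"P5": 1600, "P6": 1600, "P7": 1500}))  # False: P5=P6>P7
```

If the driver assumes monotonic tables, an inverted one could plausibly explain odd fallback behavior rather than yielding free performance.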


----------



## baakstaff

So it looks like changing the State P6 voltage doesn't actually do anything (at least on my PowerColor 56, stock BIOS). Setting it as either the maximum or minimum state shows that at idle GPU-Z says VDDC is at 1100mV, dropping to 1050mV under load, despite being set to either 990mV or 950mV in Wattman (HBM voltage was at 950mV for both the 990mV and 950mV tests, so it's not an issue with the voltage floor). The readings look accurate, because power at the wall increases and the GPU fan ramps up to try and cool the extra heat. I don't have the tools to measure the voltage on the card, so I'm not 100% certain; if anyone else can verify this, that'd be great.

Also somewhat related: setting both the target and max temperature to 75 with a custom fan speed of 2800, to force the card to temperature throttle, saw the power state drop as low as P3/P4 ([email protected] and [email protected] according to WattTool) in order to manage temps. What's strange is that the fan (which was set at 2800) eventually drops to the default speed of 2400, as if the card doesn't need the extra fan speed to maintain its target temp, ignoring the heavy core clock drop. It won't drop below 2400, but also won't go back up to 2800 to try and hit higher clocks. I'm not sure if this is due to the lower power states preventing the fan from dropping lower, or if it's just how this bug works. The same thing happened when setting the target temp to 75 and max temp to 78, so it's not an issue with setting the same temperature value. Setting it to automatic with the fan at 2500 also caused the fan to drop to 2400, so it seems to be strictly a bug with the fan profile.

Basically, the voltage that you put in for P6 doesn't matter, and it looks like if you start to thermal throttle with a custom fan setting, Wattman will try to set the fan back to the default 2400 and drop the core clock in order to maintain temperatures.


----------



## NI6HTHAWK

Okay, I have managed to compile the results of my testing using the Valley bench, and since I still have no internet







I have brought them into work to share with you all. Enjoy!

GPU: Sapphire RX Vega 64 Liquid Cooled
CPU: AMD Ryzen R7 1700 @ 3914 MHz (3800 x 103) @ 1.3625V
MEM: G.Skill Trident Z (non RGB) @ 3432 MHz (3333 @ 103)
MOBO: ASRock Fatal1ty Gaming Professional X370
PSU: Corsair AX860i

Valley Benchmark @ 1440p Ultra 8xAA

Wattman Settings: Balanced Power Mode
Result: Crashed

I reduced the GPU core's P7 clock rate to prevent crashing

Wattman Settings: P7 @ 1697MHz/1200mV, Temp Max/Target 65/65, Power Limit +25%
Result: 51.9 Avg FPS, 27.7 Min FPS, 103.3 Max FPS, Score 2171

After getting through a pass, I raised HBM mV to see if it had any effect on anything

Wattman Settings: Memory 945MHz/1050mV
Result: 51.8 Avg FPS, 28.3 Min FPS, 103.8 Max FPS, Score 2168

I returned HBM back to 950 mV and tried dropping GPU Core voltage

Wattman Settings: P7 1697MHz/1150mV, P6 1667MHz/1100mV
Result: 52 Avg FPS, 27.6 Min FPS, 103.5 Max FPS, Score 2176

No real difference so I tried lowering P7 voltage further to see what the voltage floor is

Wattman Settings: P7 1697MHz/1125mV
Result: Crashed

Retested with the last good configuration, raised P7 voltage back to 1150mV, and made a pass to confirm previous results

Now for the Memory clock!

Wattman Settings: Memory 1050MHz/950mV
Result: 54.8 Avg FPS, 29.1 Min FPS, 109.3 Max FPS, Score 2292

Gains! Let's go for broke!

Wattman Settings: Memory 1100MHz/950mV
Result: 55.9 Avg FPS, 28.6 Min FPS, 112.6 Max FPS, Score 2340

Not bad scaling. At this point I tried to see if I could hit 1752MHz on the GPU core with more voltage and remain stable

Wattman Settings: P7 1752MHz/1225mV
Result: Crashed

Wattman Settings: P7 1732MHz/1225mV
Result: Unigine crashed to desktop (all other crashes required restarting the rig)

Wattman Settings: P7 1722MHz/1225mV
Result: 56.4 Avg FPS, 29.2 Min FPS, 112.6 Max FPS, Score 2359

Nice! I ran another pass to see if I could run the GPU core voltage back at the stock 1200mV, which worked. At this point I let it run for about 30 minutes to see if the temps stabilized and if it crashed. The temps stabilized at around 67C with fan speed hitting 2675rpm. I ran another test to see if there were any noticeable losses from temperature saturation.

Result: 56.6 Avg FPS, 28.9 Min FPS, 112.4 Max FPS, Score 2368

My Conclusion:

If you have the thermal headroom, undervolting will likely not gain you anything; air-cooled cards seem to benefit more than liquid-cooled ones. Raising HBM voltage seems pointless unless more than 1100MHz can be reached; my experience is that I run into stability problems above 1100MHz (your results may vary). And finally, while GPU core is king, raising HBM frequency really helps overall performance. It seems the Liquid Cooled cards were set pretty close to (or even beyond) what they were capable of out of the box; some minor tweaking to help stabilize GPU core fluctuations and raise HBM bandwidth really helps the card perform.

Other odd issues:

Changing the power limit results in the fan settings being lost in Wattman. I actually had this card running 100% load at around 55C on the stock cooler, but when I raised the power limit to +50% the fan settings defaulted back to 400/2300rpm. However, the BIOS must have been able to raise the fans beyond these limits, as I saw the fan speed ramp up to 2678rpm (above the 2300rpm max setting) since the target was 65C. Since Wattman doesn't appear to save the general settings, I use MSI Afterburner to set the clocks and power limit, since voltage adjustments are not necessary for me.
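For what it's worth, running the HBM numbers from the Valley passes above through a quick script shows the score gain per clock gain (roughly half of each memory-clock increase shows up as score):

```python
# Score scaling per HBM clock step, using the numbers reported in the
# Valley runs above (945 MHz baseline, then 1050 and 1100 MHz).

def pct(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

runs = [(945, 2176), (1050, 2292), (1100, 2340)]   # (HBM MHz, Valley score)
for (c0, s0), (c1, s1) in zip(runs, runs[1:]):
    print(f"{c0}->{c1} MHz: +{pct(c0, c1):.1f}% clock, +{pct(s0, s1):.1f}% score")
```

That works out to about +11.1% clock for +5.3% score on the first step, and +4.8% clock for +2.1% score on the second, which backs up the conclusion that HBM bandwidth is a real bottleneck in this bench.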


----------



## Reikoji

Quote:


> Originally Posted by *kundica*
> 
> 
> Is your card on a waterblock? The LC bios has lower temp limits so it's pointless, possibly detrimental, to run it on an Air cooled card if you can't keep it below those limits. It'll give your card more power but it'll also cause it to throttle a lot sooner.
> My LC card did 7k without changing voltages or clocks on the card. Stock clock/voltages with HBM at 1100 and HBCC on.


Hrmm, my card doesn't seem to like Unigine. I can get over 8000 in Time Spy but am stuck in the 6600s in this one. What do you get in Time Spy?


----------



## kundica

Quote:


> Originally Posted by *Reikoji*
> 
> Hrmm, my card doesn't seem to like Unigine. I can get over 8000 in Time Spy but am stuck in the 6600s in this one. What do you get in Time Spy?


Nearly 8100 for graphics score. https://www.3dmark.com/3dm/22020765


----------



## theBee2112

For anyone using the AIO BIOS on their air-cooled Vega 64:

This BIOS will shut down your PC if the card reaches its 75°C TJ Max. Not good for heavy, sustained loads!
The TJ Max on the Air BIOS is 85°C, so the problem is less prevalent.

Noticed the fluid in my water cooling loop was permeating really fast, and was anywhere from 65-80°C. (2 GPUs, 1700X, all overclocked balls out, 24/7 100% load for a week... need more radiators!)

It's a bit of a trade-off, actually: use the AIO BIOS for more power/heat/performance. It will mine ETH in the background at 40-42MH/s while gaming. Impressive. Until the loop heat-soaks to the point it shuts down. Or use the air-cooled BIOS to increase stability, but it will only mine at 19MH/s while gaming... but no crashes at high temps!


----------



## Reikoji

Quote:


> Originally Posted by *kundica*
> 
> Nearly 8100 for graphics score. https://www.3dmark.com/3dm/22020765


all i could muster: http://www.3dmark.com/spy/2380494


----------



## shadowxaero

Quote:


> Originally Posted by *theBee2112*
> 
> For anyone using the AIO BIOS on their Air Cooled Vega 64:
> 
> This BIOS will shut down your PC if the card reaches its 75*C TJ Max. Not good for heavy, sustained loads!
> The TJ Max on Air BIOS is 85*C, so the problem is less prevalent.
> 
> Noticed the fluid in my water cooling loop was permeating really fast, and was anywhere from 65-80*C. (2 GPU, 1700x, all overclocked balls out, 24/7 100% load for a week.. Need more radiators!)
> 
> It's a bit of a trade-off actually.. use the AIO BIOS for more power/heat/performance. It will mine ETH in background at 40-42MH/s while gaming. Impressive. Until the loop heat soaks up to the point it shuts down. OR use the Air cooled BIOS, to increase stability, but it will only mine at 19MH/s while gaming.. But no crashes at high temps!


Get two 360x45mm rads lol


----------



## shadowxaero

Quote:


> Originally Posted by *Reikoji*
> 
> all i could muster: http://www.3dmark.com/spy/2380494


My best score on stable clocks, graphics score of 8099. Might be able to get higher if I push a little bit more.

https://www.3dmark.com/spy/2354347

(Sorry about double post v.v forgot I could just add this to my post above)


----------



## Reikoji

I forgot to ramp my CPU up







. http://www.3dmark.com/spy/2380730.

I think mine will just explode if I try 1105 on mem, but lets see.

-

Well it didnt explode, slightly better graphics score. CPU scores are so random.

http://www.3dmark.com/spy/2380763


----------



## mrnice31

Hi guys, I just installed an EK water block on my MSI Vega 64 Air. I flashed the LC BIOS, but when I activate turbo mode and run a benchmark the screen goes black and crashes. I have a 1000W Gold EVGA power supply. Any idea why this happens?


----------



## majestynl

Quote:


> Originally Posted by *mrnice31*
> 
> Hi guys, I just installed an EK water block on my MSI Vega 64 Air. I flashed the LC BIOS, but when I activate turbo mode and run a benchmark the screen goes black and crashes. I have a 1000W Gold EVGA power supply. Any idea why this happens?


Did you clean (full uninstall) your Radeon software?

I always suggest cleaning everything, then reinstalling. See the post link.

See the "fix this" drop-down. You could also edit the registry powerplay tables.

And don't forget: if you leave the dynamic frequency on in Wattman, you could shoot into high freqs with too low a vcore. Enable logging in GPU-Z; then you can track the max freq and voltage before the crash.

I would always suggest keeping the voltage on auto to see what vcore your card needs for certain clocks. After that you can tweak...
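A quick sketch of mining a GPU-Z sensor log the way described above, to find the highest clock/voltage the card hit before the crash. Column names vary between GPU-Z versions, so the ones used here are assumptions; check your own log's header line:

```python
# Scan a GPU-Z sensor log (CSV) for the max core clock and the voltage at
# that sample. Column names are assumptions -- adjust to your log header.

import csv

def max_before_crash(path, clock_col="GPU Clock [MHz]", volt_col="GPU Voltage [V]"):
    """Return (max core clock, voltage at that sample) from a GPU-Z CSV log."""
    best = (0.0, 0.0)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                clk = float(row[clock_col])
                v = float(row[volt_col])
            except (KeyError, ValueError):
                continue                    # skip malformed rows
            if clk > best[0]:
                best = (clk, v)
    return best

# usage (hypothetical default log name):
# print(max_before_crash("GPU-Z Sensor Log.txt"))
```

Since the log is flushed continuously, the last rows before the file stops are the conditions right before the crash, which is exactly the vcore-vs-frequency information you want when dialing in manual voltages.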


----------



## rancor

Quote:


> Originally Posted by *PontiacGTX*
> 
> https://www.computerbase.de/2017-09/radeon-rx-vega-bios-power-test/2/#abschnitt_benchmarks_in_ultra_hd
> 
> Has anyone tried undervolt/underclock/increased power target vs. undervolt with increased power target?
> Can you check any of these scenarios?
> What if P5 > P6 = P7?
> What if P5 = P6 = P7?
> What if P5 = P6 > P7?
> 
> Or what could be done to make the clock speed stick 100% to the one set in Wattman/WattTool? Would it gain more performance? (Also do it with an increased power limit, to avoid any power limitation.)


In all cases the clock still fluctuates under load, even down to 1500MHz. If P7 is under P6, clocks seem to lock to P6, even if P6 is lower than P5, but still with the same fluctuation under load. Either it will get fixed later, or it's just something we will need to deal with.


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> 1110mhz. Nope.
> 
> 1100 seemed to work tho... cept when leaving core speeds and voltages alone, then I crash. messing with P6 and P7 seems to just lead to performance loss in my case.


I don't think I have seen more than one person report working HBM higher than 1105; 1100 seems to be average on a 64. There is some indication that HBM is binned 56/64/LC.
Quote:


> Originally Posted by *kundica*
> 
> 
> Is your card on a waterblock? The LC bios has lower temp limits so it's pointless, possibly detrimental, to run it on an Air cooled card if you can't keep it below those limits. It'll give your card more power but it'll also cause it to throttle a lot sooner.
> My LC card did 7k without changing voltages or clocks on the card. Stock clock/voltages with HBM at 1100 and HBCC on.


I suppose I could try with HBCC on (then again, I have found HBCC all but useless, if not outright detrimental, in actual gaming)... it could be gimmicking the score up a touch, similar to how a RAM cache on a HDD gives a strong initial boost which inflates the score... may try that tonight. My last runs were all +/- 1 pt of 6937, 64 Air on LC BIOS, power +50, frequency -1%. If the boost gets up around 1780ish it's playing with matches stability-wise without a core bump... which I will try as well.

It may also be worth noting that the 4K test has some "janky-ness" with my monitor running with its "ultimate engine" mode on; the potential exists here for a slight performance degradation with this version of FreeSync.
Quote:


> Originally Posted by *PontiacGTX*
> 
> https://www.computerbase.de/2017-09/radeon-rx-vega-bios-power-test/2/#abschnitt_benchmarks_in_ultra_hd
> 
> Has anyone tried undervolt/underclock/increased power target vs. undervolt with increased power target?
> Can you check any of these scenarios?
> What if P5 > P6 = P7?
> What if P5 = P6 = P7?
> What if P5 = P6 > P7?
> 
> Or what could be done to make the clock speed stick 100% to the one set in Wattman/WattTool? Would it gain more performance? (Also do it with an increased power limit, to avoid any power limitation.)


Quick n' dirty: I found my gains/stability to be just that on the LC BIOS... underclock ~1% and PT +50. The next step is locking down the voltage and going for a higher sustained boost... the loop can handle the temps, no problem; I'm more concerned with stability atm.
Quote:


> Originally Posted by *theBee2112*
> 
> For anyone using the AIO BIOS on their Air Cooled Vega 64:
> 
> This BIOS will shut down your PC if the card reaches its 75*C TJ Max. Not good for heavy, sustained loads!
> The TJ Max on Air BIOS is 85*C, so the problem is less prevalent.
> 
> Noticed the fluid in my water cooling loop was permeating really fast, and was anywhere from 65-80*C. (2 GPU, 1700x, all overclocked balls out, 24/7 100% load for a week.. Need more radiators!)
> 
> It's a bit of a trade-off actually.. use the AIO BIOS for more power/heat/performance. It will mine ETH in background at 40-42MH/s while gaming. Impressive. Until the loop heat soaks up to the point it shuts down. OR use the Air cooled BIOS, to increase stability, but it will only mine at 19MH/s while gaming.. But no crashes at high temps!


This PSA should be a sticky!









You do indeed need more radiators friend! Mind you will just heat up the room without a dedicated inverter or some scheme to dump waste heat.
Quote:


> Originally Posted by *NI6HTHAWK*
> 
> Okay I have managed to compile the results of my testing using Valley Bench and since I still have no internet
> 
> 
> 
> 
> 
> 
> 
> I have brought them into work to share with you all. Enjoy!
> 
> My Conclusion:
> 
> If you have the thermal headroom undervolting will likely not gain you anything, air cooled cards seem to benefit more than Liquid cooled. Raising HBM voltage seems pointless unless greater than 1100MHz can be accomplished, my results have been that I run into stability problems with higher than 1100MHz(your results may vary). And finally While GPU core is king, raising HBM frequency really helps overall performance. It seems that the Liquid Cooled cards were set pretty close (or even beyond) to what they were capable of out of the box, some minor tweaking to help stabilize GPU core fluctuations and raise HBM bandwidth really help the card perform.
> 
> Other odd issues:
> 
> Changing the power limit results in the Fan settings being lost in Wattman. I actually had this card running 100% at around 55C on the stock cooler but when I raised Power Limit to +50% the fan settings defaulted back to 400/2300 rpms, however the BIOS must have been able to raise fans beyond these limits as I saw the fan speed ramp up to 2678 rpm (above the 2300rpm max setting) as the target was 65 C. Since Wattman doesn't appear to save the General Settings I use MSI Afterburner to set the clocks, and power limit since voltage adjustments are not necessary for me.


I would like at least the option of attempting to go past the HBM wall, but I doubt that is coming anytime soon; additionally, the power delivery for the memory seems shruggingly pedestrian. I am beginning to wonder if it's the overshoot and undershoot of the core boost that is the root of the instability. There seems to be a close correlation between memory timing and core frequency; pipelining, or the failure thereof, may be where the crashes are occurring. In Superposition at 1080p Extreme, boost runs up very high indeed (almost as if the card is trying to hit a target frame rate), much higher than what I see in 4K; yet in 4K there are specific points in the test which are surefire crashes and seem related to a throttling event.

Very much appreciate your results and testing, I think I will use those as a baseline for some of my own tinkering!


----------



## CryWin

I'm home. My Sapphire Vega 56 arrived today from Amazon finally.


----------



## Soggysilicon

As a follow up to the earlier conversation.

1800X 3200 14/14/14 XFR
V64 Air / LC Bios -1% freq. 1105HBM +50 PWR


Typical setup. NOTE: "Wallpaper Engine" is ON, HBCC OFF


Same as above, "Wallpaper Engine" OFF, HBCC OFF


Same as above, "WPE" OFF, HBCC ON

So... WPE is sandbagging, and HBCC does have a repeatable positive effect in this application.

Now to play with some volts.


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> As a follow up to the earlier conversation.
> 
> 1800X 3200 14/14/14 XFR
> V64 Air / LC Bios -1% freq. 1105HBM +50 PWR
> 
> 
> Typical setup, NOTE: "Wall Paper Engine" is ON HBCC OFF
> 
> 
> Same as above, "Wall Paper Engine" is OFF HBCC OFF
> 
> 
> Same as above, "WPE" OFF HBCC ON
> 
> So... WPE is sandbagging, and HBCC does have an repeatable positive effect in this application.
> 
> Now to play with some volts.


What is Wall Paper Engine?


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> What is Wall Paper Engine?
















Review has "language", be aware... maybe NSFW...







but you get the idea.


----------



## Trender07

Am I the only one getting black screens/black glitches in Overwatch? Vega 64 air; it looks like it happens when I set the HBM above 1050MHz (which is really low for a V64), even after passing Fire Strike, Fire Strike Ultra, Time Spy, and Superposition.


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Review has "language", be aware... maybe NSFW...
> 
> 
> 
> 
> 
> 
> 
> but you get the idea.


Yea I can see that ruining GPU benchmark scores :3


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> Im the only one with black screen/black glitches on Overwatch ? Vega 64 air and looks like it happens when I set more than 1050 MHz HBM(which is really low for v64) even after passing fire strike fire strike ultra time spy and super position


Don't play Overwatch, but the Superposition bench will black screen on me if the drivers are hung or need to be reset, or after other applications that require a reboot/retune. In my case it's an issue with the DisplayPort and my monitor specifically... CF791. I think OW is a known issue, but be sure to check that your monitor driver is installed correctly. Updating Radeon drivers is no guarantee that your monitor was detected and hooked up correctly. For what it's worth...


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> Yea I can see that ruining GPU benchmark scores :3


Well yeah... "but" WPE is configured so that applications which go full screen (not windowed) unloads the WP, WPE has no appreciable affect when I run other Unigine or 3DMark benchies... not as significant as the hit in super position... Considering that SP seems to utilize HBCC at least superficially, I am wondering if Unigine 2 is making lower level calls to the card than other benchies or applications...?


----------



## Azazil1190

Pls let me into the club.
A happy owner of a Vega 64 LC.
Nice to meet you guys!!


----------



## Azazil1190

Haven't had as much free time to test the card as I'd want,
but I made one quick run of FS.

Stock voltage
Stock core clock
+50 power target
And finally 1050 for HBM

And I'm happy with the results:
I can beat my previous score from my old Strix OC GTX 1080 at 2100 core.
Sorry for my English









https://www.3dmark.com/3dm/22135569?


----------



## alanthecelt

Quote:


> Originally Posted by *Soggysilicon*
> 
> I don't think I have seen but one person report higher "working" HBM than 1105; 1100 seems to be the average on a 64, and there is some indication that HBM is binned 56/64/LC.


That would appear to ring true.
My 3 56's (2x PowerColor and 1 MSI) are flashed as 64's:
2 degrade hash rate above 955 RAM,
1 goes up to 1025... I suspect the MSI has better RAM. I've had a hard time identifying which card is which, and haven't had time to confirm.
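Hash rate tracking the HBM clock makes sense: Ethash is memory-bound, with each hash walking roughly 8 KiB of the DAG (64 accesses of 128 bytes, per the Ethash design), so peak HBM bandwidth puts a hard ceiling on hash rate. A back-of-envelope sketch (illustrative only; real cards land well under the ceiling because of memory timings and overhead):

```python
BUS_WIDTH_BITS = 2048      # Vega 10 HBM2 interface
BYTES_PER_HASH = 64 * 128  # Ethash: 64 mix accesses of 128 B each

def hashrate_ceiling_mhs(hbm_mhz):
    # peak bytes/s (2 transfers per clock, DDR) divided by DAG bytes per hash
    peak_bytes_per_s = BUS_WIDTH_BITS / 8 * 2 * hbm_mhz * 1e6
    return peak_bytes_per_s / BYTES_PER_HASH / 1e6

for mhz in (945, 1025, 1100):
    print(f"{mhz}MHz HBM -> at most {hashrate_ceiling_mhs(mhz):.0f} MH/s")
```

That works out to roughly 59 MH/s theoretical at stock and 69 MH/s at 1100MHz, so the real-world rates people quote here (40s, creeping toward 50 MH/s) are consistent with a bandwidth-bound workload.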


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> Well yeah... "but" WPE is configured so that applications which go full screen (not windowed) unloads the WP, WPE has no appreciable affect when I run other Unigine or 3DMark benchies... not as significant as the hit in super position... Considering that SP seems to utilize HBCC at least superficially, I am wondering if Unigine 2 is making lower level calls to the card than other benchies or applications...?


I went and got it to see if I lose benchmark performance on my machine.

.... Purely for scientific reasons, of course.


----------



## chris89

I was looking over the Vega 56 air cooler, and it's EXTREMELY clear why the power consumption is so high and throttling occurs like mad: the core and HBM are heating up far too much, with some people even tripping a breaker on the RX Vega.

The issue isn't with Vega at all, it's the cooler. Look, I could easily solve the throttling and power consumption.

AMD set the Hot Spot and Max Temp limits to 115 degrees Celsius. This means it will never throttle if it stays below 115C. That includes EVERYTHING on the PCB, literally every single chip.

Once every single chip is actively cooled, Vega is the most efficient GPU EVER CREATED. It's even more efficient than the TITAN XP: only 220 watts when all components are cooled, along with zero throttling.

Check these pictures out, showing blatant and clear choices that cause the GPU to overheat and burn through power like nobody's business, developing a bad reputation for AMD Vega.

Basically, the Hot Spot and Max Temp sensors monitor every single chip over the entire PCB. With the limit set to 115C, the chips that are not cooled by a thermal pad with direct heatsink contact will soar and throttle at 115C. So if the core is throttling at like 50-60C, it's clear these components are overheating and throttling at 115C.

Even the waterblock does not address these chips, and that's why waterblock gains are minimal, like a total waste of money.

We can cool the core/HBM and everything else down by 40C at least, once we properly trim and cut copper shims and adhere them to the metal cooling plate with thermal adhesive, then add a thermal pad under each of these "HOT" chips... Then it won't throttle at all and will eat the TITAN XP alive. Not to knock Nvidia, I love the TITAN XP, but Vega is a true work of art and a masterpiece.

http://www.ebay.com/itm/10-x-Pads-Pure-Copper-Shims-Heat-Sink-Conductor-15mm-x-0-3mm-Motherboard-Reflow-/132149188765?hash=item1ec4b4989d:g:iV4AAOSwA29Y4wx7

http://www.ebay.com/itm/IC-Chipset-GPU-CPU-Thermal-Heatsink-Copper-Pad-Shim-Size-20-X-20-X-0-3mm-Pack-Of-/253032915022?epid=1665626958&hash=item3ae9efe04e:g:tscAAOSwSzFZX0WF

http://www.ebay.com/itm/30g-Thermal-Conductive-Silicone-Glue-Adhesive-LED-GPU-Heatsink-Mosfets-/371384720080?hash=item5678411ad0:g:~w0AAOSwWiBY-Kzi


----------



## punchmonster

This is filled with so many falsehoods and so much misinformation it's ridiculous.
I don't even know where to start.
No one with a waterblock or LC edition is temp throttling on anything that isn't the HBM2. The power delivery on most cards stays positively frosty.
It isn't remotely power efficient. It's not going to beat the Titan XP. Stop.
Quote:


> Originally Posted by *chris89*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> I was looking over the Vega 56 Air Cooler & It's EXTREMELY Clear why the power consumption is so high & throttling occurs like mad, & The Core & HBM are heating up far too much, along with Even Some People Tripping The Breaker On the AMD RX VEGA.
> 
> The issue isn't with VEGA at all, it's the cooler. Look. I could easily solve the throttling & power consumption.
> 
> AMD set the HOT SPOT & MAX TEMP Limits to 115 Degrees Celsius. This means that it will never throttle if it stays below 115C. That Includes EVERYTHING on the PCB, LITERALLY EVERY SINGLE CHIP.
> 
> Once Every Single Chip is actively cooled, AMD VEGA is the more efficient GPU EVER CREATED. It's also even more Efficient than the TITAN XP. Only 220 Watts when all components cooled. Along with zero throttling.
> 
> Check these pictures out, showing blatant & clear choices to cause the GPU to Overheat & Burn through Power like no bodies business, to Develop a Bad Reputation For AMD VEGA.
> 
> Basically the Hotspot & Max Temp monitor every single chip over the entire PCB. When set to 115C, those chips from which are not cooled with a thermal pad and direct heatsink contact, they will soar and throttle at 115C. So if the Core is throttling at like 50-60C or so then it's clear these components are overheating and throttling at 115C.
> 
> Even the Waterblock does not address these chips, and thats why Waterblock Gains are minimal, like a total waste of money.
> 
> We can cool the Core/ HBM & everything down by 40C at least once we properly adhere & trim & cut the Copper shims to the Metal-Cooling-Plate using the thermal adhesive. Then add a thermal pad under each of these "HOT" chips... Then we won't throttle at all & Eat the TITAN XP Alive. Not to down on Nvidia, I love the TITAN XP, however the VEGA is a true Work-Of-Art & A Masterpiece.
> 
> http://www.ebay.com/itm/10-x-Pads-Pure-Copper-Shims-Heat-Sink-Conductor-15mm-x-0-3mm-Motherboard-Reflow-/132149188765?hash=item1ec4b4989d:g:iV4AAOSwA29Y4wx7
> 
> http://www.ebay.com/itm/IC-Chipset-GPU-CPU-Thermal-Heatsink-Copper-Pad-Shim-Size-20-X-20-X-0-3mm-Pack-Of-/253032915022?epid=1665626958&hash=item3ae9efe04e:g:tscAAOSwSzFZX0WF
> 
> http://www.ebay.com/itm/30g-Thermal-Conductive-Silicone-Glue-Adhesive-LED-GPU-Heatsink-Mosfets-/371384720080?hash=item5678411ad0:g:~w0AAOSwWiBY-Kzi


----------



## Newbie2009

Quote:


> Originally Posted by *punchmonster*
> 
> This is filled with so many falsehoods and misinformation it's ridiculous.
> I don't even know where to start.
> No one with a waterblock or LC edition is temp throttling on anything that isn't the HBM2. The powerdelivery on most cards stays positively frosty.
> It isn't remotely power efficient. It's not going to beat the Titan XP. Stop.


Yeah, nonsense.


----------



## Whatisthisfor

Did anybody here with an AIO and a PSU below 1000W encounter stability issues?


----------



## twan69666

Quote:


> Originally Posted by *chris89*
> 
> I was looking over the Vega 56 Air Cooler & It's EXTREMELY Clear why the power consumption is so high & throttling occurs like mad, & The Core & HBM are heating up far too much, along with Even Some People Tripping The Breaker On the AMD RX VEGA.
> 
> The issue isn't with VEGA at all, it's the cooler. Look. I could easily solve the throttling & power consumption.
> 
> AMD set the HOT SPOT & MAX TEMP Limits to 115 Degrees Celsius. This means that it will never throttle if it stays below 115C. That Includes EVERYTHING on the PCB, LITERALLY EVERY SINGLE CHIP.
> 
> Once Every Single Chip is actively cooled, AMD VEGA is the more efficient GPU EVER CREATED. It's also even more Efficient than the TITAN XP. Only 220 Watts when all components cooled. Along with zero throttling.
> 
> Check these pictures out, showing blatant & clear choices to cause the GPU to Overheat & Burn through Power like no bodies business, to Develop a Bad Reputation For AMD VEGA.
> 
> Basically the Hotspot & Max Temp monitor every single chip over the entire PCB. When set to 115C, those chips from which are not cooled with a thermal pad and direct heatsink contact, they will soar and throttle at 115C. So if the Core is throttling at like 50-60C or so then it's clear these components are overheating and throttling at 115C.
> 
> Even the Waterblock does not address these chips, and thats why Waterblock Gains are minimal, like a total waste of money.
> 
> We can cool the Core/ HBM & everything down by 40C at least once we properly adhere & trim & cut the Copper shims to the Metal-Cooling-Plate using the thermal adhesive. Then add a thermal pad under each of these "HOT" chips... Then we won't throttle at all & Eat the TITAN XP Alive. Not to down on Nvidia, I love the TITAN XP, however the VEGA is a true Work-Of-Art & A Masterpiece.


I don't hang out here often, but you remind me of a mad scientist


----------



## chris89

Um, haven't you seen Jayz 2 Cents still throttling after the waterblock? If it's cooling properly and not hitting its component temperature limits, then it's NOT going to throttle.

When it throttles, it's hitting the component temperature limits.

This is QUITE clear.

When I pick up a Vega, I'll prove to you it can be way faster. Just gotta use your mind and find the clearly apparent flaws.


----------



## aliquis

There can be various reasons why a Vega card is throttling/changing its clock rate; temperature is just one of them. As far as I am aware, Vega has some form of dynamic boost clock, so if temperature/load/core voltage/power limit doesn't allow more for a given load scenario, the clock rate may change dynamically, but that doesn't mean there is a temperature issue somewhere.


----------



## poisson21

Weird, when I test with Time Spy I get a strange report.

If I test with the clock at -1.5% (1250mV) it reports the HBM frequency accurately, 1100MHz/1050mV https://www.3dmark.com/spy/2383347

But with -1% (1250mV) and above it reports only 200+ MHz https://www.3dmark.com/3dm/22145359?

edit: tried with 1300mV for the GPU clock and it's the same thing; the score is good but the frequencies are reported incorrectly https://www.3dmark.com/3dm/22146023?

I'll try Unigine to see if it reports the same thing.

PS: I have 2 cards and tried to apply hellm's 142% power limit to the second card's registry, but the 142% didn't show in Wattman even though it is present in the registry. edit: Several reboots solved the problem.


----------



## chris89

Right on. The VRM temperature is reported, but as for the Hot Spot, I don't believe all parts of the PCB are reported.

They did apply a TDP limit, though. In the images I have seen, many of the hottest power-handling components are left to open-air cooling, and that's insufficient.

I would fix up all the components, cool them all, then BIOS-mod the card and remove the TDP limit altogether so I don't even have to touch the power limit bar; just clock it according to what's possible and what my Max Temp and Hot Spot limits are set to.


----------



## kundica

Quote:


> Originally Posted by *chris89*
> 
> Um, haven't you seen Jayz 2 Cents throttling after the waterblock, still? If it's cooling and not hitting it's component temperature limits then its NOT going to throttle.
> 
> When it throttles, its hitting the component temperature limits.
> 
> This is QUITE clear.
> 
> When I pickup a VEGA, I'll Prove It To You It Can Be Way Faster. Just Gotta Use Your Mind & Find The Clearly Apparent Flaws.


Jay's card isn't thermal throttling. It could be throttling for a number of reasons, from how he's configured the card to the application itself; it's just how Vega functions. Take a look at the following video (or any of AMDMatt's recent videos). He's using the AIO version of the card, which hits much higher temps than a card on an EK block. His clocks fluctuate in menus, spiking under less demanding loads, but remain relatively stable during actual gameplay. AMDMatt achieves this by simply setting the power limit to +50% and adjusting his P6/P7 clocks and voltages.






Here's another example from a user who has the Vega Air 64 on an EK block.


----------



## chris89

Thanks for the videos. Glad to see it not throttling. Would be nice to see a comparison between the actual TITAN XP and the Vega 64, with it not throttling and all.

Right on bud, those videos showcase far different results than I was seeing from previous reviewers; they were getting like 40fps consistently @ 4K and throttling all over.


----------



## Trender07

Is my V64 Air just that bad? Everyone here runs 1100 like it's nothing, but when I set the HBM2 above 1050MHz it starts with black artifacts and ends in a black screen.

Also, in Superposition I get artifacts at 1100 HBM2; I recorded it:




(You can clearly see it at 0:14)

(This is underclocked and undervolted, but I tried with stock volts too and it's even worse lol)


----------



## The EX1

Quote:


> Originally Posted by *Trender07*
> 
> Is my V64 Air just that bad? Everyones here goes 1100 like nothing but when I set more than 1050 Mhz hbm2 it start with black artifacts and ends up with black screen.
> 
> Also in Superposition I get artifacts setting 1100 HBM2, I recorded it:
> 
> 
> 
> 
> (You can clearly see it at 0:14)
> 
> (This is underclock and undervolt, but even so I tried with stock volts and its much even worse lol)


Not everyone can get to 1100 HBM stable, especially on air. As long as you can hit 1025+ on air cooling, I would be happy.


----------



## Newbie2009

A lot of cards can run 1100, true, but just for benchmarking. Mine (under water) can, but I don't think it's 100% stable.


----------



## poisson21

Personally I have no problem with 1100MHz on my card (air 64 with EK block and LC BIOS).

I have more trouble with the GPU core, which can't go above 1712MHz (1250mV/142%) in games.

Up to 1732MHz it's stable in benchmarks, but not in games.


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> Is my V64 Air just that bad? Everyones here goes 1100 like nothing but when I set more than 1050 Mhz hbm2 it start with black artifacts and ends up with black screen.
> 
> Also in Superposition I get artifacts setting 1100 HBM2, I recorded it:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (You can clearly see it at 0:14)
> 
> (This is underclock and undervolt, but even so I tried with stock volts and its much even worse lol)


Can you use GPU-Z and let us know what the HBM and hotspot temps are while benchmarking at 1100?

Not everyone can hit 1100, and many of those who can are only able to get through benchmarks with it set that high. My AIO card would bench fine at 1100 but could not sustain it while gaming. Cards that can sustain 1050 while gaming I'd say are very good, with 1100 being the crème de la crème.


----------



## Trender07

Quote:


> Originally Posted by *kundica*
> 
> Can you use GPU-Z and let us know what the HBM and hotspot temps are while benchmarking at 1100?
> 
> Not everyone can hit 1100 and many of those who can, are only able to get through benchmarks with it set that high. My AIO card would bench fine at 1100 but could not sustain 1100 while gaming. Cards that can sustain 1050 while gaming I'd say are very good, while 1100 being the creme de la creme.


Temp 73º, but hotspot 86º and HBM 78º. Even so, I get those artifacts right at the start of the test when the GPU is at 60º, and while playing with fps capped I get them even at 1080MHz.
Used GPU-Z to test again with 1050 HBM, and the hotspot and HBM temps are just as hot as at 1100.
I have to set 1050MHz if I don't want games crashing or artifacting. I hope my GPU isn't faulty


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> Temp 73º but Hotspot 86º and HBM 78º, but even so I get those artifacts even just starting the test when the gpu is 60º and while playing with fps capped I get them even if I set 1080 MHz too.
> Used GPUZ to test again with 1050 HBM and the hotspot and hbm temps are just the same as hot as 1100.
> I have to set 1050 mhz if I don't want games crashing or artifacts I hope my gpu isn't faulty


Doubt it's faulty, just not the best overclocker. 1050 is good though if it runs stable.


----------



## jearly410

Quote:


> Originally Posted by *kundica*
> 
> Can you use GPU-Z and let us know what the HBM and hotspot temps are while benchmarking at 1100?
> 
> Not everyone can hit 1100 and many of those who can, are only able to get through benchmarks with it set that high. My AIO card would bench fine at 1100 but could not sustain 1100 while gaming. Cards that can sustain 1050 while gaming I'd say are very good, while 1100 being the creme de la creme.


I can game at 1050 on an air 64. Any higher crashes Battlefield 1.


----------



## Trender07

Quote:


> Originally Posted by *jearly410*
> 
> I can game with 1050 with an air 64. Any higher crashes battlefield 1.


Mine doesn't crash, but I get artifacts above 1050 :/ so yeah, I have to use 1050 too.
The crashes I had were in Overwatch (but idk, the patch notes say it can hang...)
and Mass Effect Andromeda


----------



## kaspar737

Can you get a Vega 56 anywhere in Europe at or near MSRP with a reasonable wait time?


----------



## redshoulder

Received my Vega 64 today and am getting coil whine even at low framerates (30fps).
For power I use 2 separate 8-pin cables to the PSU.

My previous card (GTX 780) also had coil whine, but it died down after a few weeks. So I am thinking it is PSU related (AX860i).

Anyone in the same boat?


----------



## Disharmonic

Quote:


> Originally Posted by *Caldeio*
> 
> I get about 6200 as well. What does your core run at during the test?


Core runs at 1600 ±5MHz. Most of my runs end up at 58xx though; either I used other settings for that run or some Windows service is causing a performance drop.
Quote:


> Originally Posted by *kaspar737*
> 
> Can you get a Vega 56 anywhere in Europe at or near MSRP with a reasonable wait time?


As far as I can see it's in stock at many shops, but at over €500. Geizhals didn't find a single offer below that.


----------



## jearly410

PUBG hard-crashes my computer at stock settings.


----------



## majestynl

*Wattman full of bugs. Possible workaround found*

After doing 200+ runs and trying tons of settings, I came up with the following. Wattman 17.9.1 has too many bugs, and if you are not aware of them,
things can get confusing while you are trying to get the best out of your card. I played a lot with settings and measured a lot of things before reaching my conclusions.

_Note: This affects my card/situation; I can't guarantee it happens to everybody. But at least you can read/test/try it._

*Radeon Vega 64 // Original air model // Flashed AIO BIOS // Modded powertables for 142% power, 400+*


As told in my previous post: Dynamic GPU Freq will overshoot my frequency to 1800+ in 3DMark benches and crash the card, probably from not enough voltage during the overshoot.
The HBM voltage control is not the real HBM voltage, but it definitely affects my GPU frequency; I get much higher P6 and P7 states when it's set higher. See the note below if you get a crash while increasing this setting.
Leaving P6 and P7 at the exact same frequency can lead to HBM running at max 800MHz.
Too high an HBM voltage slider can lead to HBM running lower.

*Eventually I found a way to get the best benchmark scores without crashing the card:*

Start with a fresh reset of Wattman settings.

*1.* Slide your Power Limit to the MAX.
*2.* Disable the Dynamic Freq switch on your GPU frequency (so you see the actual MHz of your P6 and P7 states).
*3.* Leave GPU voltage control on auto.
*4.* Switch off Dynamic HBM Control and set your HBM frequency to whatever you want; I can easily run 1100MHz.
After that, turn the Dynamic switch on again. (No panic, it will keep running at your setting.)
*5.* Switch on HBM voltage control and set your voltage (I used 1050mV), then switch it back to "Automatic".

It's very strange, but playing with settings and then switching back to Auto keeps your edited settings. And this way it's more stable!

With the above I got my highest score without any crash or frequency overshoot!

_Best scores: Superposition 6910 / FS 18662. And as you read above, I haven't fine-tuned the voltages yet; this is just running STABLE at the highest frequencies._

Again: the above definitely reflects my situation. I just found the best case today and still need to test long-term / while playing games, etc.
But if some of you test along with me, we can find the best way to tune this card. And we could also send RTG the bugs we found!


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> Temp 73º but Hotspot 86º and HBM 78º, but even so I get those artifacts even just starting the test when the gpu is 60º and while playing with fps capped I get them even if I set 1080 MHz too.
> Used GPUZ to test again with 1050 HBM and the hotspot and hbm temps are just the same as hot as 1100.
> I have to set 1050 mhz if I don't want games crashing or artifacts I hope my gpu isn't faulty


I don't think there is any way anyone will get HBM2 stable at 1100MHz at those temperatures.

It's just not happening; even 1070MHz would be expecting far too much. That your HBM2 is stable at 1050MHz at those temperatures is a good result. Mine crashed above 65C at 1050MHz when it was on air, so I had to turn the fans right up. It was totally stable at 1085MHz as long as I kept it below 58C; any higher and it'd get crashy. It's fine at 1100MHz on water.

As has been stated repeatedly, there is almost no increase in thermal dissipation between 1000MHz and 1100MHz HBM2; it just gets progressively more temperature sensitive (unstable) as clocks rise.

There's nothing wrong with your GPU (memory). It's just far too hot to sustain a massive overclock on the HBM2.

Regarding your repeated questions about Overwatch: as several people have answered, there are known driver problems. It's even mentioned in the patch notes. You can't do anything, you just have to wait for a fix.


----------



## Gdourado

Any release dates for AIB cards? Sapphire or power color?


----------



## Trender07

Quote:


> Originally Posted by *pmc25*
> 
> I don't think there is any way anyone will get HBM2 at 1100Mhz stable at those temperatures.
> 
> It's impossible. 1070Mhz would be expecting far too much. That your HBM2 is stable at 1050Mhz at those temperatures is a good result. Mine crashed above 65C at 1050Mhz when it was on air, so had to turn the fans right up. It was totally stable at 1085Mhz, as long as I kept it below 58C. Any higher and it'd get crashy. It's fine at 1100Mhz on water.
> 
> As has been stated repeatedly, there is almost no increase in thermal dissipation between 1000Mhz HBM2 and 1100Mhz HBM2. It just gets progressively more temperature sensitive (unstable) as clocks rise.
> 
> There's nothing wrong with your GPU (memory). It's just far too hot to sustain a massive overclock of the HBM2.
> 
> Regarding your repeated questions about Overwatch ... as several people have answered, it's known there are driver problems. It's even mentioned in the patch notes. You can't do anything, you just have to wait for a fix.


lol u were right

I had cranked the fan speed to MAX (4900 rpm) before, but it still artifacted.
I changed the target temp to 65º AND max fan speed, and no artifacts lol. So a question now: is it safe to leave it at 4900 rpm for a long time? Won't the blower break or something?
I was kinda confident in the temps because I left them on auto and they stayed under 75º; you'd think it should run fine at the auto max temps, but it looks like the hotspot and HBM temps were really high even though the card read 73º


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> lol u were right
> 
> I cranked the fan speed to MAX (4900 rpm) before but it still artifacts.
> I changed the Objective Speed to 65º AND MAX Speed and no artifacts lol. So a question now, is it safe to leave it at 4900 rpm for a long time wont it break or something the blower.
> I was kinda confident on the temps because I left them auto and they were under 75º, one can think it should run good at auto max temps but well looks like Hotspot and HBM temps were really high even tho the card was at 73º


4900rpm would make the card unbearably loud, imo. I'd just find a balance of fan speed and HBM OC and run at something like 1020 until I decided to go with an aftermarket air cooler or put it on a waterblock.


----------



## Trender07

Quote:


> Originally Posted by *kundica*
> 
> 4900rpm would make the card unbearable, imo. I'd just find a balance of fan and hbm oc and run at something like 1020 until I decided to go aftermarket on another air cooler or put it on a waterblock.


Well yeah, I'm keeping it at 1050 because that's stable for me no matter the temps and without a high fan speed. But for benchmarks and testing games for the highest fps, I don't want to break my blower
Quote:


> Originally Posted by *majestynl*
> 
> *Wattman Full of bugs. Possible workarround found*
> 
> After doing 200+ runs and trying tons of settings, i came up with following. Wattman 17.9.1 has to many bugs and if you are not aware of this,
> things could get confusing while your are trying to get the best out of your card. I played a lot with settings and measured a lot of things before i got my conclusion.
> 
> _Note: This effects my Card/Situation, i cant guarantee if this happends to everybody. But at least you can read/test or try it._
> 
> *Radeon Vega 64 // Original air model // Flashed AIO Bios // Mod Powertables for 142% Power 400+*
> 
> 
> As told before in my previous post: Dynamic GPU Freq will overshoot my freq to 1800+ in 3DMark benches and crashes the card. Probably not enough voltage during freq overshoot
> HBM Voltage control is not the real HBM Voltage but definitely effects my GPU Freq. I get much higher P6 and P7 states when set this higher. Please note below, if you get a crash will increasing this setting
> Leaving P6 and P7 on exact same Freq can lead to HBM running max 800mhz
> Too high HBM voltage slider can lead to HBM running lower
> 
> *Eventually i found a way to get the best scores with bench-marking and not crashing the card:*
> 
> Start With a fresh reset of Wattman settings
> 
> *1.* Slide your Power Limit to the MAX.
> *2.* Disable the Dynamic Freq switch on your GPU Freq (so you will see actual mhz of your p6 and p7 states)
> *3.* Leave GPU voltage control on auto.
> *4.* Switch off Dynamic HBM Control and then set your HBM Freq on whatever you want. I can run easily 1100mhz
> After that turn Dynamic switch on again. ( No panic, it will run on your setting)
> *5.* Switch On your HBM voltage and set your voltage ( i did 1050mv), after that switch it back to "Automatic"
> 
> Its very strange put playing with settings and switching back to Auto will keep your edited settings. But this time more stable!
> 
> With above i got my highest score without any crash or overshoot in Frequencies!
> 
> _Best score with Superposition 6910 / FS 18662. And as you read above. I didn't fine tune the voltages yet. This is just running it STABLE with the highest freqs._
> 
> Again: above effects definitely my situation. I just found the best case today, and need to test long term / playing games etc etc..
> But if some of you could test with me, we can find the best way to tune this card. And we could also sent RTG the bugs we found!


Hmm, in my testing it doesn't keep my voltages (P6 and P7)


----------



## kundica

Have any of you installed the latest Windows 10 update? It should be KB4038788. If so, can you try running Superposition? After updating, Superposition just loads to a black screen for me. I can see the overlay with frames, etc., but nothing renders. 3DMark is fine.


----------



## majestynl

Quote:


> Originally Posted by *Trender07*
> 
> Hmm on my testing it doesn't keep my voltages (p6 and p7)


Yeah, as we've seen often before, not all the bugs happen for everybody.

How do you test it? Save, then toggle the switch back, close Wattman and reopen to check? Or just toggle without closing Wattman?


----------



## Newbie2009

Just use watt tool


----------



## jearly410

Quote:


> Originally Posted by *Newbie2009*
> 
> Just use watt tool


I agree. Far easier than wattman.


----------



## Trender07

Quote:


> Originally Posted by *kundica*
> 
> Have any of you updated to the latest Windows 10 update? It should be KB4038788. If so can you try running Superposition? After updating Superposition just loads a black screen for me.I can see the overlay with frames, etc., but nothing renders. 3d Mark is fine.


Yeah KB4038788 installed 14/09. Can run Superposition just fine
Quote:


> Originally Posted by *majestynl*
> 
> Yeah as we saw often before. Not all the bugs are happening at everybody.
> 
> How do you test it? Save then toggle the switch back, close wattman en reopen to check? Or just toggle without closing wattman?


Oh my bad then, I didn't save before toggling back to auto


----------



## pmc25

Quote:


> Originally Posted by *Trender07*
> 
> lol u were right
> 
> I cranked the fan speed to MAX (4900 rpm) before but it still artifacts.
> I changed the target temperature to 65º AND max speed and no artifacts lol. So a question now: is it safe to leave it at 4900 rpm for a long time, won't it break the blower or something?
> I was kinda confident on the temps because I left them auto and they were under 75º; one would think it should run fine at auto max temps, but it looks like the hotspot and HBM temps were really high even though the card was at 73º


Problem is it will be core throttling to keep that temperature. But that's better than huge HBM throttles, and dog-s**t timings.

If you want the most out of your Vega, water is 100% the way to go.

It's not a bad investment, either. If you got it at current prices, with hashing creeping closer to 50MH/s, there's no way Vega prices will be dropping.

Probably won't be a faster, more efficient card until NAVI. Which is earliest late Q3 '18, more likely late Q1 '19.


----------



## Tgrove

Are you all running windows 10? Maybe im not seeing these bugs because im on windows 7 (liquid version). Wattman has been pretty good for me since 17.9.1, i actually like it


----------



## majestynl

Quote:


> Originally Posted by *Newbie2009*
> 
> Just use watt tool


Quote:


> Originally Posted by *jearly410*
> 
> I agree. Far easier than wattman.


I also use watttool for myself









I'm just testing Wattman and responding to the many Wattman users (almost everybody) who are fighting with crashes through Wattman.
Like my post says, Wattman can confuse you while tweaking because of the many bugs.

Quote:


> Originally Posted by *Tgrove*
> 
> Are you all running windows 10? Maybe im not seeing these bugs because im on windows 7 (liquid version)


Yep, testrig is win10


----------



## pmc25

Quote:


> Originally Posted by *Tgrove*
> 
> Are you all running windows 10? Maybe im not seeing these bugs because im on windows 7 (liquid version)


I wouldn't touch W10 with a bargepole ...

There are plenty of bugs in Vega W7 drivers and Radeon Settings ...


----------



## Trender07

It's kind of weird to undervolt this card when just setting lower voltages also lowers the frequency :/
Just changed from 985 mV core / 950 mV HBM to 920 mV core / 900 mV HBM with the same frequencies (1600 MHz) and real speeds changed from 1540 MHz+ to 1470 MHz+ :/
I've read I'm not alone on this, so how am I supposed to get higher (real) core speed lol?
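The behavior described here is consistent with the firmware clamping each power state to a voltage-dependent stable-frequency curve rather than honoring the requested clock. A toy sketch of that idea; the linear curve and its constants below are invented to roughly fit the numbers in this post, not AMD's actual fused tables:

```python
# Toy f-V curve: the highest clock the silicon holds at a given voltage.
# Slope and offset are made-up illustrative numbers fitted to the post above,
# not anything read out of a Vega BIOS.

def max_stable_mhz(v_mv, f0=1470.0, v0=920.0, slope=1.1):
    """Roughly linear stable-frequency limit vs. core voltage (mV)."""
    return f0 + slope * (v_mv - v0)

def sustained_mhz(requested_mhz, v_mv):
    """The card delivers the requested clock only if the voltage allows it."""
    return min(requested_mhz, max_stable_mhz(v_mv))

# Same 1600 MHz request at two voltages, mirroring the numbers in the post:
print(round(sustained_mhz(1600, 985)))  # ~1542, close to the observed 1540+
print(round(sustained_mhz(1600, 920)))  # 1470, the undervolted result
```

In this reading, asking for 1600 MHz at 920 mV just lands you on the highest clock that voltage supports, which is why raising the P7 clock or the offset slider still nudges the real clock a little.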


----------



## baakstaff

Quote:


> Originally Posted by *Trender07*
> 
> Is kind of weird to undervolt this card when just setting lower voltages also lowerd frequency :/
> Just changed from 985Vcore 950Vhbm to 920Vcore 900Vhbm with same frequencys (1600mhz) and real speeds changed from 1540mhz+ to 1470mhz+ :/
> I've read Im not alone on this, so how Im supposed to get higher (real) Core speed lol?


You can still adjust the core frequency even with an undervolt; it's just the GPU trying to maintain stability. For example, setting P7 to 960 mV and leaving the frequency at stock, my Vega 56 averages around 1495 MHz using a paused scene in Unigine Valley, but by either increasing P7 by 40 MHz to 1632 MHz from 1592 MHz (stock), or adjusting the slider to +2.5%, I can hit 1519 MHz, a 24 MHz increase. So the increase isn't linear, which means it isn't just selecting an offset, but both methods of setting the clock work.


----------



## gamervivek

Quote:


> Originally Posted by *Trender07*
> 
> Is kind of weird to undervolt this card when just setting lower voltages also lowerd frequency :/
> Just changed from 985Vcore 950Vhbm to 920Vcore 900Vhbm with same frequencys (1600mhz) and real speeds changed from 1540mhz+ to 1470mhz+ :/
> I've read Im not alone on this, so how Im supposed to get higher (real) Core speed lol?


Good to know that I'm not alone, but the strange thing is that the power consumption also goes up and it throttles more than on the stock configuration. As a result I get lower performance than stock.
Stock boosts are 13xx while the power slider gets them over 14xx.

edit: just realized you've got the 64, so it has higher clocks.


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> Is kind of weird to undervolt this card when just setting lower voltages also lowerd frequency :/
> Just changed from 985Vcore 950Vhbm to 920Vcore 900Vhbm with same frequencys (1600mhz) and real speeds changed from 1540mhz+ to 1470mhz+ :/
> I've read Im not alone on this, so how Im supposed to get higher (real) Core speed lol?


Don't undervolt as much. If the core doesn't have enough voltage to sustain higher clocks, it won't. At the same time you're fighting against the massive heat monster living inside the card, so you have to find a balance. For example, my replacement Air 64 arrived for the used/opened card I received earlier this week. Running a stock clock on the core and HBM at 1100 during Superposition, my core fluctuated all over the place and the sustained clock was super low. Dropping P7 just 50 mV without changing the clocks balanced my core (it nearly flatlined, it was so stable) just under 1600.


----------



## Reikoji

Putting the core frequency to -1.5% gets me better scorez

 Dont see me getting 7000 in this.

http://www.3dmark.com/spy/2382667


----------



## PontiacGTX

Quote:


> Originally Posted by *rancor*
> 
> In all cases the clock still fluctuates under load even down to 1500MHz. If P7 is under P6 clocks seem to lock to P6 even if P6 is lower than P5 but still the same fluctuation under load. Ether it will get fixed later or it's just something we will need to deal with.


I think that has to do with the power-saving features of the architecture; maybe working on the BIOS will allow reducing that clock speed fluctuation


----------



## dagget3450

Finally have internet restored.... *gives Irma middle finger* it could be worse, so i choose to be positive now!

Ill try to get the owners list updated soon, sorry for the delays they literally were from an act of god.


----------



## Tgrove

Glad you're ok, don't forget about me lol


----------



## Soggysilicon

Quote:


> Originally Posted by *Azazil1190*
> 
> Pls let me in to the club.
> A happy owner on vega64 lc.
> Nice to meet you guys!!


Handsome setup!








Quote:


> Originally Posted by *punchmonster*
> 
> This is filled with so many falsehoods and misinformation it's ridiculous.
> I don't even know where to start.
> No one with a waterblock or LC edition is temp throttling on anything that isn't the HBM2. The powerdelivery on most cards stays positively frosty.
> It isn't remotely power efficient. It's not going to beat the Titan XP. Stop.














Quote:


> Originally Posted by *Whatisthisfor*
> 
> Did anybody here with an AIO and a PSU below 1000W encounter stability issues?


I use this... on an 64 Air with LC bios... on this PSU:

Thermaltake Toughpower XT TPX-775M 775W ATX 12V v2.3 / EPS 12V v2.91 SLI Certified CrossFire Certified 80 PLUS BRONZE Certified Modular Active PFC Power Supply GeForce GTX 470 Certified; have not encountered any "power supplied" issues...
Quote:


> Originally Posted by *chris89*
> 
> Um, haven't you seen Jayz 2 Cents throttling after the waterblock, still? If it's cooling and not hitting it's component temperature limits then its NOT going to throttle.
> 
> When it throttles, its hitting the component temperature limits.
> 
> This is QUITE clear.
> 
> When I pickup a VEGA, I'll Prove It To You It Can Be Way Faster. Just Gotta Use Your Mind & Find The Clearly Apparent Flaws.


Jay puts together some very nice looking builds... which are all fairly mediocre when it comes to performance. I think the "Youtube" discussion has come up before... turn off brain, munch








Quote:


> Originally Posted by *Trender07*
> 
> Is my V64 Air just that bad? Everyones here goes 1100 like nothing but when I set more than 1050 Mhz hbm2 it start with black artifacts and ends up with black screen.
> 
> Also in Superposition I get artifacts setting 1100 HBM2, I recorded it:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (You can clearly see it at 0:14)
> 
> (This is underclock and undervolt, but even so I tried with stock volts and its much even worse lol)


Chips are hot; artifacts are the result of excess thermal noise.

https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise
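For reference, the linked Johnson-Nyquist relation is v_rms = sqrt(4·kB·T·R·Δf). A quick back-of-the-envelope check (the 50 Ω trace resistance and 1 GHz bandwidth below are arbitrary illustrative values) shows the noise scales only with the square root of absolute temperature, so a 20 °C rise moves the noise floor by under 3%; marginal memory timings at temperature are arguably the more plausible artifact mechanism:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(resistance_ohm, temp_k, bandwidth_hz):
    """RMS Johnson-Nyquist noise voltage: sqrt(4 * kB * T * R * df)."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

# Illustrative 50-ohm trace over a 1 GHz bandwidth, at 75 C vs. 95 C:
v75 = johnson_noise_vrms(50, 273.15 + 75, 1e9)
v95 = johnson_noise_vrms(50, 273.15 + 95, 1e9)
print(f"{v75*1e6:.1f} uV at 75 C, {v95*1e6:.1f} uV at 95 C "
      f"({(v95/v75 - 1)*100:.1f}% more)")
```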
Quote:


> Originally Posted by *kundica*
> 
> Can you use GPU-Z and let us know what the HBM and hotspot temps are while benchmarking at 1100?
> 
> Not everyone can hit 1100 and many of those who can, are only able to get through benchmarks with it set that high. My AIO card would bench fine at 1100 but could not sustain 1100 while gaming. Cards that can sustain 1050 while gaming I'd say are very good, while 1100 being the creme de la creme.


1105 set it and forget it... benchies... gaming.... hours on end... no issues... I think I saw my HBM hit 41c... once...
Quote:


> Originally Posted by *redshoulder*
> 
> Received Vega 64 today and and getting coil whine even with low framerates (30fps).
> For power I use 2 separate 8 pin cables to psu.
> 
> With previous card (gtx 780) also had coil whine but it died down after a few weeks.So I am m thinking it is psu related (ax860i).
> 
> Anyone in same boat?


Coil whine on Vega is *"Real"* tm. Not PSU related... inductors in the circuits are oscillating such that there is a physical vibration of the coil, a whine at a frequency within the audible spectrum, and the reference board is not particularly well damped to account for it... it's VERY noticeable on EK-blocked cards, which tend to "waveguide" the effect.
Quote:


> Originally Posted by *Trender07*
> 
> lol u were right
> 
> I cranked the fan speed to MAX (4900 rpm) before but it still artifacts.
> I changed the target temperature to 65º AND max speed and no artifacts lol. So a question now: is it safe to leave it at 4900 rpm for a long time, won't it break the blower or something?
> I was kinda confident on the temps because I left them auto and they were under 75º; one would think it should run fine at auto max temps, but it looks like the hotspot and HBM temps were really high even though the card was at 73º


Your patience may break before that Delta blower breaks! Deltas are used in many manufacturing environments because of their reliability at speed. Fan/air speed has diminishing returns; increasing the fin density on the heatsink is more effective due to surface area. Or do both; but then there is turbulence, which is noisy! The Morpheus HS goes in that direction... then there is changing the transfer medium and doing the heat exchange in another location... such as in the LC or loop setups.
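The diminishing-returns point can be sketched with the basic convection relation Q = h·A·ΔT, where the transfer coefficient h scales roughly with air speed to the 0.8 power in turbulent forced flow. The 0.8 exponent is a textbook correlation, not a measurement of this cooler:

```python
def relative_heat_transfer(air_speed_ratio, area_ratio, exponent=0.8):
    """Relative Q for Q = h*A*dT, with h ~ v^0.8 (turbulent forced convection).
    Illustrative rule of thumb only; the exponent is a textbook correlation."""
    return (air_speed_ratio ** exponent) * area_ratio

print(relative_heat_transfer(2.0, 1.0))  # double the fan speed: ~1.74x
print(relative_heat_transfer(1.0, 2.0))  # double the fin area:   2.0x
```

So doubling airflow buys less than doubling surface area, before even counting the noise from the extra turbulence.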
Quote:


> Originally Posted by *kundica*
> 
> Have any of you updated to the latest Windows 10 update? It should be KB4038788. If so, can you try running Superposition? After updating, Superposition just loads a black screen for me. I can see the overlay with frames, etc., but nothing renders. 3DMark is fine.


Windows 10 C...



I get the black screen in SP... all the time... requires a reboot... sometimes 2.... BRILLIANT!
Quote:


> Originally Posted by *Tgrove*
> 
> Are you all running windows 10? Maybe im not seeing these bugs because im on windows 7 (liquid version). Wattman has been pretty good for me since 17.9.1, i actually like it


DX12... no other options...


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> I went and got it to see if I lose benchmark performance on my machine.
> 
> .... Purely for scientific reasons, of course.


Of course!









http://steamcommunity.com/sharedfiles/filedetails/?id=947302989

If I can ever get some time off work I wouldn't mind trying to figure out a way to link Cortana into an MMD... you know... for science...









http://steamcommunity.com/sharedfiles/filedetails/?id=849263397


----------



## Trender07

Quote:


> Originally Posted by *kundica*
> 
> Don't undervolt as much. If the core doesn't have enough voltage to sustain higher clocks it won't. At the same time you're fighting against the massive heat monster living inside of the card so you have to find a balance. For example, my replacement Air 64 arrived for the used/opened card I received earlier this week. Running stock clock on the core and HBM at 1100 during Superposition my core fluctuated all over the place and the sustained was super low. Dropping p7 just 50mv without changing the clocks balanced my core(nearly flatline it was so stable) just under 1600.


lmao indeed it's a heat monster, so I went the hard undervolt way because lol mine can't run the stock 1637 MHz speed at 1000 mV but can run 1602 MHz at 1000 mV -.- I need about 1025-1035 mV for stock speed and it's kinda annoying to increase volts just for 35 MHz more, but I guess it's just the auto downclock without enough voltage.
Geez, this weird card. It was so easy to OC my old RX 480, she was a champ, and one never knows with this card running at the speed she wants; guess this was the "it got a brain" ads


----------



## milkbreak

Question about the Morpheus II on the Vega: the included heatsinks aren't actually fully suitable for the VRMs and whatnot around the die itself, right? Has anyone actually cut up the stock cooler and used part of it as a heatsink for the non-die parts? Any photos or suggestions?


----------



## punchmonster

Buildzoid did. But regardless, you can still cover all the VRMs with the supplied heatsinks
Quote:


> Originally Posted by *milkbreak*
> 
> Question about the Morpheus II on the Vega: the included heatsinks aren't actually fully suitable for for the VRMs and whatnot around the die itself, right? Has anyone actually cut the blower and used that part as a heatsink? for the non-die parts? Any photos or suggestions?


----------



## Paul17041993

Spoiler: NSFW!




is naked











@dagget3450 you can add me as 64 waterblocked


----------



## milkbreak

Quote:


> Originally Posted by *punchmonster*
> 
> Buildzoid did. But regardless, you can cover all VRMs with the supplied heatsinks still


Does he document this anywhere? I can't find it. Also, can you cover all necessary components with the supplied heatsinks? Safely? Every Morpheus II install I've seen had to use additional heatsinks.


----------



## Whatisthisfor

Quote:


> Originally Posted by *Azazil1190*
> 
> Pls let me in to the club.
> A happy owner on vega64 lc.
> Nice to meet you guys!!


Looks very nice. I have the LC'ed version too and now i will rotate the radiator by 180 degrees to have it like your setup.

I have a 750W Titanium PSU from Seasonic, which one do you have?


----------



## Azazil1190

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Looks very nice. I have the LC'ed version too and now i will rotate the radiator by 180 degrees to have it like your setup.
> 
> I have a 750W Titanium PSU from Seasonic, which one do you have?


Thanks dude!
Appreciate!
I have the ax1200i corsair.
I think that a good 1000w psu is enough for vega lc for a heavy oc too.
Next step is to try to put one more 3,000 rpm fan on the radiator.
Like push/pull or push.
Need to do some tests for better temps

Btw your psu is enough for vega i think.
Its a titanium







and very good brand


----------



## gamervivek

Finally got the UV/OC to work properly: best FS score with ~1.5 GHz core clock and 900 MHz HBM, about 2-2.5k more than at stock settings,



Superposition doesn't crash, but while it runs the 1.5 GHz core clock at the start, it then gets pinned at a fixed power draw and the clocks drop to 1.2 GHz, which is even lower than stock. I was testing it earlier; stopped when I saw others get a similar problem.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Finally got the UV/OC to work properly, best FS score with ~1.5Ghz core clock and 900Mhz HBM, about 2-2.5k more than at stock settings,
> 
> 
> 
> Superposition doesn't crash but while it runs the 1.5Ghz core clock at the start, it then gets fixed on a power draw and the clocks drop to 1.2Ghz which is even lower than stock. I was testing on it earlier, stopped when I saw others get similar problem.


Did you increase power limit?


----------



## sega4ever

Is it possible to change the fan on the liquid cooled Vega cards for something that's silent without voiding the warranty?


----------



## Reikoji

Quote:


> Originally Posted by *sega4ever*
> 
> Is it possible to change the fan on the liquid cooled vega cards with something thats silent while not voiding the warranty?


you can hear that fan? i keep mine @3000 rpm and dont hear a thing.


----------



## Whatisthisfor

Quote:


> Originally Posted by *Azazil1190*
> 
> Thanks dude!
> Appreciate!
> I have the ax1200i corsair.
> I think that a good 1000w psu is enough for vega lc for a heavy oc too.
> Next step is to try to put one more 3.000 rpm fan on radiator.
> Like push pull or push
> Need to do some test for better temps
> 
> Btw your psu is enough for vega i think.
> Its a titanium
> 
> 
> 
> 
> 
> 
> 
> and very good brand


You're welcome! Besides, push/pull is a good idea. The fan sounds annoying above 900 rpm, so push/pull may be a good solution for keeping the beast more silent. I would be interested in how you'll add the second fan. Do you plan to tap the PWM wire and share the signal between the two fans? I hope 750W will be enough for Vega, because I don't plan to overclock it much.


----------



## gamervivek

Quote:


> Originally Posted by *PontiacGTX*
> 
> Did you increase power limit?


Yes, +50%, using HWiNFO to monitor it; power goes up to 200-210 W versus 160-170 W at stock. In Superposition it gets stuck at around 180 W; come to think of it, the same happens with the FS demo at the end, so it might just be the cooling not being good enough for higher power output. But I'm inclined to think there is something else going on, since Superposition does better if you keep the voltages/clocks at auto and just increase the power limit.


----------



## Rootax

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Your welcome! Besides, Push/Pull is a good idea. The fan sounds annoying above 900rpm, so push/pull may be a good solution for keeping the beast more silent. I would be interested how youll add the second fan. Do you pland to open the case for pwm and share the signal with the two fans? I hope 750W may be enough for vega, because i dont plan to overclock it much.


I tried push/pull on my Fury X (before putting it in my loop with an XSPC block), and it did not work so well... I think the limiting factor is mainly the radiator dimensions, and adding another fan at the same speed as the main fan won't do much on Vega either. But hey, if you test it and you gain some degrees, good for you


----------



## milkbreak

I found this Morpheus II heatsink layout for the Vega in this Youtube video: 




Can anyone spot any problems with this layout? I'm probably going to emulate what this guy did for my install unless someone has objections/concerns.


----------



## Azazil1190

This card is really nice






















new score hbcc off im gonna try with on

https://www.3dmark.com/spy/2390738


----------



## Roboyto

Quote:


> Originally Posted by *milkbreak*
> 
> Question about the Morpheus II on the Vega: the included heatsinks aren't actually fully suitable for for the VRMs and whatnot around the die itself, right? Has anyone actually cut the blower and used that part as a heatsink? for the non-die parts? Any photos or suggestions?


Whatever heatsinks you can get on them should suffice as long as there is a bit of airflow over them.

IIRC GamersNexus had *no heatsinks* on any of that stuff when they were pushing their Vega 56 to 400W of power draw...they just had several fans blowing over the areas of concern.


----------



## Soggysilicon

Dug into tuning a little bit more last night. The settings to achieve this score are unsuitable for other applications but I learned a good bit.

The black screen issue on SP (for me) was remedied by executing a run in 1080, exiting, and then running the 4k.

I suspect "for me" the issue is related to the DP dropping its connection 'link' to the monitor after the AMD driver is restarted. I am beginning to wonder if the DP is latching / assigned to a bus register and not refreshing it. Hence the multi-reboots to clear it. Likely something to do with how SP handles 4k when no native 4k exists for the display. Vega has had some very "meh" DP performance out of the box... so this isn't anything new, just another flavor of the same ole' problem.

I need to modify the registry to go any further; need more power... voltage tweaks are pointless once the wattage is topped out, though there are some interesting results with slamming the voltage low in some cases.

There "seems" -=speculation mode=- to be a cascade feedback loop which tunes the card frequencies and voltages... there are some instances where the Nyquist stability criterion doesn't seem to have been strictly obeyed... hence the cases where the core freq. will spike to 1780+, indicating an asymptotic response / spur / oscillation. There also appear to be some multiples of frequencies which simply "perform better"; I assume there is a penalty for fetches that either wait or are discarded... tighter synchronization tends to give very repeatable results... bench score mongering tosses a lot of that in the trash can, especially once one hits the memory wall... which limits the end product to half the cascade.
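A toy illustration of the kind of feedback instability being speculated about here: a proportional governor chasing a power target converges smoothly when its gain is modest, but overshoots and rings around the target when the gain is too high. Every constant below is invented for illustration; none of this is AMD's actual control loop:

```python
def power_w(f_mhz, v=1.0, k=1.333e-7):
    """Toy switching-power model: P ~ k * V^2 * f. Constants are invented."""
    return k * v * v * f_mhz * 1e6

def simulate(gain, target_w=220.0, f_mhz=1600.0, steps=12):
    """Proportional governor: nudge the clock by gain * (target - power)."""
    trace = []
    for _ in range(steps):
        f_mhz += gain * (target_w - power_w(f_mhz))
        trace.append(round(f_mhz, 1))
    return trace

calm = simulate(gain=2.0)    # converges smoothly toward ~1650 MHz
ringy = simulate(gain=14.0)  # same target, but overshoots and rings
print(calm[:4])
print(ringy[:4])
```

With the high gain the clock alternately spikes past the target and dives below it, a decaying oscillation rather than a clean settle, which is the kind of transient that could show up as momentary 1780+ MHz readings.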

Anywho... that's my latest score.


----------



## pmc25

Does anyone know of a way to enable Virtual Super Resolution without Radeon Settings?

Since Radeon Settings is 1) fundamentally unstable with Vega and offers no useful functionality at all aside from VSR (as game settings, both global and individual, have no effect), and 2) when it crashes, the card is stuck in a 30-50% performance mode that cannot be cleared until you reboot, it drives me round the bend.

MSAA and SSAA performance are disastrous with Vega, but bumping up the resolution barely touches FPS for me, if no non-shader based AA is enabled. So I need it for decent performance, and doing it this way renders AA redundant.

Not installing Radeon Settings would bring much needed stability.

If you can't enable VSR without Radeon Settings, and the next update doesn't bring some kind of stability to Radeon Settings, or at least fix its performance crippling effect when it does crash (the latter never happened prior to Vega, even when it crashed ALL THE TIME when it was first introduced), then I'll be selling my Vega card. It's just a total waste of time at the moment.


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Dug into tuning a little bit more last night. The settings to achieve this score are unsuitable for other applications but I learned a good bit.
> 
> The black screen issue on SP (for me) was remedied by executing a run in 1080, exiting, and then running the 4k.
> 
> I suspect "for me" the issue is related to the DP dropping connection 'link' to the monitor after the AMD driver is restarted. I am beginning to wonder if the DP is latching / assigned to a bus register and not refreshing it. Hence the multi-reboots to clear it. Likely something to do with how SP handles 4k when no native 4k exist for the display. Vega has had some very "meh" DP performance out of the box... so this isn't anything new, just another flavor of the same ole' problem.
> 
> I need to modify the registry to go any further, need more power... voltage tweaks are pointless once the wattage is topped out, however there are some interesting results with slamming the voltage low in some cases.
> 
> There "seems" -=speculation mode=- to be a cascade feedback loop which tunes the card frequencies and voltages... there are some instances where the nyquist stability criterion doesn't seem to have been strictly obeyed... hence the cases where the core freq. will spike to 1780+, indicating an asymptotic response / spur / oscillation. There also appears to be some multiples of frequencies which simply "perform better", I assume there is a penalty for fetches that either wait or are discarded... tighter synchronization tends to give very repeatable results... bench score mongering tosses a lot of that in the trash can, especially once one hits the memory wall... which limits the end product to half the cascade.
> 
> Anywho... that's my latest score.


These scores are hax! So you're editing reg to make the card draw more power? Also cooled very well hrm.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *Soggysilicon*
> 
> I am beginning to wonder if its the overshoot and undershoot on the boost on core which is the root of the instability... there seems to be a close correlation to the memory timing and frequency of the core... pipe-lining, or the failure there'of which is where crashes are occurring... super position on 1080 extreme boost up very high indeed (almost as if the card is trying to hit a target frame rate), much higher than what I see in 4k; yet in 4k there are specific points in the test which are sure fire crashes which seem related to a throttling event.


I would agree with this; I think the lack of pipeline optimization is causing spikes on the core frequency and that's causing some of these crash events. I'm going to hold off on an RMA because I don't think my crashes at stock settings are related to a defective piece of silicon, at least not yet!

With that said I managed 6924 on SP 4k Optimized!

Ryzen 7 1700 @ 3914MHz
2x8GB 16GB G.Skill Trident Z @ 3432MHz
ASRock Fatal1ty Gaming Professional X370 (V3.0 BIOS)
Windows 10 Professional 15063.608

Sapphire Vega 64 Liquid
GPU Core: 1722Mhz/1200mV
Mem: 1100MHz/950mV
Power Limit:+50%
HBCC Off
Crimson Relive 17.9.1



My Firestrike 1.1 score of 19542 was good enough for 3rd for Ryzen 7 1700 / Vega 64 machines

http://www.3dmark.com/fs/13634929

Considering the same machine with 1080ti managed 20888 I am pretty impressed with early performance of the Vega 64 LC

https://www.3dmark.com/fs/12885185

Of course my i7 2600k @ 5.0GHz / Fury X Crossfire machine can still muster a 20055 and is in its own class!









https://www.3dmark.com/fs/11698334


----------



## NI6HTHAWK

Quote:


> Originally Posted by *Azazil1190*
> 
> Havent had too much free time to test the card as i want to.
> But i made one quick run of f.s
> 
> Stock voltage
> Stock core clock
> +50 power tar.
> And finally 1050 for hbm
> 
> And im happy with the results
> I can pass my previous score with my last strix oc gtx1080 2100 core.
> Sorry for my English
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.3dmark.com/3dm/22135569?


That 8903 combined test score is impressive, I noticed I got a nice boost in the Firestrike combined test score with Vega 64 LC vs my 1080ti FE. I have no idea why that is, I'm guessing maybe the compute advantage?


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> These scores are hax! So you're editing reg to make the card draw more power? Also cooled very well hrm.


It's a "Legit" ^tm. score! SP is really really sensitive to memory, at least with Vega. HBCC must be ON, task manager and shutdown every life support process possible, OC' your system ram, crank up the HBM to the raggedy' ass edge, and undervolt it to starvation... cool the room off (may need a warm binky to keep yourself from freezing), under clock your core and crank up the volts... max the power target and let it rip.

The HBM in starvation mode won't cut mustard in Heaven, TS, or FS though... but some game benches may squeak by.

I suspect this works because of the way the cascade feedback seems to function. It exploits the fact that Vega doesn't boost high enough to cause a core frequency crash (4k textures incur a latency to read / write, so the core can pseudo-idle), while forward-feeding the ram calls in close sync with the core. I strongly suspect that the frequency tables which the card clocks to are within the set of solutions which best serve the "next gen. compute engine/unit" Vega is built around. With the ram undervolted, again I suspect it is inhibiting the "overshoot" which could cause penalties from rise and fall times. Going on the assumption this thing works like a Schmitt trigger.

There are some limits to this though, due to the bios on the card, the LC bios is more generous but to go further some modding would need to occur. The next thing will be to determine how much of this is driver side and how much is asic / ic, and the slew rates... There will be a finite limit somewhere on the card... this score was pretty much the end of the road for me.

From a holistic standpoint "everything" needs to be tuned somewhat simultaneously, or you eat timing penalties while either the ram or the core twiddles its digital thumbs.









Ideally the core is "ready - idle" when it gets a fetch from the ram; if the ram has to wait... penalty city.
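That sync penalty can be sketched with a toy model: if fetches complete on memory-clock edges and the core can only consume them on its own next edge, a clean clock ratio wastes nothing, while an awkward ratio throws away roughly half a core cycle per fetch. This is an illustrative model only, not how Vega's memory controller actually arbitrates:

```python
from math import ceil

def stalled_core_cycles(core_mhz, mem_mhz, fetches=1000):
    """Toy model: each fetch completes on a memory-clock edge; the core can
    only consume it on its own next edge, wasting the fractional cycle."""
    ratio = core_mhz / mem_mhz  # core cycles per memory cycle
    wasted = 0.0
    for n in range(1, fetches + 1):
        arrive = n * ratio                # arrival time, in core-cycle units
        wasted += ceil(arrive) - arrive   # idle until the next core edge
    return wasted

print(stalled_core_cycles(1600, 800))  # clean 2:1 ratio: edges align, 0 waste
print(stalled_core_cycles(1600, 945))  # awkward ratio: ~half a cycle per fetch
```

Which would be one way the "multiples of frequencies perform better" observation could arise.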








Quote:


> Originally Posted by *NI6HTHAWK*
> 
> I would agree with this, i think the lack of pipeline optimization is causing spikes on the core frequency and that's causing some of these crash events. I'm going to wait on an RMA because i don't think my crashes at stock settings related to a defective piece of silicon, at least not yet!
> 
> With that said I managed 6924 on SP 4k Optimized!
> 
> Ryzen 7 1700 @ 3914MHz
> 2x8GB 16GB G.Skill Trident Z @ 3432MHz
> ASRock Fatal1ty Gaming Professional X370 (V3.0 BIOS)
> Windows 10 Professional 15063.608
> 
> Sapphire Vega 64 Liquid
> GPU Core: 1722Mhz/1200mV
> Mem: 1100MHz/950mV
> Power Limit:+50%
> HBCC Off
> Crimson Relive 17.9.1
> 
> 
> 
> My Firestrike 1.1 score of 19542 was good enough for 3rd for Ryzen 7 1700 / Vega 64 machines
> 
> http://www.3dmark.com/fs/13634929
> 
> Considering the same machine with 1080ti managed 20888 I am pretty impressed with early performance of the Vega 64 LC
> 
> https://www.3dmark.com/fs/12885185
> 
> Of course my i7 2600k @ 5.0GHz / Fury X Crossfire machine still can muster a 20055 and is its own class!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.3dmark.com/fs/11698334


With HBCC ON, I am sure you can crack into the 7ks on SP. HBCC is hot garbage in everything else. Unless you have a 4k monitor; then it could be useful more often than not.

The LC bios is clocked HOT out of the box. With the power target cranked up it's a crapshoot whether you will stay stable with the boost clocks set to default... that is, the card is set to boost beyond what it may "actually" be able to hold for more than a couple of ticks of the frequency counter.

It should be "binned" well enough to function plug' n' chug; but I wouldn't put any stock in the card being able to go further, not without tweaks and possible mods. Reminds me A LOT of Ryzen. It sorta is what it is unless you mount that rad on an A/C vent.









Now... going forward with software optimized to work within the framework of Vega, I think it will do well and offer a very smooth experience. Vega's HBM and core appear to be very interdependent; if one is sorta meh, the whole house of cards crumbles.


----------



## kundica

My loop is half built; EK sent the wrong block for my CPU. I was impatient, so I decided to get the GPU up and running for now. Flashed the AIO bios. At the stock core 1750 and HBM at 1100 the card wasn't stable, but it's fine once I dial the core down a bit. I haven't had much time to mess with it, so I'm sure I can find a sweet spot. With P7 at 1702 with stock voltages and +50% power limit, it scores almost as high as my AIO card did. It's only on the 240 rad right now, but it idles at 22-23 and hits about 41-42 under load. Looking forward to finishing the loop and dialing the card in.


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> My loop is half built, EK sent the wrong block for my CPU. I was impatient so I decided to get the GPU up and running for now. Flashed the AIO bios. At the stock core 1750 and HBM at 1100 the card wasn't stable, but it's fine once I dial the core down a bit.I haven't had much time to mess with it much so I'm sure I can find a sweet spot. With p7 at 1702 with stock voltages and +50% power limit, it scores almost as high as my AIO card did. It's only on the 240 rad right now but it idles at 22-23 and hits about 41-42 under load. Looking forward to finishing the loop and dialing the card in.


yeah that 1750 is a placebo number, it won't hit it without tweaking the power target, and it can't hold it even if you do; I had to downclock my 64 as well on the LC bios, and I benefited some by undervolting the ram controller. If you got decent silicon lottery you should be able to shed 50-100mV with minimal issues. I am bench stable at 1742 but I did have to goose up the volts to keep it stable... of course it never actually hits that number, not that I have ever sampled at any rate. Coming up on P6 helps keep the card averaging in the low 1700s, which helps the minimums. Have fun!


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> yeah that 1750 is a placebo number, it won't hit it without tweaking the power target, and it cant hold it even if you do; had to down clock my 64 as well on the LC bios, I benefited some by under volting the ram controller. If you got decent silicon lottery you should be able to shed 50-100 mv with minimal issues. I am bench stable at 1742 but i did have to goose up the volts to keep it stable... of course it never actually hits that number; not that I have ever sampled at any rate. Coming up on P6 helps keep the card averaging in the low 1700s which helps the minimums. Have fun!


Good to know, thanks!

What are your current voltages, including the phony HBM voltage?


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> Good to know, thanks!
> 
> What are your current voltages, include the phony HBM voltage?


Not sure how phony that HBM voltage is; it has a real effect on the controller peak voltage in HWiNFO64. I suspect it allows for a tighter tune of the TTL-L, with the added benefit of shedding a little heat and using a little less power if you're custom block'd n' looped.

After much fiddling about:

1662/1742 P6/P7
1166/1222 P6v/P7v
1105/887 HBM freq. / mv
+50%

LC bios HBCC OFF / 3440x1440 native

Your mileage "of course" may vary, please share if you find some better settings which are benchie/game stable!









edit: TTL to TTL-L so not to add to confusion... k thx.
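For a rough sense of why shaving voltage at a near-stock clock pays off like this, dynamic power scales roughly with frequency times voltage squared. A quick sketch (Python; the 1750MHz @ 1250mV "stock LC P7" figures are assumed for illustration only, not measured values):

```python
def relative_dynamic_power(f_new, v_new, f_old, v_old):
    """Dynamic power scales roughly with frequency * voltage^2,
    so the ratio of new to old power is (f_new/f_old) * (v_new/v_old)^2."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Assumed stock LC P7 of 1750 MHz @ 1250 mV vs the tuned 1742 MHz @ 1222 mV
ratio = relative_dynamic_power(1742, 1222, 1750, 1250)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power at the tuned P7")
```

The takeaway is that a barely noticeable clock drop plus a modest undervolt buys back a real chunk of the power budget, which is why the tuned P7 holds its clock better than stock.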


----------



## Reikoji

SOMEHOW !..... Set P6 to 1612/1050mv and P7 to 1712/1130mv... I'm shocked my comp didn't crash.

It's nice and all, but I didn't win any more points in Time Spy. In fact I got a worse score than with auto voltage and -1.5% core. These settings will probably only do me any good @ 4K in the Superposition game.

Vega, I do not understand you!


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> Not sure how phony that HBM is, it has a real effect in HWinfo64 on the controller peak voltage. I suspect it allows for a tighter tune of the TTL-L, added benefit of shedding a little heat, using a little less power if your custom block'd n' looped.
> 
> After much fiddling about:
> 
> 1662/1742 P6/P7
> 1166/1222 P6v/P7v
> 1105/887 HBM freq. / mv
> +50%
> 
> LC bios HBCC OFF / 3440x1440 native
> 
> Your mileage "of course" may vary, please share if you find some better settings which are benchie/game stable!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> edit: TTL to TTL-L so not to add to confusion... k thx.


Thanks for the info.

By phony, I meant that it doesn't actually change the memory voltage. It's good for adjusting the voltage floor on the core if you need it. I'll have to mess around with it some more tomorrow.


----------



## asdkj1740

that is the reason why guru3d found the top mosfet temp at >100C on the Strix 64.
i can replicate the same result under FurMark 1080p 0xAA.
it is a design fault asus has made; asus should have redesigned the whole heatsink rather than basing it on the current 1080 Ti Strix heatsink. the Strix 64 heatsink is way shorter than the Strix 64 PCB.
the power limit is not even set to 50%, just staying at 0%.
this is on an open test bench while the room temp is about 22C.


----------



## yeayea911

Have any Vega FE owners gotten any of the gaming drivers to work? I can only use the 17.8.2 pro drivers.


----------



## Paul17041993

Quote:


> Originally Posted by *sega4ever*
> 
> Is it possible to change the fan on the liquid cooled vega cards with something thats silent while not voiding the warranty?


Does the fan have a detachable plug? If not, then no, you can't change it out. However, you could add an extra fan to it for push-pull; that should improve the noise a fraction.

Has anyone else noticed though that the stock cooler doesn't make very good contact with the back line of VRMs...?


other VRM pads for comparison;


----------



## Azazil1190

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> That 8903 combined test score is impressive, I noticed I got a nice boost in the Firestrike combined test score with Vega 64 LC vs my 1080ti FE. I have no idea why that is, I'm guessing maybe the compute advantage?


Yeap dude.
I think so too


----------



## Newbie2009

I was playing a game called Layers of Fear last night and noticed the core clock going about 40MHz higher than set. Normally there is droop in benchmarks etc. It clocked up to 1760MHz in game, lol. Didn't crash.

I checked the power consumption while this was happening and it was very low.

The game itself is a bad port. The normal boost would have been around 1700MHz under heavy load. Bizarre.


----------



## biscuittea

I just installed the Raijintek Morpheus II cooler on my Vega 56 and I'm hitting 105C on 'GPU hotspot' in GPU-Z almost immediately when I start up a benchmark. The core and HBM temperatures are quite low though.

I've stuck heatsinks on the components that were originally covered with thermal pads on the base plate, using the small heatsinks that came with the Morpheus II.

The smaller heatsinks aren't installed perfectly because this cooler wasn't made for Vega, but I've still covered everything up; surely that would at least stop it from rocketing to 105C as soon as I start a benchmark.

Any ideas where I went wrong?

edit: It would also help if anyone knew what the 'GPU hotspot' actually measures in GPU-Z.


----------



## Paul17041993

Quote:


> Originally Posted by *Newbie2009*
> 
> I was playing a game called Layers of Fear last night and noticed the core clock going about 40mhz higher than set. Normally there is a droop in benchmarks etc. It clocked up to 1760mhz lol in game. Didn't crash.
> 
> I checked the power consumption when this was happening and was very low.
> 
> The game itself is a bad port. The normal boost would have been around 1700mhz under heavy load. Bizarre.


Vega has a lot of ryzen's features, including XFR style boosting, however it may also just be an error in the monitoring software...?

Quote:


> Originally Posted by *biscuittea*
> 
> 'GPU hotspot' in GPU-Z.


It's probably the BGA backside temperature; that area can get super hot when it's passing almost 500A to the package...


----------



## pmc25

Quote:


> Originally Posted by *Paul17041993*
> 
> Vega has a lot of ryzen's features, including XFR style boosting, however it may also just be an error in the monitoring software...?
> It's probably the BGA backside temperature, that area can get super hot when it's passing almost 500A to the package...


No, it's genuine.

The problem is that it will boost beyond what the driver finds stable in some load scenarios that don't induce too much heat.

BF1 for me, for example. It'll boost to well over 1800MHz, then the driver crashes every time (meaning I have to restart the computer, because even after restarting Radeon Settings the driver performance is gimped).

Doesn't matter what I enter as the highest P7 state... it'll generally go 75-100MHz over it in BF1, because the game is 'efficient' and doesn't cause heavy thermal load if you don't apply AA and heavy post processing.

I can run it at a mix of Ultra (textures) and High, with AA off, motion blur off and post processing on medium, at a 4K VSR, at ~115FPS with minimums of about 90FPS.

If I whack AA and post processing up, FPS falls off a cliff, the clock falls to 30-50MHz below the P7 state, it doesn't crash, and it gets way hotter. Image quality is obviously way lower since I have to use 1920x1080 or 2560x1440, and AA is nowhere near enough to compensate for the lack of 4K, both in general resolution and in aliasing.

This is why I will be selling my card after the next update, if Radeon Settings isn't either made stable, or VSR is made usable without Radeon Settings installed. Just a total waste of time at the moment. I've never had to restart a computer so much.


----------



## AmateurExpert

Quote:


> Originally Posted by *Paul17041993*
> 
> Does the fan have a detachable plug? if not then no you cant change it out. However you could add an extra fan to it for push-pull, that should improve the noise a fraction.


The AIO fan is connected to the graphics card's fan header via a cable that's run down one of the sleeved hoses (see the Vega FE teardown on Gamers Nexus - RX Vega has the same layout). However, the usual fan connector used on graphics cards isn't the 2.54mm-pitch 4-pin standard used on motherboards (and thus on most aftermarket PC fans) but the more compact JST 2.0mm 4-pin standard. It's non-trivial to swap the existing fan, but it's possible for those who are OK with sourcing or making their own custom fan cables.
Quote:


> Originally Posted by *Paul17041993*
> 
> Has anyone else noticed though that the stock cooler doesn't make very good contact with the back line of VRMs...?


Looks fine on mine:


----------



## biscuittea

Just an update from my previous post:

I basically went back and re-applied the thermal paste as well as changing the configuration of the smaller heatsinks. Unigine has been running for a couple of minutes now and the hotspot temperature looks fine now. I'll be running it longer to check the stability.


----------



## Evil Penguin

Quote:


> Originally Posted by *yeayea911*
> 
> Has any Vega FE owners gotten any of the gaming drivers to work? I can only use the 17.8.2 pro drivers.


Gotta wait for a new driver.

It's been about a month now since their last release and hopefully with the upcoming driver they re-enable Gaming mode.


----------



## asdkj1740

how to apply the softpowerplaytable?


----------



## majestynl

Quote:


> Originally Posted by *asdkj1740*
> 
> how to apply the softpowerplaytable?


http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003
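For reference, the linked method boils down to writing the modified table as a binary registry value that the driver picks up. A minimal, Windows-only Python sketch (the hex payload here is a placeholder - paste the real table from the linked post - and the `0000` adapter index is an assumption that can differ on multi-GPU systems):

```python
import sys

# Placeholder hex dump of a modified soft PowerPlay table.
# This is NOT a working table; use the bytes from the linked post.
TABLE_HEX = "7c02 0800"

def parse_table(hex_dump: str) -> bytes:
    """Strip all whitespace from a hex dump and return the raw bytes."""
    return bytes.fromhex("".join(hex_dump.split()))

if sys.platform == "win32":
    import winreg

    # Commonly cited location of the soft PowerPlay table value;
    # "0000" is the display adapter index (assumed here) and may differ.
    KEY_PATH = (r"SYSTEM\CurrentControlSet\Control\Class"
                r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PP_PhmSoftPowerPlayTable", 0,
                          winreg.REG_BINARY, parse_table(TABLE_HEX))
    print("table written; restart the driver or reboot to apply")
```

Needs an elevated prompt, and the driver has to be restarted (or the system rebooted) before the new limits show up in Wattman.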


----------



## asdkj1740

Quote:


> Originally Posted by *majestynl*
> 
> http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003


thx, I can now push the power limit to 142%.
however, when running Fire Strike Extreme my Vega 64 still seems to be throttling and the core clock is way below my settings, but when running the GPU-Z built-in test I can push the core clock to what I set, like 1680MHz.


----------



## Whatisthisfor

Quote:


> Originally Posted by *AmateurExpert*
> 
> The AIO fan is connected to the graphics card's fan header via a cable that's run down one of the sleeved hoses (see Vega FE teardown on Gamers Nexus - RX Vega has the same layout). However, the usual fan connectors used on graphics card isn't the 2.54mm pitch 4-pin standard used on motherboards (thus most aftermarket PC fans) but the more compact JST 2.0mm 4-pin standard. It's non-trivial to swap the existing fan, but it's possible for those who are OK with sourcing or making their own custom fan cables.


Interesting. How is the fan attached to the cable? Ofc. one could open the case and simply add a new cable with an adapter, but wouldn't it be better to just detach the fan and add a female standard 4-pin fan connector to the existing cable? What do you think is the better solution?


----------



## milkbreak

Quote:


> Originally Posted by *biscuittea*
> 
> Just an update from my previous post:
> 
> I basically went back and re-applied the thermal paste as well as changing the configuration of the smaller heatsinks. Unigine has been running for a couple of minutes now and the hotspot temperature looks fine now. I'll be running it longer to check the stability.


Did you take a picture of your heatsink layout by any chance? I'd like to see it.


----------



## PontiacGTX

Quote:


> Originally Posted by *Paul17041993*
> 
> Vega has a lot of ryzen's features, including XFR style boosting, however it may also just be an error in the monitoring software...?
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> It's probably the BGA backside temperature, that area can get super hot when it's passing almost 500A to the package...


that kind of boost exists in Polaris...


----------



## asdkj1740

can't really get the softpowerplaytable to work properly.
in wattman i can now push the limit to 142%, but all my OC settings except the fan rpm are absent in benchmarks.
i reset wattman first, then clicked the voltage slider to the manual state and clicked apply, then typed in all the vcore and HBM settings, the temp target, the power limit and the fan rpm, and lastly pressed apply again. but i still get stock clocks and power consumption...
need some help


----------



## Newbie2009

First timespy run ever. Vega 64. Not sure how it compares.



https://www.3dmark.com/3dm/22189385?


----------



## biscuittea

Quote:


> Originally Posted by *milkbreak*
> 
> Did you take a picture of your heatsink layout by any chance? I'd like to see it.





Spoiler: Warning: Spoiler!








Here's the before, where it hits 105C on the hotspot, and then the after. The hotspot goes to 90C max on the new setup, which is roughly the same as the reference cooler for me, but the core and HBM temps max out close to 65C, whereas before they would easily hit 85C under load.

Just a note - this is the first time I've done anything like this so there are probably areas that I could've done better. Also, on the after picture with the 2 mosfets in the middle, the heatsinks are at an angle covering the components above them.


----------



## majestynl

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Not sure how phony that HBM is, it has a real effect in HWinfo64 on the controller peak voltage. I suspect it allows for a tighter tune of the TTL-L, added benefit of shedding a little heat, using a little less power if your custom block'd n' looped.
> 
> After much fiddling about:
> 
> 1662/1742 P6/P7
> 1166/1222 P6v/P7v
> 1105/887 HBM freq. / mv
> +50%
> 
> LC bios HBCC OFF / 3440x1440 native
> 
> Your mileage "of course" may vary, please share if you find some better settings which are benchie/game stable!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> edit: TTL to TTL-L so not to add to confusion... k thx.


Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> SOMEHOW !..... Set P6 to 1612/1050mv and P7 to 1712/1130mv... I'm shocked my comp didn't crash.
> 
> Its nice and all, but i didn't win any more points in Timespy. In fact I got worse score than auto voltage and -1.5% core. These settings will probably only do me any good @4k in the Superposition game.
> 
> Vega, I do not understand you


Yep, also hitting nice numbers with the LC bios and the powertables mod

P6: 1662 @ 1166mv
P7: 1742 @ 1222mv

HBM: 1100Mhz / 900mv

I don't see any change in scores between 50% and 142% power for now! Will play more with the current settings and keep you updated!


----------



## AmateurExpert

Quote:


> Originally Posted by *Whatisthisfor*
> 
> Interesting. How is the fan attched to the cables? Ofc. one could open the case and simply add a new cable with adapter but would it not be the better idea to just detach the fan and add a female standard 4-pin fan connector to the existing cable? What do you think is the better solution?


I don't have an AIO to check (I have the air version but I've put an EK block on it), but I would expect the AIO fan to just have a single longish cable going all the way to the plug on the PCB.

It should be possible to keep the existing section of cable from the PCB, along the hose sleeving, all the way up to the fan frame. You could cut the old fan off there and solder a Y-splitter (e.g. http://noctua.at/en/products/accessories/na-syc1/specification or a cheap ebay one) minus its plug to the existing cable. This gives the option of fitting two matching fans in push-pull, avoids having to open the shroud, and avoids having to buy plugs, sockets and crimp terminals. However, severing the OEM fan in order to reuse its cable and connection would not be desirable from a warranty point of view.


----------



## PontiacGTX

Quote:


> Originally Posted by *biscuittea*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Here's the before where it hits 105C on the hotspot and then the after. The hotspot goes to 90C max on the after setup which is roughly the same as the reference cooler for me, but the core and HBM temps max out close to 65C whereas before it would easily hit 85C on load.
> 
> Just a note - this is the first time I've done anything like this so there are probably areas that I could've done better. Also, on the after picture with the 2 mosfets in the middle, the heatsinks are at an angle covering the components above them.


why did you use the small heatsinks on the VRM? replace them with the big square heatsinks

do it like this


(and with proper spacing)


----------



## biscuittea

Quote:


> Originally Posted by *milkbreak*
> 
> Did you take a picture of your heatsink layout by any chance? I'd like to see it.


Quote:


> Originally Posted by *PontiacGTX*
> 
> why did you use the small heatsink on the VRM? replace it with the big square heatsink
> 
> do it like this
> 
> 
> (and proper spacing


How did you mount the actual cooler with a setup like that? After my first attempt with the crazy temps on the 'hotspot', I tried to use the bigger square heatsinks, but they were too long and would stop the cooler from making contact on the upper side.


----------



## milkbreak

Quote:


> Originally Posted by *PontiacGTX*
> 
> why did you use the small heatsink on the VRM? replace it with the big square heatsink
> 
> do it like this
> 
> 
> (and proper spacing


The problem with that layout is that it partially blocks one of the mounting holes and leaves parts that were covered by thermal pads uncovered completely. I'm planning on doing something similar except I'm going to cut down one of the larger heatsinks to uncover that hole and chop one of the low-profile heatsinks in half to make sure every originally-covered component is actually covered.


----------



## laczarus

Question guys:
The 17.9.1 driver crashes instead of locking up the system (which is convenient), but you still need to reboot in order not to suffer from performance degradation.
Has anyone tried resetting with CRU instead of rebooting?
https://www.monitortests.com/forum/Thread-Custom-Resolution-Utility-CRU

I've used the reset64.exe and it seemed to work; clocks seem to apply properly after using it without rebooting the system. I'll have to do further testing to see whether performance drops, though.


----------



## PontiacGTX

Quote:


> Originally Posted by *biscuittea*
> 
> How did you mount the actual cooler on with a setup like that? After my first attempt with the crazy temps on the 'hotspot', I tried to use the bigger square heatsink but they were too long and would stop the heatsink from making contact on the upper side.


that image was posted from somewhere else. I was only suggesting that you could improve the VRM temperature by actually using bigger heatsinks on the VRM


----------



## Paul17041993

Quote:


> Originally Posted by *pmc25*
> 
> No, it's genuine.
> 
> The problem is that it will boost beyond what the driver finds stable in some load scenarios that don't induce too much heat.
> 
> BF1 for me for example. It'll boost to well over 1800Mhz, then the driver crashes every time (meaning I have to restart the computer because even after restarting Radeon Settings the driver performance is gimped).
> 
> Doesn't matter what I enter as the highest P7 state ... it'll generally go 75-100Mhz over it in BF1, because the game is 'efficient' and doesn't cause heavy thermal load, if you don't apply AA and heavy post processing.
> 
> I can run it at a mix of Ultra (textures) and High, with AA off, motion blur off, post processing medium at VSR of 4K of ~115FPS with minimums of about 90FPS.
> 
> If I wack AA and post processing up, FPS falls off a cliff, the clock falls to 30-50Mhz below P7 state, and it doesn't crash, and gets way hotter, and image quality is obviously way lower since I have to use 1920x1080 or 2560x1440, and AA is nowhere near enough to compensate for the lack of 4K both in general resolution and aliasing.
> 
> This is why I will be selling my card after the next update, if Radeon Settings isn't either made stable, or VSR is made usable without Radeon Settings installed. Just a total waste of time at the moment. I've never had to restart a computer so much.


Yea, so it is in fact XFR-style boosting. What frequency mode are you using though? Fixed or dynamic?

Quote:


> Originally Posted by *AmateurExpert*
> 
> The AIO fan is connected to the graphics card's fan header via a cable that's run down one of the sleeved hoses (see Vega FE teardown on Gamers Nexus - RX Vega has the same layout). However, the usual fan connectors used on graphics card isn't the 2.54mm pitch 4-pin standard used on motherboards (thus most aftermarket PC fans) but the more compact JST 2.0mm 4-pin standard. It's non-trivial to swap the existing fan, but it's possible for those who are OK with sourcing or making their own custom fan cables.


If you can actually remove the fan and cable from the tubing and housing without cutting anything, then sure, but you'll need to source a card-to-header adapter from somewhere and make sure the pinout is correct...


----------



## Chaoz

Quote:


> Originally Posted by *Paul17041993*
> 
> Yea, so it is in fact XFR style boosting, what frequency mode you using though? fixed or dynamic?
> If you can actually remove the fan and cable from the tubing and housing without cutting anything, then sure, but you'll need to source a card > header adapter from somewhere and make sure the pins are correct...


Gelid has an adapter where you can plug in a normal fan to a GPU header.
http://gelidsolutions.com/thermal-solutions/accessories-pwm-fan-adaptor/

That might solve his problem.


----------



## jearly410

Quote:


> Originally Posted by *Chaoz*
> 
> Gelid has an adapter where you can plug in a normal fan to a GPU header.
> http://gelidsolutions.com/thermal-solutions/accessories-pwm-fan-adaptor/
> 
> That might solve his problem.


That's what I used for the Fury X and 390X, and it will most likely be what is needed for the Vega LC.


----------



## AmateurExpert

Quote:


> Originally Posted by *Paul17041993*
> 
> If you can actually remove the fan and cable from the tubing and housing without cutting anything, then sure, but you'll need to source a card > header adapter from somewhere and make sure the pins are correct...


The fan cable is actually in a separate sleeving to the hose and is attached by cable ties (see one of AMDMatt's pics at OcUK forums) - these aren't a problem to remove or replace.


----------



## rv8000

Quote:


> Originally Posted by *asdkj1740*
> 
> that is the reason why guru3d has found out the top mosfet temp >100c on the strix 64.
> i can replicate the same result under furmark 1080p 0xaa.
> it is the design falult asus has made, asus should have redesigned the whole heatsink but not base on the current 1080ti strix heatsink. the height of the strix 64 heatsink is way shorter than the pcb height of strix 64.
> the power limit is not yet set to 50% but staying at 0% only.
> this is on the open test bench while the room temp is about 22c.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 


Why am I not surprised ASUS would pull something like this. Wasn't this done by ASUS and MSI on some of the 300 series models?


----------



## PontiacGTX

Quote:


> Originally Posted by *rv8000*
> 
> Why am I not surprised ASUS would pull something like this. Wasn't this done by ASUS and MSI on some of the 300 series models?


The ASUS R9 290(X) had VRM and GPU cooling issues at the beginning, mainly because they reused the same design as the 780s. The 390X and R9 Fury also didn't have really good GPU temps compared to other brands, so it isn't news that they do this. Also, MSI hasn't done a high-end AMD GPU since the R9 290X, which is curious.

One other thing: where are the custom RX Vega GPUs? MSI? Gigabyte? XFX? VisionTek? HIS? Sapphire? PowerColor?


----------



## Reikoji

Quote:


> Originally Posted by *PontiacGTX*
> 
> ASUS R9 290(x) had VRM and GPU cooling issue at beggining mainly were using same design as 780s, 390X and R9 FURY had not really good GPU Temps compared to other brands So it isnt news they do this Also MSI hasnt done a high end AMD GPU since R9 290x,this is curious
> 
> one thing is where are the custom RX VEGA GPUs? MSI?Gigabyte?XFX?Visiontek?HIS? Sapphire?Powercolor?


Where are they even selling the Vega 56/64 Strix?


----------



## Paul17041993

Quote:


> Originally Posted by *AmateurExpert*
> 
> The fan cable is actually in a separate sleeving to the hose and is attached by cable ties (see one of AMDMatt's pics at OcUK forums) - these aren't a problem to remove or replace.


Well I guess cable ties are simple enough to remove or replace, so that should be just fine really.
Quote:


> Originally Posted by *rv8000*
> 
> Why am I not surprised ASUS would pull something like this. Wasn't this done by ASUS and MSI on some of the 300 series models?


They've done a shoddy job on coolers since the 7970. It got even worse with the 200 series, as they wouldn't even change the heatpipes to make proper contact, making half the cooler a paperweight. Adding insult to injury, their PCBs and VRMs would actually perform a lot worse than the reference cards, and very often they wouldn't even run stable at stock settings, so for the most part we just recommended people steer clear of the ROG cards until that brand faded out of existence...


----------



## plywood99

Vega liquid cooled aio.


----------



## Chaoz

Quote:


> Originally Posted by *rv8000*
> 
> Why am I not surprised ASUS would pull something like this. Wasn't this done by ASUS and MSI on some of the 300 series models?


The Asus 390 DC3OC had quite a few heat issues. Better thermal pads and better TIM fixed those issues a bit.

Even with the so-called great aftermarket cooler temps still managed to hit 75°C with a decent fan profile.

The 390s are hotheads, tho. 275W TDP is quite high.
That's the reason why I went with watercooling in the first place. If I had found the Bykski 390 waterblock sooner I would've kept my 390, but yeah.


----------



## chris89

Monitor your GPUs with HWiNFO; I find my GPUs ramping up to full-speed load when I'm away from the PC. The result of Illuminati crypto coin mining using everyone in the world, especially us on the forums with high-end PCs. Disable the network adapter when away and notice the difference. Also, it's in the Windows Updates. A fresh OS with Updates disabled would resolve it.


----------



## chris89

Quote:


> Originally Posted by *plywood99*
> 
> 
> 
> Vega liquid cooled aio.


Weak cheese RX480.. or 480 with 580 bios. Not bad though. My 390X was hitting 29.9fps on older drivers 1080p... maybe it was extreme.


----------



## Paul17041993

Quote:


> Originally Posted by *plywood99*
> 
> 
> Vega liquid cooled aio.


nice, what clocks?
Quote:


> Originally Posted by *chris89*
> 
> Monitor your GPU's with HwInfo, I find my GPU's ramping up to full speed load when away from the PC. The result of Illuminati crypto coin mining using everyone in the world. Especially us on the forums with high end PC's. Disable Network adapter when away, notice difference. Also it's in the Windows Updates. Fresh OS disabling Updates would resolve it.


Relive also has a bug that locks vega into max clocks occasionally, so if you see it happen try turning relive off and back on again.

Otherwise if it's going to full power draw (200+W) then you have a crypto virus/worm...


----------



## chris89

Quote:


> Originally Posted by *Paul17041993*
> 
> nice, what clocks?
> Relive also has a bug that locks vega into max clocks occasionally, so if you see it happen try turning relive off and back on again.
> 
> Otherwise you have a crypto virus/worm...


Yeah, this happens with the device driver install, INF only. If I disable the network adapter, it idles right away. Or I notice the fan has ramped up when I'm away; move the mouse and it's instantly back to idle. It started after background Windows Updates.

I can't get Crossfire to work. Anyone know how, or is it a Superposition issue?

Also, yeah, just 1,250MHz core / 2,000MHz memory... delimited power, 256W TDP/TDC. I think 84C max VRM limit & 88C hotspot.

I know, I was like... how? I checked the 2nd GPU... at idle & no fan at all. Just one of the two RX 480s... haha, would be interested to test Crossfire.


----------



## asdkj1740

for a +50% power limit on a Vega 64 in Fire Strike Extreme, how much graphics score could you get with the air version and with the liquid/AIO version?


----------



## Paul17041993

Quote:


> Originally Posted by *asdkj1740*
> 
> for 50% power limit on vega64 on fire strike extreme, how much graphics score could get for the air version and liquid/aio version?


If you set the air version to turbo and uncapped the fan they'd probably perform about the same, but I don't know what the liquid's turbo settings are...


----------



## plywood99

Quote:


> Originally Posted by *Paul17041993*
> 
> nice, what clocks?


Stock core, 1100 on mem and +50 power.


----------



## gamervivek

Quote:


> Originally Posted by *gamervivek*
> 
> Finally got the UV/OC to work properly, best FS score with ~1.5Ghz core clock and 900Mhz HBM, about 2-2.5k more than at stock settings,
> 
> 
> 
> Superposition doesn't crash but while it runs the 1.5Ghz core clock at the start, it then gets fixed on a power draw and the clocks drop to 1.2Ghz which is even lower than stock. I was testing on it earlier, stopped when I saw others get similar problem.


Playing around with 3DMark Ultra, I see similar downclocking in the latter part of the 1st test, so the card is perhaps getting power throttled. I've found that going lower with voltages and slightly lower with clocks gets a 1.4GHz boost in Superposition and 3DMark Ultra; the performance is 10% better than stock. Interestingly, the GPU tach shows when the card has throttled and when it hasn't. In the latter case you see all lights on constantly, and in the former some of them blinking.

One thing I noticed is that the card throttles around 190-200W when it could go up to 240-250W on the +50% PL, up from the default 160-170W.
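The slider math behind that ceiling is just a percentage on top of the default power target. A quick sketch (Python, using the ~160-170W baseline quoted above, which is an observed figure rather than a spec):

```python
def power_cap_watts(base_watts: float, limit_pct: float) -> float:
    """Board power cap after applying a +N% power-limit slider."""
    return base_watts * (1 + limit_pct / 100)

# Default power band of ~160-170 W with the +50% slider applied:
low, high = power_cap_watts(160, 50), power_cap_watts(170, 50)
print(f"{low:.0f}-{high:.0f} W")  # prints "240-255 W"
```

Which lines up with the ~240-250W ceiling above, and shows the observed 190-200W throttle point is the card backing off well before the slider's hard cap.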


----------



## Soggysilicon

Quote:


> Originally Posted by *kundica*
> 
> Thanks for the info.
> 
> By phony, I meant that it changes the memory. It's good to use to adjust the voltage floor for the core if you need it. I'll have to mess around with it some more tomorrow.


My settings had some issues in a few games, so I had to tune again. Update below.
Quote:


> Originally Posted by *majestynl*
> 
> Yep also hitting nice numbers with LC Bios and Powertables mod
> 
> P6: 1662 @ 1166mv
> P7: 1742 @ 1222mv
> 
> HBM: 1100Mhz / 900mv
> 
> Do not see any change in scores between 50%-142% Power for now! Will play more with current settings and keep updated!


1666/1736 @ 1167/1244
1105 @ 892
+50
LC-B

If my original settings get crashy in some games or workloads, you can try these.

My generic "stress test" at the moment is WPE + HWiNFO64 + Chrome with a couple of vids loaded and one or two playing, plus Heaven 1080p Ultra/Xtess/4x, repeating the bench till crash while logging scores, checking variance, and monitoring the card's performance/health.

The above settings are roughly equivalent to a >5.5% OC on the air BIOS... but it runs out of wattage headroom; I would need a powerplay table above +50% to make my other settings stable again.

Games would "play", but the Starpoint Gemini Warlords menus, which are overlaid, had a very high probability of hosing out. I think the clock/feedback loop gets wonky when there are multiple video threads or overlays. If it happens in that, it will happen in other titles.









The above "stress test" will eventually recreate the issue, silicon seems binned for droop.

I should graph what I got; there are several instances where going up is going backwards. Again, I suspect timing penalties of some sort.
Quote:


> Originally Posted by *plywood99*
> 
> 
> 
> Vega liquid cooled aio.


Grats! Best 4k score I have seen.









Quote:


> Originally Posted by *chris89*
> 
> Monitor your GPU's with HwInfo, I find my GPU's ramping up to full speed load when away from the PC. The result of Illuminati crypto coin mining using everyone in the world. Especially us on the forums with high end PC's. Disable Network adapter when away, notice difference. Also it's in the Windows Updates. Fresh OS disabling Updates would resolve it.






Quote:


> Originally Posted by *plywood99*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Paul17041993*
> 
> nice, what clocks?
> 
> Stock core, 1100 on mem and +50 power.

1750 @ 230W @ 1200mv... that pretty much puts the nail in the coffin, the LC AIO seems binned for leakage on the core. Nice silicon!

There are several AIOs which can't run stock once the power target is nudged; you're the lotto winner!


----------



## asdkj1740

My power slider in Wattman is now at 142%. However, I can't see any difference between 142% and 50% in the power consumption shown in GPU-Z, nor in benchmark results.
I have reset Wattman, then chosen manual voltage and applied it first, but it still won't work for me.
Are there any special steps I have omitted? I really need help with this problem.


----------



## pmc25

Quote:


> Originally Posted by *Paul17041993*
> 
> Yea, so it is in fact XFR style boosting, what frequency mode you using though? fixed or dynamic?


'Dynamic' - though I think it's mislabeled in Radeon Settings. Dynamic should be %, and fixed should be entering P-states manually. Anyway, I'm entering P-states manually, and for both core and HBM I set the highest P-state to both min and max for gaming.
Quote:


> Originally Posted by *laczarus*
> 
> Question guys:
> The 17.9.1 driver crashes instead of locking up the system (which is convenient), but you still need to reboot in order to not suffer from performance degradation.
> Has anyone tried resetting with CRU instead of rebooting?
> https://www.monitortests.com/forum/Thread-Custom-Resolution-Utility-CRU
> 
> I've used the reset64.exe and it seemed to work, clocks seem to apply properly after using it without rebooting the system. Have to do further testing to see if performance drops or not though.


Please do report your findings. That would make the currently utterly broken status quo a little more tolerable.

It's not only driver / Radeon Settings crashes that cause this; putting the computer to sleep also results in performance dropping off (but not as badly, and at least they fixed the hardware acceleration freezes related to it).

I suspect that what we still have for Vega is a barely modified Fiji gaming driver.


----------



## PontiacGTX

Quote:


> Originally Posted by *Reikoji*
> 
> where are they even selling the Vegas 56/64 strix ?


I was referring to the presence of the cards in articles/reviews; I don't see any of those cards reviewed.


----------



## dagget3450

Quote:


> Originally Posted by *chris89*
> 
> Yeah this is on the device driver install, inf only. If I disable network adapter, it idles at right away. Or I notice fan ramped up when away, move mouse & its instantly to idle. Happened after background Windows Updates.
> 
> I can't get Crossfire to work? Anyone know how or is it a Superposition issue?
> 
> Also yeah just 1,250mhz core/ 2,000mhz memory... delimited power 256w tdp/tdc. I think 84C Max VRM limit & 88C hotspot.
> 
> I know, I was like .. how? I checked 2nd GPU... at idle & no fan at all. Just one of the Two RX 480's... haha be interested to test crossfire.


Superposition crossfire should work if you add the exe from the bin folder to your Radeon UI profiles. Then, in the options for that Superposition exe in the Radeon UI, look for crossfire and set it to 1x1.

That should get it working. I had a post a long time back with screenshots, but I am on my phone right now.


----------



## asdkj1740

Quote:


> Originally Posted by *rv8000*
> 
> Why am I not surprised ASUS would pull something like this. Wasn't this done by ASUS and MSI on some of the 300 series models?


For that MOSFET, I would say only 40% of the surface is in contact with the heatsink, which is insane and may blow up the MOSFET if a soft powerplay table mod has been done.


----------



## asdkj1740

Quote:


> Originally Posted by *majestynl*
> 
> Yep also hitting nice numbers with LC Bios and Powertables mod
> 
> P6: 1662 @ 1166mv
> P7: 1742 @ 1222mv
> 
> HBM: 1100Mhz / 900mv
> 
> Do not see any change in scores between 50%-142% Power for now! Will play more with current settings and keep updated!


Why flash the LC BIOS if you are already using the powertable mod?
BTW, I have encountered the same problem of no difference between 50%~142%.
25,000 is pretty high; mine can only do ~24,000 with an undervolt to 1.06V @ 1622MHz.
I would like to ask how much GPU power consumption is shown in GPU-Z, and what your core clock range is when benchmarking.


----------



## majestynl

Quote:


> Originally Posted by *asdkj1740*
> 
> why flashing lc bios if you are already using powertable mod
> btw i have encounter the same problem of no difference between 50%~142%
> 25,000 is pretty high, mine can only do ~24000 with undervolt to 1.06v @ 1622mhz
> i would like to ask how much is your gpu power consumption shown on gpuz, and your core clock range when benchmarking



Superposition: 1650 - 1675Mhz with ~370watt Gpu Power draw
Firestrike: 1675 - 1710Mhz with ~350watt Gpu Power draw

Reason for AIO Bios is the higher max Vcore of 1250mV


----------



## majestynl

Quote:


> Originally Posted by *Soggysilicon*
> 
> Settings had some issues in a few games, had to tune again. Update below.
> 1666/1736 @ 1167/1244
> 1105 @ 892
> +50
> LC-B
> 
> If my original get crashie' in some games or workloads you can try these.
> 
> My generic "stress test" at the moment is, WPE + HWinfo64 + Chrome w/ a couple vids loaded and one or two playing, Heaven 1080 Ultra/Xtess/4x repeat bench till crash logging scores, checking variance, monitoring cards performance / health.
> 
> Above settings are ~ to a >5.5% OC on the air bios... but it runs out of wattage headroom... would need to power play table above 50% to go back to my other settings stable.
> 
> Games would "play" but Starpoint gemini warlords menus which overlaid had a very high probability of hosing out, I think the clocks / feedback loop gets wonky when there are multi video threads or overlays. If its happening in that, it will happen in other titles.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The above "stress test" will eventually recreate the issue, silicon seems binned for droop.
> 
> I should graph what I got, there are some several instances where going up is going backwards; again.. I suspect timing penalties of some sort.


*Thanks for the update,* working like a charm over here








No issues with the few games I played (Overwatch / Wildlands) and several benchmarks. Not even any Wattman crashes...
Will play further and keep updated...


----------



## asdkj1740

Quote:


> Originally Posted by *majestynl*
> 
> 
> Superposition: 1650 - 1675Mhz with ~370watt Gpu Power draw
> Firestrike: 1675 - 1710Mhz with ~350watt Gpu Power draw
> 
> Reason for AIO Bios is the higher max Vcore of 1250mV


Do you remember the GPU clock range when you were using the air BIOS?
My Vega 64 only runs at 1400~1600MHz no matter what settings... it can't get above 1600MHz.


----------



## Whatisthisfor

Quote:


> Originally Posted by *AmateurExpert*
> 
> I don't have a AIO to check (I have the air version but I've put an EK block on it) but I would expect the AIO fan to just have a single longish cable going all the way to the plug on the PCB.
> 
> It should be possible to keep the existing section of cable from the PCB, up hose sleeving all way up to the fan frame. You could cut the old fan off here, and solder a Y-splitter (e.g. http://noctua.at/en/products/accessories/na-syc1/specification or a cheap ebay one) minus its plug to the existing cable. This gives the option of fitting two matching fans in push-pull, avoids having to open the shroud, and avoids having to buy plugs, sockets and crimp terminals. However severing the OEM fan in order to reuse its cable and connection would not be desirable from a warranty point of view.


Thanks for your opinion. I will stick with my serial fan for now. I learned that going with power save mode keeps it quiet enough. At the moment I lose just two or three frames but save hundreds of watts in game. Maybe I'll reconsider at a later time. I also found an interesting test on YouTube regarding power draw vs FPS gain for the Vega AIO.


----------



## y0bailey

Potato pic of my new setup! Vega 56, Ryzen 1700


----------



## Newbie2009

Quote:


> Originally Posted by *majestynl*
> 
> 
> Superposition: 1650 - 1675Mhz with ~370watt Gpu Power draw
> Firestrike: 1675 - 1710Mhz with ~350watt Gpu Power draw
> 
> Reason for AIO Bios is the higher max Vcore of 1250mV


AIO bios doesn't have a 1250mv stock core.


----------



## majestynl

Quote:


> Originally Posted by *asdkj1740*
> 
> do you remember the gpu range when you were using the air bios?
> my vega64 only run at 1400~1600mhz, no matter what settings... cant reach 1600mhz above


Can't remember well, but I believe it was around ~1550MHz. There were also some bugs in the reported MHz in previous BIOS versions.








Quote:


> Originally Posted by *Newbie2009*
> 
> AIO bios doesn't have a 1250mv stock core.


read: Max vcore !









That's what hellm told me.


----------



## opty165

Greetings all! It's been a long while since I've been active in any owners threads on here. I've finally completed my new build after running an FX 8350 and R9 290 for a few years. My XFX RX Vega 64 Liquid Cooled finally arrived last week! Any requirements for joining the owners club here? I haven't done much with the card just yet.


----------



## madmanmarz

Hey all, I get my Vega 56 tomorrow or Tuesday along with a Nexxxos GPX waterblock (finally having to do away with the MCW universal block I've used for so many cards).

I'm gonna do the usual undervolt/overclock - what's better the powertable mod or bios flash? Should I just flash the liquid 64 bios and have at it?

Thanks!


----------



## gamervivek

Crysis 3 gives the Vega 56 a hard time, but it throttles even when not at the temperature or power limit. The GPU hotspot temperature shown by GPU-Z, however, went up to 109C, and now I'm thinking that's the cause.

Anyone else monitoring this? I'm thinking this might be a VRM temp and responsible for the voltages not sticking in Wattman.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Crysis 3 gives Vega56 a hard time, but it throttles even when not on temp. or power limit. The gpu hotspot temperature showed by gpuz however went up to 109C and now I'm thinking that it's the cause.
> 
> Anyone else monitoring this? I'm thinking this might be a vrm temp. and is responsible for the voltages not sticking from wattman.


Force higher power states, with P7 set lower than P6 and P6 really close to P7, and see if that solves it?

Also, the hotspot can be the cause of the throttling; it should be the VRM or some part of the die.


----------



## poisson21

On techpowerup.com I see that the LC BIOSes from the different vendors are all different, while the air 64 ones are all the same.

Is there any real difference between these BIOSes? I can't see any in their descriptions.

I ask because I have an MSI air 64 flashed with a Sapphire LC BIOS; can I expect a difference if I use the MSI LC BIOS?


----------



## PontiacGTX

Do all liquid-cooled cards use the same clocks and the same power limit?


----------



## poisson21

I think so, but then why use different BIOSes for the AIO while using the same one for the air version?

https://www.techpowerup.com/vgabios/?architecture=AMD&manufacturer=&model=RX+Vega+64&interface=&memType=&memSize=&since=


----------



## chris89

I wonder if the Vega alone is more powerful than 2x RX 480 8GB reference. It's nearly 13 TFlops single precision.
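For reference, the ~13 TFLOPS figure falls straight out of the shader count and clock (FP32 rate = shaders × 2 ops per clock for FMA × clock). A quick sanity check, assuming ballpark reference boost clocks:

```python
# Back-of-the-envelope FP32 throughput, the figure quoted above.
# TFLOPS = shaders * 2 ops/clock (FMA) * clock (GHz) / 1000; clocks are
# rough boost values, not guaranteed sustained clocks.

def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

vega = tflops(4096, 1.6)              # ~13.1 TFLOPS
rx480_pair = 2 * tflops(2304, 1.266)  # ~11.7 TFLOPS combined
print(f"Vega: {vega:.1f} TFLOPS, 2x RX 480: {rx480_pair:.1f} TFLOPS")
```

Of course paper TFLOPS only translates to game performance for the pair if CF actually scales, which is the catch discussed below.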


----------



## PontiacGTX

Quote:


> Originally Posted by *poisson21*
> 
> I think so but why using different bios for aio while using the same for the air version ??
> https://www.techpowerup.com/vgabios/?architecture=AMD&manufacturer=&model=RX+Vega+64&interface=&memType=&memSize=&since=


Maybe different voltages on each power state? Or maybe they are modifying their default LC BIOS for some of the power table values? Fan profiles?
Quote:


> Originally Posted by *chris89*
> 
> I wonder if the Vega alone is more powerful than 2x RX 480 8GB reference. It's nearly 13 TFlops single precision.


In games, 2x Ellesmere XT could outperform Vega if there is proper CF support.

2x 290 > Fury X.


----------



## gamervivek

Quote:


> Originally Posted by *PontiacGTX*
> 
> force higher power States P7 being lower than P6 and make P5 really close to P5 and see if that solves?
> 
> also the hotspot can be the cause of the throttling it should be the VRM or some part into the die'


Similar performance and the hotspot rises to 101C in 5 minutes of play. It'd be a big disappointment if these were the VRM temps for all that talk about reference cards having great components.


----------



## deadman3000

I'm currently running my 56 on the 64 air BIOS and was wondering if I should go back to the 56 BIOS and apply the PowerPlay tables to get the voltage bump on memory instead. Also, is it safe to use a voltage higher than 1350mV on the HBM (will it even take?)? My memory is unstable above 900MHz on the 56 BIOS and 1000MHz on the 64 BIOS (I get sparklies at +50 and crashes at +100). I was wondering if there is a way to squeeze any more from the HBM so I can get to 1100MHz stable.


----------



## Rootax

Quote:


> Originally Posted by *deadman3000*
> 
> I'm currently running my 56 on 64 air BIOS and was wondering if I should go back to 56 BIOS and apply the Powerplay tables to get the voltage bump on memory instead? Also is it safe to use a higher voltage than 1350mV on HBM (will it even take?). My memory is unstable above 900Mhz on 56 BIOS and 1000Mhz on 64 BIOS (I get sparklies +50 and crashes +100). I was wonder if there is a way to squeeze anymore from the HBM so I can get to 1100Mhz stable?


The powerplay table doesn't work for memory I believe. Not yet anyway.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Similar performance and the hotspot rises to 101C in 5 minutes of play. It'd be a big disappointment if these were the VRM temps for all that talk about reference cards having great components.


Great components don't imply great cooling. The Vega cooling is just similar to the 7970's, or the even worse RX 480 cooling.


----------



## gamervivek

Quote:


> Originally Posted by *PontiacGTX*
> 
> great components doesnt imply great cooling the VEGA Cooling is just similar to a 7970 or the evne worse RX 480 cooling


Reference cards made noise but were said to be better for VRMs, at least that's what happened with 290X. The core temps don't go beyond 70C while the hotspot has no trouble shooting past 100C.


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> great components doesnt imply great cooling the VEGA Cooling is just similar to a 7970 or the evne worse RX 480 cooling


I agree... the actual heatsink is tiny for such a high-powered, extremely expensive video card. It's like using the weak 390X stock cooler... the 290X cooler had a heatsink twice as heavy and twice as long, and cools like 75% better.

The GPU should cost 249.99... The prices will be 249.99 in less than a week.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Reference cards made noise but were said to be better for VRMs, at least that's what happened with 290X. The core temps don't go beyond 70C while the hotspot has no trouble shooting past 100C.


The GPU probably has better power management than older generations and more adaptive voltage than Fiji had, reducing heat output. Plus, if it were always at 1630MHz like it was supposed to be, the card would be hitting its temp target, and that's not counting possible heat that isn't dissipated from the HBM2 stacks, coming both from the GPU and the stacks themselves.
Maybe the heatsink design was improved, but it doesn't get good temps when running at 1630 (not undervolted).
RX VEGA 64 heatsink/plate

RX 480 heatsink/plate



Could the hotspot be the space in the interposer?


----------



## Whatisthisfor

Regarding cooling:






In this video it is explained that with power save (-25%) you won't lose much FPS, only 2 or 3. Overclocking isn't worth it, but you will save hundreds of watts (on the AIO) if you just stay with power save. Plus your card will stay cooler and run more quietly.


----------



## Chaoz

Quote:


> Originally Posted by *PontiacGTX*
> 
> the GPU has probably better power management than older Generatrions and more adaptive voltage than Fiji had reducing heat output, plus if it ere always 1630Mhz like it was supposed to be the card would be reaching temp target and not counting possible heat that isnt dissipared from HBM2 stacks that goes from the GPU and the stacks itself
> maybe heatsink designed was improved but it doesnt get good teemps when running at 1630(not undervolted)
> RX 480 heatsink/plate
> 
> 
> 
> RX VEGA 64 heatsink/plate
> 
> 
> 
> the hotspot could be the space in the interposer?


You mean other way around? The top one is the Vega stock cooler.


----------



## Paul17041993

Quote:


> Originally Posted by *gamervivek*
> 
> Similar performance and the hotspot rises to 101C in 5 minutes of play. It'd be a big disappointment if these were the VRM temps for all that talk about reference cards having great components.


Looking at it now, I'd say it's a collection of temp probes across the VRMs and PCB, as it's only 5C hotter than the core with my EK block and Thermal Grizzly pads...

Also, the stock cooler has the VRMs cooled more or less passively by the aluminium shroud; they don't have a direct connection to the vapor chamber block, which is attached via the crossbar on the back.
Quote:


> Originally Posted by *chris89*
> 
> I agree.. the actual heatsink is tiny for such a high powered extremely expensive video card.. its like using the weak 390x stock cooler... the 290x cooler had a heatsink twice as heavy and twice as long... cools like 75% better..
> 
> The GPU should cost 249.99 .. The prices will be 249.99 in less than a week.


The stock 290X cooler is actually a lot worse, and I don't know how you think Vega should be below 400. What country is this...?


----------



## seanmacvay

Quote:


> Originally Posted by *Paul17041993*
> 
> Looking at it now I'd say it's a collection of temp probes across the VRMs and PCB, as it's only 5C hotter than the core with my EK block and thermalgrizzly pads...
> 
> Also the stock cooler has the VRM's cooled more or less passively by the aluminium shroud, it doesn't have a direct connection to the vapor chamber block which is attached via the crossbar on the back.
> The stock 290X cooler is a lot worse actually, and I don't know how you think vega should be below 400, what country is this...?


You're talking about the GPU hotspot temp?
I have an EK block as well and mine will hit 108 degrees if I'm running a lot of power through the card.
Maybe I missed a thermal pad or two.


----------



## jearly410

Using the stock air cooler at max fan speed, the hotspot is usually 5-10C hotter than the HBM temp.


----------



## jearly410

Quote:


> Originally Posted by *seanmacvay*
> 
> You're talking about the GPU hotspot temp?
> I have an EK block as well and mine will hit 108 degrees if I'm running a lot of power through the card.
> Maybe I missed a thermal pad or two.


If/when you replace the thermal pads, could you take a picture before and after to see the difference? Could be the key to unlocking the mystery of the hot spot.


----------



## raysheri

That's pretty strange; with the EK block my hot spot maxes out at 55C.


----------



## seanmacvay

Quote:


> Originally Posted by *jearly410*
> 
> If/when you replace the thermal pads, could you take a picture before and after to see the difference? Could be the key to unlocking the mystery of the hot spot.


It probably won't be incredibly soon, but I can do that. I'm definitely curious.
Quote:


> Originally Posted by *raysheri*
> 
> That's pretty strange, with the EK block my hot spot max's out at 55c


This is also running the LC BIOS, at 1250mV and +50% power limit. My PC was drawing 720 watts from the wall. At a more reasonable load, though, mine is still 10-20 degrees higher at best.


----------



## chris89

Quote:


> Originally Posted by *seanmacvay*
> 
> It probably won't be incredibly soon, but I can do that. I'm definitely curious.
> This is also running the LC bios, at 1250mv and +50% powerlimit. My pc was drawing 720 watts from the wall. At a more reasonable load though mine is still 10-20 degrees higher at best.


Wow, amazing performance. Anyone rockin' the Frontier Edition? It's like the TITAN of AMD, haha.

Immense performance. I wonder if it surpasses my 2x RX 480s at over 13.1 teraflops of processing power, but hot & loud. I'd need to add a heavy 1/8-1/4" thick piece of polished pure copper and run the VRM at 60C full tilt at over 1,400MHz.


----------



## shadowxaero

Quote:


> Originally Posted by *chris89*
> 
> wow amazing performance. Anyone rockin the Frontier Edition? It's like the TITAN of AMD. haha
> 
> Immense performance. I wonder if it surpasses my 2x rx480s at over 13.1 Tera Flops of processng power but hot & loud. Need to add heavy 1/8-1/4" thick piece of pure copper polished & run the vrm at 60C full tilt at over 1,400Mhz.
> 
> 


----------



## poisson21

No crossfire enabled yet.


----------



## shadowxaero

Quote:


> Originally Posted by *poisson21*
> 
> 
> 
> No crossfire enabled yet.


Don't think crossfire should matter much. It benched both GPUs and they scaled very well at that.


----------



## poisson21

Yeah, I know. I said that because I'm just sad that it is not enabled yet ;'(.


----------



## Rootax

Quote:


> Originally Posted by *shadowxaero*


How the hell did you get 400GB+ in "memory copy"? My Vega with 1050MHz HBM does approx. 333GB/s...


----------



## gamervivek

Quote:


> Originally Posted by *Paul17041993*
> 
> Looking at it now I'd say it's a collection of temp probes across the VRMs and PCB, as it's only 5C hotter than the core with my EK block and thermalgrizzly pads...
> 
> Also the stock cooler has the VRM's cooled more or less passively by the aluminium shroud, it doesn't have a direct connection to the vapor chamber block which is attached via the crossbar on the back.
> The stock 290X cooler is a lot worse actually, and I don't know how you think vega should be below 400, what country is this...?


Can you please check how high it goes with different power limits?

I think it's the culprit because I'm not close to temperature or power limits.


----------



## Newbie2009

I hope the next drivers show some actual improvement and bring back adaptive sync. It's been 2 weeks or so since the last update.


----------



## shadowxaero

Quote:


> Originally Posted by *Rootax*
> 
> How the hell did you get 400gb+ in "memory copy" ? My Vega with 1050mhz hbm does approx. 333gb/sec...


I'm at 1105Mhz on HBM
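The memory-copy gap between the two cards tracks HBM clock, but both results sit well below the theoretical peak (a copy has to read and write every byte). As a reference point, a sketch of the theoretical figure, assuming Vega 64's 2048-bit HBM2 bus:

```python
# Theoretical HBM2 bandwidth on Vega 64: 2048-bit bus, double data rate.
# A "memory copy" benchmark will land well under this, since each copied
# byte costs both a read and a write.

def hbm2_bandwidth_gbs(mem_clock_mhz, bus_bits=2048):
    # MHz * 1e6 * 2 transfers/clock (DDR) * bus_bits / 8 bits-per-byte -> GB/s
    return mem_clock_mhz * 1e6 * 2 * bus_bits / 8 / 1e9

print(f"1050MHz: {hbm2_bandwidth_gbs(1050):.0f} GB/s theoretical")
print(f"1105MHz: {hbm2_bandwidth_gbs(1105):.0f} GB/s theoretical")
```

The 55MHz of extra HBM clock only buys ~28 GB/s of theoretical headroom, so a 333 vs 400+ GB/s copy result suggests something besides raw clock (timings, driver, benchmark version) differs between the two setups.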


----------



## asdkj1740

Is the GPU-Z power consumption reading accurate? Is that the whole card's power or just the core?


----------



## PontiacGTX

asus RX VEGA 64 STRIX










http://greentechreviews.ru/2017/09/18/obzor-videokarty-asus-rog-strix-radeon-rx-vega-64/


----------



## y0bailey

More and more happy I bought the reference card. Granted I only compute with headphones on (and they block a decent amount of noise), but my reference cooler runs cool and isn't overly annoying noise wise.


----------



## TrixX

Anyone know where to grab the upped power profiles for the Vega 64? Looking to do some overclocking when the water cooling arrives.


----------



## steadly2004

Quote:


> Originally Posted by *TrixX*
> 
> Anyone know where to grab the upped power profiles for Vega64, looking to do some Overclocking when the water cooling arrives


Are you talking about flashing the liquid bios? Or modding the power play tables? Those are the two ways to increase power available.

The liquid BIOS also adds support for using 1.25V on the GPU instead of just 1.2V, but I have never done the powerplay mod so I can't attest to whether or not that is possible there. I believe the latter can be set for a larger possible power limit, though.


----------



## jearly410

Can anyone tell me the thickness of the stock thermal pads used for the vrms? Thanks!


----------



## TrixX

Quote:


> Originally Posted by *steadly2004*
> 
> Are you talking about flashing the liquid bios? Or modding the power play tables? Those are the two ways to increase power available.
> 
> The liquid bios also adds support for using 1.25v on the GPU instead of just 1.2 but I have never done the power play mod so I can't attest to whether or not that is possible. But I believe the latter can be set for a larger possible power limit.


Power play tables. I already have the LC BIOS I want to flash later when the Block arrives.


----------



## asdkj1740

Quote:


> Originally Posted by *PontiacGTX*
> 
> asus RX VEGA 64 STRIX
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://greentechreviews.ru/2017/09/18/obzor-videokarty-asus-rog-strix-radeon-rx-vega-64/


It's not a design fault, as it is perfect for the 1080 Ti Strix PCB; just not for Vega.









1080p, 0xAA, with Wattman at stock settings.


----------



## steadly2004

Quote:


> Originally Posted by *TrixX*
> 
> Power play tables. I already have the LC BIOS I want to flash later when the Block arrives.


It's here in this thread I believe. Linked from the OP.

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios


----------



## The EX1

Watercool just announced their Vega blocks.





Aquacomputer has a very nice looking one. I like the Radeon and Vega branding personally.





Swiftech also announced their Komodo in Eco and Luxury versions.


----------



## ilmazzo

Sexy


----------



## PontiacGTX

Quote:


> Originally Posted by *asdkj1740*
> 
> its not a design fault as it is perfect for 1080ti strix pcb, just but not for vega.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 1080p 0xaa, the driver wattman is at stock settings.


then it is a design fault on both


----------



## twan69666

Quote:


> Originally Posted by *jearly410*
> 
> Can anyone tell me the thickness of the stock thermal pads used for the vrms? Thanks!


I think they're 1mm, whereas the others are 0.5mm.

I think... I'm not certain.


----------



## Azazil1190

Guys one question pls!
Which drivers are better for stability-oc-performance(gaming-benches)?
17.8.2 ?
Or
17.9.1 ?

Thanks in advance!


----------



## chris89

Anyone else noticing exceedingly high temperatures? way more than usual?


----------



## Chaoz

Quote:


> Originally Posted by *chris89*
> 
> Anyone else noticing exceedingly high temperatures? way more than usual?


Not really. Temps are usually around 40°C and now they don't even go over 35°C.


----------



## madmanmarz

Alright y'all, just fired up my Vega 56 along with a new 850W PSU and flashed to the 64 liquid BIOS right away. I have the NexXxoS GPX waterblock on it. Wish me luck! Any nominal values I can shoot for on voltages and clocks?

Few pics, guess I'll update my sig


----------



## Soggysilicon

Quote:


> Originally Posted by *madmanmarz*
> 
> Alright yall, just fired up my vega 56 along with a new 850w psu and flashed to 64 liquid bios right away. I have the nexxxos gpx waterblock on it. wish me luck! any nominal values i can shoot for on voltages and clocks?
> 
> Few pics, guess I'll update my sig


I would suggest starting with your HBM clocks and volts, as that is going to be the first (and only discernible) hard limit. Tweakin' freq. n' volts is another matter entirely; additionally, I would leave HBCC off unless you plan >2560 res or benching 4K. There is some speculation, backed up by some "loose" correlation, that the cards have some binning... 56/64 on HBM, 64/64LC on GPU leakage. Good luck n' have fun!


----------



## madmanmarz

Quote:


> Originally Posted by *Soggysilicon*
> 
> I would suggest starting with your HBM clocks and volts, as that is going to be the first (and only discernible) hard limit. Tweakin' freq. n' volts is another matter entirely; additionally would leave HBCC off unless you plan >2560 res or benching 4k. There is some speculation backed up by some "loose" correlation that the cards have some binning... 56/64 HBM, 64/64LC gpu leak. Good luck n' have fun!


Sounds good. Should I verify clocks with Superposition or will anything work? I noticed with 17.9.1 I can't go over 1050mV on the HBM or 800MHz gets locked in. Temps look good so far, and I intend to stick with 2560x1080 until FreeSync 2 comes out. Thanks!!


----------



## Soggysilicon

Driver crashed before the video card did... managed to capture that... mmmm... 1792 boost of doom.


----------



## Soggysilicon

Quote:


> Originally Posted by *madmanmarz*
> 
> Sounds good, should I verify clocks with superposition or will anything work? I noticed with 17.9.1 I can't go over 1050mv on hbm or 800mhz gets locked in, temps look good so far and I intend to stick with 2560x1080 until Freesync 2 comes out. Thanks!!


Undervolt the HBM; it does not need volts... not in my experience...

In one of the Superposition tweaks I run the HBM at 1105 @ ~850mV just fine in the 4K runs... that won't hold up in gaming or stress testing; I suspect Johnson-Nyquist noise. You just want to find the "actual, physical" HBM frequency that your card is capable of, and work around that. Because that isn't going to change — just the related voltage you need to keep above the noise floor. It's not like GPU frequency; you can't volt your way around it... pretty black n' white.

It will take you longer in reboots than in figuring out what it is, because once you exceed it... you're gonna crash pretty much instantly... guaranteed!


----------



## asdkj1740

Quote:


> Originally Posted by *PontiacGTX*
> 
> then it is a design fault on both


I suspect ASUS simply put the Strix 1080 Ti cooler, with little modification, on the Vega 64 Strix. The 1080 Ti Strix cooler on a 1080 Ti Strix should be fine.

Compared to the Strix 1080 Ti cooler, there are extra fins coming off the main heatsink to cool the Vega 64 Strix MOSFETs directly; however, that placement is just not good enough. I really wish this cooler were not the finalized one.


----------



## chris89

RX480 Crossfire is faster than Vega, right? I see 13.13 teraflops... I mean, that's nearly Vega 64 Liquid speeds.

My RX 480s were $179 apiece, so that's only $358...

$550? Why... pointless, no point in upgrading.

I think AMD should do away with COMPUTE Cores on their GAMING GPU(s) and only have API jazz for gaming specifically.

So the market won't rack up the prices on gaming gpu's for the *crypto-coin-future-economic-collapse-of-all-physical-monetary-possession Community...*

DESIGN Compute GPU's & Gaming GPU's... That way the crypto coin community can pay their overpriced costs on GPU's while us *GAMERS* can pay cheaper prices...

VEGA for 199-249 is way more realistic... No one can afford a VEGA 56 for $550.

Maybe use no hardware Compute cores on the Gaming GPU(s) but have some kind of "EMULATION" so that it can at least complete Compute tests yet not perform as well as the actual dedicated hardware Compute GPU's...

Just need to make a Compute-only GPU... This way the crypto coin community can *blindly & foolishly* be the ultimate cause of the *Future Economic Collapse*, because of these *GREEDY miners!!!!* We want the *money!! Mine for money!* ... If you had a *clue*, your *GREED* will be the cause of the future *Greatest Depression Of All Time*. Mark my words, if the crypto coin miners keep at it... it's all gonna blow up in our faces one day.

UNLESS WE ALL REALIZE THIS NOW. DIAL BACK. *STOP CRYPTO MINING NOW!!!* LET GPU PRICES FALL TO MSRP & The Future Economic Collapse Will *FAIL* with an Epic Demise.

LOGICAL : Crypto-Digital-Currency is Exponentially on the Rise in Currency "WORTH" and in time the Digital Currency will Exceed All Currencies Of The World & Your DOLLAR BILLS will be worthless.

*We Can Only FIX Everything If We Live In The Name Of Humbleness & Charm & Peace & SELFLESSNESS In The Name Of The Lord Our God, Creator Of All Of The Universe & Way Well Beyond.*


----------



## CrazyElf

For those on the fence between the Vega 56 and 64, see the following:

See:
http://www.overclock.net/t/1638276/gamers-nexus-vega-64-vs-vega-56-clock-for-clock-shader-comparison-gaming/0_100

Vega 56 isn't a bad buy if you can get it at around MSRP and undervolt. The lack of Crossfire support thus far is a disappointment, though.

Mining doesn't look like a better story either.

http://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-20.html


The newer patches have of course pushed up Vega's mining performance, but it's unlikely the Vega 56 vs 64 positioning will change much. For gamers and miners alike, it's tough for Vega 64 to justify its price premium.

Quote:


> Originally Posted by *DMatthewStewart*
> 
> How did you get the power temp to stick at 80? Mine just defaults back to 70 whether it's done in the GUI or via profile. Almost everything else is adjustable though. It's the one thing I really need to be adjustable.


Are you using WattMan or WattTool? I'd go with WattTool: http://www.overclock.net/t/1609782/watttool-a-simple-tool-that-combines-overclocking-with-vrm-monitoring-tweaking-for-rx-400-series/0_100

Another note: HBM tends to be very temperature sensitive and IIRC starts to destabilize after 60C.

This is one of those GPUs that you have to watercool more so than others to get the most out of it ... or at least buy a giant cooler like the Raijintek Morpheus:


http://imgur.com/gZVkT




I would recommend high static pressure fans like PWM Gentle Typhoons or those Corsair high static pressure fans for those using a custom cooler with custom 120mm fans, but it's the same idea otherwise. Of course watercooling is going to yield even better temps. The reference cooler is not adequate in my opinion for a GPU of this TDP.

Quote:


> Originally Posted by *Soggysilicon*
> 
> Undervolt the HBM, it does not need volts... not in my experience...
> 
> In one of the Superposition tweaks I run HBM at 1105 @ ~850mv just fine in the 4K runs... that won't hold up in gaming or stress testing; I suspect Johnson-Nyquist noise. You just want to find the actual physical HBM frequency that your card is capable of, and work around that. That isn't going to change, just the related voltage you need to keep above the noise floor. Unlike GPU frequency, you can't volt your way around it... pretty black n' white.
> 
> It will take you longer in reboots than in figuring out what it is, cause once you exceed it... you're gonna crash pretty instantly... guaranteed!


It depends on how far you push the HBM, I'm afraid. There does seem to be some "HBM lottery" going on here. I think you may be one of the luckier ones, because there are some folks here who aren't getting >1100 MHz - period.

Another matter to consider is that the HBM voltage acts as a floor. So if you undervolt the core without undervolting the HBM, the card will set the minimum voltage to the higher of those two.
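To make that interaction concrete (this is my reading of reports in this thread, not documented AMD behavior), the effective core floor is simply the higher of the two requested voltages:

```python
def core_voltage_floor(core_mv, hbm_mv):
    # The card won't drop the core below the HBM/SoC voltage request,
    # so an aggressive core undervolt is wasted while HBM volts stay high.
    return max(core_mv, hbm_mv)
```

E.g. requesting 1000mV on the core while leaving HBM at 1050mV still leaves the core held at 1050mV under load.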

The Vega 56 cards do seem to have a weaker voltage bin, and worse, they cannot increase the voltage of the HBM past a certain point, a flaw that constrains the card, as Vega 56 is still HBM2-bottlenecked.

Quote:


> Originally Posted by *PontiacGTX*
> 
> Originally Posted by *PontiacGTX*
> 
> ASUS R9 290(X) had VRM and GPU cooling issues at the beginning, mainly because they were using the same design as the 780s; the 390X and R9 FURY had not-really-good GPU temps compared to other brands. So it isn't news that they do this. Also, MSI hasn't done a high-end AMD GPU since the R9 290X, which is curious
> 
> one thing is where are the custom RX VEGA GPUs? MSI?Gigabyte?XFX?Visiontek?HIS? Sapphire?Powercolor?


See the following:
https://videocardz.com/newz/amd-radeon-rx-vega-custom-cards-no-sooner-than-mid-october

Not until mid-October. Some might not arrive until the end of the year. If we see things like a Vega Lightning, they may not arrive until Q1 of 2018, as Lightning GPUs are usually slower to arrive.

Quote:


> Originally Posted by *chris89*
> 
> RX480 Crossfire is faster than Vega right? I see 13.13 Teraflops.. I mean thats nearly Vega 64 Liquid speeds.
> 
> My RX 480s were 179 a piece so that's only $358...
> 
> $550? why.. pointless no point in upgrading
> 
> I think AMD should do away with COMPUTE Cores on their GAMING GPU(s) and only have API jazz for gaming specifically.
> 
> So the market won't rack up the prices on gaming gpu's for the crypto-coin-future-economic-collapse-of-all-physical-monetary-possession Community...
> 
> DESIGN Compute GPU's & Gaming GPU's... That way the crypto coin community can pay their overpriced costs on GPU's while us *GAMERS* can pay cheaper prices...
> 
> VEGA for 199-249 is way more realistic... No one can afford a VEGA 56 for $550.
> 
> Maybe use no hardware Compute cores on the Gaming GPU(s) but have some kind of "EMULATION" so that it can at least complete Compute tests yet not perform as good as the actual Hardware Lined up Compute GPU's...


It depends on whether the game supports Crossfire.


For games that support CF perfectly, it depends on how they scale
For games that support CF but have issues (e.g. frame-time problems), Vega is a better buy
For games that don't support CF, Polaris has little to offer

Consider this review: https://www.hardocp.com/article/2016/07/11/amd_radeon_rx_480_8gb_crossfire_review/1

Quote:


> Yes, AMD Radeon RX 480 8GB CrossFire costs less than GeForce GTX 1080, a whole lot less, and compares closer to GeForce GTX 1070 on price. AMD Radeon RX 480 8GB CrossFire may be a good value compared to GeForce GTX 1070, offering more performance, for a somewhat equivalent price. That is where it seems to fit best. Compared to GeForce GTX 1080 though, the GTX 1080 is more consistent in every way, performance and frametime offering a smoother experience, literally.


Considering the RX Vega performs around the GTX 1080, it's an interesting comparison. The only part in the review that doesn't apply is the GTX 1080's power efficiency, although undervolting helps.

The size of the die and the cost of HBM mean that they cannot possibly sell Vega for the price you propose.

Hardware emulation is not possible - you need to design your GPU to perform a certain way. AMD is targeting the pro-market aggressively because that is where the margins are. I don't want to sound like an apologist for AMD, but that's quite a sensible business strategy. Just like releasing Polaris first was because it is the gaming "sweet spot" where most of the money is made.


----------



## chris89

@CrazyElf

You can further prevent throttling by cooling all of the hotspot chips on the PCB... They all get really hot, are not monitored by software, and since they sit in a row no cooler makes contact with them; power consumption is sky-high as a result, with the card in a continuous throttle state.


----------



## Chaoz

Quote:


> Originally Posted by *CrazyElf*
> 
> Another note: HBM tends to be very temperature sensitive and IIRC starts to destabilize after 60C.
> 
> This is one of those GPUs that you have to watercool more so than others to get the most out of it ... or at least buy a giant cooler like the Raijintek Morpheus:
> 
> 
> http://imgur.com/gZVkT
> 
> 
> 
> 
> I would recommend high static pressure fans like PWM Gentle Typhoons or those Corsair high static pressure fans for those using a custom cooler with custom 120mm fans, but it's the same idea otherwise. Of course watercooling is going to yield even better temps. The reference cooler is not adequate in my opinion for a GPU of this TDP.
> It depends on how far you push the HBM, I'm afraid. There does seem to be some "HBM lottery" going on here. I think you may be one of the luckier ones, because there are some folks here who aren't getting >1100 MHz - period.


That's the reason why I bought a Vega 64 with the ref cooler. I have a custom loop and planned to add it to my loop as well. Temps so far are awesome. It never exceeds 40°C on balanced mode with FreeSync enabled at 75Hz.

The performance boost from running cooler is also quite good, and the fps with FreeSync disabled is a serious step up compared to my GTX 1070, which got around 100-120fps. With my Vega I get around 130-150fps at 1080p ultrawide with everything maxed out.

So I can really recommend watercooling it. I'm using the EKWB Nickel Acetal block on mine.


----------



## chris89




----------



## Rootax

Quote:


> Originally Posted by *chris89*
> 
> @CrazyElf
> 
> You can further prevent throttling by cooling all of the Hot Spot Chips on the PCB... They all get really hot, not monitored by software as they are lined up so no one cools them and power consumption sky high as a result of these in continuous throttle state.


I don't believe the last two on the right need cooling. They don't, if the EK WB mounting guide is to be believed, and my hotspot is at 58 max with nothing on those two (Vega [email protected] 1100 hbm2, gpu around 43 max, hbm 48-53 depending on the stress test/game)


----------



## alanthecelt

Quote:


> Originally Posted by *CrazyElf*
> 
> For those on the fence between the Vega 56 and 64, see the following:
> 
> Mining doesn't look like a better story either.
> 
> http://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-20.html


I've had 40 MH/s out of my 56's on Claymore on ETH.
Even now they sit at a relatively mild overclock hitting mid-30s on ETH while dual mining other coins.


----------



## alanthecelt

edit
just put the latest nicehash miner on and with mild tweaks on my 56's



memory 1025
temp target 68
power limit +20

think there's a lot more potential there


----------



## alanthecelt

and this is why i hate AMD

started tweaking the settings on one card, crashed the miner, and wattman defaulted
and can I get anywhere near where it was before? no... not at all

I'm around 33 MH/s per card now... what am I doing wrong with this POS software :S


----------



## biscuittea

I wasn't happy with the hotspot temps so I went and did a repaste on my Vega 56 with the Raijintek Morpheus II.

The core and HBM temps now max out at 60C whereas the hotspot is creeping to around 100C. To put that in context, my previous temps were roughly 70C for both core and HBM and roughly 95C for the hotspot.

Not sure what I can do to make the hotspot temps lower.

edit: Some say that the hotspot measures the VRM, but there are others reporting quite high hotspot temps while HWiNFO shows their VRM temps as lower. I also can't seem to get HWiNFO to show VRM temps for me, just core and memory.


----------



## PontiacGTX

Quote:


> Originally Posted by *asdkj1740*
> 
> i suspect asus simply put the strix 1080ti cooler with little modification on the vega64 strix. 1080ti strix cooler on 1080ti strix should be fine.
> 
> comparing to strix 1080ti cooler, there are extra fins from the main heatsink for vega64 strix MOSFETs to cool them directly, however that placing is just not good enough. really wish this cooler is not the finalized one.


ASUS did this on the R9 290/X and probably on the R9 FURY as well.
Quote:


> Originally Posted by *CrazyElf*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> For those on the fence between the Vega 56 and 64, see the following:
> 
> See:
> http://www.overclock.net/t/1638276/gamers-nexus-vega-64-vs-vega-56-clock-for-clock-shader-comparison-gaming/0_100
> 
> Vega 56 isn't a bad buy if you can get it at around MSRP and undervolt. No Crossfire thus far though is a disappointment.
> 
> Mining doesn't look like a better story either.
> 
> http://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-20.html
> 
> 
> The newer patches have pushed up Vega of course for mining, but it's unlikely the Vega 56 vs 64 position will change much. For gamers and miners, it's tough for Vega 64 to justify its price premium.
> Are you using WattMan or WattTool? I'd go with WattTool: http://www.overclock.net/t/1609782/watttool-a-simple-tool-that-combines-overclocking-with-vrm-monitoring-tweaking-for-rx-400-series/0_100
> 
> Another note: HBM tends to be very temperature sensitive and IIRC starts to destabilize after 60C
> 
> This is one of those GPUs that you have to watercool more so than others to get the most out of it ... or at least buy a giant cooler like the Raijintek Morpheus:
> 
> 
> http://imgur.com/gZVkT
> 
> 
> 
> 
> I would recommend high static pressure fans like PWM Gentle Typhoons or those Corsair high static pressure fans for those using a custom cooler with custom 120mm fans, but it's the same idea otherwise. Of course watercooling is going to yield even better temps. The reference cooler is not adequate in my opinion for a GPU of this TDP.
> It depends on how far you push the HBM, I'm afraid. There does seem to be some "HBM lottery" going on here. I think you may be one of the luckier ones, because there are some folks here who aren't getting >1100 MHz - period.
> 
> Another matter to consider is that HBM controls the voltage. So if you undervolt the core without undervolting the GPU, it will set the minimum voltage to the higher of those two.
> 
> The Vega 56 cards do seem to have a weaker voltage bin, and worse, they cannot increase the voltage of the HBM past a certain point, a flaw that constrains the card, as Vega 56 is still HBM2-bottlenecked.
> See the following:
> https://videocardz.com/newz/amd-radeon-rx-vega-custom-cards-no-sooner-than-mid-october
> 
> 
> 
> Not until mid-October. Some might not arrive until the end of the year. If we see things like a Vega Lightning, they may not arrive until Q1 of 2018, as Lightning GPUs are usually slower.
> It depends on if the game supports Crossfire.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> For games that support CF perfectly, it depends on how they scale
> For games that support CF, but have issues, Vega is a better buy (ex: frame time problems)
> For games that don't support CF, Polaris has little to offer
> 
> Consider this review: https://www.hardocp.com/article/2016/07/11/amd_radeon_rx_480_8gb_crossfire_review/1
> Considering the RX Vega performs around the GTX 1080, it's an interesting comparison. The only part in the review that doesn't apply is the GTX 1080's power efficiency, although undervolting helps.
> 
> The size of the die and the cost of HBM mean that they cannot possibly sell Vega for the price you propose.
> 
> Hardware emulation is not possible - you need to design your GPU to perform a certain way. AMD is targeting the pro-market aggressively because that is where the margins are. I don't want to sound like an apologist for AMD, but that's quite a sensible business strategy. Just like releasing Polaris first was because it is the gaming "sweet spot" where most of the money is made.


I don't know if you mean slower in time to be released to the market, but Lightnings are among the fastest and better-binned cards from AMD. The R9 FURY X had no Lightning either, and the 480 had no special cooling/power delivery.
Quote:


> Originally Posted by *biscuittea*
> 
> I wasn't happy with the hotspot temps so I went and did a repaste on my Vega 56 with the Raijintek Morpheus II.
> 
> The core and HBM temps now max out at 60C whereas the hotspot is creeping to around 100C. To put that in context, my previous temps were roughly 70C for both core and HBM and roughly 95C for the hotspot.
> 
> Not sure what I can do to make the hotspot temps lower.
> 
> edit: Some say that the hotspot measures the VRM but there are others reporting hotspot temps being quite high whereas HWinfo says their VRM temps lower. I also can't seem to get HWInfo to show VRM temps for me, just core and memory.


No, I just saw that the hotspot could be some area of the die that doesn't have thermal compound/heatsink contact. I am wondering whether non-molded dies have higher temperatures than molded dies (or the other way around).
Quote:


> Originally Posted by *alanthecelt*
> 
> and this is why i hate AMD
> 
> started tweaking the settings on one card, crashed the miner and wattman defaulted
> and can i get anywhere near where it was before? no... not at all
> 
> im around 33 per card now... what am i doing wrong with this POS software :S


so a gaming video card is bad because you can't mine with it over 33 MH/s and a third-party software crashes?


----------



## The EX1

Quote:


> Originally Posted by *PontiacGTX*
> 
> I odnt know if you mean slower in time to be released to the market but Lightning are one of the fastest and better binned from AMD,the R9 FURY X has no Lightning aswell the 480 had no special cooling/power delivery


He was talking about when the Lightnings are typically released.









Lightnings, Matrix, ARES, and those other extreme cards are always released much later than other AIB models unfortunately.


----------



## madmanmarz

So I've found my approximate limits:
P6: 1000mv/1650mhz P7: 1050mv/1700mhz Actual: 1540mhz
HBM: 950mv/1100mhz
Ambient: 25c Core: 34c HS: 75c HBM: 43c Powerlimit: +50%
Superposition 1080p High: 10061

50mhz higher on core or HBM = freezeup, time to put some voltage on the core and see what I can get out of that. I may end up putting a quiet fan blowing on the air cooled part of the water block that cools VRM etc, maybe even hooked up to the GPU fan spot, as that helped tremendously w/ VRM with my 290.


----------



## PontiacGTX

Quote:


> Originally Posted by *madmanmarz*
> 
> So I've found my approximate limits:
> P6: 1000mv/1650mhz P7: 1050mv/1700mhz Actual: 1540mhz
> HBM: 950mv/1100mhz
> Ambient: 25c Core: 34c HS: 75c HBM: 43c Powerlimit: +50%
> Superposition 1080p High: 10061
> 
> 50mhz higher on core or HBM = freezeup, time to put some voltage on the core and see what I can get out of that. I may end up putting a quiet fan blowing on the air cooled part of the water block that cools VRM etc, maybe even hooked up to the GPU fan spot, as that helped tremendously w/ VRM with my 290.


Can you move P5 near P6? Maybe the average will be higher.


----------



## madmanmarz

I'm using wattman so I can only mess with p6/7


----------



## PontiacGTX

Quote:


> Originally Posted by *madmanmarz*
> 
> I'm using wattman so I can only mess with p6/7


Try WattTool?


----------



## madmanmarz

Quote:


> Originally Posted by *PontiacGTX*
> 
> Try WattTool?


I guess I will. I was hoping not to have to mess with different programs - just wanna set it and forget it. In superposition for some reason my clocks are lower than specified but in firestrike the clocks go up to the actual speed set in wattman.


----------



## PontiacGTX

Quote:


> Originally Posted by *madmanmarz*
> 
> I guess I will. I was hoping not to have to mess with different programs - just wanna set it and forget it. In superposition for some reason my clocks are lower than specified but in firestrike the clocks go up to the actual speed set in wattman.


Have you compared in games?


----------



## madmanmarz

Quote:


> Originally Posted by *PontiacGTX*
> 
> Have you compared in games?


It's odd, a little while ago it was hitting 1600s on the core in firestrike and causing instant freeze up, and now all of a sudden it's back to the original plan and doing 1540mhz in benchmarks and games.

edit - update - tried wattool and get the same strange behavior. it seems that some combinations of clocks and voltages give different results than others. like how if you go over 1050mv on the HBM the clocks revert to stock. if i can lock in this 1000mv/1540mhz 950mv/1100mhz then i will be more than happy for now.

EDIT!!! Super important question here - what combination of the LED switches on the card do I hit to make both the red and blue lights come on (purple)? I have both in the up position and I have red LEDs. This stupid waterblock doesn't have a cutout for the switch, but I can probably reach them with a dental pick or something.


----------



## 99belle99

Quote:


> Originally Posted by *madmanmarz*
> 
> It's odd, a little while ago it was hitting 1600s on the core in firestrike and causing instant freeze up, and now all of a sudden it's back to the original plan and doing 1540mhz in benchmarks and games.
> 
> edit - update - tried wattool and get the same strange behavior. it seems that some combinations of clocks and voltages give different results than others. like how if you go over 1050mv on the HBM the clocks revert to stock. if i can lock in this 1000mv/1540mhz 950mv/1100mhz then i will be more than happy for now.
> 
> EDIT!!! super important question here - what combination of the LED switches on the card do I hit to make both the red and blue lights come on (purple)? I have both in the up position and I have red LEDs. this stupid waterblock doesn't have a cut out for the switch but I can probably get them with a dental pick or something.


You cannot get purple, only red and blue separately.


----------



## pengs

Quote:


> Originally Posted by *madmanmarz*
> 
> I guess I will. I was hoping not to have to mess with different programs - just wanna set it and forget it. In superposition for some reason my clocks are lower than specified but in firestrike the clocks go up to the actual speed set in wattman.


Same. Hovers around 1670 in superposition.


----------



## madmanmarz

Quote:


> Originally Posted by *99belle99*
> 
> You cannot get purple, only red and blue separately.


Thanks.

Okay so I think I found my happy place. HBM @ 950mv/1100mhz has been stable in every test. Core has been the issue. I'm putting p6/p7 states the same.

If I put 1100mv/1650mhz, I get 1050mv/1550mhz

If I put 1150mv/1650mhz, I get 1100mv/1590mhz - Results at these clocks:
Firestrike Graphics/Combined: 22985 / 17042
Superposition 1080p High: 10190
Core: 40c HS: 93c HBM: 48c

I think that's as far as I'll push for now with the HS temps and I'll be running 1050mv/1550mhz normally.


----------



## Soggysilicon

Quote:


> Originally Posted by *Rootax*
> 
> I don't believe the last two on the right need cooling. It doesn't if I believe the EK WB mounting guide, and, my hot spot is at 58max, with nothing on those two (Vega [email protected] 1100 hbm2, gpu around 43max, hbm 48-53 depending on the stresstest/game)


The "GPU-Z person / rep / guy" said all they had done was read back from the sensor that AMD embedded. I highly doubt the so-called "hot spot" has jack to do with some voltage regs on the side of the board...







I strongly suspect it's a current diode embedded near the core, closer to the interposer, which gives a Tj reading back from a look-up table. There are a couple of factors that separate junction temps from the more holistic "core" temp, but typically one could maybe expect a 15-20% difference in readings, give or take... which is what your readings seem to indicate, and which coincided with my own when I looked into it... while munching a Twinkie... with *ZFG*.









Decent article on the subject, describing an application for an input choke:
http://www.analog.com/en/analog-dialogue/articles/esd-diode-doubles-as-temperature-sensor.html
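For the curious, the delta-Vbe technique that article describes is simple enough to sketch: drive a junction at two known currents, measure the forward-voltage difference, and the temperature falls out of the diode equation. This is standard physics, not anything specific to Vega's sensor, so treat it as an illustration only:

```python
from math import log

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def junction_temp_kelvin(delta_v, current_ratio, ideality=1.0):
    """Junction temperature from the forward-voltage difference of a diode
    driven at two currents: delta_v = n * (kT/q) * ln(I2/I1), solved for T."""
    return (Q_E * delta_v) / (ideality * K_B * log(current_ratio))
```

A ~59.5mV delta at a 10:1 current ratio works out to roughly 300K (about 27°C); the on-die sensor only needs a current source, a voltage readback, and a look-up like this.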



Quote:


> Originally Posted by *madmanmarz*
> 
> Thanks.
> 
> Okay so I think I found my happy place. HBM @ 950mv/1100mhz has been stable in every test. Core has been the issue. I'm putting p6/p7 states the same.
> 
> If I put 1100mv/1650mhz, I get 1050mv/1550mhz
> 
> If I put 1150mv/1650mhz, I get 1100mv/1590mhz - Results at these clocks:
> Firestrike Graphics/Combined: 22985 / 17042
> Superposition 1080p High: 10190
> Core: 40c HS: 93c HBM: 48c
> 
> I think that's as far as I'll push for now with the HS temps and I'll be running 1050mv/1550mhz normally.


The three voltages and frequencies seem to function as part of a feedforward cascade loop.

The drivers quite frankly are not all that great, and there are clearly some issues with the feedback loop: settings that step outside of what is stable, while not outside the Nyquist stability criterion (like an asymptotic or resonance spur), are definitely outside the run rules for Vega (maybe when it functions as a positive feedback loop, which is inherently unstable)... such as...



I got at least 2 settings which will reliably produce that output... needless to say that's a crash.

My most recent settings... because sharing is caring:

1666 / 1740 @ 1167 / 1250
1105 @ 897 +50 HBCC "ON"
64cu on LC-b
3440 w/ freesync + ultimate engine

Switching over to HBCC "ON" full time now, as I have observed in my case that the AMD driver will more often fault before the card hangs, which dumps whatever application I am in rather than locking up the card and giving a Q-code "8" fault for the VGA adapter... restarting the AMD driver while still in the OS keeps me up and running. The performance hit in some applications is made up by a slight boost in others... so... better than rebooting... over and over...

enforced settings and ran real quick...


----------



## Paul17041993

Quote:


> Originally Posted by *chris89*
> 
> @CrazyElf
> 
> You can further prevent throttling by cooling all of the Hot Spot Chips on the PCB... They all get really hot, not monitored by software as they are lined up so no one cools them and power consumption sky high as a result of these in continuous throttle state.


Those are diodes, they don't get hot and they don't need cooling. The rear chokes I don't know, but they don't get hot either.

Quote:


> Originally Posted by *biscuittea*
> 
> I wasn't happy with the hotspot temps so I went and did a repaste on my Vega 56 with the Raijintek Morpheus II.
> 
> The core and HBM temps now max out at 60C whereas the hotspot is creeping to around 100C. To put that in context, my previous temps were roughly 70C for both core and HBM and roughly 95C for the hotspot.
> 
> Not sure what I can do to make the hotspot temps lower.
> 
> edit: Some say that the hotspot measures the VRM but there are others reporting hotspot temps being quite high whereas HWinfo says their VRM temps lower. I also can't seem to get HWInfo to show VRM temps for me, just core and memory.


What pads are you using? If it's the EK ones, toss them and get some Thermal Grizzly 0.5mm and 1mm strips; one of each 12x2cm strip is plenty for Vega.


----------



## rancor

Quote:


> Originally Posted by *Paul17041993*
> 
> Those are diodes, they don't get hot and they don't need cooling. The rear chokes I don't know, but they don't get hot either.
> 
> What pads are you using? If it's the EK ones, toss them and get some Thermal Grizzly 0.5mm and 1mm strips; one of each 12x2cm strip is plenty for Vega.


He is crazy, don't listen to him.

But those are caps, not diodes; probably for input voltage filtering. http://www.farnell.com/datasheets/718274.pdf


----------



## gamervivek

Quote:


> Originally Posted by *biscuittea*
> 
> I wasn't happy with the hotspot temps so I went and did a repaste on my Vega 56 with the Raijintek Morpheus II.
> 
> The core and HBM temps now max out at 60C whereas the hotspot is creeping to around 100C. To put that in context, my previous temps were roughly 70C for both core and HBM and roughly 95C for the hotspot.
> 
> Not sure what I can do to make the hotspot temps lower.
> 
> edit: Some say that the hotspot measures the VRM but there are others reporting hotspot temps being quite high whereas HWinfo says their VRM temps lower. I also can't seem to get HWInfo to show VRM temps for me, just core and memory.


My card doesn't show VRM temps either. I hope that these VRMs are as good as the reference ones.

I was thinking it was the VRM, but now it seems the sensor is on-die. My delta between core and hotspot is 20-30C. Some people have been able to bring it down by reseating the cooler.


https://www.reddit.com/r/716edj/rx_vega_56_morpheus_ii_hotspot_temp/

I wish it could be brought down to 10C delta.


----------



## chris89

Quote:


> Originally Posted by *Paul17041993*
> 
> Those are diodes, they don't get hot and they don't need cooling. The rear chokes I don't know, but they don't get hot either.
> What pads you using? if it's the EK ones then toss them and get some thermalgrizzly .5 and 1mm strips, one of each 12x2cm strip is plenty for vega.


Quote:


> Originally Posted by *gamervivek*
> 
> My card doesn't show vrm temps as well. I hope that these vrms are also as good as the default ones.
> 
> I was thinking that it's vrm but now that it is on die. My delta between core and Hotspot is 20-30C. Some people have been able to bring it down by reseating the cooler.
> 
> 
> __
> https://www.reddit.com/r/716edj/rx_vega_56_morpheus_ii_hotspot_temp/


Could buy a copper backplate too... $10 4" x 12" 24 AWG copper. Cut to size with tin snips, tap holes, and add pads to separate it from the PCB while still thermally conducting to the backplate... Pad the back of the core & across the VRMs & if you have the material... the whole PCB.

http://www.ebay.com/itm/Copper-Sheet-Metal-230-Alloy-24-Gauge-Bright-Polish-4-X12-/192070131214?hash=item2cb8459e0e:g:cugAAOSwA3dYbV-l

http://www.ebay.com/itm/24-ga-Copper-Sheet-Metal-Plate-4-x-4-/232439461990?hash=item361e78b866:g:SzkAAOSwTLxZhzut

http://www.ebay.com/itm/113-5g-Thermally-Conductive-Silicone-Glue-Adhesive-Thermal-Heatsink-BIG-/270717020058?epid=1631560249&hash=item3f07fde79a:gzwAAOSwt5hYZUYg

http://www.ebay.com/itm/50ml-Thermal-Adhesive-Glue-Tube-Heatsink-Plaster-Silicone-Heat-Sink-Paste-/302360271612?hash=item4666139afc:g:Ku8AAOSwbtVZTfnQ

http://www.ebay.com/itm/10-Inch-Straight-Cut-Aviation-Tin-Snips-Steel-Compound-Cutting-Sheet-Metal-NEW-/232346535212?hash=item3618eec52c:g:-MAAAOSwSypY-TDT

http://www.ebay.com/itm/NEW-Thermagon-thermal-gap-filler-pad-T-PLI-2200-A1-12mm-x-12mm-x-5mm-49-per-pack-/172855009184?hash=item283ef61fa0:g:w3wAAOSwAuZX1TRU


----------



## asdkj1740

Quote:


> Originally Posted by *PontiacGTX*
> 
> Try wattool?


can wattool change other p states working?


----------



## Chaoz

Quote:


> Originally Posted by *chris89*
> 
> If there is a chip of any kind on the PCB, it's associated with the Hotspot. Hotspot is "Hot" & "Spots" all over the PCB. Meaning you have the sensors on all chips, so take the time to dial in the cooler to contact all chips.. Can be done on the stock cooler... big time increase in performance... hotspot is set to a 115c limit.
> 
> Gotta feel it with your finger at load to know.. I'm 100% sure its going to scorch your finger at load... at full load is smoking hot.
> 
> I would buy a 32 awg sheet of copper on ebay for $7 cut out the pieces to cover the holes & add a pad on top so it contacts the copper... while the copper is adhered thermally to the rest of the assembly... cut the Hotspot down to 50C at full tilt, 32C idle. The card is designed to be the most powerful & efficient GPU Ever Made.
> 
> Just gotta do the above modifications. Traders knew it would be a breakthrough & crippled it right before release.
> 
> 
> 
> 
> Could buy a Copper Backplate too... $10 4" X 12" 24 AWG Copper.. Cut to size with tin snips & tap holes and add pads to separate from the pcb yet thermally conduct to the backplate... Pad back of CORE & ACROSS VRMs & If you have the material... the whole PCB.


Those 2 left chokes don't even get hot. They don't need thermal pads and don't even touch the waterblock or blower cooler. They don't go over 40°C after gaming for hours on end.

Even the top 3 chokes don't get hot. They go up to around 40°C and stay there. I measured it with a laser thermometer, which is accurate and can measure up to 250°C.

You're seeing things that aren't there.
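Worth a caveat on IR ("laser") thermometer readings on bare metal, though: most guns assume an emissivity near 1.0, while shiny choke casings are closer to 0.1, so they can read far below the true surface temperature. A rough sketch of the effect via the Stefan-Boltzmann relation (the 80°C surface, 0.10/0.95 emissivities, and 25°C ambient are illustrative assumptions, not measurements from this thread):

```python
# Sketch: what an emissivity-1.0 IR gun reports for a surface whose real
# emissivity is lower. Detected radiance ~ e*T^4 + (1-e)*T_amb^4 (Kelvin),
# the second term being reflected background; the gun inverts assuming e = 1.

def apparent_temp_c(true_temp_c, emissivity, ambient_c=25.0):
    """Temperature an e=1.0 IR gun would report for the given surface."""
    t = true_temp_c + 273.15
    t_amb = ambient_c + 273.15
    radiance = emissivity * t**4 + (1.0 - emissivity) * t_amb**4
    return radiance**0.25 - 273.15

# A choke at a true 80 C with shiny metal (e ~ 0.1) reads much closer
# to ambient than to 80 C:
print(round(apparent_temp_c(80.0, 0.10), 1))
# The same surface covered with matte tape (e ~ 0.95) reads near true:
print(round(apparent_temp_c(80.0, 0.95), 1))
```

A common workaround is a patch of matte electrical tape on the choke and measuring that, which brings the effective emissivity close to what the gun assumes.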


----------



## Paul17041993

Quote:


> Originally Posted by *rancor*
> 
> But those are caps not diodes probably for input voltage filtering.


Yea, kinda realised a while later that those would actually be large ceramic caps, particularly because down-volt VRMs shouldn't need diodes...

Quote:


> Originally Posted by *chris89*
> 
> If there is a chip of any kind on the PCB, it's associated with the Hotspot. Hotspot is "Hot" & "Spots" All over the PCB. Meaning you have the sensors on all chips, so take the time to dial in the cooler to contact all chips.. Can be done on stock cooler... big time increase in performance... hot spot is set to 115c limit.


Those holes you highlighted exist purely to prevent contact obstruction, best you lay off the pot mate.
Also if you continue these posts I might as well just report them, for once...

Also, just an fyi for the 'hotspot' temps, they most likely are VRM-related, as I only get about a 25C delta to the water using 1mm 8 W/mK pads (Thermal Grizzly, as I mentioned before). Just to clarify as well, I use their Aeronaut paste for the core and HBM (spread generously across the whole surface), which gives a delta of around 10C.



water temp was about 35C, ran the test for about 13 minutes and the temps had stabilised, turbo mode.
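As a sanity check on pad deltas like that: steady-state conduction through a pad is ΔT = q·t/(k·A). A quick sketch with assumed numbers (only the 1 mm / 8 W/mK pad spec comes from the post; the ~30 W VRM loss and ~6 cm² contact area are guesses for illustration):

```python
# Back-of-envelope conduction drop across a thermal pad: dT = q * t / (k * A).

def pad_delta_c(power_w, thickness_m, k_w_mk, area_m2):
    """Temperature drop (C) across a pad conducting power_w watts."""
    return power_w * thickness_m / (k_w_mk * area_m2)

# ~30 W of VRM loss through a 1 mm, 8 W/mK pad over ~6 cm^2 of contact:
delta = pad_delta_c(30.0, 1e-3, 8.0, 6e-4)
print(round(delta, 2))
```

With those assumptions the pad itself only accounts for a handful of degrees, so most of a ~25C delta would come from contact interfaces, sensor placement, and spreading resistance rather than the pad material.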

Quote:


> Originally Posted by *Chaoz*
> 
> Even the top 3 chokes don't get hot. They go up to around 40°C and stay there. I measured it with a laser thermometer, which is accurate and can measure up to 250°C.


The 3 between the GPU and the display outputs? Yea, those are for the DisplayPorts I'm pretty sure, as they each need a regulated 1.1V supply last I checked. They would only get moderately hot if you had a 5K HDR monitor attached to each via 3-5m cables...


----------



## Chaoz

Quote:


> Originally Posted by *Paul17041993*
> 
> The 3 between the GPU and the display outputs? yea, those are for the displayports I'm pretty sure as they each need a regulated 1.1V supply last I checked, would only get moderately hot if you had a 5kHDR monitor attached to each via 3-5m cables...


Meant the ones in the pic where he put arrows, above the chip on the side of the GPU, where those holes are in the ref cooler.

But yeah also the ones you mentioned.

Doubt he's smoking pot, he's on crack









----------



## madmanmarz

I'm in the same boat: hotspot must be VRM temps. They act exactly the same as VRM temps did on my 290, where the temps drastically increase with higher vcore.

Nonetheless in a few days I am going to add a pwm fan to the GPU fan header pointed at the air-cooled portion of my waterblock that covers the rest of the board and I should quickly be able to verify. If temps decrease with a fan then it's definitely not a temp coming from the die.


----------



## The EX1

Quote:


> Originally Posted by *Chaoz*
> 
> Meant the ones in the pic where he put arrows next to above the chip on the side of the GPU. Where those holes are in the ref cooler.
> 
> But yeah also the ones you mentioned.
> 
> Doubt he's smoking pot, he's on crack
> 
> 
> 
> 
> 
> 
> 


If I could easily just scroll past his posts I wouldn't be bothered as much, but each post takes up my entire screen with his huge picture collages, links, and ridiculous theories.


----------



## chris89

Hilarious. If money were not an issue I would show you that I know exactly what I'm talking about. I know exactly what the results would be after doing this: greater than water cooling performance with the stock blower.


----------



## pengs

Anyone with stability issues on 17.9.1? 17.8.1 was installed when the card arrived and had similar issues, hard-lock-like. I've read that the release drivers, 17.7.1, are the most stable.

It's reminiscent of a bad system memory overclock, which may be the case for me.

Might as well throw up a TS run I did at +50% and 1050 HBM.


----------



## Chaoz

Quote:


> Originally Posted by *The EX1*
> 
> If I could easily just scroll past his posts I wouldn't be bothered as much, but each post takes up my entire screen with his huge picture collages, links, and ridiculous theories.


I hear ya. Those posts take up an entire page.
Quote:


> Originally Posted by *chris89*
> 
> Hilarious. If money were not an issue I would show you, that I know exactly what I'm talking about. I know exactly the results, after doing this. Greater than water cooling performance with the stock blower.


Better than watercooling? Lol, now I'm certain you must be tripping.


----------



## PontiacGTX

Quote:


> Originally Posted by *asdkj1740*
> 
> can wattool change other p states working?


I think someone modified P5 using WattTool, or maybe the powerplay tables through the registry?


----------



## The EX1

17.9.2 Driver for Vega is supposed to be released later today evidently. CROSSFIRE support finally enabled







Checked the driver page but no signs of it yet.


----------



## poisson21

I have only one word : *CROSSFIRE*


----------



## steadly2004

Quote:


> Originally Posted by *The EX1*
> 
> 17.9.2 Driver for Vega is supposed to be released later today evidently. CROSSFIRE support finally enabled
> 
> 
> 
> 
> 
> 
> 
> Checked the driver page but no signs of it yet.


Hot damn! I thought this day would not come.


----------



## asdkj1740

Quote:


> Originally Posted by *The EX1*
> 
> 17.9.2 Driver for Vega is supposed to be released later today evidently. CROSSFIRE support finally enabled
> 
> 
> 
> 
> 
> 
> 
> Checked the driver page but no signs of it yet.


***, I have just done some testing on the 17.9.1 driver and now 17.9.2 is about to come out?? My efforts are all gone...


----------



## kundica

I didn't see a 4k single card video from him for comparison, but he did post a lot of 1440p videos of Dirt before.

Anyway, Dirt 4 with Vega 64 Crossfire at 4k:


----------



## rancor

Quote:


> Originally Posted by *Chaoz*
> 
> I hear ya. Those posts take uo an entire page.
> Better than watercooling? Lol, now I'm certain you must be tripping.


I don't think AMD purposely sandbagged Vega with bad cooling.

Its some super strong stuff he is tripping on. I would just block him so his posts are automatically hidden and don't take up so much space. Then you can just open them when you feel like it.


----------



## ducegt

I've been enjoying the comedy in silence. Could be too much of the good stuff or an imbalance of medication and/or just a quirky personality...something not to be poked fun at. Only chiming in because he sure does market his ideas well..at times, but isn't able to prove the value. Engineer turned salesman myself. No way I'm covering any GPU in bling-bling though.


----------



## kundica

It's live. http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.9.2-Release-Notes.aspx


----------



## geoxile

Has there been any indication when dsbr and primitive shaders will be enabled?


----------



## Rootax

By the "known issues", they haven't fix a lot of things...


----------



## IvantheDugtrio

Is it possible to install the RX drivers on a Frontier Edition card? It looks like AMD hasn't been updating the FE drivers as much.


----------



## cg4200

So I broke down and bought a 56 today one of my 1080 ti's died need to send her in..
I wanted a 64 liquid cooled but not paying 800 will wait for price to drop..
Any hoot I have not run amd graphics cards in awhile. I am gonna put my other ti water cooled in slot 2..
Question is do I just install amd card and drivers with Nvidia drivers still installed?? Thanks


----------



## Trender07

Quote:


> Originally Posted by *cg4200*
> 
> So I broke down and bought a 56 today one of my 1080 ti's died need to send her in..
> I wanted a 64 liquid cooled but not paying 800 will wait for price to drop..
> Any hoot I have not run amd graphics cards in awhile. I am gonna put my other ti water cooled in slot 2..
> Question is do I just install amd card and drivers with Nvidia drivers still installed?? Thanks


Yeah, you can uninstall the Nvidia drivers and then install the AMD drivers; you can also use DDU if you want.


----------



## shadowxaero

Quote:


> Originally Posted by *Rootax*
> 
> By the "known issues", they haven't fix a lot of things...


Bandaid at best... GPU crashed and the drivers recovered instead of the system restarting... thought to myself "nice". Loaded up the stress test again and the drivers just went wonky, HBM downclocking constantly, core clocks all over the place. Had to restart anyway to fix the issues...


----------



## poisson21

Arggghhh!!! Impossible to install the new driver, system freezes completely near 50%









edit: after multiple DDU runs and reinstalls it works.


----------



## Trender07

Well, at least now I can enable Enhanced Sync (uninstalled drivers with ddu, then installed 17.9.2)


----------



## baakstaff

So apparently my Powercolor Vega 56 shipped with 2 low power BIOSes instead of having a 150w low power and a 165w default BIOS. And someone please correct me if I'm wrong, but the unlocked/flashable BIOS is supposed to be in the switch position closer to the PCIe cables, right? Because that one was locked down for me, couldn't flash the 64 BIOS onto it at all. Luckily the other one was unlocked and flashed successfully, but it's pretty impressive that it happened at all.


----------



## lowdog

Quote:


> Originally Posted by *baakstaff*
> 
> So apparently my Powercolor Vega 56 shipped with 2 low power BIOSes instead of having a 150w low power and a 165w default BIOS. And someone please correct me if I'm wrong, but the unlocked/flashable BIOS is supposed to be in the switch position closer to the PCIe cables, right? Because that one was locked down for me, couldn't flash the 64 BIOS onto it at all. Luckily the other one was unlocked and flashed successfully, but it's pretty impressive that it happened at all.


Flashable bios switch position is closest to PCIe bracket and NOT closest to PCIe cables


----------



## Caldeio

confirm!^

Someone test that crossfire!

eBay has Vega 56's for US$464.99. PowerColor models, but hey!


----------



## steadly2004

Quote:


> Originally Posted by *Caldeio*
> 
> confirm!^
> 
> Someone test that crossfire!
> 
> Ebay has vega 56's for usd$464.99. Powercolor models but hey!


BF1 was stuttery.... no setting completely fixed it. Better with 1 GPU. The usage graph looked like a sawtooth, up and down. Terrible...









Doom was very smooth, but I didn't check to see if the 2nd GPU was working, and it's smooth as hell with 1.

Witcher 3 I'm trying to get up and running, but haven't gotten it going yet. Having trouble downloading with GOG and launching.


----------



## Soggysilicon

Quote:


> Originally Posted by *chris89*
> 
> Hilarious. If money were not an issue I would show you, that I know exactly what I'm talking about. I know exactly the results, after doing this.
> 
> *Greater than water cooling performance with the stock blower*.


https://www.coursera.org/learn/thermodynamics-intro


Quote:


> Originally Posted by *The EX1*
> 
> 17.9.2 Driver for Vega is supposed to be released later today evidently. CROSSFIRE support finally enabled
> 
> 
> 
> 
> 
> 
> 
> Checked the driver page but no signs of it yet.


No sooner had I read this than I got the notification... time to roll some dice!
Quote:


> Originally Posted by *cg4200*
> 
> So I broke down and bought a 56 today one of my 1080 ti's died need to send her in..
> I wanted a 64 liquid cooled but not paying 800 will wait for price to drop..
> Any hoot I have not run amd graphics cards in awhile. I am gonna put my other ti water cooled in slot 2..
> Question is do I just install amd card and drivers with Nvidia drivers still installed?? Thanks


Under windows 10, I don't see an "immediate" problem; as others have said there is always DDU, the 56 w/ 64 bios should be a winner at any rate.


----------



## jearly410

Quote:


> Originally Posted by *steadly2004*
> 
> BF1 was stuttery.... no setting completely fixed it. Better with 1 GPU. The usage graph looked like a sawtooth, up and down. Terrible...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Doom was very smooth, but I didn't check to see if the 2nd GPU was working, and it's smooth as hell with 1.
> 
> Witcher 3 I'm trying to get up and running, but haven't gotten it going yet. Having trouble downloading with GOG and launching.


Bf1 has always had crossfire problems :/ never got it working right with my furies.
DOOM doesn't support mgpu.


----------



## Soggysilicon

New driver feels like a rollback. Spiked twice to >1790 MHz... crashes at 1080p scene 12 in Unigine Heaven; at 3440, benching scene 12, the second pass crashed the driver with a 10-20s pause, then continued on... scores were lower after subsequent benches, and on exiting the application it could not load Radeon software...

The card seems more "locked in", right until it isn't. I feel like there is "something" failing driver-side... maybe the DSBR... shrug... anyone else having any better luck?


----------



## TrixX

Getting the odd green screen flash with the new 17.9.2 drivers. May have to revert to older drivers to get rid of it.

Getting the green screen flash with 17.9.1 as well now. Only started happening today, though ambient temps have risen about 5 degrees (30C ambient now).

Raising min fan speed to compensate but unsure why the green screen is occurring.


----------



## chris89

Quote:


> Originally Posted by *TrixX*
> 
> Getting the odd green screen flash with the new 17.9.2 drivers. May have to revert to older drivers to get rid of it.


I too have been having issues.. Tomb Raider is crashing with Crossfire (580 BIOS) and 17.9.1, but Forza Apex was just fine and ran incredibly well at 3840x2160 ultra.

I wonder why my SAMSUNG 4K HDTV is only showing at 1920x1080, & VSR allowed 4K yet it's an upscaled 1080p image... I'm sure my HDMI cables can handle 3840x2160 60Hz.. strange?


----------



## TrixX

Seems like my issues may be CPU-related rather than GPU-related. Will investigate further.

EDIT: Back on my R9 290 until my new rig is ready. Too much risk I could damage the Vega in my current rig.


----------



## Skinnered

Quote:


> Originally Posted by *poisson21*
> 
> Arggghhh!!! impossible to install the new driver, system freeze completly near 50%
> 
> 
> 
> 
> 
> 
> 
> 
> 
> edit: after multiple ddu and reinstall it work.


Same here, during install the screen goes off and the system reboots itself with no signal and the GPU (liquid) fan spinning up and down...







Damn, I was excited to see how it would run.

Is the W10 Creators Update required for CF to run or something?


----------



## poisson21

Now that it works it looks good, but I have to make a profile for each application to use the best Crossfire setting for it.

Seems I can't even start a 3DMark benchmark now; no time to look into why right now.

Tried Superposition and got a score a little over 11000; I'll try later to meddle with the Crossfire settings to see if I can get an improvement.


----------



## cg4200

Hey guys thanks..
I did not word my question too well though..
Can I keep my 1080 Ti installed in slot 2 with its drivers installed and at the same time install my 56 and the AMD drivers with no conflict??
Reason: I would like to be able to play GTA with the 1080 Ti, heard the 56 is not so good... And keep the 56 for Battlefield and so forth..
Also while I am at it, can I run a monitor off my 1080 Ti for my TV and at the same time run my 56 with the 49" wasabi FreeSync??
Thanks again


----------



## Paul17041993

Quote:


> Originally Posted by *The EX1*
> 
> 17.9.2 Driver for Vega is supposed to be released later today evidently. CROSSFIRE support finally enabled
> 
> 
> 
> 
> 
> 
> 
> Checked the driver page but no signs of it yet.


ooooo I'm more inclined to get a 56 to replace my 290X as the secondary card, not that I actually need 150 fps in the games I play...









why are 4k120 monitors taking so long...
Quote:


> Originally Posted by *cg4200*
> 
> So I broke down and bought a 56 today one of my 1080 ti's died need to send her in..
> I wanted a 64 liquid cooled but not paying 800 will wait for price to drop..
> Any hoot I have not run amd graphics cards in awhile. I am gonna put my other ti water cooled in slot 2..
> Question is do I just install amd card and drivers with Nvidia drivers still installed?? Thanks


When going from nvidia to AMD, _definitely_ use DDU as nvidia drivers tend to leave some nasty stuff that either bricks the system or makes it hang/crash randomly...

Quote:


> Originally Posted by *cg4200*
> 
> Hey guys thanks..
> I did not word my question to good though..
> Can I keep my 1080 ti installed in slot 2 with drivers installed and at same time install my 56?? amd drivers with no conflict??
> Reason I would like to be able to play gta with 1080ti herd 56 not so good... And keep 56 for battlefield and so forth..
> Also while I am at it can I run a monitor off my 1080 ti for my tv and at same time run my 56 with 49 wasabi freesync ??
> Thanks again


Maybe...? Some have used the 1080 Ti and Vega side-by-side but I have no idea how well the drivers for each behave...
and yea, display auto-detect should work for both cards if you're using modern DP and/or HDMI, I use my 64 on my main 4K and the 290X on the 1200p and I can force only one or the other to be enabled by turning either monitor off.


----------



## alanthecelt

Seems like madness; I would just be running the Ti, or would have bought a 64 and sold the Ti.
Your choice, but yeah, some people say Nvidia and AMD don't play nice together. No reason it shouldn't work, just got to change the primary monitor before launching games? Not sure if that could be automated.


----------



## Azazil1190

Guys, with the latest 17.9.2 I'm getting my best score by far, at least in Fire Strike: close to 26,000 graphics score.








Need to do more tests in games too

https://www.3dmark.com/3dm/22287057?


----------



## deadman3000

There is a rumor on r/AMD that the new drivers gimp RX Vega 56 flashed to 64 BIOS. If anyone has any proof of this please let us know.


----------



## kundica

Quote:


> Originally Posted by *deadman3000*
> 
> There is a rumor on r/AMD that the new drivers gimp RX Vega 56 flashed to 64 BIOS. If anyone has any proof of this please let us know.


One person reported it and it's highly doubtful. If anything, AMD addressed whatever issue was allowing people to basically cheat benchmarks by pushing the card so high it wasn't rendering everything and that's why his scores changed.


----------



## cg4200

I guess I'll try it out when my card comes in this afternoon and post an update later..
I have been called a mad man before..lol
Really not a fanboy, but I like new tech and want the little guys to do good; after all, competition benefits everyone.
Can't sell the 1080 Ti till I get a liquid cooled 64 when the price has dropped to normal levels..
I happened to get a 56 for launch price so I jumped on it and will flash the 64 BIOS..


----------



## chris89

PCIe 2.0 Comparison 390X

1200MHz core, 1700MHz memory

With Tessellation

Without Tessellation


----------



## kilgrim2

Hello All

Can you please confirm that Vega supports PLP (portrait, landscape, portrait)?

Thanks in advance


----------



## pengs

Quote:


> Originally Posted by *TrixX*
> 
> Seems like my issues may be CPU-related rather than GPU-related. Will investigate further.
> 
> EDIT: Back on my R9 290 until my new rig is ready. Too much risk I could damage the Vega in my current rig.


This is what I thought also, but I'm starting to suspect that my issues are related to the drivers and power states. At full load, BF1 etc., absolutely stable. I believe my crashes are occurring when power states are transitioning at low frequency: at the end of a benchmark, in a game which uses less power, etc.

I thought it may be possible that Vega was putting more strain on either the memory or memory controller until I returned my system memory to factory specs and ended up crashing again. Tempted to try 17.7.1/2 as people are recommending it for Vega.

The next crash will be followed up by disabling ULPS (if that even works); if that fails I'll install the old drivers and then try upping the chipset voltage.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Driver crashed before the video card... managed to capture that... mmmm... 1792 boost of doom


This seems to confirm what I have suspected: boost clocks just start to run away. I suspect it's when the workload drops suddenly due to pipelining issues; then when it picks up a workload again it can't roll back the clocks fast enough and you crash. Anyone try the new driver update yet?


----------



## kundica

Built my full loop this past weekend with my 64 Air card. I haven't done much tweaking since I've been busy this week, but I flashed the LC bios. The card would crash at stock LC bios clocks so I dropped p7 down to 1702 with stock voltages until I have time to work out the best settings. The card already outperforms my LC card at stock settings with the 17.9.1 driver. Also worth noting, I have no difference in performance between 17.9.2 and the previous driver at the current settings.

LC 64 - +50% Power Limit - Stock core - HBM 1100:


Air 64 w/LC Bios - +50% Power limit - p7 at 1702 - HBM 1100:


17.9.2:


----------



## surfinchina

Quote:


> Originally Posted by *kundica*
> 
> Built my full loop this past weekend with my 64 Air card. I haven't done much tweaking since I've been busy this week, but I flashed the LC bios. The card would crash at stock LC bios clocks so I dropped p7 down to 1702 with stock voltages until I have time to work out the best settings. The card already outperforms my LC card at stock settings with the 17.9.1 driver. Also worth noting, I have no difference in performance between 17.9.2 and the previous driver at the current settings.


I'd be interested to see your bench without 4k settings.
I have a 64 air cooled FE with AIO bios (with an EK block), +50 power and 1750, 995 HBM.


----------



## kundica

Quote:


> Originally Posted by *surfinchina*
> 
> I'd be interested to see your bench without 4k settings.
> I have a 64 air cooled FE with AIO bios (with an EK block), +50 power and 1750, 995 HBM.


I'll be home in a bit and check. I missed one bit of info in my post, my HBM was actually at 1100. I updated the post


----------



## surfinchina

Quote:


> Originally Posted by *kundica*
> 
> I'll be home in a bit and check. I missed one bit of info in my post, my HBM was actually at 1100. I updated the post


Wow my HBM dies at 1000.


----------



## kundica

Quote:


> Originally Posted by *surfinchina*
> 
> Wow my HBM dies at 1000.


What are your HBM temps like? I'm also using an EK block. You might try lowering your p7 clock a little and see if you get higher sustained clocks.

Here's mine with HBM at 995:


And HBM at 1100:


----------



## pmc25

I like how the new drivers, under known issues, mention that after driver installation games and game settings may fail to populate ....

They fail to mention that none of said settings actually work with this or any of the other 4 (5 if FE counted) available drivers for Vega.

IMO 17.9.2 is still an absolute mess. Though I guess it's good for the few who have 2 or more cards.

Also, they still haven't fixed driver crash / putting computer to sleep borking performance until full restart. Facepalm.


----------



## Chaoz

I have no issues with the latest driver. Works perfectly fine in-game at maxed out settings.


----------



## The EX1

Anyone running the Aquacomputer block yet?


----------



## os2wiz

Quote:


> Originally Posted by *pmc25*
> 
> I like how the new drivers, under known issues, mention that after driver installation games and game settings may fail to populate ....
> 
> They fail to mention that none of said settings actually work with this or any of the other 4 (5 if FE counted) available drivers for Vega.
> 
> IMO 17.9.2 is still an absolute mess. Though I guess it's good for the few who have 2 or more cards.
> 
> Also, they still haven't fixed driver crash / putting computer to sleep borking performance until full restart. Facepalm.


I have had zero crashes with Vega 56. If you decide to undervolt you would be better off and get better performance. If you obstinately refuse to do so you will be applying too much voltage, creating too much heat and causing your card to stutter and even crash. I am applying 1.1 volts to the card and getting a 900MHz memory overclock and a 3.5% core overclock. Once my Alphacool Eiswolf GPX 120 AIO full cover GPU block arrives, I will do a lot better than this.


----------



## dagget3450

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Is it possible to install the RX drivers on a Frontier Edition card? It looks like AMD has been updating it as much.


I keep trying every new release and no dice yet. I am wondering if this will be permanent, because it really sucks to be stuck on 17.6 just because it's an FE, when it's really the same hardware and PCB as the RX Vegas..

Quote:


> Originally Posted by *kilgrim2*
> 
> Hello All
> 
> Can you please confirm me that Vega supports PLP (portrait,landscape,portrait)?
> 
> Thanks in advance


I can try, but I am on Vega FE and things are a bit different driver-wise now; there's a big gap between RX Vega and FE. I did have issues before with certain Eyefinity configs on 17.6 Vega FE.

*OWNERS LIST UPDATED FINALLY!*

Please welcome new owners:
FE
surfinchina Vega Frontier
Rootax Vega Frontier
yeayea911 Vega Frontier x2
ashman95 Vega Frontier (WC)

64

opty165 RX Vega 64 LC
plywood99 RX Vega 64 LC
Paul17041993 RX Vega 64 WC
Azazil1190 RX Vega 64 LC
mrnice31 RX Vega 64
Reikoji RX Vega 64 WC
redshoulder RX Vega 64
bir86 RX Vega 64
JunXaos RX Vega 64
zimm16 RX Vega 64 WC
SpaceGorilla47 RX Vega 64 WC
poisson21 RX Vega 64 WC
OMgoo RX Vega 64 (Morpheus II cooler)
Kyle Ragnador XFX RX Vega 64 AIR
NI6HTHAWK Vega 64 Liquid
Tgrove Sapphire RX Vega 64 WC
jehovah3003 Sapphire RX Vega 64
Formula383 RX Vega 64
andreyb RX Vega 64 (AC)
Sufferage RX Vega 64 Sapphire
FlanK3r ROG Strix Vega 64
SuperZan Sapphire Vega 64 air
Skinnered RX Vega 64 x2 (WC)
springs113 RX Vega 64 x2
bogdi1988 Sapphire Vega 64 air
lmiao Sapphire Vega 64 air

56
os2wiz RX Vega 56
baakstaff RX Vega 56
madmanmarz RX Vega 56
y0bailey RX Vega 56
Disharmonic RX Vega 56
SAN-NAS RX Vega 56
diabetes RX Vega 56
sternheim RX Vega 56
alanthecelt RX Vega 56 x3
ookiie RX Vega 56
laczarus RX Vega 56
Greenland RX Vega 56
deadman3000 RX Vega 56
Luftdruck RX Vega 56
Caldeio XFX RX Vega 56
elderblaze RX Vega 56
FelixB RX Vega 56
SAMiN RX Vega 56

I think i got that right, anyways if i missed anyone please PM me or bonk me on the head in the thread and ill try to catch up. Again sorry bout the delays but work and hurricanes kinda delayed things!


----------



## Trender07

Quote:


> Originally Posted by *dagget3450*
> 
> I keep trying every new release and no dice yet.
> 
> *snip*


Ya forgot about me









Sapphire RX Vega 64 Limited Edition Air Cooler


----------



## ashman95

Yeah Dagg, it's about time for FEs to get some love; supposedly updates come the 4th week of every month for FE.


----------



## Caldeio

Quote:


> Originally Posted by *dagget3450*


I have the XFX 56 and now a PowerColor 56 too. I'm just stocking up on these things! Gotta get another power supply and then I can add two more!


----------



## Soggysilicon

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> This seems to confirm what I have suspected, boost clocks just start to run away. I suspect its when the workload drops suddenly due to pipe-lining issues, then when it picks up a workload it can't roll back the clocks fast enough and then you crash. Anyone try the new driver update yet?


It seems to be the case, doesn't it? In my follow-up post with 17.9.2 I am able to "consistently" crash the card in very particular benchies at particular scenes... typically transition scenes or where the DOF changes. Load comes off a scene change, the GPU is expecting big packets, and nothing... perhaps it references some available power state, and boost boost boost; ka-blamo' crash... still thinking there is some driver-related function that is poop'n the bed. If it was "just" frequency or heat I would expect Monte Carlo randomness, not something this consistent at the functional level.

Going to try a re-tune based on all my previous runs... (around 100 benchies at this point), and see if there is a performance boundary I can determine.








Quote:


> Originally Posted by *surfinchina*
> 
> I'd be interested to see your bench without 4k settings.
> I have a 64 air cooled FE with AIO bios (with an EK block), +50 power and 1750, 995 HBM.


The higher resolution in SP is "very" memory sensitive, but less frequency sensitive... as the memory stalls out the core, it has less work to do. The 4k test (for me) gives a little more fine-grained picture of the card and, by virtue of this being an OC forum, a user's particular maximal hypothetical performance. With those settings LC 1750 +50, HBCC ON I would expect 7-7.2K SP @ 4k, as an example... maybe less due to HBM.
Quote:


> Originally Posted by *pmc25*
> 
> I like how the new drivers, under known issues, mention that after driver installation games and game settings may fail to populate ....
> 
> They fail to mention that none of said settings actually work with this or any of the other 4 (5 if FE counted) available drivers for Vega.
> 
> IMO 17.9.2 is still an absolute mess. Though I guess it's good for the few who have 2 or more cards.
> 
> Also, they still haven't fixed driver crash / putting computer to sleep borking performance until full restart. Facepalm.


I'll sprinkle some more salt onto the 9.2 frustration pile...

Just today I was tooling up a spare PC with my old R9 280X, using the 9.2 drivers... clean install; and wouldn't you know it, the old "clock locked" 500MHz bug was alive and well!!!!!









Had to roll' back to 7.x drivers.... like I had mentioned in a previous post... driver feels like a roll' back... copy pasta from some old code... copied the old bugs to boot!

Thinkin' about rolling back drivers myself on the Vega build... the 9.2 driver crashing, unlike 9.1, requires a reboot to get it to relaunch 4 out of 5 times...


----------



## dagget3450

Quote:


> Originally Posted by *Trender07*
> 
> Ya forgot about me
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sapphire RX Vega 64 Limited Edition Air Cooler


Added!


----------



## dagget3450

Quote:


> Originally Posted by *Caldeio*
> 
> I have the XFX 56 and now a powercolor 56 now. I'm just stocking up on these things! Gotta get another powersupply and then I can add two more!


Updated!


----------



## jearly410

17.9.2 is crashing with the same OC as 9.1 in BF1. After rolling back, no more crashes.


----------



## Soggysilicon

Quote:


> Originally Posted by *jearly410*
> 
> 17.9.2 is crashing with same oc as 9.1 in bf1. Rolling back no more crashes.


Had to redo' the OC to get back to stable... interesting results....



VRMs crapped out...



Over Boost of Doom... again...



Finally gain some ground...



Sweet spot... benchies are +/- 1 pt.

Reluctantly staying with the 9.2 drivers, as they seem to have alleviated some of the HBCC deficiencies where frames were negatively impacted...


----------



## TrixX

Quote:


> Originally Posted by *dagget3450*


Ya add me to the list:
HIS Vega64 with Aquacomputer Water Block.


----------



## Tgrove

Quote:


> Originally Posted by *cg4200*
> 
> I guess will try out when my card comes in this afternoon and post update later..
> I have been called a mad man before..lol
> Really not fan boy but like new tech and want the little guys to do good after all competition benefits everyone.
> Can't sell 1080 ti till I get a liquid cooled 64 when price is dropped to normal levels..
> I happen to get 56 one for launch price so I jumped on it and will flash to 64 bios..


The 49" Wasabi Mango is the reason I've been using AMD cards for the past 2 years. Lowered the FreeSync range from 40-61 to 33-60Hz.


----------



## TrixX

Quote:


> Originally Posted by *The EX1*
> 
> Anyone running the Aquacomputer block yet?


Will be soon. Just waiting on the fittings and CPU Waterblock to fit it.


----------



## asdkj1740

In what case would maxing out the power limit to +50% with HBM2 overclocked to 1100MHz perform even worse in gaming than the stock Balanced mode?
Vega 64, driver 17.9.1.


----------



## chris89

I notice the same thing on the 390X. I can go up to 1,750MHz memory from the stock 1,500MHz and it's 3-4fps extra, with memory errors.

So I just turn down the memory voltage from 1000mV to 875mV at 1,250MHz and lose 2fps, and overclock the core from 1050MHz to 1172MHz and gain like 10fps...

I have the core @ 1333mV @ 1172MHz and memory @ 875mV (from 1000mV) @ 1,250MHz, and I actually gain performance, lower temperatures, and have zero memory errors.

So, check for memory errors. Errors make the framerate worse. Also fiddle with the voltage at a given clock... you can gain or lose fps with a difference of +/- 25mV.


----------



## Rootax

Quote:


> Originally Posted by *ashman95*
> 
> Yeah Dagg, It's about time for FE's to get some love- supposedly updates the 4th week of every month for FE.


Honestly, what they call 17.8.2 beta for FE is pretty good for me, better than 17.6 or the blockchain driver. Yeah, there is no game mode, but who cares, the performance is the same, and, yeah, no Wattman, but OverdriveNTool works with it, so...


----------



## gamervivek

Does the overdriveNtool change the lower states as well? Once I hit apply, it seems to only change the P6,7 states.


----------



## Rootax

Quote:


> Originally Posted by *gamervivek*
> 
> Does the overdriveNtool change the lower states as well? Once I hit apply, it seems to only change the P6,7 states.


I only touched P6&7 , and only P7 now.

I just tried with P5 (after taking the screenshot, so it doesn't show as modified); it didn't throw an error, but I've no time to test right now. The creator sent me a "special" version which forces it to show all the P-states, because I was having trouble with FE detection. I don't know if the latest official version works well or not.



(For the 80% limit, I allowed it with the PowerPlay table registry "hack")
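
For anyone following along, OverdriveNTool keeps its profiles in a plain ini file next to the executable, so P-state tweaks can be saved and re-applied without Wattman. The sketch below is from memory and hypothetical: the section name, the key names (`GPU_P7`, `MEM_P3`), and the `clock;voltage` value format are assumptions, so compare against an ini the tool itself generated before editing anything:

```ini
; OverdriveNTool.ini -- hypothetical profile entry (key names assumed, verify against your own file)
[Profile_0]
Name=Vega_P7_tweak
; GPU_P<n>=<clock MHz>;<voltage mV> -- only the states you list get changed
GPU_P6=1557;1100
GPU_P7=1652;1150
; MEM_P3 is the top HBM2 state on Vega
MEM_P3=1070;1050
; power limit offset in percent
Power_Target=50
```

The nice part of the ini approach is that a saved profile can be applied from a shortcut or startup script, which is why it keeps coming up as the Wattman substitute on FE drivers.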


----------



## chris89

Check for memory errors in HWiNFO. It's helpful for guiding you to performance gains.


----------



## pmc25

Quote:


> Originally Posted by *os2wiz*
> 
> I have had zero crashes with Vega 56. If you decided to undervolt you wouyld be better off and get better performance. If you obstinately refuse to do so you will be applying too much voltage creating too much heat and causing your card to stutter and even crash. I am applying 1.1 volts to the card and getting 900mhz memory overclock and 3.5% core overclock. Once my Alphacool Eiswolf GPX 120 aio full cover gpu block arrives, I will do a lot better than this.


Why would you assume that?

Of course I undervolt.

The drivers are just a steaming pile.

Also, as others have observed, the boost running away to silly levels under load (but not the most taxing load) is still happening. Previously it would hit just over 1800 in BF1 without AA and black screen crash in 5-20 minutes. Now it does 1850 and crashes the moment it hits it.

It's absolutely stupid. That's 150Mhz over my P7 state at 98-100% load.


----------



## Rootax

Strange, I've not this "boost over P7" bug on my FE.


----------



## pmc25

Quote:


> Originally Posted by *Rootax*
> 
> Strange, I've not this "boost over P7" bug on my FE.


17.9.1 introduced it.

17.9.2 made it worse.


----------



## pmc25

Quote:


> Originally Posted by *kilgrim2*
> 
> Hello All
> 
> Can you please confirm me that Vega supports PLP (portrait,landscape,portrait)?
> 
> Thanks in advance


Yes, it does.

As usual, AMD multimonitor is fairly painless and remains much better than NVIDIA.

I have L-24"1080*1920*144*8bit M-32"2560*1080*144*8bit R-24"1080*1920*144*8bit

Driver instability causes 'link failed' long before actual driver crash. Link doesn't actually fail, but you have to replug the affected monitor to restore 144hz (from 60hz) - always the left portrait for me.

Another thing, my 64 as a blower incinerates the DP plugs ... too hot to touch in my case. Amazed I wasn't getting errors or artifacts. Under water, they're cool. I personally wouldn't run a wall of 3 DP plugs with Vega on air due to this.


----------



## Paul17041993

The only crash bugs I've seen with 17.9.1 are one-offs with running multiple applications that access the sensors, though the radeon panel does crash occasionally after sleep, which has no effect on anything really.

I have however seen the vega fail to run the monitor after sleep, with the 290X monitor then taking over instead, but a simple fix is to just turn the main monitor off and back on again and the vega card will turn back on and take back its priority.


----------



## Rootax

Thx for the info.


----------



## chris89

Here's my x5650 x2, 48GB hexa-channel, AMD Radeon R9 390X 8GB HWiNFO. What does the R7 1700 or 1950X Threadripper with Vega FE HWiNFO look like?

Has anyone tested lower memory speed with less voltage? I notice almost no difference... what does the AIDA GPGPU look like for VEGA 64?


----------



## Paul17041993

Quote:


> Originally Posted by *chris89*
> 
> Here's my x5650x2 48gb hexa channel amd radeon r9 390x 8gb hwinfo. What does the r7 1700 or 1950x threadripper with vega fe hwnfo look like?
> 
> Has anyone tested like lower memory speed with less voltage? I notice almost no difference... whats the *GPGPU AIDA look like for VEGA 64?
> *
> 
> 
> Spoiler: Warning: Spoiler!


Not much different to other GCN cards, why?
read that wrong, don't have aida to check the gpgpu scores...


Spoiler: large


----------



## poisson21

I am on the latest drivers and the HBCC settings disappear for me.

Can someone confirm, or is it due to the fact that I have a crossfire setup?


----------



## chris89

Quote:


> Originally Posted by *Paul17041993*
> 
> Not much different to other GCN cards, why?
> read that wrong, don't have aida to check the gpgpu scores...
> 
> 
> Spoiler: large


Nice... I see 90C CPU? Quite high, and the 12V rail reported to the GPU at idle is 11.88-11.75V... I would want to keep idle above 12V and load right around 12.

I was having black screen freezes and issues until I used my 1000W Kingwin with 2x separate cables... 12 in total... dual 6+2 pins. Need to split it across two wires for the GPU. It's also cooler running as well.


----------



## dagget3450

Quote:


> Originally Posted by *poisson21*
> 
> I am on the latest drivers and the HBCC settings disappear for me.
> Can someone confirm or is it due to the fact that i have a crossfire setting ?


It is the same for me; with CF on, HBCC disappears as well. I think it's normal? Might need to ask or submit a report to see if it's as intended. I assumed it's by design.


----------



## GroupB

Quote:


> Originally Posted by *pmc25*
> 
> 17.9.1 introduced it.
> 
> 17.9.2 made it worse.


That's why I'm still on 17.8.1. Except for the bug that keeps the GPU under load sometimes (you just have to reset it), everything works pretty well, and there's no major fix in the last 2 drivers that would make me switch. All my games work fine on this driver except Arma 3, which doesn't fully load the memory, but making the last state the min/max fixes it.

I will switch when they start to enable the Vega features that are disabled, or when they let us change the lower states.


----------



## owntecx

Just a quick question guys. I'm about to flash my Vega 56 to the 64 BIOS. Do I have to clear the powertable mod and DDU the drivers and install again?

EDIT: Both of my BIOS switch positions have the normal 165W limit, shouldn't one be 150W? Anyway, the one to flash is the one to the left (close to the DisplayPort connectors)?


----------



## milkbreak

Morpheus II install on a Vega 56: 48C max on the core, around 56C max on the HBM2, and 68C max on the hot spot seems pretty decent to me. Is that delta between the core and hot spot anything I should be concerned about?


----------



## milkbreak

edit: doublepost after the webpage errored out


----------



## PontiacGTX

Quote:


> Originally Posted by *pmc25*
> 
> 17.9.1 introduced it.
> 
> 17.9.2 made it worse.


what could be causing this?
Quote:


> Originally Posted by *Paul17041993*
> 
> Not much different to other GCN cards, why?
> read that wrong, don't have aida to check the gpgpu scores...
> 
> 
> Spoiler: large


can you compare the R9 290X to VEGA 64 on FP16?


----------



## pmc25

Quote:


> Originally Posted by *PontiacGTX*
> 
> what could be causing this?


IMO the gaming drivers are still lightly modified Fiji. It's just a bodged-together mess with tons of things that don't work at all or don't work properly, weird bugs, and poor stability.

I would imagine Raven Ridge launch is their target for some degree of normalcy, since it will sell far more units (particularly OEMs) than RX Vega ever will.

Despite the huge delays to Vega, I don't think it has benefitted the gaming drivers at all ... everything has gone to workstation, deep learning, cloud data, apu integration, hashing, the new mining ISA etc.

So many of the features are pipecleaners for NAVI and MCM too ... Infinity Fabric most of all.


----------



## gamervivek

Quote:


> Originally Posted by *milkbreak*
> 
> Morpheus II install on a Vega 56: 48C max on the core, around 56C max on the HBM2, and 68C max on the hot spot seems pretty decent to me. Is that delta between the core and hot spot anything I should be concerned about?


Many people are getting that delta between core and hotspot, depending on the power used by the card. I get a 30-40C difference, which is on the high side and is preventing better clocks with + on the power limit.

What are you using for testing? I'd suggest Superposition.


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> what could be causing this?
> can you compare the R9 290X to VEGA 64 on FP16?


FP16 is like Single Precision & FP32 is like Double Precision GFLOPS.

It would be unreal to see 1-5 Tera Flops of Double Precision Performance on the VEGA 64 or Frontier Edition or The 56.

Yeah, I can post my most recent results. The card is only using 250 watts at full tilt with the 8GB loaded, compared to over 340 watts before.

1225MHz core vs 1250MHz core vs 1275MHz core... over 7 Tera Flops single precision @ 1,275MHz on the 390X. All running memory undervolted and underclocked, 875mV at 1,250MHz, on the reference blower.
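
For anyone wanting to sanity-check those single-precision numbers, they match the usual peak-throughput formula (clock × shaders × 2 FLOPs/cycle). A quick sketch, assuming the reference 390X shader count of 2816 and Hawaii's 1:8 FP64 rate (both assumptions on my part, not from the post above):

```python
def peak_tflops(clock_mhz: float, shaders: int, flops_per_cycle: int = 2) -> float:
    """Theoretical peak throughput in TFLOPS (an FMA counts as 2 FLOPs per ALU per cycle)."""
    return clock_mhz * 1e6 * shaders * flops_per_cycle / 1e12

# R9 390X: 2816 stream processors (assumed reference count)
fp32 = peak_tflops(1275, 2816)   # ~7.18 TFLOPS, lining up with the "over 7" figure
fp64 = fp32 / 8                  # Hawaii executes FP64 at 1/8 the FP32 rate
print(f"FP32: {fp32:.2f} TFLOPS, FP64: {fp64:.2f} TFLOPS")
```

At 1275MHz that works out to about 7.18 TFLOPS FP32 and roughly 0.9 TFLOPS FP64, which is why the "almost 1 Tera Flop double precision" reading is plausible at slightly higher clocks.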


----------



## chris89

I got it up to nearly 7.3 Tera Flops single precision and almost 1 Tera Flop double precision... fastest I have seen so far, at 1,125 fps in the single-precision Mandel test.

I'm looking at 362.5 watts input at the 6+8 pin PCIe, plus the 75 watt PCIe slot.

Though it's only able to use 29.69% less, because the capacitors heat up and my 6+8 pin is insufficient. I would need to divide the 6+8 pin over 2x 6-pin and 2x 8-pin to clean up the power and the 12 volt rail at load.

Looking at 281.5 watts used by the core and 70 watts memory with 1250MHz at 875mV, vs 1500MHz at 1000mV, which calls for more like 140 watts. So half the memory power and heat, and like 2 fps less.

So basically the initial suck-down on the PSU and PCIe slot is 363 watts core and 23 watts memory... 386 watts from the wall... Incredibly powerful 28nm GPU, the 390X. If only they took it as-is and made it 14nm. That's half the power and probably 2.5x the performance using FinFET.


----------



## chris89

1432mhz crossfire rx 480 vs 1300mhz single 390x



These clocks are unheard of on the 390X... 1,325Mhz to 1,335Mhz Core ... nearly 7.5 Tera Flops Single Precision


----------



## chris89

If the 390x could do 14nm Finfet or Vega Clocks it would be unreal. Here I have it up to 1,410mhz core.. haha


----------



## biscuittea

Quote:


> Originally Posted by *milkbreak*
> 
> Morpheus II install on a Vega 56: 48C max on the core, around 56C max on the HBM2, and 68C max on the hot spot seems pretty decent to me. Is that delta between the core and hot spot anything I should be concerned about?


I'd say those are great temps.

After a couple of repastes and reseating cooler, I now max out at 60C on both HBM and Core and 90C on the hotspot.

I've undervolted my card to 1040mV with no change in the clocks. I've also overclocked my HBM to 870MHz (any higher and it crashes). I use Arctic MX-4 paste, but I think my terrible temps are due to my fans. I didn't even think about it; I just slapped on some normal case fans that I had lying around.

What fans do you use on your Morpheus?


----------



## chris89

Good comparison... VEGA FE is on an Intel PCIe 3.0 spec motherboard, reporting far higher memory read/write, which is everything for performance at high resolution.

Crossfire 480s and the 390X are on a PCIe 2.0 board, even though the GPUs are PCIe 3.0 spec.


----------



## Medusa666

So are the new drivers worth installing or better wait out the next?

Just a general question, how can RX Vega 64 have the same performance as Fury X at the same frequency when it has a bigger die size and more transistors? Has there been 0 IPC progress?

I bought a RX Vega 64 reference on release day and I'm happy with it, but I'm also curious if there will be greater performance in the future, or if this is about how good it's going to get.


----------



## chris89

Quote:


> Originally Posted by *Medusa666*
> 
> So are the new drivers worth installing or better wait out the next?
> 
> Just a general question, how can RX Vega 64 have the same performance as Fury X at the same frequency when it has a bigger die size and more transistors? Has there been 0 IPC progress?
> 
> I bought a RX Vega 64 reference on release day and I'm happy with it, but I'm also curious if there will be greater performance in the future, or if this is about how good it's going to get.


The card is capable of more frame rate. However no one here is willing to listen to me & just wish for a miracle cure.

You will need to actively address the HotSpot Temperature to yield as much as 20-40% & close the gap between the VEGA 64 & The 1080 Ti & TITAN XP.

Until then... you'll see an extra frame or two here and there, but the GPU will burn out in short time. It runs continuously well over 100C on the hotspot...

Some guy here said everything cold & throttling is off the hook... Well look at this video...

Just as I mentioned, those chips that are not cooled are at fault... This is so blatantly clear... simply buy the copper sheet, the tin snips, the thermal adhesive, and the Thermagon thermal pads, and you're good to go... Only seeing 60C hotspot rather than 100C, bouncing off the limit and burning through insane amounts of power on 14nm, is nuts.



----------



## cg4200

Quote:


> Originally Posted by *Tgrove*
> 
> The 49" wasabi mango is the reason ive been using amd cards for the past 2 years. Lowered the freesync range from 40-61 to 33-60hz


Yeah, I like mine too; mine overclocks to 68Hz without skipping at 4k... can't wait for 100Hz at 55in!!
First of all, this card is real late and not finished software-wise, but man she is a beauty to look at vs my aerodynamic-looking Founders Edition...
So I can get my MSI Vega 56 to 10,300 Firestrike Extreme graphics score only... so far, but I am new to Twatt-man... what is everyone else getting on average??
Undervolted was 360 watts total system, with my 6850K at 4.5, 32GB RAM, 12 fans, and a water pump...
Also, my HBM would only do 960; any higher and crash city...
Then comes a flash and boom: 1040 on HBM, 1600 for core, 1150 P7, 1100 P6, 10888... still playing around...


----------



## Medusa666

Quote:


> Originally Posted by *chris89*
> 
> The card is capable of more frame rate. However no one here is willing to listen to me & just wish for a miracle cure.
> 
> You will need to actively address the HotSpot Temperature to yield as much as 20-40% & close the gap between the VEGA 64 & The 1080 Ti & TITAN XP.
> 
> Until then... you'll see an extra frame or two here & there but the gpu will burn out in short time. It runs continuously well over 100 C on the Hotspot...
> 
> Some guy here said everything cold & throttling is off the hook... Well look at this video...
> 
> Just as I mentioned, those CHIPS that Are Not Cooled Are At Fault... This is so blatantly clear... simply buy the copper sheet, the tin snips, the thermal adhesive, and the thermal pads Thermagon & Your good to go... Only see 60C hotspot rather than 100C bouncing off the limit burning through insane amounts of power on 14nm is nuts.
> 
> _*
> 
> 
> 
> *_


Is there any good way to modify the reference card / cooler? I need the smaller form factor due to having the card in a Ncase M1.

Is the HBM2 as fragile as the HBM was on the Fury X? I was scared to clean off the residue because I thought it would break from light pressure.

Thanks for a good reply.


----------



## gamervivek

Quote:


> Originally Posted by *Medusa666*
> 
> So are the new drivers worth installing or better wait out the next?
> 
> Just a general question, how can RX Vega 64 have the same performance as Fury X at the same frequency when it has a bigger die size and more transistors? Has there been 0 IPC progress?
> 
> I bought a RX Vega 64 reference on release day and I'm happy with it, but I'm also curious if there will be greater performance in the future, or if this is about how good it's going to get.


The driver is okay, but I think it removes the saved game profiles, which is not.

As for it having the same performance, most of the transistors have gone toward the clockspeed increase, which I think is where AMD was being left way behind. The die size isn't bigger, and while there are new arch features, it'll take time for them to be enabled. The memory one helps with Vega having lower bandwidth than the Fury cards.

It should get better, but how long is anybody's guess. Hawaii got amazing tessellation improvements when it was rereleased as the 390 series, 1.5 years after its original release.

https://www.hardocp.com/article/2015/06/18/msi_r9_390x_gaming_8g_video_card_review/3


----------



## chris89

Quote:


> Originally Posted by *Medusa666*
> 
> Is there any good way to modify the reference card / cooler? I need the smaller form factor due to having the card in a Ncase M1.
> 
> Is the HBM2 as fragile as the HBM was on the Fury X? I was scared to clean off the residue because I thought it would break from light pressure.
> 
> Thanks for a good reply.


You're welcome. You're good. The silicon is delicate like any other. Use a toothbrush and Goof Off from your local Walmart, or get it online here... A couple drops, scrub... a couple drops, scrub, and it'll be perfectly clean. You want to spread the paste with a finger, though, edge-to-edge on the HBM and core... Arctic Silver Ceramique 2.

http://www.ebay.com/itm/Goof-Off-FG658-Professional-Strength-Remover-Aerosol-12-Ounce-New-/332278294963?epid=2254438840&hash=item4d5d5469b3:g:iGQAAOSwstJZTZW-

http://www.ebay.com/itm/Goof-Off-FG677-Super-Glue-Remover-4Ounce-New-Free-Shipping-/361840176421?epid=2255466511&hash=item543f5aed25:g:TnoAAOSw44BYO1XJ

http://www.ebay.com/itm/Arctic-Silver-Ceramique-2-Tri-Linear-Ceramic-Thermal-Compound-25-g-gram-syringe-/191536297259?epid=2255302025&hash=item2c9873f52b:g:fHoAAOSw-KFXcq6s

You're gonna need to spend a few bucks on eBay first... That's #1... then simply cut the copper to size, but take a look at the distance of those chips and the clearance with the factory cooling plate... Take plenty of pics comparing and gauging clearance, so we know how much of a pad to cut, etc. Helpful? There's a little more to it; just take the initiative to get er' done, ya know...? Unlike everyone else who sits around waiting for a miracle... gotta get at it... nothing but eternity awaits, so get at it already and stop being lazy... So you need others to tell you that it's meant to run at 120C 24/7... Yeah, that's a total load of bullshiz.





http://www.ebay.com/itm/Copper-Sheet-Metal-230-Alloy-24-Gauge-Bright-Polish-4-X12-/192070131214?hash=item2cb8459e0e:g:cugAAOSwA3dYbV-l

http://www.ebay.com/itm/24-ga-Copper-Sheet-Metal-Plate-4-x-4-/232439461990?hash=item361e78b866:g:SzkAAOSwTLxZhzut

http://www.ebay.com/itm/113-5g-Thermally-Conductive-Silicone-Glue-Adhesive-Thermal-Heatsink-BIG-/270717020058?epid=1631560249&hash=item3f07fde79a:gzwAAOSwt5hYZUYg

http://www.ebay.com/itm/50ml-Thermal-Adhesive-Glue-Tube-Heatsink-Plaster-Silicone-Heat-Sink-Paste-/302360271612?hash=item4666139afc:g:Ku8AAOSwbtVZTfnQ

http://www.ebay.com/itm/10-Inch-Straight-Cut-Aviation-Tin-Snips-Steel-Compound-Cutting-Sheet-Metal-NEW-/232346535212?hash=item3618eec52c:g:-MAAAOSwSypY-TDT

http://www.ebay.com/itm/NEW-Thermagon-thermal-gap-filler-pad-T-PLI-2200-A1-12mm-x-12mm-x-5mm-49-per-pack-/172855009184?hash=item283ef61fa0:g:w3wAAOSwAuZX1TRU


----------



## dagget3450

Quote:


> Originally Posted by *chris89*
> 
> The card is capable of more frame rate. However no one here is willing to listen to me & just wish for a miracle cure.
> 
> You will need to actively address the HotSpot Temperature to yield as much as 20-40% & close the gap between the VEGA 64 & The 1080 Ti & TITAN XP.
> 
> Until then... you'll see an extra frame or two here & there but the gpu will burn out in short time. It runs continuously well over 100 C on the Hotspot...
> 
> Some guy here said everything cold & throttling is off the hook... Well look at this video...
> 
> Just as I mentioned, those CHIPS that Are Not Cooled Are At Fault... This is so blatantly clear... simply buy the copper sheet, the tin snips, the thermal adhesive, and the thermal pads Thermagon & Your good to go... Only see 60C hotspot rather than 100C bouncing off the limit burning through insane amounts of power on 14nm is nuts.
> 
> _*
> 
> 
> 
> *_


I have some thin copper thermal pads from some printers I am scrapping. I don't know their thickness, but I know they are pretty thin. These are used on the CPU in the formatter board. I may give this a go...

Actually, I'm wondering, I have 17mk thermal pads also; shouldn't those work as well?


----------



## raysheri

My current settings:

Sapphire Vega 56 flashed to AMD 64 air bios + EK nickel/plexi water block.
Clocks - P6, 1557 @ 1100mv, P7 1652 @ 1150mv, HBM 1070 (will do 1100) @ 1050, +50% power limit.

When running the Heaven bench, GPU clocks are solid @ 1633 ±4, and the memory holds the 1070 no problem.
Temps, via GPU-Z: GPU max 44, HBM max 47, hot spot max 61.
GPU-only power draw: max 271W, VDDC max 1.1125V.

I have achieved higher clocks, but so far these settings have given me the most stable, trouble-free performance.
I continue to read this thread with great interest; keep it going.


----------



## Newbie2009

AIDA64 comparison between a 290X at stock and a Vega 64 at stock. The 290X kills the Vega 64 in memory read/write. What's up with that?


----------



## Soggysilicon

Quote:


> Originally Posted by *Medusa666*
> 
> So are the new drivers worth installing or better wait out the next?
> 
> Just a general question, how can RX Vega 64 have the same performance as Fury X at the same frequency when it has a bigger die size and more transistors? Has there been 0 IPC progress?
> 
> I bought a RX Vega 64 reference on release day and I'm happy with it, but I'm also curious if there will be greater performance in the future, or if this is about how good it's going to get.


I "believe" that the new drivers are a marginal improvement if you're not attempting to "edge case" the card. I found some benchies and previous issues (such as with HBCC) to have improved a few percent, repeatably. The performance boundary for realizable frequency has been reduced, leading me to conclude initially that there was an IPC increase somewhere in the driver.
Quote:


> Originally Posted by *Rootax*
> 
> Strange, I've not this "boost over P7" bug on my FE.




Overboost to crash...



VRM geek / Overboost to crash...


----------



## Paul17041993

Quote:


> Originally Posted by *chris89*
> 
> Nice.. I see 90C cpu? qute high & 12v rail reported to gpu at idle is 11.88-11.75v... would want to keep idle above 12v & load right around 12.


Nope, 70C peak, which is also fairly inaccurate as the coolant temp is 20C less than that as well (limitation of the indium solder possibly?). The 12V difference between the 290X and the motherboard is also likely a calibration error on the 290X (noiseless single rail supply).

Quote:


> Originally Posted by *chris89*
> 
> FP16 is like Single Precision & FP32 is like Double Precision .G.Flops.


...not even close...

- FP64 (double precision) is physically limited to the scalar ALU, so for every 16 FP32 operations only 1 FP64 can be performed.
- Two FP32 operations can be performed on every non-scalar ALU every clock cycle, thus the GFLOPS is calculated as GHz * ALUs * 2. This however only fully applies if the shaders allow for rapid FP scheduling.
- Four FP16 operations can be performed per non-scalar ALU, double that of previous GCN revisions. This is a direct doubling of the FP32 figure however only if FP16 operations can be rapidly scheduled, similar again to FP32.

Simple examples of rapid scheduled operations are vector, matrix and colour operations where at least 4 output values (xyzw, rgba etc) are required. Most single-output operations and branch conditions get executed on the scalar ALUs as they have a x4 clock multiplier (which is an area you see GCN massively exceeding pascal in compute as said architecture lacks these ALUs).
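
Those ratios are easy to turn into numbers. A sketch for a stock Vega 64, following the 2×FP32 / 4×FP16 / 1-per-16 FP64 scheduling described above (the 4096-ALU count and ~1.6GHz boost clock are my assumptions, not from the post):

```python
def vega_peak_gflops(clock_ghz: float, alus: int = 4096) -> dict:
    """Peak rates per the ratios above: GFLOPS = GHz * ALUs * ops-per-cycle."""
    fp32 = clock_ghz * alus * 2      # two FP32 ops (one FMA) per non-scalar ALU per cycle
    return {
        "FP32": fp32,
        "FP16": fp32 * 2,            # packed math: four FP16 ops per ALU per cycle
        "FP64": fp32 / 16,           # one FP64 per 16 FP32 (scalar-ALU limited)
    }

rates = vega_peak_gflops(1.6)
# FP32 ~13107 GFLOPS, FP16 ~26214 GFLOPS, FP64 ~819 GFLOPS
```

Note these are upper bounds; as the post says, the FP16 doubling only materializes when the shader can actually schedule packed operations back to back.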

Quote:


> Originally Posted by *PontiacGTX*
> 
> can you compare the R9 290X to VEGA 64 on FP16?


Will be getting to it hopefully, though the current heat isn't helping...


----------



## GroupB

Quote:


> Originally Posted by *Medusa666*
> 
> Is there any good way to modify the reference card / cooler? I need the smaller form factor due to having the card in a Ncase M1.
> 
> Is the HBM2 as fragile as the HBM was on the Fury X? I was scared to clean off the residue because I thought it would break from light pressure.
> 
> Thanks for a good reply.


Don't do that mod, it's worth nothing; same guy that claims his mods are better than a waterblock, lol... just saying.

The hotspot has nothing to do with those capacitors, btw; most are rated at 125C, and I don't think AMD put a 65C version there, for sure.

Just saying, a full-cover block gives you 30C on the core, 50C hotspot, and 40C HBM, and there's nothing touching those caps and NO AIR going there at all, vs the stock cooler that allows some airflow there.
If those caps were so important to the hotspot, how the hell are we not seeing 100C on the hotspot and frying our cards with a waterblock that doesn't allow cooling in those areas?

Oh yeah, because they have nothing to do with the hotspot.


----------



## dagget3450

Quote:


> Originally Posted by *GroupB*
> 
> Dont do that mod its worth nothing, same guy that claim his mod are better than a waterblock lol... just saying.
> 
> The hotspot have nothing to do with those CAPACITOR btw must are rated at 125C and I dont think amd put a 65C version there for sure.
> 
> Just saying a full cover block give you 30C on the core , 50C hotspot and 40C hbm and there nothing touching those cap and there NO AIR going there at all vs the stock cooler that allow some cooling there.
> If those cap were so important to hotspot how the hell are we not seeing 100C on hotspot and frying our card with a waterblock that wont allow cooling in those area?
> 
> Oh yeah because it had nothing to do with hot spot.


Well, it seems like cooling any hotspots on the GPU would be good for thermals on some level. Maybe it won't do squat for performance, but it seems like it would help thermals somewhat.


----------



## Roboyto

Quote:


> Originally Posted by *milkbreak*
> 
> Morpheus II install on a Vega 56: 48C max on the core, around 56C max on the HBM2, and 68C max on the hot spot seems pretty decent to me. Is that delta between the core and hot spot anything I should be concerned about?


not sure what your settings are, but those temps do seem to be pretty darn solid for air.

Here's what a 64 with FC block running stock settings +50% power looks like after looping heaven for ~40 mins:

GPU max 39C

HBM max 44C

Hotspot max 58C



Need to get this whole system dialed in again, as I finally dumped my 4790K in favor of a 1700. Unbelievable how cool these new AMD chips run, especially under water. 2.5 hours of CPU-Z bench OC'd to 3.7GHz all cores/threads at 1.2V, with 3200 1.35V RAM, and the CPU peaked at 52C.

Temps should get better once I narrow in on an UV/OC combination. It also seems that moving to Ryzen has helped drop GPU temps a little







Looks like 4C drop on core/HBM compared to same settings with the i7. I'll take the improvement with my compact custom loop


----------



## Trender07

Here's my Vega 64 Air Cooled Limited Edition, undervolted:

- 1632 Core @ 1007 mV
- 1070 HBM @ 950 mV


Yeah it gets hot, but I'm from Spain so it's hot here, I have the air conditioner turned off, and the blower is already at 4000 rpm lol, so I'm not going to speed it up more x)


----------



## punchmonster

Why are you still repeating this? Plenty of people here have everything cooled perfectly fine. I've put heatsinks on anything that goes over 50°C and it still doesn't change a damn thing. The chips are simply pushed to their limit already.
Quote:


> Originally Posted by *chris89*
> 
> The card is capable of more frame rate. However no one here is willing to listen to me & just wish for a miracle cure.
> 
> You will need to actively address the HotSpot Temperature to yield as much as 20-40% & close the gap between the VEGA 64 & The 1080 Ti & TITAN XP.
> 
> Until then... you'll see an extra frame or two here & there but the gpu will burn out in short time. It runs continuously well over 100 C on the Hotspot...
> 
> Some guy here said everything cold & throttling is off the hook... Well look at this video...
> 
> Just as I mentioned, those CHIPS that Are Not Cooled Are At Fault... This is so blatantly clear... simply buy the copper sheet, the tin snips, the thermal adhesive, and the Thermagon thermal pads & you're good to go... Only seeing 60C hotspot, rather than 100C bouncing off the limit burning through insane amounts of power on 14nm, is nuts.
> 


----------



## milkbreak

Quote:


> Originally Posted by *gamervivek*
> 
> Many people are getting that delta between core and hotspot depending on the power used by the card. I get a 30-40C difference, which is on the high side and is preventing better clocks with + on the power limit.
> 
> What are you using for testing? I'd suggest Superposition.


I was using FireStrike and Heaven. The hotspot temp goes up to 71C in Superposition. Not too shabby.
Quote:


> Originally Posted by *biscuittea*
> 
> I'd say those are great temps.
> 
> After a couple of repastes and reseating cooler, I now max out at 60C on both HBM and Core and 90C on the hotspot.
> 
> I've undervolted my card to 1040mV with no change in the clocks. I've also overclocked my HBM to 870MHz (any higher and it crashes). I use Arctic MX-4 paste, but I think my terrible temps are due to my fans. I didn't even think about it, I just slapped on some normal case fans that I had lying around.
> 
> What fans do you use on your Morpheus?


I'm using some cheap-o Cooler Master 120mm fans. Four pack for $13 on Newegg. Also used Thermal Grizzly Kryonaut paste. We did some substantial modification to some of the included heatsinks though.



Chopped the stock VRM heatsink in half, cut a bunch of the smaller heatsinks so that the bracket would fit, used high-temp automotive epoxy to make sure certain things would be less likely to fall off.


----------



## dagget3450

Quote:


> Originally Posted by *TrixX*
> 
> Ya add me to the list:
> HIS Vega64 with Aquacomputer Water Block.


Added, Updated!

Welcome on in!


----------



## dagget3450

Quote:


> Originally Posted by *raysheri*
> 
> My current settings:
> 
> Sapphire Vega 56 flashed to AMD 64 air bios + EK nickel/plexi water block.
> Clocks - P6, 1557 @ 1100mv, P7 1652 @ 1150mv, HBM 1070 (will do 1100) @ 1050, +50% power limit.
> 
> When Heaven benched - gpu clocks are solid @ 1633 +- 4, memory holds the 1070 no problems
> Temps - via gpuz - GPU max 44, HBM max 47, hot spot max 61
> Gpu only power draw - max 271 W, VDDC max 1.1125 v
> 
> I have achieved higher clocks, but so far these settings have given me the most stable, trouble-free performance.
> I continue to read this thread with great interest, keep it going.


Missed this one also, Added! Welcome to the thread!


----------



## diabetes

For all of you who think that "GPU Hotspot" is a part of the PCB or even the VRMs - you are wrong; it is part of the die, or at least also mounted to the interposer.
Techpowerup forums

The VRM has an extra sensor, which is not yet exposed on Windows; see line 29, struct vega10_temperature:
Official AMD Linux kernel-side driver source code

Several people who were using a Morpheus could fix their hotspot temperatures by remounting the part of the cooler that touches the die (yes, they left VRM heatsinks as is) and by using more thermal paste:

https://www.reddit.com/r/716edj/rx_vega_56_morpheus_ii_hotspot_temp/dn90e4x/
Google Translator - Computerbase forums

Also, GamersNexus did some thermal testing on the FETs at stock and with a hybrid cooling mod when pumping 400W through them - VRMs were never an issue:
GamersNexus Vega56 hybrid mod article

In the end it boils down to some part of the die just getting extremely hot (like the FPU on Intel CPUs when doing FMA3), or the hotspot temp being the logic layer of the HBM memory, whereas the HBM temp is measured at the top of the stack. There has been a statement that real HBM temps are higher than what the card reports and are always way above 95C on stock coolers, which is also a reason why AIB cards are delayed. I will provide the link if I manage to find it again.

Some people also do not seem to understand why there are thermal pads on the coils next to the MOSFETs. They are there so the mounting pressure of the cooler can prevent coil whine while the pads absorb the vibrations. This is solely a noise dampening measure. And no, the diodes next to them do not get hot either.

The only parts of the PCB that need/benefit from cooling are the VRMs, the die area at the backside of the PCB and the phase doublers at the backside of the PCB.
Here are some thermal images from the backside of the PCB (scroll down): Tomshardware Vega 56 Review - Temperatures

Please do yourself a favor and do not destroy your Vegas by causing a short with unnecessary copper shims.
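For anyone on Linux who wants to read these sensors without waiting for Windows tools to catch up, the amdgpu driver exposes them as hwmon sysfs files. A minimal sketch of a reader — note the channel labels and paths vary by kernel version (on recent kernels amdgpu typically labels them `edge`, `junction`, and `mem`, but treat that as an assumption):

```python
import glob
import os

def read_amdgpu_temps(hwmon_path):
    """Read temp*_input files (millidegrees C) from a hwmon directory and
    return {label: degrees_C}, using temp*_label when the kernel provides it."""
    temps = {}
    for inp in sorted(glob.glob(os.path.join(hwmon_path, "temp*_input"))):
        label_file = inp.replace("_input", "_label")
        if os.path.exists(label_file):
            with open(label_file) as f:
                label = f.read().strip()
        else:
            # older kernels don't expose labels; fall back to the file name
            label = os.path.basename(inp)
        with open(inp) as f:
            temps[label] = int(f.read().strip()) / 1000.0
    return temps
```

On a real system the amdgpu hwmon directory usually lives somewhere under /sys/class/drm/card0/device/hwmon/.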


----------



## rdr09

Quote:


> Originally Posted by *Roboyto*
> 
> not sure what your settings are, but those temps do seem to be pretty darn solid for air.
> 
> Here's what a 64 with FC block running stock settings +50% power looks like after looping heaven for ~40 mins:
> 
> GPU max 39C
> HBM max 44C
> Hotspot max 58C
> 
> 
> 
> 
> Need to do get this whole system dialed in again as I finally dumped my 4790K in favor of a 1700. Unbelievable how cool these new AMD chips run, especially under water. 2.5 hours of CPU-Z bench OC'd to 3.7GHz all cores/threads at 1.2V, with 3200 1.35V RAM, and CPU peaked at 52C.
> Temps should get better once I narrow in on an UV/OC combination. It also seems that moving to Ryzen has helped drop GPU temps a little
> 
> 
> 
> 
> 
> 
> 
> Looks like 4C drop on core/HBM compared to same settings with the i7. I'll take the improvement with my compact custom loop


Hi Rob, wut thermal paste did you use? Nice temps.

@milkbreak, wuts your ambient temp? Your temps look good for air. Too good.


----------



## Paul17041993

Quote:


> Originally Posted by *diabetes*
> 
> For all of you who think that "GPU Hotspot" is a part of the PCB or even the VRMs - you are wrong, it is a part of the die or at least also mounted to the interposer.
> Techpowerup forums
> 
> VRM has an extra sensor, which is not yet exposed on windows, see line 29 struct vega10_temperature:
> Official AMD Linux kernel-side driver source code
> 
> Several people who were using a Morpheus could fix their hotspot temperatures by remounting the part of the cooler that touches the die (yes, they left VRM heatsinks as is) and by using more thermal paste:
> 
> https://www.reddit.com/r/716edj/rx_vega_56_morpheus_ii_hotspot_temp/dn90e4x/
> Google Translator - Computerbase forums
> 
> Also, GamersNexus did some thermal testing on the FETs at stock and with a hybrid cooling mod when pumping 400W through them - VRMs were never an issue:
> GamersNexus Vega56 hybrid mod article
> 
> In the end it boils down to some part of the die just getting extremely hot (like the FPU on Intel CPUs when doing FMA3), or the hotspot temp being the logic layer of the HBM memory, whereas the HBM temp is measured at the top of the stack. There has been a statement that real HBM temps are higher than what the card reports and are always way above 95C on stock coolers, which is also a reason why AIB cards are delayed. I will provide the link if I manage to find it again.
> 
> Some people also do not seem to understand why there are thermal pads on the coils next to the MOSFETs. They are there so the mounting pressure of the cooler can prevent coil whine while the pads absorb the vibrations. This is solely a noise dampening measure. And no, the diodes next to them do not get hot either.
> 
> The only parts of the PCB that need/benefit from cooling are the VRMs, the die area at the backside of the PCB and the phase doublers at the backside of the PCB.
> Here are some thermal images from the backside of the PCB (scroll down): Tomshardware Vega 56 Review - Temperatures
> 
> Please do yourself a favor and do not destroy your Vegas by causing a short with unnecessary copper shims.


Yea, so the 'hotspot' could in fact narrow down to my original idea: that it's the package/PCB temperature in the vicinity. However, it could also be a special new system inside the core that takes probes from all around the PCB and converts them to a single value for the drivers and cooler to use. Hard to tell, really, unless someone did a complete tear-down of every individual component on the board...

But regardless, if you need extra cooling, attach some form of cooling to the backside of the PCB itself, such as a metal backplate with thick pads (make sure they're non-conductive, etc.). From there you can either point a fan directly at the plate, attach more heatsinks, or strap another AIO cooler to the back behind the GPU; it should help the temperatures a fair bit.


----------



## chris89

Quote:


> Originally Posted by *dagget3450*
> 
> I have some thin copper thermal pads from some printers i am scrapping. I dont know the thickness of them but i know they are pretty thin. These are used on the cpu in the formatter board. I may give this a go..
> 
> Actually im wondering, i have 17mk thermal pads also, shouldnt those work as well?


Well see, there is a hole there on top of those chips that do not have a temp sensor on them for-a-reason... This is beginning to make sense, isn't it?

You need to fill the hole... I would adhere, on the other side, a thin copper sheet cut to size using thermal adhesive for all those "holes" where the chips soar sky high with no thermal sensor to FOOL YOU PEOPLE!!

Once you adhere the copper & add the Thermagon, or that material you found in an old printer? that could possibly work... You'll notice a huge reduction in VRM temperature & the whole PCB will cool down by 20-40C after you do this... That's 40 degrees Celsius.
Quote:


> Originally Posted by *Newbie2009*
> 
> Aida 64 comparison between a 290x stock and a vega 64 stock. 290X kills the Vega 64 in memory read/write. Whats up with that?


My 390X outperforms VEGA on Double Precision, which is awesome! haha. I love the 390X, but the only thing I hate about it is the lack of HDMI 4K 60Hz... 30Hz 4K is Ultra-Lame. So I need to get the StarTech DisplayPort to HDMI adapter; its short stubby design allows 4K 60Hz on the 390X.



It looks like your PCIe is stuck at PCIe 1.0 on the VEGA, and the 290X is stuck at PCIe 1.1... I would revisit the CHIPSET DRIVER & check out any PCIe GPU related settings in the motherboard BIOS.

The 290X & VEGA, if on a PCIe 2.0 motherboard, should be looking at 6,500 MB/s READ/WRITE... Half is 3,250 MB/s READ/WRITE, this is all PCIe bus here, and that's PCIe 1.1... Half of that is PCIe 1.0, ancient & slower than ever...

Good luck... Retest until you get those read/writes up & scores will be a lot better as a result, especially at high resolutions like 3840x2160+.


----------



## chris89

Quote:


> Originally Posted by *Paul17041993*
> 
> Yea, so the 'hotspot' could in fact narrow down to my original idea that was the package/PCB temperature in the vicinity. However it could also be a special new system inside the core that takes probes from all around the PCB and converts it to a single value for the drivers and cooler to use. Hard to tell really unless someone did a complete tear-down of every individual component on the board...
> 
> But regardless, if you need extra cooling attach some form of cooling to the backside of the PCB itself, such as a metal backplate with thick pads, make sure they're non-conductive etc though. From that you can either use a fan directly on the plate, attach more heatsinks or strap another AIO cooler to the back behind the GPU, should help the temperatures a fair bit.


These would work great for the backplate; add them between the backplate & the VRM & others.. Gotta fix the cooling on the other side to see a real improvement, because otherwise the backplate will quickly become completely saturated & hot as ever.

This stuff is 15 watts per meter-kelvin & performs precisely on point way up there on overclocks... Here are my temps using it on memory modules & VRM(s).

It's BORON NITRIDE & is very thermally conductive stuff (not electrically). Barely any difference, if any at all, compared to Fujipoly 17 W/mK.



http://www.ebay.com/itm/NEW-Thermagon-thermal-gap-filler-pad-T-PLI-2200-A1-12mm-x-12mm-x-5mm-49-per-pack-/172855009184?hash=item283ef61fa0:g:w3wAAOSwAuZX1TRU
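Worth adding on pad ratings: the W/mK number alone doesn't determine how well a pad moves heat — thickness and contact area matter just as much, per Fourier's law (R = t / kA). A quick sketch with illustrative numbers, not measurements of this particular pad:

```python
def pad_thermal_resistance(k_w_per_mK, area_mm2, thickness_mm):
    """Thermal resistance (C per watt) of a flat pad: R = t / (k * A)."""
    area_m2 = area_mm2 * 1e-6
    thickness_m = thickness_mm * 1e-3
    return thickness_m / (k_w_per_mK * area_m2)

# Illustrative: a 12 mm x 12 mm pad at 15 W/mK
r_thin = pad_thermal_resistance(15, 144, 0.5)   # 0.5 mm pad -> ~0.23 C/W
r_thick = pad_thermal_resistance(15, 144, 2.0)  # 2.0 mm pad -> ~0.93 C/W
```

So a pad four times thicker has four times the resistance at the same rating — which is why a thinner, well-compressed pad can beat a "better" thick one.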


----------



## Roboyto

Quote:


> Originally Posted by *rdr09*
> 
> Hi Rob, wut thermal paste did you use? Nice temps.


CPU is Kryonaut.

GPU presently is Kryonaut, however initially I was running CLU. Photo for reference:



I know the application looks perfect, but it was only because the CLU didn't want to stick to anything that wasn't the actual die. Temps weren't bad before, but are better now... pretty certain the cooler CPU is having an effect... but one would infer that the CLU should have been performing a fair bit better than the Kryonaut?









I just changed system over to Ryzen so I had to pull everything apart and drain the loop. In the process I spilled a pinch of distilled water onto the top of the waterblock and it rolled underneath the block







It wasn't much, buuuut...I pulled the block off to make sure I cleaned it all up not wanting a stupid $600 mistake.

When I was removing the block, most of the screws came out fine...but there were a couple that were loosening/removing the studs instead of just the screws. Thought it was quite odd, but I didn't want to get pliers involved...so I just cleaned up the 2 drops of water that made their way underneath and reassembled the card.

Computer would boot, POST, have video output for 2-5 seconds. Then GPU tach lights would go out, and video signal would be lost. Checked everything I could without removing the block again to no avail.

This time around there were 6 studs from the water block that came off instead of just being able to remove the screws







Down to my basement to get the EK box and instructions to see what I could have possibly done wrong...not my first waterblock rodeo...

Turned out, in my assembling haste, I had used the wrong length screws in all but 2 spots. Screws were too long and bottoming out in the studs, causing them to get wedged. I didn't (seem to) tighten anything more than I normally would have, so my guess is the screws are a fair bit harder than whatever the studs are made out of...nickel plated copper would be my guess? To remove the stuck studs I had to break out a large pair of needle nose pliers to get a grip on them to remove the screws...some were really on there...and I honestly thought I had damaged something more than once from pliers slipping









I was able to remove the block, rectify my screw screw-up







and reinstall the block without draining the loop...Macgyver like skills almost







Bleeding air out of the system is sssoooooooo boring/tedious









Turn her back on and same..exact..problem







At this point I was just about certain card was toast...pulled it out again, without draining the loop, and proceeded to remove my protective thermal tape from around the die and cleaned off all the CLU.

Applied some Kryonaut...and I'm back in business









Only thing I can figure is a pinch of CLU made it somewhere it shouldn't have been and was causing the problem. I once used CLU on a GTX 970 which I ultimately sold to a friend. I had put LET around the die to 'protect' it from the CLU. Its protection lasted around 6 months or so, when he told me he was having weird issues with the card black screening. I pulled the cooler off to find that the LET had at some point melted into/onto the CLU and 'dragged' it where it wasn't supposed to be. The LET that was left was hard as concrete...nothing I could do to get 90% of it off. Scraped away what I could and drowned the card in 91% isopropyl in hopes of cleaning the CLU out of wherever it ended up. To my surprise it came back to life! Still functioning actually...he's running it with one of the Corsair hydro adapters now.

Close call #2 with CLU and a GPU...hopefully the GTX 1060 in my HTPC doesn't run into any issues







On that card I used some Hondabond HT (high temp) 'liquid' automotive gasket around the die. That stuff is rated for 600F so it *should* fare just fine. If that one happens to run into issues and I can't resuscitate it, then at least I'm only out $199 as I snatched it from Newegg as an 'open box' refurb. Box definitely wasn't opened







Winning for me as back in July 6GB 1060's were fetching a large premium.


----------



## chris89

Quote:


> Originally Posted by *Roboyto*
> 
> Presently Kryonaut, however initially I was running CLU. Photo for reference:
> 
> 
> 
> 
> I know the application looks perfect, but it was only because the CLU didn't want to stick to anything that wasn't the actual die. Temps weren't bad before, but are better now... pretty certain the cooler CPU is having an effect... but one would infer that the CLU should have been performing a fair bit better than the Kryonaut?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I just changed system over to Ryzen so I had to pull everything apart and drain the loop. In the process I spilled a pinch of distilled water onto the top of the waterblock and it rolled underneath the block
> 
> 
> 
> 
> 
> 
> 
> It wasn't much, buuuut...I pulled the block off to make sure I cleaned it all up not wanting a stupid $600 mistake.
> 
> When I was removing the block, most of the screws came out fine...but there were a couple that were loosening/removing the studs instead of just the screws. Thought it was quite odd, but I didn't want to get pliers involved...so I just cleaned up the 2 drops of water that made their way underneath and reassembled the card.
> 
> Computer would boot, POST, have video output for 2-5 seconds. Then GPU tach lights would go out, and video signal would be lost. Checked everything I could without removing the block again to no avail.
> 
> This time around there were 6 studs from the water block that came off instead of just being able to remove the screws
> 
> 
> 
> 
> 
> 
> 
> Down to my basement to get the EK box and instructions to see what I could have possibly done wrong...not my first waterblock rodeo...
> 
> Turned out, in my assembling haste, I had used the wrong length screws in all but 2 spots. Screws were too long and bottoming out in the studs, causing them to get wedged. I didn't (seem to) tighten anything more than I normally would have, so my guess is the screws are a fair bit harder than whatever the studs are made out of...nickel plated copper would be my guess? To remove the stuck studs I had to break out a large pair of needle nose pliers to get a grip on them to remove the screws...some were really on there...and I honestly thought I had damaged something more than once from pliers slipping
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was able to remove the block, rectify my screw screw-up
> 
> 
> 
> 
> 
> 
> 
> and reinstall the block without draining the loop...Macgyver like skills almost
> 
> 
> 
> 
> 
> 
> 
> Bleeding air out of the system is sssoooooooo boring/tedious
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Turn her back on and same..exact..problem
> 
> 
> 
> 
> 
> 
> 
> At this point I was just about certain card was toast...pulled it out again, without draining the loop, and proceeded to remove my protective thermal tape from around the die and cleaned off all the CLU.
> 
> Applied some Kryonaut...and I'm back in business
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Only thing I can figure is a pinch of CLU made it somewhere it shouldn't have been and was causing the problem. I once used CLU on a GTX 970 which I ultimately sold to a friend. I had put LET around the die to 'protect' it from the CLU. Its protection lasted around 6 months or so, when he told me he was having weird issues with the card black screening. I pulled the cooler off to find that the LET had at some point melted into/onto the CLU and 'dragged' it where it wasn't supposed to be. The LET that was left was hard as concrete...nothing I could do to get 90% of it off. Scraped away what I could and drowned the card in 91% isopropyl in hopes of cleaning the CLU out of wherever it ended up. To my surprise it came back to life! Still functioning actually...he's running it with one of the Corsair hydro adapters now.
> 
> Close call #2 with CLU and a GPU...hopefully the GTX 1060 in my HTPC doesn't run into any issues
> 
> 
> 
> 
> 
> 
> 
> On that card I used some Hondabond HT (high temp) 'liquid' automotive gasket around the die. That stuff is rated for 600F so it *should* fare just fine. If that one happens to run into issues and I can't resuscitate it, then at least I'm only out $199 as I snatched it from Newegg as an 'open box' refurb. Box definitely wasn't opened
> 
> 
> 
> 
> 
> 
> 
> Winning for me as back in July 6GB 1060's were fetching a large premium.


Sweet dude. You will likely thank yourself if you start using this thermal paste.. just as your application above looks great.. with this stuff you don't have to worry about anything.. since it's not electrically conductive, yet very thermally conductive... I use it all the time.. it's all I ever use... looking at like 68C core on the 390X @ 1,320MHz core on AIR.







Post some HWInfo & benchmarks and stuff.. I'd like to see VIDEOS showing the VEGA performance.
http://www.ebay.com/itm/Arctic-Silver-Ceramique-2-Tri-Linear-Ceramic-Thermal-Compound-25-g-gram-syringe-/191536297259?epid=2255302025&hash=item2c9873f52b:g:fHoAAOSw-KFXcq6s


----------



## milkbreak

Quote:


> Originally Posted by *rdr09*
> 
> Hi Rob, wut thermal paste did you use? Nice temps.
> 
> @milkbreak, wuts your ambient temp? You temps look good for air. Too good.


There was a high of 68F today so somewhere around there inside. Keep in mind, this is a 56 with a 56 BIOS with +50% power and a stable P6/P7 undervolt of 1020mV.


----------



## Roboyto

Quote:


> Originally Posted by *chris89*
> 
> Sweet dude. You will likely thank yourself if you start using this thermal paste.. just as your application above looks great.. with this stuff you don't have to worry about anything.. since it's not electrically conductive, yet very thermally conductive... I use it all the time.. it's all I ever use... looking at like 68C core on the 390X @ 1,320MHz core on AIR.
> 
> 
> 
> 
> 
> 
> 
> Post some HWInfo & benchmarks and stuff.. I'd like to see VIDEOS showing the VEGA performance.


I knew the risk of applying the CLU, but did it anyway to see what kind of results there would be. Additionally, I have changed CPU/MoBo/RAM...so the temperature comparison is far from apples/apples. I'm not 100% certain of power draw differences between a 4790K and 1700, but comparing both overclocked I would bet there to be somewhere around 75W, or more, power draw difference between the two under full stress; this is undoubtedly affecting GPU temps to some extent. To be certain of any gains/benefits I would have to re-apply CLU now and compare, because there is no doubt that LM TIM will provide lower temperatures than normal pastes.

I don't have any 'standard' pastes that are electrically conductive, so that really isn't a boon for me. Not saying it isn't better than what I've got as there are lots of pastes out there, plenty of which I've never used, but...

I'd have to see what exactly you're doing to sustain 68C on an air cooled Hawaii chip at those core speeds. I'm quite familiar with the Hawaii family of GPUs and know very well that few exceeded the 1300 core clock mark. If you are holding that temperature without water then you likely: 1) have a very large heatsink and 2) run the fans at speeds that aren't very quiet. Even the best silicon lottery winning Hawaii chips require(d) plenty of additional voltage/power limit, if not BIOS edits, to reach those core speeds.


----------



## gamervivek

Quote:


> Originally Posted by *diabetes*
> 
> For all of you who think that "GPU Hotspot" is a part of the PCB or even the VRMs - you are wrong, it is a part of the die or at least also mounted to the interposer.
> Techpowerup forums
> 
> VRM has an extra sensor, which is not yet exposed on windows, see line 29 struct vega10_temperature:
> Official AMD Linux kernel-side driver source code
> 
> Several people who were using a Morpheus could fix their hotspot temperatures by remounting the part of the cooler that touches the die (yes, they left VRM heatsinks as is) and by using more thermal paste:
> 
> https://www.reddit.com/r/716edj/rx_vega_56_morpheus_ii_hotspot_temp/dn90e4x/
> Google Translator - Computerbase forums
> 
> Also, GamersNexus did some thermal testing on the FETs at stock and with a hybrid cooling mod when pumping 400W through them - VRMs were never an issue:
> GamersNexus Vega56 hybrid mod article
> 
> In the end it boils down to some part of the die just getting extremely hot (like the FPU on Intel CPUs when doing FMA3), or the hotspot temp being the logic layer of the HBM memory, whereas the HBM temp is measured at the top of the stack. There has been a statement that real HBM temps are higher than what the card reports and are always way above 95C on stock coolers, which is also a reason why AIB cards are delayed. I will provide the link if I manage to find it again.
> 
> Some people also do not seem to understand why there are thermal pads on the coils next to the MOSFETs. They are there so the mounting pressure of the cooler can prevent coil whine while the pads absorb the vibrations. This is solely a noise dampening measure. And no, the diodes next to them do not get hot either.
> 
> The only parts of the PCB that need/benefit from cooling are the VRMs, the die area at the backside of the PCB and the phase doublers at the backside of the PCB.
> Here are some thermal images from the backside of the PCB (scroll down): Tomshardware Vega 56 Review - Temperatures
> 
> Please do yourself a favor and do not destroy your Vegas by causing a short with unnecessary copper shims.


I was thinking that it's VRM based on: 1) the core temp didn't matter as much as the power drawn, 2) it can go way beyond 100C before throttling in earnest, and 3) the delta was simply too high; it gets into the 100s while the core is still in the 60s if you increase the power.

But some people have VRM readings for their cards (not on mine), so now I don't think it's from the VRM, but it's still weird to see those temperatures from the die. The CB example you posted isn't doing great either; he's at a 25C delta, and probably higher if he tested with more power/load.

edit: tested with 700MHz and 900MHz on HBM, no difference in the hotspot temps.
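That observation fits a rough first-order model: treat the hotspot-to-edge gap as a fixed internal thermal resistance times package power, which is why the delta tracks the power limit rather than HBM clocks. A sketch with illustrative numbers (not measurements):

```python
def hotspot_delta(power_w, r_c_per_w):
    """Estimated hotspot-to-edge gap (C) for a given package power, assuming a
    fixed internal junction-to-edge thermal resistance (first-order model)."""
    return power_w * r_c_per_w

# If ~220 W produces a ~30 C gap, the implied resistance is ~0.136 C/W;
# the same resistance at ~300 W (roughly +50% power) predicts a ~41 C gap.
r = 30 / 220
delta_at_300w = hotspot_delta(300, r)
```

It's only a sanity check, but it matches the pattern people report: raising the power limit widens the delta, while memory clocks barely move it.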


----------



## Paul17041993

Quote:


> Originally Posted by *chris89*
> 
> Well see there is a Hole there on top of those Chips that do not have a Temp Sensor on them for-a-reason... This is beginning to make sense isn't it?
> 
> You need to fill the hole... I would adhere on the other side.. thin copper sheet cut to size using thermal adhesive for all that "Holes" where the Chips Soar Sky High with no thermal sensor to FOOL YOU PEOPLE!!
> 
> Once you adhere the copper & add the Thermagon or that material you found in an old Printer? that could work possibly... You'll notice a Huge reduction in VRM temperature & the Whole PCB will cool down by 20-40C after you do this... That's 40 degrees celsius.
> My 390X outperforms VEGA on Double Precision, which is awesome! haha. I love the 390X, but the only thing I hate about it is the lack of HDMI 4K 60Hz... 30Hz 4K is Ultra-Lame. So I need to get the StarTech DisplayPort to HDMI adapter; its short stubby design allows 4K 60Hz on the 390X.
> 
> It looks like your PCIe is stuck at PCIe 1.0 on the VEGA, and the 290X is stuck at PCIe 1.1... I would revisit the CHIPSET DRIVER & check out any PCIe GPU related settings in the motherboard BIOS.
> 
> The 290X & VEGA, if on a PCIe 2.0 motherboard, should be looking at 6,500 MB/s READ/WRITE... Half is 3,250 MB/s READ/WRITE, this is all PCIe bus here, and that's PCIe 1.1... Half of that is PCIe 1.0, ancient & slower than ever...
> 
> Good luck... Retest until you get those read/writes up & scores will be a lot better as a result, especially at high resolutions like 3840x2160+.


uh...

- have you actually run a thermal probe on the board...? because if they got hot, there would be blocks that cover the components...
- use a DisplayPort monitor for 4k60+, avoid using nasty conversion adapters...
- 3rd gen GCN and later had double precision dropped from 1/8 to 1/16 to cut power, as it isn't really used for anything on consumer cards; use a workstation card if you need high DP performance.
- highly doubt your 390X is actually stable at 1500, the geometry pipeline can't handle such clocks...
- PCIe 1.1-3.0 mode switching is normal; you can disable it via Windows' power profiles, no drivers needed unless it's Windows 7.
- PCIe 1.0 and 1.1 are exactly the same, only minor documentation and implementation differences.
- you should get 15.8GB/s on PCIe 3.0 with 16 lanes if the DMA units are used correctly; if you see lower numbers, then the application either doesn't support modern CL or it's scheduling badly. 16 lanes of 2.0 or 8 lanes of 3.0 will give you ~8GB/s.
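Those bandwidth figures fall straight out of the published link math (line rate × encoding efficiency × lanes). A quick sketch of the derivation:

```python
# Per-lane line rates (GT/s) and encoding efficiency per PCIe generation
PCIE = {
    "1.x": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def pcie_bandwidth_gbps(gen, lanes):
    """Theoretical per-direction bandwidth in GB/s (decimal) for a link."""
    gt_per_s, eff = PCIE[gen]
    return gt_per_s * eff * lanes / 8  # GT/s -> useful Gbit/s -> GB/s

# 3.0 x16 -> ~15.75 GB/s, 2.0 x16 and 3.0 x8 -> ~8 GB/s, 1.x x16 -> 4 GB/s
```

Real-world transfer benchmarks land a bit below these ceilings because of protocol overhead (TLP headers, flow control), which is why a PCIe 2.0 x16 card benches around 6.5 GB/s rather than the full 8.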


----------



## chris89

Quote:


> Originally Posted by *Roboyto*
> 
> I knew the risk of applying the CLU, but did it anyway to see what kind of results there would be. Additionally, I have changed CPU/MoBo/RAM...so temperature comparison is far from apples/apples. I'm not 100% certain of power draw differences between a 4790k and 1700, but comparing both overclocked I would bet there to be somewhere around 75W, or more, power draw difference between the two under full stress; this is undoubtedly effecting GPU temps to some extent. To be certain of any gains/benefits I would have to re-apply CLU now and compare, because there is no doubt that LM TIM will provide lower temperatures than normal pastes.
> 
> I don't have any 'standard' pastes that are electrically conductive, so that really isn't a boon for me. Not saying it isn't better than what I've got as there are lots of pastes out there, plenty of which I've never used, but...
> 
> I'd have to see what exactly you're doing to sustain 68C on an air cooled Hawaii chip at those core speeds. I'm quite familiar with the Hawaii family of GPUs and know very well that few exceeded the 1300 core clock mark. If you are holding that temperature without water then you likely: 1) have a very large heatsink and 2) run the fans at speeds that aren't very quiet. Even the best silicon lottery winning Hawaii chips require(d) plenty of additional voltage/power limit, if not BIOS edits, to reach those core speeds.


That's so sick dude you got the Ryzen R7 1700 & I'm SOO Jealous. That's such a magnificent 65 watt 16 thread AMD CPU with huge horsepower on that TDP. The performance is exceedingly awesome.

Oh yeah can you post a video capture using ReLive & upload to youtube or something & PM me the CHANNEL? I always bookmark dudes running AMD VEGA & AMD RYZEN, such an amazing combination to look forward to if I ever pick up a Ryzen 1700 board/ cpu/ ram. I think I want the DDR4-4000... Is anyone running 4000MHz RAM? It would be crazy to run an R7 1700 @ 4.00GHz & 16-32GB DDR4-5000.

I used all the best pastes out there, Diamond & Indigo Xtreme, which you may look into using. www.frozencpu.com might have an AM4 application... It's a solid Liquid Metal; to use it, you must run a CPU stress test to 100C with no fan so the Liquid Metal melts & cures, the best Bond Ever & a Flawless Application.

Although the reason I use Arctic Silver Ceramique 2 is the Temps man, 68C @ 1,320Mhz is unreal. I had it up to 1,335Mhz & it's pulling 7.5 TeraFlops single precision. Which is a lot. Not totally stable though for games, but Compute can complete at 1,300Mhz+ core.

If I run the fan at 50-60%, it levels out 75-80C @ very high clocks. However its just not realistic to run it at those voltages. I like 1,172Mhz & at just 1.3v compared to 1,335Mhz I had set to 1.449v... which is the max.

Yeah my 390X is using a 290X heatsink assembly, way larger heavier heatsink & it cooled the 390X down 15-25C at load & idle. Plus my 390X is modded, I de-soldered the DVI to allow the 50% extra airflow, helped tremendously with requiring less fan RPM & is quieter & cooler.

Before... Hot & Loud .. That DVI was ALL-UP-IN-THE-WAY.. haha


----------



## asdkj1740

Maxing out the power limit to +50% and overclocking HBM2 to 1100MHz (indicated as "OC" in the pics below) results in worse gaming performance compared to stock Balanced mode on the ASUS Vega 64 Strix.
http://yujihw.com/review/asus-rog-strix-rx-vega64-oc-edition-8gb-hbm2


----------



## Rootax

Maybe because the card throttles like crazy with the OC settings ?


----------



## Newbie2009

Quote:


> Originally Posted by *chris89*
> 
> Well see there is a Hole there on top of those Chips that do not have a Temp Sensor on them for-a-reason... This is beginning to make sense isn't it?
> 
> You need to fill the hole... I would adhere on the other side.. thin copper sheet cut to size using thermal adhesive for all that "Holes" where the Chips Soar Sky High with no thermal sensor to FOOL YOU PEOPLE!!
> 
> Once you adhere the copper & add the Thermagon or that material you found in an old Printer? that could work possibly... You'll notice a Huge reduction in VRM temperature & the Whole PCB will cool down by 20-40C after you do this... That's 40 degrees celsius.
> My 390X outperforms VEGA on Double Precision which is awesome! haha I love the 390X but the only thing I hate about it is... Lack of HDMI 4k 60Hz... 30Hz 4k is Ultra-Lame. So I need to get the Startech Displayport to HDMI adapter it's shorty stubby adapter allows 4k 60Hz on 390X.
> 
> 
> 
> It looks like your PCIe is stuck in PCIe 1.0 it appears on the VEGA & 290X stuck in PCIe 1.1 .... I would revisit the CHIPSET DRIVER & Checkout Anything PCIe GPU Related Settings in the Motherboard BIOS.
> 
> The 290X & VEGA if on a PCIe 2.0 motherboard should look at 6,500 MB/s READ/WRITE ... Half is 3,250 MB/s READ/WRITE this is all PCIe Bus here and that's PCIe 1.1 .... Half Of That is PCIe 1.0 Revision 1 ancient & slower than ever...
> 
> Good luck... Retest until you get those Read/ Writes Up & Scores will be a lot better as a result, especially at high resolution like 3840x2160 +.


It's not that. I'm aware of the 290X being x4, but it's just a slave card for a 2nd monitor; it's supposed to be gimped.

The Vega is on an x16 Gen 3 slot.


----------



## asdkj1740

Quote:


> Originally Posted by *Rootax*
> 
> Maybe because the card throttles like crazy with the OC settings ?


The GPU hot spot temp is near 100C under benchmarking and gaming tests; maybe this hot spot temp throttles performance.

How high is the hot spot temp shown in GPU-Z 2.4 on your Vega 56/64?


----------



## chris89

Quote:


> Originally Posted by *asdkj1740*
> 
> maxing out power limit to +50% and overclocking hbm2 to 1100mhz (indicated as "OC" in below pics) result in worse gaming performance compared to stock balance mode on asus vega64 strix.
> http://yujihw.com/review/asus-rog-strix-rx-vega64-oc-edition-8gb-hbm2


Typical R9 290X launch-like thermal loss, though HBM should be able to achieve immense performance gains once power heat loss is very low & all the chips around the GPU (the black chips) are cooled. You could simply adhere a long copper shim over them & add paste where it contacts.

I set my 390X to 1300mV @ 1172MHz core and memory @ 1250MHz @ 875mV; only 200 watts or so here.


----------



## andreyb

Hi,

just installed an EK waterblock + backplate on my Vega 64 Air. I am slightly concerned about the GPU Temperature (Hot Spot) running FireStrike Graphics Test 1 in a loop. Card is overclocked 1600-1650/1100 +50%PL and running the high-power Air BIOS. The backplate is also hot - I can barely hold my finger on it opposite the GPU side.
What do you think?


----------



## Newbie2009

Quote:


> Originally Posted by *andreyb*
> 
> Hi,
> 
> just installed EK waterblock + backplate on my Vega 64 Air. I am slightly concerned about the GPU Temperature (Hot Spot) running FireStrike Graphics Test 1 in loop. Card is overclocked 1600-1650/1100 +50%PL and running high power Air BIOS. Backplate is also hot - barely can hold my finger on it opposite to gpu side.
> What do you think?


Highest I've seen mine was 55C. I took off the original backplate and added an EK one with thermal pads.


----------



## laczarus

So is there a general consensus on what the hotspot temp is?
Just tested and I'm reaching 72.4°C max on it during 4k RottR gameplay with my Vega 56 + Morpheus II

Correction: GPU-Z was set to avg values. Hotspot max is 97°C


----------



## pmc25

With hotspot temperatures that bad on the rear of the card, would it not be enough to buy some good thermal pads and an ek backplate? As long as there is air flow over the backplate.

To settle the question someone needs to do tests on waterblock without backplate, and with waterblock with ekwb backplate and thermal pads, and see if there is a significant difference in performance / stability / power consumption / oc headroom.


----------



## Newbie2009

Quote:


> Originally Posted by *laczarus*
> 
> So is there a general consensus on what the hotspot temp is?
> Just tested and I'm reaching 72.4°C max on it during 4k RottR gameplay with my Vega 56 + Morpheus II


Don't think so. I would throw out a guess that it's the VRM or the underside of the board/chip.


----------



## chris89

Yes, HotSpot is anywhere on the board that gets hot... i.e. Hot-Spot... Could be many or few... it limits those chips in software so they won't burn out, yet it performs terribly because of improper cooling of those "black chips". No one can handle or understand the black chips? peanut bouncin around in there or somethin


----------



## Rootax

Quote:


> Originally Posted by *asdkj1740*
> 
> the gpu hot spot temp is near 100c under benchmarking and gaming test, maybe this hot spot temp throttles performance.
> 
> how high is the hot spot temp shown on gpuz 2.4 on yours vega56/64????


My hotspot on my Vega FE is around 10-15C hotter than the GPU temp under load (same at idle). I'm under water; my GPU under load is around 42-45, my hot spot around 55-58.

Now, under a Fire Strike Extreme stress test (20 loops non-stop), it's a little hotter, because my CPU stays "cold" and my fans don't spin up much; it's cooler in games.

So, here's a screenshot of the max values during the worst-case scenario for me.



Quote:


> Originally Posted by *pmc25*
> 
> With hotspot temperatures that bad on the rear of the card, would it not be enough to buy some good thermal pads and an ek backplate? As long as there is air flow over the backplate.
> 
> To settle the question someone needs to do tests on waterblock without backplate, and with waterblock with ekwb backplate and thermal pads, and see if there is a significant difference in performance / stability / power consumption / oc headroom.


I removed the original backplate from my FE and didn't put a new one on; it seems fine like this. The original backplate didn't "cool" anything by the look of it; worst case, it would trap some heat.


----------



## Reikoji

http://www.3dmark.com/spy/2436480

New rig brings my best graphics score and improved overall, with barely anything changed in wattman:

1667/1752 frequency
1167/1250 mV
1105 mem, 950 mV
+50% power

http://www.3dmark.com/fs/13705718

Also ancient firestrike


----------



## gupsterg

Quote:


> Originally Posted by *dagget3450*
> 
> Well it seems like cooling any hotspots on the gpu would be good for thermals on some level. Maybe it won't do squat for performance but it seems like it would help some thermals somewhat


The hotspot IMO is likely to be a reading from within the die.

VRM components don't have temperature sensors built in, usually a sensor is placed in the vicinity.

Think of it on a BOM basis: they are not going to place sensors all over the PCB. Also think of it on a practicality basis. If every relevant component on the PCB had a temp sensor, then besides finding space for these sensors they'd need traces, increasing the complexity and cost of the PCB. Then you'd need a chip to monitor all these sensors. To me it does not seem there is one on the PCB, nor would there be such a number of sensors.

I would assume there is one near the VRM to give an 'indicative' temperature. Based on how overbuilt the ref VRM is, it would be ample to use the 'indicative' temp to protect it IMO.

'We' know why HBM would have a temp sensor from the JEDEC PDF. I would think each stack has its own and the reading 'we' get is an average or the highest of the two, etc.

I think aftermarket WC/stock AIO is about as good as it gets for VEGA currently. Some modding of the ref blower/AIO may yield small gains, but I doubt it will make substantial ones. Morpheus II usage has also been good going by shared info.


----------



## PontiacGTX

could the hotspot be the contact area with the Interposer or the edge between the GPU Die and HBM mem?


----------



## gupsterg

No idea.

Highest probability IMO is part of GPU that gets hot when under load. For example on Hawaii The Stilt had said:-
Quote:


> On Hawaii the VDDCI should never be set higher than +50mV (= 1.050V) as the memory PHY / controller is the hottest part of the GPU already.


So something of that 'ilk' would be 'Hotspot'.


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> could the hotspot be the contact area with the Interposer or the edge between the GPU Die and HBM mem?


Quote:


> Originally Posted by *gupsterg*
> 
> No idea.
> 
> Highest probability IMO is part of GPU that gets hot when under load. For example on Hawaii The Stilt had said:-
> So something of that 'ilk' would be 'Hotspot'.


----------



## Soggysilicon

Quote:


> Originally Posted by *Paul17041993*
> 
> Nope, 70C peak, which is also fairly inaccurate as the coolant temp is 20C less than that as well (limitation of the indium solder possibly?). The 12V difference between the 290X and the motherboard is also likely a calibration error on the 290X (noiseless single rail supply).
> 
> *...not even close...
> *
> - FP64 (double precision) is physically limited to the scalar ALU, so for every 16 FP32 operations only 1 FP64 can be performed.
> - Two FP32 operations can be performed on every non-scalar ALU every clock cycle, thus the GFLOPS is calculated as GHz * ALUs * 2. This however only fully applies if the shaders allow for rapid FP scheduling.
> - Four FP16 operations can be performed per non-scalar ALU, double that of previous GCN revisions. This is a direct doubling of the FP32 figure however only if FP16 operations can be rapidly scheduled, similar again to FP32.
> 
> Simple examples of rapid scheduled operations are vector, matrix and colour operations where at least 4 output values (xyzw, rgba etc) are required. Most single-output operations and branch conditions get executed on the scalar ALUs as they have a x4 clock multiplier (which is an area you see GCN massively exceeding pascal in compute as said architecture lacks these ALUs).
> Will be getting to it hopefully, though the current heat isn't helping...


Post of the day... have some mooor rep! +1
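The quoted per-clock rates turn into peak-TFLOPS figures with one line of arithmetic. A small sketch using Vega 64's public shader count (4096 ALUs); the 1.6GHz clock is just an illustrative value, not a claim about any particular card:

```python
# Peak throughput per the quoted post: 2 FP32 ops per non-scalar ALU per clock,
# FP16 at double the FP32 rate (packed math), FP64 at 1/16 the FP32 rate.
def gcn_tflops(clock_ghz: float, alus: int = 4096) -> dict:
    fp32 = clock_ghz * alus * 2 / 1000  # GHz * ALUs * 2 ops, scaled to TFLOPS
    return {
        "fp32": fp32,
        "fp16": fp32 * 2,   # only if FP16 ops can be rapidly scheduled
        "fp64": fp32 / 16,  # scalar-ALU limited
    }

t = gcn_tflops(1.6)
print(f'FP32 {t["fp32"]:.1f} / FP16 {t["fp16"]:.1f} / FP64 {t["fp64"]:.2f} TFLOPS')
```

As the quoted post stresses, these are ceilings; the FP32 and FP16 figures only hold when the shader workload keeps the vector ALUs fully scheduled.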








Quote:


> Originally Posted by *Rootax*
> 
> Maybe because the card throttles like crazy with the OC settings ?


Yes, it's this.

Tightening up a card's "average" clocks yields as much performance as one that boosts to the moon; the gaming experience is, to me, subjectively better as well. My best "set n' forget" settings have the average clock @ 1700 +/- 2 MHz once the card is under load. Some of the LC samples seem to have been binned a little better and can handle the 1752 boosting, but that is more the exception than the rule... though the sample size and self-reporting leave a lot to be desired as I make that claim.

I heard or read the other day that an MSI AIB card was stalled because a 1735 goal was proving extremely elusive... I'll see if I can find the source on it...
Quote:


> Originally Posted by *pmc25*
> 
> With hotspot temperatures that bad on the rear of the card, would it not be enough to buy some good thermal pads and an ek backplate? As long as there is air flow over the backplate.
> 
> To settle the question someone needs to do tests on waterblock without backplate, and with waterblock with ekwb backplate and thermal pads, and see if there is a significant difference in performance / stability / power consumption / oc headroom.


The day I WB'd my card... I shimmed the back-plate with some washers to act as standoffs... so the side 200mm fan could do a little work on it. As much as 60 percent of a component's heat energy is going to be lost through the traces into the board. "The whole thing is the answer" is as true today as it was 20 years ago... I recall there being some talk about a 3-fanned Toxic or some such which made a claim about a back-plate acting as a heat sink turning out to be less efficient than not having it on at all.

I like the back plate just to protect the back of the card; little SMT/SMD components are fragile, plus it looks sharp! May get around to painting it.








Quote:


> Originally Posted by *Reikoji*
> 
> http://www.3dmark.com/spy/2436480
> 
> New rig brings my best graphics score and improved overall, with barely anything changed in wattman:
> 
> 1667/1752 frequency
> 1167/1250 mV
> 1105 mem, 950 mV
> +50% power
> 
> http://www.3dmark.com/fs/13705718
> 
> Also ancient firestrike




Very similar scores. Your Vega is the LC version? (My P7 is a good bit lower than stock.)


----------



## gupsterg

Quote:


> Originally Posted by *chris89*


Chris89,

I prefer a component to be cooled rather than not, so I agree on that point about improving a cooling solution and/or case airflow.

I totally disagree that 'Hotspot' is a reading from the VRM, coils, etc; it is most probably a part of the GPU die. It makes no sense for AMD to be taking readings from other areas, as stated in an earlier post.

A lot of what is being measured in the video can take these temps in its 'stride'.

For example a doubler is only multiplexing a PWM signal (below is a random example from datasheet of driver/doubler).




Some motherboards have doublers/drivers on the back side of the PCB with zero airflow and it's a non-issue.

Caps and chokes, again, can take quite high temps. IMO the mosfet is probably the only element I'd be concerned about cooling, and it is cooled by the ref PCB/cooler, etc. It is not alarmingly hot, outside of operating temps, or reaching PowerPlay throttling temps from what I have noted of member shares.

GPU, HBM and 'HotSpot' seem the points of concern when pushing these cards. As highlighted by members that have gone WC or Morpheus 2 they have seen good gains in cooling and have headroom not to throttle.


----------



## pengs

Quote:


> Originally Posted by *Soggysilicon*
> 
> I heard or read... the other day that MSI AIB was stalled due to a 1735 goal being extremely illusive... I'll see if I can find the source on it...


Say what?


----------



## Soggysilicon

Quote:


> Originally Posted by *pengs*
> 
> Say what?


Which part?



www.tomshardware.co.uk/amd-vega-custom-graphics-cards-problems,news-56813.html


----------



## pengs

Quote:


> Originally Posted by *Soggysilicon*
> 
> Which part?


Oh I see, nm. Thought you wrote AB not AIB. Was thinking you were talking about Afterburner.


----------



## Soggysilicon

Quote:


> Originally Posted by *pengs*
> 
> AB was stalled ect.
> 
> What do you mean?


AIB: add-in board, i.e. manufacturers and rebranders such as ASUS, MSI, Gigabyte... MSI noted it looks like they are skipping Vega for a couple of reasons... that's what I read on Tom's UK... what I _heard_ was that a "1733" MHz stable clock was an issue... that part was speculation. Link in the above post.









The other issue was the 40 micrometer HBM height gap on some samples, which makes mounting a heatsink an issue in a six-sigma environment.


----------



## PontiacGTX

Quote:


> Originally Posted by *chris89*


then the hotspot are the doublers?


----------



## asdkj1740

My Vega 64 is strange: given the same +50% power limit, I can never get above 330W during benchmarking and gaming like Fire Strike Ultra and BF1,
but running FurMark 0xAA 1080p my Vega can easily pull 390W.
It shows that my card is somewhat power throttling in general usage...
So how do I get as much power as FurMark in general usage, to prevent power throttling from harming performance?


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> then the hotspot are the doublers?


Yep :


----------



## GroupB

I notice I have a lower hotspot than most of you guys, running in the 50C range at 1682MHz/1100 HBM, 50% power (32C core, 53C hotspot, 44C HBM). I have NO airflow going onto the GPU backplate and the backplate is not hot to the touch; it's rather cool when running the bench. I have the stock LE backplate; the only difference is I did not use the EK pads but Fujipoly Ultra Extreme at 17 W/mK (vs EK's, which are between 3 and 5 W/mK) on all the VRM (EK pad on the choke), and IC Diamond on the core, applied the regular way since I have a molded core (pea in the middle).

My guess is that if hotspot is not the direct VRM reading, it probably includes the VRM in an equation with other sensors, seeing that you guys with EK pads have a higher hotspot.

Cooling is done with a 3x140mm and a 2x140mm, and I have a 6700K OC in the same loop.

PS: hiding chris's posts is a must, all that nonsense with a page of pictures...
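For what it's worth, the pad-conductivity difference is easy to put numbers on with plain conduction math: the temperature drop across a pad is dT = Q * thickness / (k * area). The power, pad thickness, and contact area below are illustrative assumptions, not measurements from a Vega board:

```python
# Temperature drop across a thermal pad via 1-D conduction.
# Inputs are illustrative: 20 W through a 1.5 mm pad covering 2 cm^2 of VRM.
def pad_delta_t(power_w: float, thickness_m: float, k_w_mk: float, area_m2: float) -> float:
    """Steady-state temperature difference (C) across the pad."""
    return power_w * thickness_m / (k_w_mk * area_m2)

for k in (4.0, 17.0):  # a typical stock-kit pad vs Fujipoly Ultra Extreme
    print(f'{k:>4} W/mK pad: {pad_delta_t(20, 0.0015, k, 2e-4):.1f} C drop across the pad')
```

Even with made-up but plausible inputs, the high-conductivity pad cuts the drop across the pad several-fold, which is consistent with the lower readings reported here.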


----------



## Caldeio

Quote:


> Originally Posted by *GroupB*
> 
> I notice I have a lower hotspot than must of you guys, running in the 50C range at 1682 mhz/1100 hbm , 50% power (32C core,53C hotspot ,44 hbm) I have NO airflow going into the gpu backplate and the backplate is not hot to touch and rather cool when running the bench, I have the stock LE backplate the only difference is I did not use EK pad but the Fujipoly Ultra Extreme at 17 W/mK vs ek that are between 3 and 5 W/mK on all the VRM ( ek pad on the choke) and IC diamond on the core and I apply it the regular way since I have a molded core ( pea in the middle).
> 
> My guess is if hotspot is not the direct VRM reading, it probably include the vrm in a equation with other sensor, seeing you guy with ek pad have a higher hotspot.
> 
> Cooling is done with a 3X140mm and a 2X140mm and I have a 6700k OC in the same loop.
> 
> PS: hiding chris post is a must, all those no sense with a page of picture ...


what size of fuji strip do you recommend?

Do you think the EK backplate is worth it over the LE one? Most people only have the plastic one so they would have to get EK. The plastic one sure gets hot...


----------



## Roboyto

Quote:


> Originally Posted by *GroupB*
> 
> I notice I have a lower hotspot than must of you guys, running in the 50C range at 1682 mhz/1100 hbm , 50% power (32C core,53C hotspot ,44 hbm) I have NO airflow going into the gpu backplate and the backplate is not hot to touch and rather cool when running the bench, I have the stock LE backplate the only difference is I did not use EK pad but the Fujipoly Ultra Extreme at 17 W/mK vs ek that are between 3 and 5 W/mK on all the VRM ( ek pad on the choke) and IC diamond on the core and I apply it the regular way since I have a molded core ( pea in the middle).
> 
> My guess is if hotspot is not the direct VRM reading, it probably include the vrm in a equation with other sensor, seeing you guy with ek pad have a higher hotspot.
> 
> Cooling is done with a 3X140mm and a 2X140mm and I have a 6700k OC in the same loop.
> 
> PS: hiding chris post is a must, all those no sense with a page of picture ...


I used the 17 fujis on a 290 I had with XSPC block/backplate. That enabled VRM1 to run a few degrees under core temperature; which was more or less unheard of with Hawaii. I was running a very small GPU only loop in this scenario. 1/4" ID tubing, single 240 standard thickness radiator and weak, but quiet, SilenX Effizio fans.

That backplate did get pretty toasty though as I didn't have any airflow to it.

I thought about using the stock backplate..but with the bracket cutout it is so ugly looking ?

Ya...I blocked his posts. Shenanigans to the Nth degree there.


----------



## gupsterg

Quote:


> Originally Posted by *PontiacGTX*
> 
> then the hotspot are the doublers?


Doublers have no integrated temp sensor.

A lot of the components being highlighted don't.

It makes no sense for anyone to place sensors near those components.

What GroupB is sharing as data is pretty similar to what I have been reading as the experience of WC VEGA users.


----------



## GroupB

Quote:


> Originally Posted by *Caldeio*
> 
> what size of fuji strip do you recommend?
> 
> Do you think the EK backplate is worth it over the LE one? Most people only have the plastic one so they would have to get EK. The plastic one sure gets hot...


I don't know; I wanted the black stock one but ended up with an LE because the black cards were out of stock at launch, and the EK plate was too pricey for... a plate, really. I'm not sure what size you should get; I had a big one and still have a lot of it left. I'd say 50mm by 100mm should do it.

Quote:


> Originally Posted by *Roboyto*
> 
> I used the 17 fujis on a 290 I had with XSPC block/backplate. That enabled VRM1 to run a few degrees under core temperature; which was more or less unheard of with Hawaii. I was running a very small GPU only loop in this scenario. 1/4" ID tubing, single 240 standard thickness radiator and weak, but quiet, SilenX Effizio fans.
> 
> That backplate did get pretty toasty though as I didn't have any airflow to it.
> 
> I thought about using the stock backplate..but with the bracket cutout it is so ugly looking ?
> 
> Ya...I blocked his posts. Shenanigans to the Nth degree there.


That's exactly why I had the Extreme pads left over; both my R9 290s were on XSPC blocks.







I still mine with them at 1240 core/1350 memory 1.3V and temps are 55 core/60 vrm


----------



## biscuittea

Just bought a set of ML120 fans to replace the AF120s I had lying around. I've not thoroughly tested them but I'll report back if I see anything interesting.

Worth noting: every time I take off the Morpheus cooler I see this:


I lost one of the plastic washers that I'm supposed to use to secure the backplate. This is probably leading to uneven mounting pressure. Maybe that's why my hotspot temps are so high.


----------



## Soggysilicon

This Hot Spot rubbish is a *Nuthin-BurgeR.*


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> Very similar scores. Your Vega is the LC version? (My P7 is a good bit lower than stock.)


Yea Sapphire LC 64.


----------



## Paul17041993

Quote:


> Originally Posted by *chris89*
> 
> Yep :


again... wrong parts to cool, the doublers are on the _back_ of the PCB...



yellow ones are less important.

edit; also, the 'hotspot' could very well be the L2 crossbar and memory controller as that's where most of the power is apparently, however the temperature differences don't entirely add up...

Do those with high hotspot temps lack the moulded and shaven package? I have the moulded package and the 'hotspot' has never surpassed 65C...


----------



## Roboyto

Just did a little over an hour of Doom at Eyefinity 3K (5760x1080). Ultra settings used without Vsync to let the power usage run wild







generally sucking down a little over 300W GPU only according to GPU-Z.

First real game session I've had and it ran flawlessly, hanging between 100-140 FPS without a hitch. Maybe this was added with 17.9.2? But I noticed during loading screens that the GPU goes into a low clock/power mode, so you don't get 1000 FPS and vicious coil whine








Pretty happy with this result, but will need to verify with some other games and do some more tweaking. If I can get UVOC steady to 1700/1100 I would be quite pleased.

*Wattman Settings:*

P6: 1602/1075mV

P7: 1702/1150mV

HBM: 1100/1000mV

Power: 50%

*Max Temps:*

Core: 44

HBM: 53

HS: 66

CPU R7 1700: 54

HBM stayed pegged at 1100. Core was pretty steady, with a minimum of ~1660, and peaked at 1689/1693... HWiNFO and GPU-Z logged different peaks. There also appears to be a bug where GPU-Z reports a spiked HBM temp; mine logged something crazy like 1600C max.



Hot Spot temps are up a bit from running the Heaven loop, not terribly surprising. I may pop the block off this thing one more time to re-apply CLU/Cnaut to see what happens, as none of my old screenshots had GPU-Z included in them. If temps come down all around then we would have a better idea of where exactly the HS is.

EK backplate instructions show placing pads on the rear of the die and VRMs... not surprising, as it's standard protocol in my experience:



I'm contemplating grabbing something like this to stick on the backside of die just to see what would happen:


----------



## GroupB

There's no thermal pad on the stock LE backplate and no hotspot problem... I don't think it has anything to do with the back of the card. From what I read it's a sensor inside the die.

What I want to know is the difference in temperature between a molded core and the other kind... let's say among those who have the EK block. I'm wondering if higher hotspots are more related to unmolded chips or not (trapped hot air between HBM and core, maybe close to that sensor).

So 32C core / 44C HBM / 53C hotspot here, 1682/1100 @ 1100mV, 50% power, molded core, 17 W/mK VRM thermal pads, stock EK on the choke and IC Diamond on the core


----------



## Soggysilicon

Quote:


> Originally Posted by *GroupB*
> 
> There no thermal pad on stock LE backplate and no hotspot problem ... I dont think its have something to do with the back of the card. From what I read is a sensor inside the die
> 
> What I want to know is the difference in temperature between mold core and the other one ...lets say on those who have EK block . Im wondering if higher hotspot are more related to un mold chip or not ( trap hot air between hbm and core maybe close to that sensor)
> 
> So 32C core / 44C hbm / 53 Hotspot here 1682/1100 @1100mv 50% power, Molded core , 17mk vrm thermal pad, ek stock on choke and IC diamond on core


It's this, it was never not this, it has been and always will be this.









The hot spot corresponds with GPU/MEM loads; it never shows any signs of being anything other than this. The temps respond similarly to the core and RAM temps, tit for tat... Just like a thermal diode down in the core closer to the interposer would... 'cause that's the only place that makes sense to log temps from.

I haven't seen a single commenter running an EK WB that didn't take time and care when mounting their Vega.

The biggest deltas are folks changing out thermal materials, pads, or doin' some custom trick'n ; but no one runnin' a half decent loop is cry'n about hot spots!


----------



## rthomp

*rthomp Radeon RX Vega 64 LC Owner*


----------



## gamervivek

Quote:


> Originally Posted by *Paul17041993*
> 
> again... wrong parts to cool, the doublers are on the _back_ of the PCB...
> 
> 
> 
> yellow ones are less important.
> 
> edit; also, the 'hotspot' could very well be the L2 crossbar and memory controller as that's where most of the power is apparently, however the temperature differences don't entirely add up...
> 
> Do those with high hotspot temps lack the moulded and shaven package? I have the moulded package and the 'hotspot' has never surpassed 65C...


Molded package with Samsung HBM, but I get very high temperatures on Hotspot with a higher power limit, easily a 30-40C delta to the core.

I've downclocked and overclocked the memory but it doesn't make a difference; the power limit does.


----------



## Reikoji

some GPGPU


----------



## chris89

Quote:


> Originally Posted by *gamervivek*
> 
> Molded package with Samsung hbm but get very high temperatures on Hotspot with higher power limit, easily a 30-40C delta to the core.
> 
> I've downclocked and overclocked the memory but doesn't make a difference, the power limit does.


There are doublers on the front & back of the PCB. Use thermal pads to contact them with the backplate; that will help a little. You need to cool & contact the other side first to see a 50% reduction in heat & power.
Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> some GPGPU


Epic results my friend... got any pics of your system & gpu?

*Check out the Prototype AMD VEGA 56 Dudes !! It looks cool & like a Nano...*


Spoiler: Warning: Spoiler!


----------



## Chaoz

That looks stupid. They should've just made it a 2-fan version, imho.

Also, if you're quoting more than one person, use the multi-quote option; it's there for a reason, so you don't post each reply in a separate post.


----------



## jehovah3003

After receiving my new Vega 64 Liquid, I noticed I had the exact same noise coming from the radiator block. I tried unmounting the block, held it in my hand and powered on the PC, and surprise: NO noise AT ALL.

COMPLETELY SILENT.

So I tried mounting it in different placements and orientations, but nothing really helped except not putting the screws all the way in and putting foam between the block and the case; that decreased the noise by about 70%, but I can still hear it (so annoying, I can hear it through my headset). Any tips?

I wanted to try zip ties but they aren't long enough; I'll order double-sided tape and try it that way...


----------



## Paul17041993

Quote:


> Originally Posted by *GroupB*
> 
> What I want to know is the difference in temperature between mold core and the other one ...lets say on those who have EK block . Im wondering if higher hotspot are more related to un mold chip or not ( trap hot air between hbm and core maybe close to that sensor)


This is exactly what I asked already: if people are having hotspot issues with non-mould+shave dies, then the temperature increase could be attributed to the thicker GPU silicon. With the moulded dies the whole package gets polished down to the same height, which is evident from the curved threading left by the machine;



Quote:


> Originally Posted by *gamervivek*
> 
> Molded package with Samsung hbm but get very high temperatures on Hotspot with higher power limit, easily a30-40C delta to the core.
> 
> I've downclocked and overclocked the memory but doesn't make a difference, the power limit does.


Hmmm, alright, so that idea seems to be null...? But what paste are you using, and with what cooling?

Quote:


> Originally Posted by *chris89*


Pretty much the exact same PCB configuration as the reference and FE cards; they just cut off the end that holds the blower fan (that's literally all that PCB space is used for, really).


----------



## raysheri

I posted my settings and temps to date on page 232. I can confirm that my Vega 56 does not have the molded package (I believe most Vega 56s don't), and as per the advice given about this package, I did not remove any of the previous TIM from between the GPU and the HBM or between the two HBM stacks; I only removed and cleaned the TIM from the top surfaces of both.
With the EK block, using the EK Ectotherm TIM that came with it, I get GPU temps in the mid 40s when benchmarking; HBM runs about 3 degrees above that, and the hotspot runs about 15 degrees above that. For me this ratio and maximum is consistent in all benches.
I'm using the backplate that came with the card and am considering purchasing the EK backplate, but I'm not sure it will give me any benefit.


----------



## Irev

Does anyone know of any review sites that have tested Vega 64s in CrossFire? I'd like to know exact power consumption figures so I can see if it's worth doing.


----------



## andreyb

Quote:


> Originally Posted by *andreyb*
> 
> Hi,
> 
> just installed EK waterblock + backplate on my Vega 64 Air. I am slightly concerned about the GPU Temperature (Hot Spot) running FireStrike Graphics Test 1 in loop. Card is overclocked 1600-1650/1100 1150/1200mv +50%PL and running high power Air BIOS. Backplate is also hot - barely can hold my finger on it opposite to gpu side.
> What do you think?


I have a molded chip. BTW, how many thermal pads did you get with the EK backplate? I had only two (1.5mm and 2mm thick) and I had to cut the first one lengthwise to cover the vertical VRM group; maybe that's the problem:


----------



## Chaoz

Quote:


> Originally Posted by *andreyb*
> 
> I have molded chip. BTW how many thermal pads did you have with EK backplate? I had only two (1.5mm and 2mm thick) and I had to cut the first one along to cover vertical VRM group and may be that's the problem:


There are no extra pads for the back of the PCB to make contact with the backplate. At least for me there weren't; it wasn't even listed in the manual. I use the stock backplate, btw. The EK one was too expensive for just a backplate.


----------



## andreyb

Quote:


> Originally Posted by *Chaoz*
> 
> There are no extra pads for the back of the PCB to make contact with the backplate. Well atleast for me there weren't. It wasn't even listed in the manual. I use the stock backplate, btw. The EK one was too expensive for just a backplate.


Yes, they were included in the backplate package, as I mentioned above.


----------



## Chaoz

Quote:


> Originally Posted by *andreyb*
> 
> Yes, they were included in the backplate package, as I mentioned above.


Oh, I didn't see that.
Plus, with the stock backplate you can still use the switches.


----------



## alanthecelt

OK,
I've had lots of instability on my mining rig... but I think I've cracked it.
I was running the 64 BIOS on my 56s; out of the box this is a good mod, as it ramps up the clocks etc.
However,
my god was my rig unstable.
I've now put 60 hours or so on the stock 56 BIOS: temp target 65, p6 852 @ 800 and p7 [email protected] with 945 ram,
power limit +50.

Currently dual mining Dagger/Pascal at 34/1000 average per card;
they run 35-37 on Dagger/Decred like this.
The key is they are rock solid now; random reboots and re-setting Wattman was balls. I can see barely any difference between running the 64 BIOS and the modded 56, except for stability.

I can only imagine this is also the case when gaming.


----------



## gamervivek

Quote:


> Originally Posted by *Paul17041993*
> 
> Hmmm, alright, so that idea seems to be null...? What paste are you using, though, and with what cooling?


The paste is some Cooler Master compound that I got with their CPU cooler, and the cooling is the reference cooler. I doubt either of them matters too much, since I'm seeing people with good cooling also get 30C and higher deltas to core on the hotspot.


----------



## owntecx

Does the RX Vega 56 use Hynix HBM?


----------



## gamervivek

Mine uses Samsung but many others, probably most, are Hynix.


----------



## Whatisthisfor

Quote:


> Originally Posted by *jehovah3003*
> 
> After receiving my new Vega 64 Liquid, i noticed i had the exact same noise coming from the radiator block, i tried to unmount the block, put it in my hand and power on the PC, and surprise, NO noise AT ALL.
> 
> COMPLETELY SILENT.
> 
> So i tried to mount it in differents placements, orientations but nothing helped really except if i don't put the screws all the way in with putting foam between the block and the case, that decreased the noise by like 70% but i can still hear it (so annoying i can hear it through my headset), any tips ?
> 
> I wanted to try with zip ties but they aren't long enough, i'll order double faced duct tape and try it that way...


Congrats on the LC one; I have one too, and I love it. Regarding noise, I can hear the pump too, but it's quiet enough not to be annoying. Maybe it's the metal case's natural frequency being excited by the pump's frequency? Forgive my bad English ;-)


----------



## PontiacGTX

http://www.guru3d.com/articles_pages/asus_radeon_rog_rx_vega_64_strix_8gb_review,10.html


----------



## poisson21

I have 2 Vega 64s with EK blocks/backplates, using the EK pads/TIM, and I've never seen my hotspot above 45°C.

Core temp never exceeds 37°C, and the VRMs are within 10°C of that.

No particular precautions or mods when installing the blocks and backplates; I just followed the instructions.

Both cards have these settings:

p6/p7: 1667MHz/1717MHz at 1167/1250mV

HBM: 1105MHz at "950mV"

With the power limit at 142%.

For the watercooling part, I run the GPUs in a parallel setup with a big MO-RA3 420 rad.


----------



## owntecx

Quote:


> Originally Posted by *gamervivek*
> 
> Mine uses Samsung but many others, probably most, are Hynix.


How did you check?


----------



## chris89

Does Vega have greater than 29% input-to-output power loss?

323.5 watts input to 249.25 watts output ..... 323.5 divided by 249.25 = 1.29789 (29.789% loss to thermals)


----------



## Newbie2009

Sapphire Trixx adds support for RX Vega series.

http://www.sapphiretech.com/catapage_tech.asp?cataid=291&lang=eng


----------



## chris89

Quote:


> Originally Posted by *biscuittea*
> 
> Just bought a set of ML120 fans to replace the AF120s I had lying around. I've not thoroughly tested them but I'll report back if I see anything interesting.
> 
> Worth noting : everytime I take off the Morpheus cooler I see this :
> 
> 
> I lost of one of the plastic washers that I'm supposed to use to secure the backplate. This is probably leading to uneven mounting pressure. Maybe that's why my hotspot temps are so high.


Nice, dude, looks like those chips are cooled. Gotta cool them all, though. Looking good.







Post more pics pretty please. I'm thinking about picking up 2x Morpheus coolers for my rx 480s.


----------



## diabetes

Quote:


> Originally Posted by *chris89*
> 
> 323.5 watts input to 249.25 watts output ..... 323.5 divided by 249.25 = 1.29789 (29.789% loss to thermals)


Absolute calculations are also interesting: 323.5 - 249.25 = 74.25W
Divide that by 12 phases and you dissipate 6.1875W per group of MosFETs (tiny and big FET combined). This is not a lot actually.
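The arithmetic above can be sketched in a couple of lines (a minimal sanity check only; the wattages and the 12-phase count are the figures quoted in the posts, not measurements):

```python
# Per-phase VRM dissipation estimate from the figures quoted above.
input_w = 323.5    # power at the connectors (from the earlier post)
output_w = 249.25  # power delivered downstream (from the earlier post)
phases = 12        # phase count assumed in this post

loss_w = input_w - output_w    # total conversion loss in watts
per_phase_w = loss_w / phases  # loss per MOSFET group

print(f"total loss: {loss_w} W")      # total loss: 74.25 W
print(f"per phase: {per_phase_w} W")  # per phase: 6.1875 W
```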


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> Sapphire Trixx adds support for RX Vega series.
> 
> http://www.sapphiretech.com/catapage_tech.asp?cataid=291&lang=eng


One question: does it allow modding the max power limit via software?


----------



## chris89

Quote:


> Originally Posted by *diabetes*
> 
> Absolute calculations are also interesting: 323.5 - 249.25 = 74.25W
> Divide that by 12 phases and you dissipate 6.1875W per group of MosFETs (tiny and big FET combined). This is not a lot actually.


Amazing insight sir, thank you so much. You know I love comments like this, just helpful happy guidance & insight..

Why can't everyone be like Lovely Lil' Diabetes ?







<3


----------



## Newbie2009

Quote:


> Originally Posted by *PontiacGTX*
> 
> one question does it allow modding via software the max power limit?


Download it and see


----------



## PontiacGTX

Quote:


> Originally Posted by *Newbie2009*
> 
> Download it and see


Maybe someone with more advanced hex-editing knowledge can find out where the power limit value is stored.







I don't have a Vega GPU, but I had the idea that this could allow a higher power limit and override the BIOS limit (I think there was a video on Gamers Nexus; did they modify the PowerPlay tables?).


----------



## chris89

Quote:


> Originally Posted by *PontiacGTX*
> 
> Maybe someone with more advanced hex-editing knowledge can find out where the power limit value is stored.
> 
> 
> 
> 
> 
> 
> 
> I don't have a Vega GPU, but I had the idea that this could allow a higher power limit and override the BIOS limit (I think there was a video on Gamers Nexus; did they modify the PowerPlay tables?).


@dagget3450 You may want to consider stickying this info.

*Warning*: these cannot be flashed yet, until a checksum correction tool is released. Until then, they're ready to rock!

Sapphire.Vega.64.500W-A.PowerLimit.88C.Limits.zip 136k .zip file


Sapphire.Vega.64.500W.TDC.PowerLimit.60C.Limits.zip 136k .zip file


Judging by the stock temperature limits alone, you could easily save nearly 10% power and spare the card as well...

I can make a much faster BIOS by setting the power limit wattage and TDC to "delimited", so there's no longer any need to fiddle with the power limit bar, then setting the temperature limits down to 88C for safety and letting it run all day; you'd see marked improvements across the board in efficiency and performance. The power limit is what holds back the fps, not to mention bouncing off the temperature limits all day. It needs some BIOS perfecting.

578C total stock
528C total if set all 88C
504C total if set all 84C
450C total if set all 75C
384C total if set all 64C
360C total if set all 60C

Save - 9.47% 88C
Save - 14.68% 84C
Save - 28.44% 75C
Save - 50.52% 64C
Save - 60.55% 60C

88C From 320 watts down to 292 watts.
84C From 320 watts down to 279 watts.
75C From 320 watts down to 249 watts.
64C From 320 watts down to 212 watts.
60C From 320 watts down to 199 watts.

I can help; I can change it, no problem. The problem is the checksum correction: we need a tool that saves with a corrected checksum so the ROM is flashable. Simply download HxD, open your .rom and search for 32 00 ... there may be many matches, so you would have to know the other values near it to confirm it's the correct location.

32 00 (50%) USHORT usPowerControlLimit;

92 05 (250%)

5F 00 (95°C) USHORT usTemperatureLimitHBM;

4B 00 (75°C)

54 00 (84°C)

DC 00 (220W) USHORT usSocketPowerLimit;
DC 00 (220W) USHORT usBatteryPowerLimit;
DC 00 (220W) USHORT usSmallPowerLimit;
2C 01 (300A) USHORT usTdcLimit;
00 00 USHORT usEdcLimit;
59 00 (89°C) USHORT usSoftwareShutdownTemp;
69 00 (105°C) USHORT usTemperatureLimitHotSpot;
49 00 (73°C) USHORT usTemperatureLimitLiquid1;
49 00 (73°C) USHORT usTemperatureLimitLiquid2;
5F 00 (95°C) USHORT usTemperatureLimitHBM;
73 00 (115°C) USHORT usTemperatureLimitVrSoc;
73 00 (115°C) USHORT usTemperatureLimitVrMem;
64 00 (100°C) USHORT usTemperatureLimitPlx;
40 00 (64Ω??) USHORT usLoadLineResistance;
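For anyone following along in HxD: each field above is a little-endian USHORT, so the byte pair 32 00 decodes to 0x0032 = 50 and 2C 01 to 0x012C = 300. A minimal sketch of decoding such pairs (the byte string here is a made-up fragment illustrating the values above, not a real Vega ROM):

```python
import struct

# A few of the byte pairs from the table above, as they would appear in a hex editor.
# bytes.fromhex ignores the spaces between byte pairs.
fragment = bytes.fromhex("32 00 5F 00 DC 00 2C 01")

# "<4H" = four unsigned 16-bit values, little-endian.
values = struct.unpack("<4H", fragment)
print(values)  # (50, 95, 220, 300)
```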


----------



## Tame

Quote:


> Originally Posted by *chris89*
> 
> Does the Vega have greater than 29% input : output power loss?
> 
> 323.5 watts input to 249.25 watts output ..... 323.5 divided by 249.25 = 1.29789 (29.789% loss to thermals)


Your math is flawed. Efficiency is calculated as 249.25/323.5 = 0.77, so you get 77% out of what you put in. That's pretty normal; on very light loads GPU-Z reports my R9 290's efficiency as 87%, in Furmark with no OC 77%, with a heavy OC 70%.
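The corrected formula is easy to sanity-check (values taken from the posts above; efficiency is output over input, and the loss fraction is its complement):

```python
input_w = 323.5    # input power figure from the earlier post
output_w = 249.25  # output power figure from the earlier post

efficiency = output_w / input_w  # fraction of input delivered to the load
loss = 1 - efficiency            # fraction dissipated as heat in the VRM

print(f"efficiency: {efficiency:.1%}")  # efficiency: 77.0%
print(f"loss: {loss:.1%}")              # loss: 23.0%
```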


----------



## gamervivek

Quote:


> Originally Posted by *owntecx*
> 
> How did you check?


gpuz and the fact that the HBM is molded.


----------



## gupsterg

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Someone flash this and let me know how well it works out! I'm at work so I can't
> 
> 
> 
> 
> 
> 
> 
> So glad I won't be limited to a 70C limit on my WC version.


Don't get excited.

Fixing the checksum on a Vega VBIOS is not what is stopping us from using a modded VBIOS. As stated before (and what chris89 is not aware of), 'we' can't update the digital signature within the VBIOS to reflect the modifications 'we' are making.

chris89 may have read about this signature feature in past AMD GPU VBIOS threads, but those GPUs lacked a piece of hardware that Vega has. So at POST it detects an unauthorized VBIOS and does not allow the GPU to function.

On past AMD GPUs, only the driver referenced the signature in the VBIOS. AMD would initially have this check in driver releases and then stop. For a while Polaris owners used ToastyX's tool to patch the driver; the check may have stopped by now, like on other past AMD GPUs.

In the Vega VBIOS thread, members have flashed cards with simple mods and a fixed checksum = no go.

Members have used an external BIOS chip flash tool = no go.

Vega has been secured against VBIOS modding in several ways. It is in no way the same as past AMD GPUs.


----------



## gupsterg

@WannaBeOCer

This post covers why a digital signature has been in the VBIOS since the 7xxx GPUs: basically, as AMD uses a 'Hybrid' VBIOS, it is needed so the GOP module can validate the legacy section of the VBIOS in a UEFI CSM-off environment (i.e. pure UEFI). See also page 20, point 3 of this very old PDF from 2011. The 'hash' is not done by vendors AFAIK; AMD has a server that validates/creates a 'hash' that AIBs use in the legacy section (Vega also uses a 'Hybrid' VBIOS for backward compatibility). The feature highlighted in the Vega FE slide is implemented on RX Vega as well.



Quote:


> Originally Posted by *ducegt*
> 
> At least the registry override works. Haven't seen much interest in it. Is there anything meaningful that can't be changed through the reg method? Aside from memory straps...all I could think of.


The reg mods could stop working at any time, IMO. Some people are using them, some not. When you uninstall the driver you lose the reg mods and have to redo them, whereas with a VBIOS mod the driver just picks it up as-is. There are certain mods which were done on Hawaii/Fiji via the VRM controller; the VoltageObjectInfo table cannot be touched with reg mods (one such example was HBM voltage on Fiji). Recently a few Polaris modders have been making ASIC profiling table changes; again, a reg mod can't do that, and as you quite rightly said, VRAM_Info table stuff can't be touched.

You are limited to PowerPlay table mods only via the registry, AFAIK.


----------



## PontiacGTX

Quote:


> Originally Posted by *gupsterg*
> 
> Don't get excited.
> 
> Fixing checksum on VEGA VBIOS is not what is stopping using modded VBIOS. As stated before (and what Chris89 is not aware) is 'we' can't update digital signature within VBIOS to reflect modification 'we' are doing.
> 
> Chris89 may have read this signature feature within past AMD GPU VBIOS threads, but those GPUs lacked a piece of HW which VEGA has. So at post it detects unauthorized VBIOS and does not allow GPU to function.
> 
> On past AMD GPUs, only the driver referenced the signature in the VBIOS. AMD would initially have this check in driver releases and then stop. For a while Polaris owners used ToastyX's tool to patch the driver; the check may have stopped by now, like on other past AMD GPUs.
> 
> In the VEGA VBIOS thread members have flashed cards with simple mods and fixed checksum = no go.
> 
> Members have used external bios chip flash tool = no go.
> 
> Vega has been secured against VBIOS modding in several ways. It is in no way the same as past AMD GPUs.


So how can you flash the Vega 64 BIOS onto a Vega 56, then?


----------



## diabetes

Quote:


> Originally Posted by *PontiacGTX*
> 
> So how can you flash the Vega 64 BIOS onto a Vega 56, then?


As the original Vega 64 BIOS has a valid digital certificate, all you need to do is set the physical BIOS switch on the card to performance mode and flash the BIOS with atiwinflash.

The backup bios cannot be overwritten.


----------



## gupsterg

Quote:


> Originally Posted by *PontiacGTX*
> 
> So how can you flash the Vega 64 BIOS onto a Vega 56, then?


'You' have not modified the VBIOS; the VBIOS is intact, so it passes the check.

All RX Vegas share the same device ID etc., so you can flash another RX Vega VBIOS to one.

A 56 will not unlock to a 64, as the table that sets the SP count is not present in the 64 VBIOS; that is the full ASIC and does not need it. This is why on Fiji you used AtomTool to modify the GFXHarvesting table to unlock SPs. If you flash a Fury X VBIOS to a Fury it will not unlock SPs, even if the registers are left in a writable condition in the silicon. The GFXHarvesting table was also what The Stilt modded in the Hawaii mining VBIOS to drop the ROPs not needed for mining and save power, etc.

Vega FE uses different IDs. You cannot flash it with an RX Vega VBIOS even if you do not mod it, and you can't use the Vega FE VBIOS on an RX Vega either.

So the 'HW' security feature is probably also looking at the 'fusedID' of the ASIC.

An example of this in the past: if you flashed a Grenada VBIOS (390/X) to a Hawaii (290/X), it did not become Grenada to the driver, as the driver read the 'fusedID' of the ASIC.

IIRC (from my last read of the relevant thread), the modders playing with the ASIC profiling table in the Polaris VBIOS are also doing this to make an RX 480 appear as an RX 580 to the driver, by masking the ID (as to their success, no idea; I've been too busy with other things to keep abreast of that thread).


----------



## Reikoji

Quote:


> Originally Posted by *gamervivek*
> 
> gpuz and the fact that the HBM is molded.


Heh, I didn't pay any attention to that in GPU-Z. Mine is Samsung too, whew. I think most should be Samsung, though; AMD didn't have Hynix as an HBM2 supplier for a while, IIRC.


----------



## gupsterg

https://www.techpowerup.com/forums/threads/amd-rx-vega-56-to-vega-64-bios-flash-no-unlocked-shaders-improved-performance.236632/

https://www.techpowerup.com/236831/psa-flashing-rx-vega-56-with-rx-vega-64-bios-does-not-unlock-shaders

There are more examples on the web within forums, etc, etc ....


----------



## jearly410

With the 9.2 drivers I am getting better frametimes and fps with HBCC enabled in BF1 than on 9.1 with HBCC off; I haven't had time to compare against HBCC off on 9.2 yet. Playing at 3440x1440 I see VRAM usage around 4500MB; at 135% resolution scale it goes past 5500MB. No noticeable changes to RAM or virtual memory. I will test with HBCC off later on.

Overclocking on 9.2 in BF1 is still a problem. I'm getting crashes during long play sessions, which makes me think it's either power or temp related, even though the max core temp is ~70, HBM ~75, hotspot ~80, and it never goes past 290W. I will monitor more closely going forward.


----------



## rancor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Someone flash this and let me know how well it works out! I'm at work so I can't
> 
> 
> 
> 
> 
> 
> 
> So glad I won't be limited to a 70C limit on my WC version.


Flash it if you want, but it's just going to give you a black screen and not boot. You cannot modify the Vega BIOS and have it boot. (On Linux it is possible to inject a BIOS at boot time.) On top of that, 500A may risk damage to the VRMs.


----------



## redshoulder

My current problem is loud coil whine with a Vega 64. I sent it to the retailer for testing; they confirmed the fault (coil whine) and sent me another card, and I still have the same issue.

Both of my previous cards (GTX 780 SLI) had low coil whine, so I'm thinking the PSU is the issue.

So it looks like I may need to replace the PSU. Has anyone had the same issue?


----------



## Newbie2009

Quote:


> Originally Posted by *redshoulder*
> 
> So current problem is loud coil whine with vega 64 so I sent to retailer for testing and they found fault (coil whine) and sent me another and I have still same issue.
> 
> Both my previous cards (gtx780sli) had low coil whine so I'm thinking the psu is the issue.
> 
> So it looks like I may need to replace psu, anyone have the same issue?


I had a faulty PSU: some odd little niggles I put down to an older PC. Vega blew up the PSU. When I replaced it with a new one, all the little niggles were gone.

The odd thing was that 290X crossfire ran OK on it. So yeah, I wouldn't be surprised if the PSU is the issue.


----------



## biscuittea

Quote:


> Originally Posted by *redshoulder*
> 
> So current problem is loud coil whine with vega 64 so I sent to retailer for testing and they found fault (coil whine) and sent me another and I have still same issue.
> 
> Both my previous cards (gtx780sli) had low coil whine so I'm thinking the psu is the issue.
> 
> So it looks like I may need to replace psu, anyone have the same issue?


Have you tried this (if your PSU has enough slots):

https://www.reddit.com/r/6y5luf/how_i_fixed_coil_whine_on_my_sapphire_rx_vega_64/


----------



## redshoulder

Yes, I have read this before and tried both methods: both daisy-chained and separate cables.

I have 2 options, but I should find out more from the retailer tomorrow...

1. Send back the GPU for a full refund and then RMA the Corsair replacement PSU. Total cost: 20 Euro postage.

Cons: Corsair may not find a fault, and even with a new PSU I may still have the same issue.

2. Keep the GPU and get a new PSU (Seasonic Titanium). Total cost: 150-250 Euro.

Cons: the new PSU may still have the same issue, and my no-quibble return period for the GPU (14 days) will expire.


----------



## Trender07

Quote:


> Originally Posted by *redshoulder*
> 
> Yes, I have read this before and tried both methods. Both daisy chain and separate cables
> 
> I have 2 options, but should find out more from retailer tomorrow...
> 
> 1. Send back GPU for full refund and then RMA Corsair replacement PSU. Total cost 20 Euro postage
> 
> Cons: Corsair may not find fault and even if I get new PSU may still have same issue
> 
> 2. Keep GPU and get new PSU (Seasonic Titanium) Total Cost 150-250Euro
> 
> Cons: New PSU may still have same issue and my no-quibble return period expires to return gpu(14 days)


Just saying, but maybe just live with the coil whine.
My RX 480 had a bit of coil whine.
My "old" PSU with my GTX 1080 had loud, terrible coil whine.
I changed the PSU for a new and expensive one: same terrible coil whine, so I just accepted it.
Now with my Vega I also have coil whine, so meh, although this one whines less than my MSI 1080 did.


----------



## GroupB

Quote:


> Originally Posted by *andreyb*
> 
> I have molded chip. BTW how many thermal pads did you have with EK backplate? I had only two (1.5mm and 2mm thick) and I had to cut the first one along to cover vertical VRM group and may be that's the problem:


Just looking at your paste picture, it looks like your high temps may be caused by not using enough paste on the core; your X is kinda thin there. If I were you I'd repaste. Remember a GPU is not a CPU: on a GPU it's always better to put on more than not enough. Plus, you have a molded core, so forget the multiple Xs and do the pea method or a single big X (I did the pea on mine twice, because I forgot the thermal pad the first time, and the coverage was very good). I really think multiple Xs on a molded core is a very bad idea, as it creates air pockets.


----------



## Tgrove

What a comment to make


----------



## twan69666

Quote:


> Originally Posted by *redshoulder*
> 
> Yes, I have read this before and tried both methods. Both daisy chain and separate cables
> 
> I have 2 options, but should find out more from retailer tomorrow...
> 
> 1. Send back GPU for full refund and then RMA Corsair replacement PSU. Total cost 20 Euro postage
> 
> Cons: Corsair may not find fault and even if I get new PSU may still have same issue
> 
> 2. Keep GPU and get new PSU (Seasonic Titanium) Total Cost 150-250Euro
> 
> Cons: New PSU may still have same issue and my no-quibble return period expires to return gpu(14 days)


What about a third option: setting a frame/fps limiter? I have only heard coil whine once, and it was in a loading screen where the FPS pegged at 1000 or something. I just set the limit to my monitor's refresh rate (75Hz for me). Nothing since.


----------



## Reikoji

Hmm, scratch that, need to test more.

OK... it seems the recent GPU-Z version doesn't spool up the GPU properly if HBCC is enabled. If you then close and re-open GPU-Z with HBCC still enabled, the computer crashes.

At least it does for me.


----------



## Paul17041993

Quote:


> Originally Posted by *andreyb*
> 
> I have molded chip. BTW how many thermal pads did you have with EK backplate? I had only two (1.5mm and 2mm thick) and I had to cut the first one along to cover vertical VRM group and may be that's the problem:


You should have a square 2mm pad that goes directly behind the GPU, where the crossbar would be; then you should have enough 1.5mm strips to go along both rows of VRMs, but you really only need to make contact with the multiplier chips (the little black square ICs).

Check the manual for how many strips you're supposed to have and where to use them; if the manual is missing, file a warranty claim for missing hardware (the manual counts as hardware).

Quote:


> Originally Posted by *redshoulder*
> 
> So current problem is loud coil whine with vega 64 so I sent to retailer for testing and they found fault (coil whine) and sent me another and I have still same issue.
> 
> Both my previous cards (gtx780sli) had low coil whine so I'm thinking the psu is the issue.
> 
> So it looks like I may need to replace psu, anyone have the same issue?


How loud is it actually, and does it come directly from the card? There should only be a very slight amount of whine that you might hear at low fan speeds on random occasions; this is normal for high-power hardware. You're basically guaranteed to hear a fraction of whine through the motherboard as well, but that's usually easily masked by balancing the volume of your audio hardware.


----------



## Soggysilicon

Quote:


> Originally Posted by *GroupB*
> 
> Just looking at your paste picture ... it look like your high temps problem maybe cause by not using enough paste on the core ... your X is kinda weak there If I were you Ill try repaste, remember GPU is not a cpu its always better to put more than not enough on a gpu , plus you have a mold core forget the multiple X and do the pea or a big single X ( did the pea on mine 2 time cause I forgot the thermal pad first time and the coverage was very good). I really think the multiple X on a mold core is a very bad idea creating air pocket.


We could have an entire thread dedicated to TIM application theory. When I moved over to lapping chips and heat sinks I went with the razor-blade (thin spread) method, which worked really well with soldered IHSes and water blocks. Vega was a mirror finish... I used a thin spread, dot, and X on the three surfaces with the EK block, with AS-5. I think I went this route because EK doesn't have (and I have yet to see) a recommendation for die pressure; that being the case, "just a little more than normal" was the right move for me. Squeeze the goo out just beyond the edge of the chip: not so much that it spurgles all over the place, but enough to be reasonably confident there isn't some huge potato of an air pocket in the center.

TIM is one of those things where, in my experience, people tend to favor whatever has worked for them in the past. I keep a couple of tubes of AS around, plus a junk drawer with all the freebie MX tubes and whatnot that usually come bundled. I've even mixed them from different manufacturers before... alchemy at work!


----------



## madmanmarz

I just want to add that my original assumption that the hotspot was the VRMs was wrong, due not only to what I've read here, but also to the fact that my hotspot temps did not drop AT ALL after adding a 120mm fan blowing on the air-cooled portion that cools the rest of the card (Alphacool Nexxxos GPX waterblock). The same additional fan made a huge difference for VRM temps on my old 290, however.

Next I will try turning up all my radiator fans to see if that makes a difference with hotspot!


----------



## redshoulder

Regarding the coil whine: I already have coil whine with my 780 SLI, so I deal with it already.
With the Vega 64, on the other hand, it's 3x louder (even with vsync), so the fan speed has to reach 2500-3000rpm before it gets drowned out.


----------



## pmc25

Re: TIM and application, I really prefer CL LU. There's only one method of application, performance is great, and provided you have a steady hand, there's minimal to no danger.


----------



## Chaoz

Quote:


> Originally Posted by *redshoulder*
> 
> So current problem is loud coil whine with vega 64 so I sent to retailer for testing and they found fault (coil whine) and sent me another and I have still same issue.
> 
> Both my previous cards (gtx780sli) had low coil whine so I'm thinking the psu is the issue.
> 
> So it looks like I may need to replace psu, anyone have the same issue?


My Sapphire Vega 64 whines like crazy when I disable FreeSync. I have it hooked up to 2 different PCIe ports on my PSU, not daisy chained, and that doesn't fix it.

It only whines in-game at 99-100% usage, not in benchmarks.

With FreeSync enabled the card doesn't run at 99-100%, but up to 70-80%. I run my 64 in balanced mode.

None of my previous GPUs ever had coil whine, luckily.


----------



## Medusa666

I just cleaned my RX Vega 64 die, and after studying the reflection I noticed a minimal scratch/mark on one of the HBM modules.

I used only a Q-tip and a paper towel together with Arctic Clean, and I was extremely gentle when I did this. However, is it possible that I broke the memory somehow? I will reassemble the card in a few hours, so I'm a bit nervous now :( The scratch, or mark, is extremely small, looks like a dust particle, and can only be seen in a specific light reflection.


----------



## Chaoz

Quote:


> Originally Posted by *Medusa666*
> 
> 
> 
> I just cleaned my RX Vega 64 die and after studying the reflection I noticed like a minimal scratch / mark on one of the HBM modules.
> 
> I used only Qtip and paper towel together with Arctic clean, I was extremely gentle when I did this. However, is it possible that I broke the memory somehow? I will assemble the card in a few hours so I'm a bit nervous now : ( The scratch, or mark, is extremely small, looks like a dust particle, and can only be seen in specific light reflection.


I wouldn't worry about it. Probably happened in the factory. Did it work before you removed the cooler?


----------



## Medusa666

Quote:


> Originally Posted by *Chaoz*
> 
> I wouldn't worry about it. Probably happened in the factory. Did it work before you removed the cooler?


Yeah, the card worked perfectly. I'm going to reassemble it today, so hopefully it will be fine. I didn't use anything that could have scratched that surface; a Q-tip and paper shouldn't be able to, and I applied very little pressure when cleaning.


----------



## Chaoz

Quote:


> Originally Posted by *Medusa666*
> 
> Yeah the card worked perfectly, I'm going to assemble it today again so hopefully it will be fine. I didn't use anything that could have scratched that surface, qtip and paper shouldn't be able to, I applied very little pressure when cleaning.


You should be fine then. I doubt that scratch, which you can't even see in the pic, would cause the GPU to break.

Just make sure you use the X method to apply TIM on each die.


----------



## andreyb

Quote:


> Originally Posted by *GroupB*
> 
> Just looking at your paste picture ... it look like your high temps problem maybe cause by not using enough paste on the core ... your X is kinda weak there If I were you Ill try repaste, remember GPU is not a cpu its always better to put more than not enough on a gpu , plus you have a mold core forget the multiple X and do the pea or a big single X ( did the pea on mine 2 time cause I forgot the thermal pad first time and the coverage was very good). I really think the multiple X on a mold core is a very bad idea creating air pocket.


Quote:


> Originally Posted by *Paul17041993*
> 
> You should have a square 2mm pad that goes directly behind the GPU where the crossbar would be, then you should have enough 1.5mm strips to go along both rows of VRMs, but you only really need to make contact with the multiplier chips (little black square ICs).
> 
> Check the manual for how many strips you're supposed to have and where to use them. If the manual is missing then do a warranty claim for missing hardware (the manual counts as hardware).


I finished the waterblock re-install yesterday, just before your replies. I re-applied thermal paste (double X method), checked thermal pad adherence, and added one more pad, claimed from EK earlier, on the backplate. None of this actually helped; I get nearly the same temperatures on both core and hot spot. I am not sure, but maybe it would make sense to apply Coollaboratory Liquid Ultra next time. I don't want to drain and fill the loop once again just for the video card


----------



## Medusa666

Quote:


> Originally Posted by *Chaoz*
> 
> You should be fine then. Doubt that scrath, which you can't even see in the pic, would cause the GPU to break.
> 
> Just make sure you use the X-method to apply TIM on each die.


Thank you for your advice.

I got the Thermal Grizzly Kryonaut paste, comes with a spreader, should I still use the X method on all three modules?


----------



## rdr09

Quote:


> Originally Posted by *andreyb*
> 
> I finished the waterblock re-install yesterday, just before your replies. I re-applied thermal paste (double X method), checked thermal pad adherence, and added one more pad, claimed from EK earlier, on the backplate. None of this actually helped; I get nearly the same temperatures on both core and hot spot. I am not sure, but maybe it would make sense to apply Coollaboratory Liquid Ultra next time. I don't want to drain and fill the loop once again just for the video card


How's the cpu temp? If it is unusually high, then could be a problem with the cooling system.


----------



## andreyb

Quote:


> Originally Posted by *rdr09*
> 
> How's the cpu temp? If it is unusually high, then could be a problem with the cooling system.


Quote:


> Originally Posted by *andreyb*
> 
> FireStrike Graphics Test 1 in loop.
> Card is overclocked 1600-1650/1100 +50%PL and running high power Air BIOS.


----------



## Chaoz

Quote:


> Originally Posted by *Medusa666*
> 
> Thank you for your advice.
> 
> I got the Thermal Grizzly Kryonaut paste, comes with a spreader, should I still use the X method on all three modules?


Np, I've used TG Kryonaut as well for mine. Yes, you make an X on all 3 dies. I don't even use the spreader brush thing.


----------



## rdr09

Quote:


> Originally Posted by *andreyb*


I saw a thermal image of the back of Vega under load, and the hottest part is right at the core. If you have a backplate, it might help to install thermal tape touching the plate to dissipate heat, and to direct air at it as well.

Alphacool (IIRC) was the only block maker whose block comes with an actively cooled backplate. Vega needs one.


----------



## Roboyto

Quote:


> Originally Posted by *Chaoz*
> 
> Np, I've used TG Kryonaut as well for mine. Yes, you make an X on all 3 dies. I don't even use the spreader brush thing.


Yep, X method on all 3 dies works great with Kryonaut. Experiencing excellent temps with my EK block and small loop.
Quote:


> Originally Posted by *rdr09*
> 
> I saw a thermal image of the back of Vega under load, and the hottest part is right at the core. If you have a backplate, it might help to install thermal tape touching the plate to dissipate heat, and to direct air at it as well.
> 
> Alphacool (IIRC) was the only block maker whose block comes with an actively cooled backplate. Vega needs one.


Kryographics had this contraption.



This would be swell with a big pad on the back of the die.


----------



## Chaoz

Quote:


> Originally Posted by *Roboyto*
> 
> Yep, X method on all 3 dies works great with Kryonaut. Experiencing excellent temps with my EK block and small loop.


My temps are great with my EK block. My 64 never goes over 40°C. I have quite a big loop, tho; I have a 360 and a 480 rad.


----------



## gamervivek

There is something funky with the card in that I can get higher scores on some runs, and after a reboot/shutdown never see them repeated again.

It's easily reproducible if you install a different driver.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> There is something funky with the card in that I can get higher scores on some runs, and after a reboot/shutdown never see them repeated again.
> 
> It's easily reproducible if you install a different driver.


Boost isn't static, and games probably don't load the GPU the same way if they aren't repeating the exact same scene.


----------



## gamervivek

Not a game, but the Superposition benchmark, and it isn't a boost difference since it shows you the card's clocks as well. I've seen a Reddit user say the same while using Fire Strike.


----------



## Roboyto

Quote:


> Originally Posted by *Chaoz*
> 
> My temps are great with my EK block. My 64 never goes over 40°C. I have quite a big loop, tho; I have a 360 and a 480 rad.


Yeah, you're a full 360 rad beyond my cooling capacity.
Quote:


> Originally Posted by *gamervivek*
> 
> Not a game but superposition benchmark and it isn't boost difference since it shows you the card's clocks as well. I've seen a reddit user say the same while using Firestrike.


I've noticed Superposition pushes the cards harder than other benches; it seems to hit the power limit faster. Also, if you have a crash/hang or need to reboot, you should try hitting reset in Wattman and then re-applying your settings; see what happens.


----------



## Medusa666

Quote:


> Originally Posted by *Chaoz*
> 
> Np, I've used TG Kryonaut as well for mine. Yes, you make an X on all 3 dies. I don't even use the spreader brush thing.


Thank you for your advice and help earlier, it calmed me down somewhat and I used your cross method for the thermal paste









I'm done now, I have installed the Raijintek Morpheus II on my RX Vega 64, and it all fits inside an Ncase M1.

Temps are great and a great improvement from stock, the card doesn't throttle at all and computer is dead silent with the 2x 120mm Noctua slim fans.

I liked the reference cooler a lot and I thought long and hard about doing this, unsure if it would even work, but it did, and I'm extremely happy with the results : )


----------



## Chaoz

Quote:


> Originally Posted by *Medusa666*
> 
> Thank you for your advice and help earlier, it calmed me down somewhat and I used your cross method for the thermal paste
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm done now, I have installed the Raijintek Morpheus II on my RX Vega 64, and it all fits inside an Ncase M1.
> 
> Temps are great and a great improvement from stock, the card doesn't throttle at all and computer is dead silent with the 2x 120mm Noctua slim fans.
> 
> I liked the reference cooler alot and I thought long and hard about doing this, unsure if it would even work ,but it did, and I'm extremely happy with the results : )


Np. Happy to help.

Yeah, the stock cooler is really bad, but looks really nice, tho. Have fun with the card.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Not a game but superposition benchmark and it isn't boost difference since it shows you the card's clocks as well. I've seen a reddit user say the same while using Firestrike.


Maybe it's different in each application.


----------



## GroupB

Quote:


> Originally Posted by *PontiacGTX*
> 
> maybe it is different on each applications


I tested 17.9.2 yesterday and did not like the auto-boost past the target you set (probably a bug). It crashed my games all the time when it tried to reach 1715MHz+ on 50% power at only 1100mV when I had set a target of 1662MHz... I don't need to reach 1700+; that's plenty of MHz for that game already. So I returned to 17.8.1, where the MHz stays at target. I wanted to try 17.9.2 for the 18% PUBG boost, and guess what, the fps was lower or about the same on 17.9.2 vs 17.8.1.

If this boosting past the target is not a bug but a feature, they have to work on it. My temps are in the 30s C core / 45 HBM, so maybe it tries to push further because of the cooling headroom, but it did not check the power limit or did not care about the undervolt, and pushed past stability. So far 17.8.1 is way better, since none of the bugs that 17.9.2 fixes affect me.


----------



## Ark-07

Hi all, I've joined the club. I didn't see any overclocking guides for the RX Vega 64 on the main page; can someone send me links if there are any? Also, a little confused: my default GPU clock is 1630MHz, isn't it meant to be 1546MHz?

Also, I ran my previous R9 Fury at 63 degrees max and a 59 degree target with no issues; I assume there is nothing wrong with doing that again? I did however increase the fan's max RPM from 2400 to 3000. Whatever happened to ASIC quality in GPU-Z? One last thing: is there any point using MSI Afterburner anymore? I mean, I have an MSI card, but the Wattman settings handle everything I want.


----------



## Newbie2009

Quote:


> Originally Posted by *Ark-07*
> 
> Hi all, I've joined the club. I didn't see any overclocking guides for the RX Vega 64 on the main page; can someone send me links if there are any? Also, a little confused: my default GPU clock is 1630MHz, isn't it meant to be 1546MHz?
> 
> Also, I ran my previous R9 Fury at 63 degrees max and a 59 degree target with no issues; I assume there is nothing wrong with doing that again? I did however increase the fan's max RPM from 2400 to 3000. Whatever happened to ASIC quality in GPU-Z? One last thing: is there any point using MSI Afterburner anymore? I mean, I have an MSI card, but the Wattman settings handle everything I want.


It's reading the boost clock. It will probably hit just under 1600 at stock under load with decent cooling.

Just a tip: my card will run stock clocks at 1000mV versus the stock 1200mV. The difference in power draw for the whole system is 470W peak vs 620W peak, so it will help with temps and the like.
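As a rough sketch of what that undervolt buys you, using only the whole-system peak figures quoted above (the split between the GPU and the rest of the system isn't known, so this is just the headline number):

```python
# Peak whole-system power draw quoted above, in watts.
stock_peak_w = 620      # card at the stock 1200 mV
undervolt_peak_w = 470  # same card undervolted to 1000 mV

savings_w = stock_peak_w - undervolt_peak_w
savings_pct = 100 * savings_w / stock_peak_w

print(f"{savings_w} W saved (~{savings_pct:.0f}% of peak system draw)")
```

Roughly a quarter of peak system draw gone for free, which is why the undervolt also pays off in temps and fan noise.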


----------



## Ark-07

Quote:


> Originally Posted by *Newbie2009*
> 
> It's reading the boost. It will probably hit just under 1600 @ stock under load with decent cooling.
> 
> Just a tip, My card will run stock clocks @ 1000mv versus 1200mv stock. Difference in power draw for while system is peak 470w vs peak 620w, so will help with temps and the likes.


I assume you mean the power? How do I go about changing that? I've never really messed with power settings outside of Wattman. I also forgot to ask what the switch on the card is (not the LED one, the other one) and how do I know what mode it's in?


----------



## Reikoji

I actually couldn't even hear this coil whine until I finally got some active fan control going. I don't think it's all that bad really; I'm used to loud noises.


----------



## Chaoz

Quote:


> Originally Posted by *Ark-07*
> 
> I assume you mean the power? How do I go changing that? Never really messed with power settings outside the wattman settings. I also forgot to ask what the switch on the card is? Not the led one but the other one and how do I know what mode or whatever its in?


You're talking about the switch above the Radeon logo, right?

That switch is the BIOS switch, so you can switch between the first and second BIOS. The way it is now is the first BIOS.

Also, the second BIOS draws a bit less power as well.


----------



## Roboyto

Quote:


> Originally Posted by *Ark-07*
> 
> Hi all, I've joined the club. I didn't see any overclocking guides for the RX Vega 64 on the main page; can someone send me links if there are any? Also, a little confused: my default GPU clock is 1630MHz, isn't it meant to be 1546MHz?
> 
> Also, I ran my previous R9 Fury at 63 degrees max and a 59 degree target with no issues; I assume there is nothing wrong with doing that again? I did however increase the fan's max RPM from 2400 to 3000. Whatever happened to ASIC quality in GPU-Z? One last thing: is there any point using MSI Afterburner anymore? I mean, I have an MSI card, but the Wattman settings handle everything I want.


1630 is meant to be the maximum capable boost clock, pending power/temperature limits. With the reference cooler and stock settings, you will not sustain those clock speeds consistently; not without high fan speed/noise anyway. Depending on the performance you desire and your tolerance for noise, your best bet is to undervolt and choose the fan speed that gets you the noise level you want. Most cards can drop somewhere between 0.1V-0.2V and get better than the stock performance you likely saw on the major review websites. It drastically cuts down on power draw and temperatures, and therefore fan speed/noise.

Depending on where you end up with your voltage, you may be able to overclock a little as well. WattMan is pretty robust as far as features go, but it has its quirks if you're trying to push the card to the brink, or hit the lowest voltage threshold for a desired clock speed. Your best bet is to reset and re-apply your desired settings every time you encounter a crash, glitch, hang, reboot, etc., otherwise the card and/or WattMan will act 'funny'.

Presently no other OC software works with Vega, so ATM it is your only option.

Any way you're going to try to get extra performance will require maxing the power slider to +50%. Combining this with an undervolt yields more performance, less power draw and lower temperatures than the plain stock config.

In WattMan, voltage/clock speed adjustments are only available on the top two P-states, 6 and 7. Stock settings put P7 at the maximum allowed voltage of 1200mV. If all you desire is stock speeds, then you can likely drop both the P6/P7 voltages to somewhere in the vicinity of 1000mV... pending how good your card is. You may attain an overclock with a fairly aggressive undervolt... hard to say... you'll have to tinker.

Personally I have been trying to eke out as much as I can since I have a full-cover waterblock. Temperatures aren't really a concern for me, so I am running 1075mV/1150mV on P6/P7 and the card is boosting between 1660/1690 in DOOM with the max clock set to 1702. Benchmarking, I have attained slightly higher clock speeds, but with higher voltages. I still need to tweak some more to see how low I can get the voltages for max performance.

The max sustainable GPU core clock is going to be in the low-to-mid 1700s from what I have seen. Anyone else can chime in, though, if they've breached this threshold.

As far as HBM is concerned, most Vega 64 cards seem to be capped around 1090-1110. BUT, from what I have seen, this is possible with an undervolt as well. My card hits 1100 HBM at 1000mV as opposed to the factory 1050mV; some people are hitting these speeds with less than that. HBM2 is fairly sensitive to temperature, so that is something to be mindful of, since core temperature can/will affect HBM2 temps and its performance. As temperatures rise, HBM2 timings loosen and performance will drop.

Vega 56, I believe, has a lower default HBM voltage and won't hit the same speeds as 64. I believe this is 'fixable' with a BIOS flash... someone else can chime in, as I haven't gone down the BIOS flash road... yet









As for target temps like you ran with your Fury... you will have trouble hitting those specific temperatures with the reference blower. If you have the LC version, you should be fine. If you want low temps, then you will have to keep stock clocks and shoot for the lowest voltage possible with +50% power.

As long as you're willing to spend some time tweaking, benchmarking, and recording your results to see what positively benefits performance... you should enjoy this GPU.

Not sure about the ASIC quality... other than that, I think I hit everything.
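The tuning approach above can be summarized as a config sketch. The clocks and voltages below are the illustrative numbers from this post (1075mV/1150mV on P6/P7, max clock 1702, HBM 1100 at 1000mV), not universal values; every card bins differently, and WattMan caps voltage at 1200mV:

```python
# Example WattMan profile based on the undervolt described above.
# All numbers are card-dependent examples, not guaranteed-stable settings.
profile = {
    "p6": {"clock_mhz": 1602, "voltage_mv": 1075},
    "p7": {"clock_mhz": 1702, "voltage_mv": 1150},
    "hbm": {"clock_mhz": 1100, "voltage_mv": 1000},
    "power_limit_pct": 50,  # max the slider when chasing performance
}

def sanity_check(p):
    """Reject obviously out-of-range values before dialing them in by hand."""
    assert p["p6"]["voltage_mv"] <= p["p7"]["voltage_mv"] <= 1200
    assert p["p6"]["clock_mhz"] < p["p7"]["clock_mhz"]
    assert -50 <= p["power_limit_pct"] <= 50
    return True

print(sanity_check(profile))
```

Nothing here is applied automatically; it's just a checklist form of "P6 stays at or below P7, nothing above 1200mV, power slider within its range" so you don't fat-finger a value while tinkering.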


----------



## MediocreKiller

How do I increase the Vega power limit to 100%? I have read many people bring up power tables and such, but no mention of how to actually do it. Can anyone help me out by telling me how, or linking a how-to? Thanks so much!


----------



## Soggysilicon

Quote:


> Originally Posted by *Ark-07*
> 
> Hi all ive joined the club, I didn't see any overclocking guides for the rx vega 64 on the main page can someone send me links if there are any. Also a little confused my default gpu clock is 1630mhz isn't it meant to be 1546mhz?


Not much in the way of guides, considering that the only "real" options are flashing the BIOS to get at the power, opening up Wattman, getting a six pack, a couple of hours... and edge-casing the settings until it stops crashing... take some screenies... post em up... $PROFIT$
Quote:


> Originally Posted by *Roboyto*
> 
> 1630 is meant to be the max capable boost clock; pending power/temperature limits. With reference cooler and stock settings, you will not sustain those clock speeds consistently; without high fan speed/noise anyway. Pending the performance you desire, and your toleration of noise, your best bet is to undervolt and choose the fan speed that gets you the noise level you want. Most cards can drop somewhere between .1V-.2V and get better than the stock performance you likely saw on the major review websites. It drastically cuts down on power draw, temperatures and therefore fan speed/noise.
> 
> Depending on where you end up with your voltage, you may be able to overclock a little as well. WattMan is pretty robust as far as features, but it has its quirks if you're trying to push the card to the brink, or hit the lowest voltage threshold for a desired clock speed. Best bet is to reset and re-apply your desired settings every time you encounter a crash, glitch, hang, reboot, etc otherwise the card and/or WattMan will act 'funny'.


Not so sure it's "just Wattman". I am convinced at this point that there are threshold frequencies and powers which, once set, bracket the state into a "set" of frequency and voltage solutions in some sort of feedforward control scheme. If Wattman or the driver craps out, the DP link drops, power goes down on the monitor... that bias set by the user gets lost, and bam... erratic zone again. Just my 2c...








Quote:


> Presently no other OC software works with Vega, so ATM it is your only option.
> 
> Anyway you're going to try to get extra performance will require maxing the power slider to +50%. Combining this with an undervolt yields more performance, less power draw and lower temperatures than plain stock config.
> 
> In Wattman your voltage/clock speed adjustments are only available to the top 2 'P' states; 6 and 7. Stock settings put P7 at max (allowed) voltage of 1200mv. If all you desire is stock speeds, then you can likely drop both P6/P7 those voltages to somewhere in the vicinity of 1000mv...pending how good your card is. You may attain an overclock with a fairly aggressive undervolt...hard to say...you'll have to tinker.
> 
> Personally I have been trying to eek out as much as I can since I have a full cover waterblock. Temperatures aren't really a concern for me, so I am running 1075mV/1150mV on P6/P7 and the card is boosting between 1660/1690 for DOOM with the max clock set to 1702. Benchmarking I have attained slightly higher clock speeds, but with higher voltages. I still need to tweak some more to see how low I can get the voltages for max performance.
> 
> Max sustainable GPU core clock is going to be in the low-mid 1700's from what I have seen. Anyone else can chime in though if they've breached this threshold.


On 17.9.2, using a 10-minute cumulative average polled every 2 seconds, 1700 (average) is solid for me. The card still boosts, and trying to control that is... to coin the SJW... "problematic".
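A cumulative average like the one described here (poll the core clock every 2 seconds for 10 minutes, then take the mean) can be sketched as below. `read_mhz` is a placeholder for whatever sensor read you actually use, e.g. parsing a GPU-Z or HWiNFO log:

```python
import time

def cumulative_average(samples):
    """Mean of the polled clock samples; 0.0 if nothing was collected."""
    samples = list(samples)
    return sum(samples) / len(samples) if samples else 0.0

def poll_clock(read_mhz, duration_s=600, interval_s=2):
    """Poll read_mhz() every interval_s seconds for duration_s seconds.

    read_mhz is a stand-in for your real sensor read; it should return
    the current core clock in MHz.
    """
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(read_mhz())
        time.sleep(interval_s)
    return samples

# A clock bouncing around 1700 averages out despite boost jitter:
print(cumulative_average([1690, 1712, 1698, 1705]))
```

The point of averaging over 300 samples is that Vega's boost jitter washes out, so "holds 1700 average" is a meaningful claim even when the instantaneous clock is bouncing around.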







Gamed on these settings the other night for about ~3 hours... without a hitch. Jayz2cents, the YouTuber with the ROG Strix: the thing was crashing all over the place... kind of a joke, throttling in the 1500s and low 1600s, sad sad benchies...











Quote:


> As far as HBM is concerned, most Vega 64 cards seem to be capped around 1090-1110. BUT, from what I have seen this is possible with an undervolt as well. My card hits 1100 HBM at 1000mV as opposed to factory 1050mV; Some people are hitting these speeds with less than that. HBM2 is fairly sensitive to temperature so that is something to be mindful of since core temperature can/will effect HBM2 temps and it's performance. As temperatures rise, HBM2 timings do as well and performance will drop.


I have only heard of one card that could "hold" 1110... a couple do 1105, most are 1100 or less. That 1110 is a bit of an urban forum legend... I don't think a screenie has been posted with those clocks running and holding a bench.
Quote:


> Vega 56, I believe, has a lower default HBM voltage and won't hit the same speeds as 64. I believe this is 'fixable' with a BIOS flash...someone else can chime in as I haven't gone down the BIOS flash road...yet


Most of us around here seem to think 56/64/64LC have binned HBM, in that order. I suspect it's to give some market coherency and SKU separation, as well as to justify some price point discrepancies.
Quote:


> Your target temps like you ran with your Fury...you will have trouble hitting those specific temperatures with the reference blower. If you have the LC version, you should be fine. If you want low temps, then you will have to keep stock clocks and shoot for lowest voltage possible with +50% power.
> 
> As long as you're willing to spend some time tweaking, benchmarking, and recording your results to see what positively benefits performance..you should enjoy this GPU.
> 
> Not sure about the ASIC quality...other than that, I think I hit everything.


I think there's only been 1 guy on the forums so far with the 64LC that could just boost the power target on stock and actually play anything without crashing... there is some lottery going on. Same guy that posted the ~7200 bench in SP4K. I do not know if he had FreeSync on at the time... I'm in the low 7000-7100s with FS/UE + HBCC on. Vega really, really shows how memory sensitive it is in this bench; simply closing programs is enough to nurse the score up a couple of points.


----------



## springs113

For those doing custom loop on their RX cards, what are the temps you all are seeing?


----------



## Chaoz

Quote:


> Originally Posted by *springs113*
> 
> For those doing custom loop on their RX cards, what are the temps you all are seeing?


My 64 doesn't go over 40°C.

I have a 360 and 480 rad, rest of my specs are below.


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> Not much in the way of guides considering that the only "real" options are flashing the bios to get to the power, opening up wattman, getting a six pack, a couple of hours... and edge case the settings until it stops crashing... take some screenies... post em up... $PROFIT$
> Not so sure its "just wattman", I am convinced at this point that there are threshold frequencies and powers which, once set, bracket the state into a "set" of frequency and voltage solutions in some sort of feedforward control scheme. If wattman / or driver crap out, the DP link drops, power goes down on the monitor... that bias set by the user gets lost; and bam... erratic zone again. Just my 2c...
> 
> 
> 
> 
> 
> 
> 
> 
> On 9.2, utilizing a 10 minute / polled ever 2 seconds cumulative average, 1700 (average) is solid for me. The card still boost, and trying to control that is... to coin the SJW... "problematic".
> 
> 
> 
> 
> 
> 
> 
> Gamed on these settings the other night for about ~3 hours... without a hitch. Jayz2cents... Youtuber with the ROG STRIX, thing was crashing all over the place... kinda a joke, throttling in the 1500s and low 1600s, sad sad benchies...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have only heard of one card that could "hold" 1110... a couple do 1105, most are 1100 or less. That 1110, is a bit of an urban forum legend... I don't think a screenie has been posted with those clocks running and holding a bench.
> Most of us around here seem to think 56/64/64LC are binned HBM in that order. I suspect its to give some market coherency and sku separation as well as justify some price point discrepancies.
> I think theirs only been 1 guy on the forums so far with the 64LC that could just boost the power target on stock and actually play anything without crashing... there is some lottery going on. Same guy that posted the 7200ish bench in SP4K. I do not know if he had freesync on at the time... I'm in the low 7000-7100s with FS/UE +HBCC on. Vega really really shows how memory sensitive it is in this bench, simply closing programs is enough to nurse the score up a couple pts.


It's not normal to be able to just set the power target to +50% and play?


----------



## diabetes

Quote:


> Originally Posted by *springs113*
> 
> For those doing custom loop on their RX cards, what are the temps you all are seeing?


I have a Vega 56 with the stock BIOS on a nickel-acetal EKWB block. The temperatures differ greatly depending on what I am doing. My rads are 360 + 240 with 45mm thickness in a Corsair 750D chassis with Cooler Master Silencio (1450RPM edition) fans on them. Both of the rads are used in a pull config. The loop also cools a delidded CPU.

ATM I use my card with a +20% power target and 1000MHz HBM speed. Voltages are at stock.

When rendering stuff in Blender my temps are [42C GPU | 45C HBM | 55C hotspot].
In Battlefield 4 @ 1080p ultra settings with 4x MSAA and 150% resolution scale, the highest measured temps are [53C GPU | 55C HBM | 65C hotspot] after 2 hours of gameplay.

This is with the fan curve set so that the fans never exceed 1050RPM, so it is a silent config. I could bring temps down further if I wanted to by increasing RPM.

What I noticed is that my card boosts itself to 1610MHz in Blender (and stays there), whereas in BF4 the core clock fluctuates between 1512MHz and 1550MHz.

My card has the unmolded package with Samsung HBM. The TIM used is EK Ectotherm, and I applied it in excess by just spreading the whole amount that came with the block over the dies. No X-pattern or anything; I just used the tip of the syringe the TIM came in to make sure everything was covered.


----------



## Roboyto

Quote:


> Originally Posted by *springs113*
> 
> For those doing custom loop on their RX cards, what are the temps you all are seeing?


I have dual standard-thickness 240mm rads, fans always in 'stealth' mode, so my rig is silent. Ryzen 1700 undervolt/OC at 1.1875V for 3.7GHz with 1.35V 3200MHz RAM.

Played Doom for ~70 minutes yesterday and temps were as follows:

Kryonaut TIM for GPU/HBM & CPU

GPU Peak: 44C

HBM Peak: 53C

Hotspot Peak: 66C

Ryzen Peak: 54C

GPU Settings:

P6: 1602/1075mv

P7: 1702/1150mV

HBM: 1100/1000mV

50% power

HBM clock was rock solid. GPU core was 1660 minimum, peaking at 1690. Ran a flawless 100-140 fps at 5760x1080 Eyefinity.


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> Not much in the way of guides considering that the only "real" options are flashing the bios to get to the power, opening up wattman, getting a six pack, a couple of hours... and edge case the settings until it stops crashing... take some screenies... post em up... $PROFIT$
> 
> Not so sure its "just wattman", I am convinced at this point that there are threshold frequencies and powers which, once set, bracket the state into a "set" of frequency and voltage solutions in some sort of feedforward control scheme. If wattman / or driver crap out, the DP link drops, power goes down on the monitor... that bias set by the user gets lost; and bam... erratic zone again. Just my 2c...
> 
> On 9.2, utilizing a 10 minute / polled ever 2 seconds cumulative average, 1700 (average) is solid for me. The card still boost, and trying to control that is... to coin the SJW... "problematic".
> 
> Gamed on these settings the other night for about ~3 hours... without a hitch. Jayz2cents... Youtuber with the ROG STRIX, thing was crashing all over the place... kinda a joke, throttling in the 1500s and low 1600s, sad sad benchies...
> 
> I have only heard of one card that could "hold" 1110... a couple do 1105, most are 1100 or less. That 1110, is a bit of an urban forum legend... I don't think a screenie has been posted with those clocks running and holding a bench.
> Most of us around here seem to think 56/64/64LC are binned HBM in that order. I suspect its to give some market coherency and sku separation as well as justify some price point discrepancies.
> 
> I think theirs only been 1 guy on the forums so far with the 64LC that could just boost the power target on stock and actually play anything without crashing... there is some lottery going on. Same guy that posted the 7200ish bench in SP4K. I do not know if he had freesync on at the time... I'm in the low 7000-7100s with FS/UE +HBCC on. Vega really really shows how memory sensitive it is in this bench, simply closing programs is enough to nurse the score up a couple pts.


Predetermined bracketed states... intriguing hypothesis. I have hardly done any benching since I dropped Ryzen in and have all 3 monitors going. Will have to see if the DP link drops for me too when there is a crash.

I need to play around with mine some more to see if I can hold 1700. Haven't done much yet on 17.9.2.

Jayz2cents... meh... and ASUS in general... bigger MEH. I'm not denying that these are power-hungry beasts in the least bit, but ASUS has taken the short/cheap road on heatsinks before. The DC2 for the 290(X) cards was a joke because they recycled the 780 cooler; it didn't utilize all the heatpipes, and temps on the ASUS cards were crap. I haven't been keeping up on the Strix Vega news as I have sworn off ASUS due to a horrendous/insulting customer service experience. Have any of the reviewers that have gotten their hands on the Strix attempted undervolting?



HBM info in my post corrected.

I had run Time Spy and Fire Strike at 1105 on older drivers, but haven't attempted it recently.

Binning of HBM would seem likely. Can anyone confirm there is a voltage difference for HBM on Vega 56?

Haven't run Superposition yet with the new CPU etc. With my 4790K the best score I got was 6883, on 17.8.2? Can't remember. Don't have all the settings, but my screenie is labeled undervolt 1682/1100 50% power.

Will try running some of that now and see what happens.


----------



## TrixX

The ASUS Strix Vega 64 is just the 1080 Ti Strix with some odd additions and adjustments to the cooler. It doesn't look like something designed for Vega from the ground up. I don't mind re-using a good design if it fits, but some of the stuff on the Strix is just plain odd.


----------



## kundica

Quote:


> Originally Posted by *springs113*
> 
> For those doing custom loop on their RX cards, what are the temps you all are seeing?


I max at 41-42 while gaming for hours. 240 and 280 EK radiators.


----------



## diabetes

Quote:


> Originally Posted by *Roboyto*
> 
> Jayz2cents..meh..and ASUS in general...bigger MEH. I'm not denying that these are power hungry beasts in the least bit, but ASUS has taken the short/cheap road on heatsinks before.


Totally agreed. Jay's content quality has decreased as he got more subs. His Strix review was a bad joke, as he wasn't even using GPU-Z to monitor the hotspot temp. Or his "the card throttles and I don't know why", without using thermal imaging or cross-checking with other reviews.

Also, the Strix Vega hits 120C on the VRMs. No wonder they don't clock higher. I wouldn't be surprised if they failed 2 days after the end of the warranty.

Check out this Russian review of the Strix Vega 64 and scroll down; they have some nice thermal images. There are also some teardown pics at that link. Asus has truly cheaped out on the PCB layout, the components, and the VRM contact area of the cooler. The reference PCB is way superior. I would recommend getting that and a Morpheus II over the Strix any time.

Furthermore, there are some contradicting statements from Guru3D and https://twitter.com/i/web/status/912712654912974848 regarding the BIOS and drivers. One party says it's the lack of an optimized BIOS, the other says that the special BIOS is there and the driver is bogus.

TBH, it surprised me that AMD let Asus exclusively make the first AIB card, especially when Sapphire and XFX have a much better reputation for their AMD custom cards. According to some rumors, the Strix card has been sitting at AMD for over a month now for "final validation", so AMD should already be in an active state of regret over their decision.


----------



## TrixX

TBH I think it's a combination of the BIOS being unoptimised and the drivers really not being where they should be by now. Hoping 17.10.1 actually uses the card better.

I seem to have a bit of a downclocking issue in games at the moment: it can drop to P3 or P4 quite happily in PUBG, which causes massive delays when encountering players, as it drops frames while it tries to ramp back to 100% usage again. Almost like Radeon Chill is active when it's disabled...


----------



## diabetes

Quote:


> Originally Posted by *TrixX*
> 
> Almost like Radeon Chill is active when it's disabled...


Or maybe it is enabled? I noticed on 17.9.2 that Chill auto-enabled itself for me after a reboot, and I know it was disabled before.


----------



## TrixX

Says disabled, but after the Wattman bugs I can't trust it


----------



## diabetes

You could try manually enabling it and then disabling it again so the registry settings get overwritten. Maybe that helps.


----------



## Ark-07

Thank you everyone for the input!


----------



## JasonMZW20

Yeah, I've had some interesting downclocking behavior as well. In Fallout 4, since I run it with Vsync enabled (no Freesync, 60Hz), it doesn't do a good job of maintaining 60fps. It seems like Vega's clock control is linked more to GPU usage/load, and obviously, when running with Vsync, GPU load is quite low in many places. When looking at my large settlement house at Starlight Drive-in, FPS drops to sub-30s, yet clocks do NOT increase to the max levels. They seem to hover around 800-1100MHz core and 500MHz HBM. This is only at 1080p too, which is a bit disappointing. It's too difficult to pick a lock with Vsync disabled.

For the most part, my HBM has been staying in the 500-800MHz range, with occasional dips to 167MHz and sometimes hits my OC of 1075MHz (1100MHz is doable, but starts artifacting above 85C). FO4 runs fine 80% of the time, but there are times when the game needs more GPU power, yet Vega is unwilling to provide it.










Is anyone else getting stuck clocks after running Superposition with 17.9.2? The score is with +30% power 1542MHz/1025mV P6 and 1632/1075mV P7, temp target at 65C. During benchmark, it only hit around 1570-1595MHz, but was pretty stable within that range. My GPU seems to crash more when I bump the power target up to +50% and hit 75C at lower voltages.










Max temp/power values (if you can't read them):
GPU Core Current: 147A
GPU Memory Current: 17A
GPU Core Power: 150.563W
GPU Memory Power: 23.056W
GPU Chip Power: 222.000W
GPU Clock: 1634.0MHz (only AFTER benchmark completed)
GPU Memory Clock: 1075MHz

I'll have to restart to fix the stuck clocks, it seems.
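As a quick aside, those sensor read-outs are internally consistent, assuming they are rail-level volts and amps: power should equal volts times amps, so dividing the reported watts by the reported amps recovers each rail's voltage, which lines up with the ~1025mV P6 core setting and the usual ~1.35V HBM2 rail:

```python
# Cross-checking the sensor read-out above: P = V * I, so the reported
# watts divided by the reported amps should recover each rail's voltage.
core_w, core_a = 150.563, 147.0   # GPU core power / current
mem_w, mem_a = 23.056, 17.0       # HBM power / current
core_v = core_w / core_a          # ~1.02 V, close to the 1025mV P6 setting
mem_v = mem_w / mem_a             # ~1.36 V, the usual HBM2 rail voltage
print(round(core_v, 3), round(mem_v, 3))  # 1.024 1.356
```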


----------



## Ark-07

I forgot to ask, and I'm glad I remembered before playing with overclocking. My power supply is a new Corsair TX850M Gold. At first I had one cable with dual 6+2 pin connectors, and a few days later I had boot-up issues. So today I'm using two separate 6+2 cables for my Vega 64... no issues so far.

My concern is whether a 6+2 pin cable does the same job as an 8-pin cable? The power supply didn't come with any dedicated 8-pin connectors, I think... I do have a 6-to-8-pin extension cable that came with the GPU, not using it though. That said, is anyone else getting "default Radeon Wattman settings restored" on boot-up once in a while? I had this issue with my old power supply and R9 Fury as well. Always had to redo my Wattman fan settings each time.


----------



## TrixX

Corsair 6+2 pins should be fine. I'm running single-cable 6+2s to both 8-pin slots on mine, and even under load they aren't getting warm.

@JasonMZW20

Reset Wattman after each apply; otherwise it can fail to apply changes correctly, or even at all sometimes. After 4-5 resets of Wattman, the card can get stuck at a max of the P5 state instead of P6 or P7, which of course reduces performance when benching. For me, if I get stuck at ~1542MHz in Superposition then I know I need to restart the PC, though I'm using the Liquid Cooled BIOS, so my P5 state is higher than on the Vega 64 Air BIOS.

As for the +xx% power, you don't need to increase that much at all; 10-25% at most is more than enough on air. With mine using 950mv for HBM and P6, and 980mv for P7, I still hit 220W power draw under load.

I'd start by finding the lowest voltage point where it crashes or fails to complete Superposition and Firestrike, then go about 10-20mv above that and check performance. Focus on adjusting the P7 clock, and raise the voltage in 10mv increments until you are happy with the results and can hold the clock stable without going too high on temps.
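That search can be sketched as a simple loop. This is an illustration only; `run_is_stable` is a hypothetical stand-in for "set the P7 voltage in Wattman and check that Superposition and Firestrike complete without a crash", since Wattman has no scripting API for this:

```python
# Sketch of the undervolt search described above -- illustration only.
# run_is_stable(mv) is a hypothetical stand-in for "apply the voltage in
# Wattman and check the benchmarks complete without a crash".
def find_p7_voltage(run_is_stable, start_mv=1200, floor_mv=850,
                    step_mv=10, margin_mv=20):
    """Step the voltage down until a run fails, then add 10-20mv of margin."""
    mv = start_mv
    while mv - step_mv >= floor_mv and run_is_stable(mv - step_mv):
        mv -= step_mv
    # mv is now the lowest voltage that still passed; pad it for safety
    return min(start_mv, mv + margin_mv)

# Example with a fake stability curve that fails below 960mv:
# find_p7_voltage(lambda mv: mv >= 960) -> 980
```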

With your card I would also consider grabbing the Liquid Cooled BIOS, even with its lower temp limits, as it keeps the card closer to optimal operating temps.

Oh and I tend to run Superposition on Extreme as I get CPU bottlenecking at medium.


----------



## Paul17041993

Quote:


> Originally Posted by *JasonMZW20*
> 
> Yeah, I've had some interesting downclocking behavior as well. In Fallout 4, since I run it with Vsync enabled (no Freesync, 60Hz), it doesn't do a good job of maintaining 60fps.


Common issue with vsync; turn it off and you'll see the card maintain high clocks. If the card ever dips to low clocks while not being thermal- or power-bound, then it's simply the game not sending enough instructions for it to process, or it's sending _way too many_ instructions (e.g. Minecraft)...


----------



## JasonMZW20

Quote:


> Originally Posted by *Paul17041993*
> 
> Common issue with vsync, turn it off and you'll see the card maintain high clocks. If the card ever dips to low clocks while not being thermal or power bound then it's simply the game not sending enough instructions for it to process, or it's sending _way too many_ instructions (eg; minecraft)...


Interesting. I guess AMD will just need to make Vega's dynamic clocks smarter in future (for Vsync). Even with the downclocks, it's smoother than my old R9 280 that ran 1175/1575MHz clocks, so I can deal with it. I can't stand screen tearing, so just waiting on some good 4K Freesync 2 monitors.


----------



## OC17

Hi guys!

I'm thinking of joining the group, and I need some help here before doing that.

I do 3D rendering, and I want to know a bit more about Vega's HBCC/memory-sharing system.

GPU rendering, when done through compute APIs, has the limitation that the 3D scene needs to fit in the video card's memory, unless the software supports what's called "out-of-core rendering".

I mainly use Blender for 3D, which works great with Radeon GPUs, and I'm thinking of buying an RX Vega 64.

The point is: can an RX Vega do "compute rendering" using system memory?

There is a test scene which runs out of memory on regular 8GB video cards like my 390X or a 1080; it can be downloaded from the official Blender page:

https://www.blender.org/download/demo-files

It's the Production Benchmark file from the Gooseberry movie project.

To test it, enable OpenCL/GPU compute in the user preferences, set the renderer to GPU mode, raise the HBCC shared memory segment, and hit render.

If some of you guys could give me a hand with that, maybe an experienced Blender user, it would really help me decide.

Thanks in advance, greets!
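For reference, those test steps can also be driven from Blender's Python console. This is only a sketch against the Blender 2.79-era API (newer versions use `bpy.context.preferences` instead of `user_preferences`), it assumes the HBCC segment has already been raised in Radeon Settings, and it only runs inside Blender itself:

```python
import bpy  # only available inside Blender's bundled Python

# Enable OpenCL compute devices in the user preferences (Blender 2.79 API).
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'
for device in prefs.devices:
    device.use = True  # tick the Vega card

# Point Cycles at the GPU and render the currently loaded scene.
bpy.context.scene.cycles.device = 'GPU'
bpy.ops.render.render(write_still=True)
```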


----------



## springs113

Quote:


> Originally Posted by *diabetes*
> 
> I have a Vega 56 with stock bios on a nickel-acetal EKWB. The temperatures differ greatly depending on what I am doing. My rads are 360 + 240 with 45mm thickness in a Corsair 750D chassis with CoolerMaster Silencio (1450RPM edition) fans on them. Both of the rads are used in a pull config. The loop also cools a delidded [email protected]
> 
> ATM I use my card with +20% power target and 1000Mhz HBM speed. Voltages are at stock.
> 
> When rendering stuff in Blender my temps are [42C GPU | 45C HBM | 55C Hotspot].
> In Battlefield 4 @ 1080p Ultra Settings with 4x MSAA, 150% resolution scale the temps highest measured temps are
> [53C GPU | 55C HBM | 65 Hotspot] after 2 hours of gameplay.
> 
> This is with the fan curve set that the fans never exceed 1050RPM, so it is a silent config. I could bring down temps more if I wanted to, by increasing RPM.
> 
> What I noticed is that my card boosts itself to 1610Mhz in Blender (and stays there), whereas in BF4 the core clock fluctuates between 1512Mhz and 1550Mhz.
> 
> My card has the unmolded package with Samsung HBM. The TIM used is EK Ectotherm, and I applied it in excess by just spreading the whole amount that came with the block over the dies. No X-pattern or anything; I just used the tip of the syringe the TIM came in to make sure everything was covered.


Thanks for the response, I believe I've got the unmolded package as well. I've been having a lot of issues with my system so I wasn't able to test, but as of right now I've got a Vega 64 under water with my 5930K system and another on air with my 1950X system. How do you get the measurements for the HBM/GPU hotspot? GPU-Z?
Quote:


> Originally Posted by *Roboyto*
> 
> I have dual 'standard' thickness 240mm rads, fans always in 'stealth' mode so my rig is silent. Ryzen 1700 UVOC 1.1875V for 3.7GHz with 1.35V 3200MHz RAM.
> 
> Played Doom for ~70 minutes yesterday and temps were as follows:
> 
> Kryonaut TIM for GPU/HBM & CPU
> 
> GPU Peak: 44C
> HBM Peak: 53C
> Hotspot Peak: 66C
> Ryzen Peak: 54C
> 
> GPU Settings:
> P6: 1602/1075mv
> P7: 1702/1150mV
> HBM: 1100/1000mV
> 50% power
> 
> HBM clock was rock solid. GPU core minimum at 1660, peaking 1690. Ran flawless 100-140 fps on 5760*1080 eyefinity.


GPU-Z? Thanks, I'd like to start the undervolting, but I've been so busy. I'm using a Dell 2711, iirc a 1440p monitor, and gaming on it has been so smooth.
Quote:


> Originally Posted by *kundica*
> 
> I max at 41-42 while gaming for hours. 240 and 280 EK radiators.


Thanks, so far I've only seen 38 max on my Vega64 using a 480mm GTS Nemesis.


----------



## poisson21

@JasonMZW20

Don't take Fallout 4 into account when testing your Vega; a lot of areas are CPU-bound, so the load on your card doesn't matter much there.


----------



## kundica

Quote:


> Originally Posted by *OC17*
> 
> Hi guys!
> 
> I´m thinking into join the group and I need some help here before doing that.
> 
> I do 3d rendering and I´m wanting to know a bit more about the HBCC/Memory sharing Vega´s system.
> 
> GPU rendering, when done through compute APIs, has the limitation, unless the software allows to do what´s called "out of core rendering", that the 3d scene needs to fit in the videocard´s memory.
> 
> I´m mainly using Blender for 3d, which works great with Radeon GPUs, and I´m thinking into buy an RX Vega 64.
> 
> The point is: Can an RX Vega do "compute rendering" using system memory?
> 
> There is a test scene which runs out of memory on 8GB regular videocards like my 390x or a 1080 that con be downloaded from the blender official page:
> 
> https://www.blender.org/download/demo-files
> 
> It´s the Production Benchmark file from the Goosberry movie project.
> 
> To test it it´s needed to enable OpenCL/GPU compute in the system preferences, set the renderer to GPU mode, rise the HBCC shared memory segment and hit render.
> 
> If some of you guys could give me a hand with that, maybe some experienced blender user, it would be really a good help to let me decide.
> 
> Thanks in advance, greets!


I tried that Blender test about a month ago and it would crash before it started to do anything when using GPU rendering. I was told by someone on reddit that it's a bug with Normal Node that also happens with the Fury. I was specifically running the render to test HBCC. Most of the other Cycles tests ran fine.


----------



## Paul17041993

Quote:


> Originally Posted by *JasonMZW20*
> 
> Interesting. I guess AMD will just need to make Vega's dynamic clocks smarter in future (for Vsync). Even with the downclocks, it's smoother than my old R9 280 that ran 1175/1575MHz clocks, so I can deal with it. I can't stand screen tearing, so just waiting on some good 4K Freesync 2 monitors.


The clock management is already as smart as it needs to be, though. As @poisson21 basically just confirmed for me, Fallout 4 is very much CPU-bound, and the parts you see running at 30 fps would simply be the game switching between 1:1 and 1:2 frame modes: an area that hovers around 55-65 fps due to CPU or draw-call limits will only run at 30 unless you turn v-sync off.

You could try limiting the fps via FRTC in the panel; I don't know how that affects screen tearing though, as I use a Freesync display...
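The 1:1 / 1:2 snapping described above can be shown with a quick calculation. This is a simplified model of double-buffered vsync (ignoring triple buffering and adaptive sync): a finished frame can only be shown on a refresh boundary, so frame rates quantise to integer divisors of the refresh rate.

```python
import math

# Simplified model of double-buffered vsync on a fixed-refresh panel:
# any frame time longer than one refresh interval waits for the next one,
# so the effective rate snaps to refresh_hz / n for integer n.
def vsync_fps(raw_fps, refresh_hz=60):
    frame_time = 1.0 / raw_fps
    refresh = 1.0 / refresh_hz
    intervals = math.ceil(frame_time / refresh - 1e-9)  # refreshes per frame
    return refresh_hz / intervals

# An area hovering around 55-65 fps snaps to 30 or 60 with vsync on:
print(vsync_fps(65), vsync_fps(55))  # 60.0 30.0
```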


----------



## OC17

Quote:


> Originally Posted by *kundica*
> 
> I tried that Blender test about a month ago and it would crash before it started to do anything when using GPU rendering. I was told by someone on reddit that it's a bug with Normal Node that also happens with the Fury. I was specifically running the render to test HBCC. Most of the other Cycles tests ran fine.


Thanks for replying, kundica.


----------



## pengs

Quote:


> Originally Posted by *Roboyto*
> 
> As far as HBM is concerned, most Vega 64 cards seem to be capped at 1100 1090-1110. BUT, from what I have seen this is possible with an undervolt as well. My card hits 1100 HBM at 1000mV as opposed to factory 1050mV; Some people are hitting these speeds with less than that. HBM2 is fairly sensitive to temperature so that is something to be mindful of since core temperature can/will effect HBM2 temps and it's performance. As temperatures rise, HBM2 timings do as well and performance will drop.


Odd, Wattman is telling me the VID is 950mV on the HBM; Samsung, btw.
I've clocked 1100/950mV for quite a few Superposition runs and had a crash right at the end. Otherwise 1050 runs without any trouble at all. A few extra mV may do it for 1100.

Has anyone figured out what the memory straps are compared to Fiji?
No doubt one starts at 800 and another at 945.


----------



## GroupB

Today I did some run of firestike to tune my watt/perf and I had something weird show up...

First I started at my usual gaming clocks, 1662/1100 @ 1075mv, and got 24 718 graphics. Then I kept increasing until it crashed; it crashed at [email protected], so I raised the voltage to 1200mv to see where I could push it, then wanted to tune the voltage back down to find out which clock requires which voltage. The thing is, after I increased the voltage the score went WAY up, about 25 723 for [email protected], while [email protected] gave me 24 938. 10 MHz giving me that kind of bump? I don't think so!

So I decided to redo 1682 at 1075 and at 1200, and the results are kind of weird:

[email protected] = 24 938
[email protected] = 25 557

So a voltage increase boosts my score... I don't understand why. I'm not power throttling at all; I set my power to 142%, and each 10 MHz test bump saw a 10W increase in the combined test, which looks normal.

So why the hell does a bump in voltage give an fps boost? Same clock, same stability; the clock stays at my target every time, no throttling there (using the 8.1 driver without the overshoot-target bug). The power limit was out of the equation, same as temps, since I have a waterblock.

For me a voltage bump is only there to stabilize the GPU if it can't manage the clock... what is happening with Vega?


----------



## Rootax

Quote:


> Originally Posted by *OC17*
> 
> Thanks for replying Kundika.


For the record, and I doubt it helps, but I tried the rendering on my Vega FE. The scene takes approx 13GB of VRAM, so no need to "use" HBCC. But I have it enabled anyway, and neither the driver nor Blender crashed, so there is that (I use the 17.8.2 beta for FE, and it rendered in 26 minutes).


----------



## pengs

Quote:


> Originally Posted by *GroupB*
> 
> Today I did some run of firestike to tune my watt/perf and I had something weird show up...
> 
> First I start at my usual gaming mhz 1662/1100 @ 1075mv and got 24 718 graphic, then I keep increase till it crash, it crash at [email protected] so I increase my voltage to 1200mv to see where I can push it then I wanted to tune down the voltage to find out what mhz require what voltage. The thing is after I increase my voltage the score went WAY up about 25 723 for [email protected] while [email protected] gave me 24 938, so 10 mhz give me that kind of bump I dont think so!
> 
> So I decide to redo 1682 at 1075 and at 1200 and the result are kind weird
> 
> [email protected] = 24 938
> [email protected] = 25 557
> 
> So a voltage increase boost my score .. I dont understand why, im not power throttling at all I set my power to 142% and each 10 mhz test bump saw a 10W increase on combined test looking normal.
> 
> so why the hell is a bump in voltage give a fps boost, same clock, same stability, the clock stay at my target everytime no throttle there (using 8.1 driver without the overshoot target bug) power limit was out of equation same as temp since I have a waterblock.
> 
> For me voltage bump is only to stabilize the gpu if it cant manage the clock... what is happening with vega ?


Correctable errors, maybe. The whole 'I'm undervolting by 0.25v' thing seemed kinda... iffy, tbh.
Is that your highest sustained or average clock throughout that benchmark?


----------



## GroupB

That's my clock; there's no average or highest sustained on the 8.1 driver. If your temps are right and you allow enough power, it just stays where you put it.


----------



## TrixX

Quote:


> Originally Posted by *GroupB*
> 
> Today I did some run of firestike to tune my watt/perf and I had something weird show up...
> 
> First I start at my usual gaming mhz 1662/1100 @ 1075mv and got 24 718 graphic, then I keep increase till it crash, it crash at [email protected] so I increase my voltage to 1200mv to see where I can push it then I wanted to tune down the voltage to find out what mhz require what voltage. The thing is after I increase my voltage the score went WAY up about 25 723 for [email protected] while [email protected] gave me 24 938, so 10 mhz give me that kind of bump I dont think so!
> 
> So I decide to redo 1682 at 1075 and at 1200 and the result are kind weird
> 
> [email protected] = 24 938
> [email protected] = 25 557
> 
> So a voltage increase boost my score .. I dont understand why, im not power throttling at all I set my power to 142% and each 10 mhz test bump saw a 10W increase on combined test looking normal.
> 
> so why the hell is a bump in voltage give a fps boost, same clock, same stability, the clock stay at my target everytime no throttle there (using 8.1 driver without the overshoot target bug) power limit was out of equation same as temp since I have a waterblock.
> 
> For me voltage bump is only to stabilize the gpu if it cant manage the clock... what is happening with vega ?


Run Superposition instead of Firestrike so you can see live GPU Frequency.

Also I'm running 1752 on P7 and 1667 on P6 frequency-wise; however, I currently limit the power to limit the speeds. Running 950mv P7 I'll get ~4100 in 1080p Extreme; with 980mv P7 I'll get ~4200. 100 points for 30mv??? However, the avg GPU clock was around ~1580MHz for the first run and ~1600MHz for the second.

Upping that to 1200mv gets me TONS more; however, I also start drawing more power than the cooling can dissipate, so I have to cancel the run early: my cooling won't keep it below my max target of 70C, and it just underclocks the GPU to ~1400MHz to stay below the temp target.

If I try to hit the frequency targets with 17.9.2 then it goes all weird. However, with my current settings I still pull 1600MHz sustained at roughly 210W power draw for the GPU/HBM alone (according to GPU-Z), while pulling 550-600W total system with a 3930K drawing stupid amounts.


----------



## pengs

Quote:


> Originally Posted by *GroupB*
> 
> That my clock, there no average or highest sustained on 8.1 driver , if your temps is right and you allow enough power it just stay at where you put it.


You may want to check GPU-Z and find the average throughout the benchmark. But upping the voltage and scoring better most likely means the card needed that voltage to operate fully, independent of the power limit.

The power delivery has changed quite a bit with Vega; coupling that with the dynamic boost clocks, I don't doubt the operating threshold has loosened enough to undervolt to insane levels while operating as normal ("normal").


----------



## GroupB

I can see the clock just fine using Afterburner while benching; I'm positive it does not move and stays at the target I use. Power is at 142%, and cooling-wise the core is under 35C, so I'm covered there. I just find it weird that there's no clock throttling if there's not enough voltage, or, like every other GPU, a crash... nah, mine works the same but scores higher or lower. I'll try the same test at a very low clock and see if I get the same result.

Edit:
1442 @ 1200mv got 22 716
1442 @ 1075 got 22 942

Now I don't understand; it's the other way around.


----------



## pengs

Quote:


> Originally Posted by *GroupB*
> 
> I can see the clock just find using afterburner while benching, im posivite it do not move and stay at the target I use, power is at 142% and cooling well core is under 35C so im covered there. I just find it weird that there no clock throttling if there not enough voltage or like all other gpu just crash... .nah mine work the same but score higher or lower. Ill try the same test at very low mhz see if I see the same result


If the clocks are being reported truly by the software, then it's either the core erroring, or Vega is doing something power-related in hardware which allows it to run at that frequency while slowing itself down. I think Buildzoid spoke about it in one of his videos.

I'm assuming correctable errors are not reported, otherwise we'd have had a tool years ago to help with OCing.
Quote:


> Originally Posted by *TrixX*
> 
> Run Superposition instead of Firestrike so you can see live GPU Frequency.
> 
> Also I'm running 1752 on P7 and 1667 on P6 Frequency wise, however I limit the power to limit the speeds currently. Running 950mv P7 I'll get ~4100 in 1080p Extreme, with 980mv P7 I'll get ~4200. 100 points for 30mv??? However the avg GPU clock speeds for the first run were around ~1580MHz avg and 1600MHz avg for the 2nd.
> 
> Upping that to 1200mv gets me TONS more however I also start drawing more power than the cooling can dissipate so have to cancel the run early as my cooling won't keep it below my max target of 70C and it just underclocks the CPU to ~1400MHz to keep it below the temp targets.
> 
> If I try to hit the Frequency targets with the 17.9.2 then it goes all weird. However with my current settings I still pull 1600MHz sustained and at roughly 210W Power draw for the GPU/HBM only (according to GPU-z) while pulling 550-600W total system with a 3930K pulling stupid amounts.


Right, so the convention becomes voltage vs. power limit, governed by temperature. So if you undervolted to the _point of_ degradation, bumped the core +10mV, and then finalized by using the power limit to keep from throttling, I wonder if your score would change for the worse or the better?
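The correctable-error hypothesis can be sketched as a toy model (the error curve here is entirely invented for illustration, not measured): the reported clock stays at target, but error replays quietly eat into effective throughput, so the score drops even though frequency and stability look unchanged.

```python
# Toy model of the correctable-error hypothesis -- all numbers invented.
# If marginal voltage causes error replays, the reported clock stays at
# target while effective throughput (and thus score) quietly drops.
def effective_score(clock_mhz, core_mv, knee_mv=1100, err_per_mv=0.004):
    error_rate = max(0.0, (knee_mv - core_mv) * err_per_mv)
    return clock_mhz * 15.0 * (1.0 - min(error_rate, 0.9))

# Same clock, two voltages: the undervolted run "works" but scores lower.
low = effective_score(1682, 1075)
high = effective_score(1682, 1200)
```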


----------



## OC17

Quote:


> Originally Posted by *Rootax*
> 
> For the record, and I doubt it's helping,but I tried the rendering on my Vega FE. The scene take approx 13gb on vram, so no need so "use" hbcc. But I have it enabled anyway, and the driver or blender didn't crash, so there is that (I use 17.8.2 beta for FE) (and it rendered in 26 minutes).


Yep, good to know anyway.


----------



## GroupB

I did more tests using the same frequencies I use most of the time. Tests were done with the 142% PowerPlay table for a Vega 64 (still on the air BIOS) using a water block; temps are under 32C core / 40C HBM, drivers are 8.1, and the clocks do not fluctuate, staying at target.

I'll give the Firestrike graphics score only.

HBM 1100 @ 950

[email protected] = 24 700

[email protected] = 25 608

[email protected] = 25 724

1662 @ 1150 = 25 748

1662 @ 1175 = 25 640

1662 @ 1200 = 25 396

Previous best score was [email protected] = 25 723; the combined test hit 600W system draw.

It looks to me like something happens at 1175mv and above that reduces the score. Increasing the core clock and voltage uses more watts, but those runs were still under the previous 600W pull at 1692, so there was still wattage headroom, and the reported frequency (at least what Afterburner and GPU-Z report) does not fluctuate or drop. I guess something is happening on the hardware side that isn't reported, or the 8.1 drivers are just weird like that. I don't understand what causes the reduced score; it's not the temps, it's not the watts...


----------



## milan616

17.9.3 is out, not a lot changed
Quote:


> Radeon Software Crimson ReLive Edition 17.9.3 Highlights
> 
> Support For
> Total War: WARHAMMER™ II
> Radeon Chill profile added
> Multi GPU support enabled
> Forza™ Motorsport 7
> 
> Fixed Issues
> The drop-down option to enable Enhanced Sync may be missing in Radeon Settings on Radeon RX Vega Series Graphics Products.
> ReLive may cause higher idle clocks on the secondary Radeon RX Vega Series Graphics Product in a multi-GPU configuration on certain AMD Ryzen based systems.
> Negative scaling in F1™ 2017 may be observed on Radeon RX 580 Series Graphics products in multi-GPU system configurations.
> 
> Known Issues
> Unstable Radeon WattMan profiles may not be restored to default after a system hang. A workaround is to launch Radeon WattMan after reboot and restore settings to default.
> Wattman may fail to apply user adjusted voltage values on certain configurations.
> Radeon Settings may not populate game profiles after Radeon Software's initial install.
> Overwatch™ may experience a random or intermittent hang on some system configurations.
> GPU Scaling may fail to work on some DirectX®11 applications.
> Secondary displays may show corruption or green screen when the display/system enters sleep or hibernate with content playing.
> Bezel compensation in mixed mode Eyefinity cannot be applied.
> When recording with Radeon ReLive on Radeon RX Vega Series graphics products GPU usage and clocks may remain in high states.


----------



## kundica

Quote:


> Originally Posted by *Rootax*
> 
> For the record, and I doubt it's helping,but I tried the rendering on my Vega FE. The scene take approx 13gb on vram, so no need so "use" hbcc. But I have it enabled anyway, and the driver or blender didn't crash, so there is that (I use 17.8.2 beta for FE) (and it rendered in 26 minutes).


Which version of Blender did you use? I used what was beta at the time so I might try again when I get home.


----------



## Rootax

Quote:


> Originally Posted by *kundica*
> 
> Which version of Blender did you use? I used what was beta at the time so I might try again when I get home.


2.79


----------



## Trender07

Quote:


> Originally Posted by *milan616*
> 
> 17.9.3 is out, not a lot changed


I swear with every new driver version I have to add more and more voltage. I can't even hold my old stable OC settings, it just crashes!


----------



## kundica

Quote:


> Originally Posted by *Rootax*
> 
> 2.79


Yeah, it still crashes for me as soon as it starts to render. I'm currently using 17.9.2, but I've tried it on everything from 17.8.1 up to 17.9.2. When you set it to render, what was your process? For me it defaults to CPU rendering. If I change it to GPU it doesn't actually engage, so I have to go to preferences and set the OpenCL renderer to my Vega card. After that it allows me to start the GPU render. Of course, it crashes when the actual rendering starts. This isn't an OC issue either; I've tried it at stock settings.


----------



## Ark-07

Due to the number of responses and questions I won't reply to each person, but I hope you all read this... Thank you once again, forums, for the support and the time you all take to respond to me.


----------



## Rootax

Quote:


> Originally Posted by *kundica*
> 
> Yeah, it still crashes for me as soon as it starts to render. I'm currently using 17.9.2 but I've tried it on 17.8.1 up to x.9.2. When you used set it to render what was your process? For me it defaults to CPU render. If I change it to GPU it doesn't actually engage so have to go to preferences and set the OpenCL render to my Vega card. After that it allows me to start the GPU render. Of course, it crashes when the actual rendering starts. This isn't an OC issue either, I've tried it on stock settings.


Same thing: go into preferences, enable OpenCL, and then I can render using the GPU. The FE has the "pro" driver; maybe that's why it's not crashing for me (even though it should not crash for RX... unless it's a VRAM problem?).


----------



## OC17

Quote:


> Originally Posted by *kundica*
> 
> Yeah, it still crashes for me as soon as it starts to render. I'm currently using 17.9.2 but I've tried it on 17.8.1 up to x.9.2. When you used set it to render what was your process? For me it defaults to CPU render. If I change it to GPU it doesn't actually engage so have to go to preferences and set the OpenCL render to my Vega card. After that it allows me to start the GPU render. Of course, it crashes when the actual rendering starts. This isn't an OC issue either, I've tried it on stock settings.


I think you are setting it up right; the test is set up for CPU rendering by default.

Usually you get a Blender message saying that the memory is full and it doesn't even start to render. So maybe that's something.

I wouldn't be surprised if this changes in the near future. For the moment it's good to know how things stand, at least according to your tests.

Thanks a lot!


----------



## OC17

Quote:


> Originally Posted by *Rootax*
> 
> Same thing, go into preferences, enable openCL, and then I can render using GPU.FE having "pro" driver,maybe that's why it's not crashing for me (even though it should not crash for RX... Except if it's a vram problem ?)


I think it is; that scene usually doesn't fit in 8GB of VRAM.


----------



## diabetes

Quote:


> Originally Posted by *kundica*
> 
> Yeah, it still crashes for me as soon as it starts to render. I'm currently using 17.9.2 but I've tried it on 17.8.1 up to x.9.2. When you used set it to render what was your process? For me it defaults to CPU render. If I change it to GPU it doesn't actually engage so have to go to preferences and set the OpenCL render to my Vega card. After that it allows me to start the GPU render. Of course, it crashes when the actual rendering starts. This isn't an OC issue either, I've tried it on stock settings.


There is another rendering setting for the OpenCL feature set. Change this from "Experimental" to "Supported".


----------



## kundica

Quote:


> Originally Posted by *Rootax*
> 
> Same thing, go into preferences, enable openCL, and then I can render using GPU.FE having "pro" driver,maybe that's why it's not crashing for me (even though it should not crash for RX... Except if it's a vram problem ?)


Quote:


> Originally Posted by *OC17*
> 
> I think that you are setting it ok, the test is ready for cpu rendering by default.
> 
> Usually you get a Blender message saying that the memory is full and it doesn´t even start to render. So maybe that´s something.
> 
> I wouldn´t be surprised if this changes in a near future. For the moment is good to know that at least according to your test how the thing is going.
> 
> Thanks a lot!


Quote:


> Originally Posted by *OC17*
> 
> I think it is, that scene usually doesn´t fit in 8GB of vram.


It's an issue with using Normal Node with Cycles on certain drivers. When I experienced the issue before someone on Reddit shared some steps to test it. If I find that info I'll share it here.

I suspect Rootax doesn't have the issue because he's using the FE driver.

Edit: Here's how you can test if it's Normal Node.


Quote:


> Originally Posted by *diabetes*
> 
> There is another rendering setting for the OpenCL feature set. Change this from "Experimental" to "Supported".


It defaults to supported.


----------



## Nuke33

Quote:


> Originally Posted by *MediocreKiller*
> 
> How do I increase Vega power limit to 100%? I have read many people bring up power tables and stuff, but no mention of how to actually do it. Can anyone help me out by telling me how or linking a how-to? Thanks so much!


I suggest you look into this post by @hellm.
He explained it quite nicely, I think. If you still need help, feel free to ask.

http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/250#post_26297003


----------



## Nuke33

I recently got my Alphacool GPX Vega 120 AIO and I am loving it.









Assembly was a real nightmare though. I have never seen a more frustrating assembly than this one. I used a Vega 64 Limited Edition, so the slot bracket is different. I had to remove the screw closest to the PCIe slot so the backplate would fit. And those thick thermal pads for the doublers did not stick at all. I had to apply very heavy pressure and pray they kept their position.










Alphacool if you are by chance reading this, please revise mounting on your Vega AIO.

Otherwise it is a very nice cooler. I am getting below 40°C GPU and HBM temps most of the time with only one Nidec Gentle Typhoon in push configuration @ 1200 rpm. TIM is Thermal Grizzly Conductonaut on a molded ASIC.

Pump noise is very low and can be reduced even further to absolute silence by throttling the rpm. At low rpm cooling is not that great anymore, though. I only reduce it at idle via SpeedFan.

After endless hours of testing, I am quite certain now that HBM memory voltage is related to the memory controller on the ASIC.

Everything above 950 mV gives me access to 1105 MHz HBM.
The lower I go the worse it gets; @ 900 mV I can't even sustain stock HBM clocks.
My Vega64 Liquid had the exact same issues.

Core clocks aren't affected though. They show the usual drop in clocks with lower voltage, but no instability from too little voltage.

My testing was done with powerplay tables only.

GPU and HBM temps never exceeded 44°C. Hotspot reached 55°C max.


----------



## andreyb

Back to my Hot Spot temperatures problem. Today I re-installed the waterblock once again. Here is what I found after the waterblock was unmounted:



I am not an expert in analyzing thermal paste application quality, but it doesn't look good to me. Looks like the chip is slightly cambered. I decided to change the thermal paste and application method. The Zalman ZM-STG2 was switched to TG Kryonaut (damn, it's pricey) and spread in a thick layer over the whole chip. Nevertheless, none of it helped. I got exactly the same thermals under load.
I have the idea to replace EK's backplate with the original backplate with the X-shaped bracket. Maybe it works better in terms of avoiding chip "bending".


----------



## Chaoz

Quote:


> Originally Posted by *andreyb*
> 
> Back to my Hot Spot temperatures problem. Today I re-installed the waterblock once again. Here is what I found after the waterblock was unmounted:
> 
> 
> 
> I am not an expert in analyzing thermal paste application quality, but it doesn't look good to me. Looks like the chip is slightly cambered. I decided to change the thermal paste and application method. The Zalman ZM-STG2 was switched to TG Kryonaut (damn, it's pricey) and spread in a thick layer over the whole chip. Nevertheless, none of it helped. I get exactly the same thermals under load.
> I have the idea to replace EK's backplate with the original backplate with the X-shaped bracket. Maybe it works better in terms of avoiding chip "bending".


That's a lot of TIM that you used. Did you use the X-method, like in the manual? I don't have issues with mine. Temps are great. I use the original backplate and EK Nickel-Acetal block. My Hot Spot doesn't go that high either. Maybe 10°C more than the core itself, but that's it.


----------



## Ark-07

So I think my undervolting worked; the image is below. I forgot to use the default HBM memory speed, though it was still a success. Any tips from anyone? My overclocks failed three times, so I'm not gonna bother anymore until the time comes when I need to overclock for some new game.

Any input?

https://i.imgur.com/IulH7pq.png

https://i.imgur.com/LtkFo4D.png

Update: the system froze after playing Rainbow Six Siege and then trying Firestrike again... I give up, just gonna work my fan to keep temps low.

Found the fault: running Speccy and GPU-Z at the same time causes a system freeze.


----------



## Medusa666

What is normal temp for the hotspot?

I put a Morpheus 2 on my card and both the HBM and core temps have gone down significantly, like a 20-25°C improvement.

But the hotspot is still 80-100c under load.

Any ideas?


----------



## biscuittea

Quote:


> Originally Posted by *Medusa666*
> 
> What is normal temp for the hotspot?
> 
> I put a Morpheus 2 on my card and both the HBM and core temps have gone down significantly, like a 20-25°C improvement.
> 
> But the hotspot is still 80-100c under load.
> 
> Any ideas?


I've seen some users getting relatively low hotspot temps, around 60°C or 70°C, but I easily hit 90°C under load whilst my core and HBM temps sit at around 60°C.

Just for the record, I'm using MX-4 paste. I've also reseated the cooler like 4 times now.


----------



## diabetes

Quote:


> Originally Posted by *Medusa666*
> 
> What is normal temp for the hotspot?
> 
> I put a Morpheus 2 on my card and both the HBM and core temps have gone down significantly, like a 20-25°C improvement.
> 
> But the hotspot is still 80-100c under load.
> 
> Any ideas?


Remount your cooler. The hotspot on a Morpheus should be between 65°C and 75°C from what I've seen. Use an excess of thermal paste and a cross pattern for tightening the screws (see below).
Quote:


> Originally Posted by *andreyb*
> 
> Back to my Hot Spot temperatures problem. Today I re-installed the waterblock once again. Here is what I found after the waterblock was unmounted (...)
> 
> I am not an expert in analyzing thermal paste application quality, but it doesn't look good to me. Looks like the chip is slightly cambered. I decided to change the thermal paste and application method. The Zalman ZM-STG2 was switched to TG Kryonaut (damn, it's pricey) and spread in a thick layer over the whole chip. Nevertheless, none of it helped. I got exactly the same thermals under load.
> I have the idea to replace EK's backplate with the original backplate with the X-shaped bracket. Maybe it works better in terms of avoiding chip "bending".


You had too much mounting force in the middle of the chip and too little at the top. Try screwing in the 4 corner screws first in a cross pattern, starting at a corner that is away from the HBM. Don't screw the first screw in fully when starting; instead, go over each screw multiple times. When the 4 corner screws are mounted securely, you can add the 3 remaining ones.


----------



## Medusa666

Quote:


> Originally Posted by *biscuittea*
> 
> I've seen some users getting relatively low hotspot temps, around 60c or 70c but I easily hit 90c under load whilst my core and HBM temps sit at around 60c.
> 
> Just for the record, I'm using MX-4 paste. I've also reseated the cooler like 4 times now.


Quote:


> Originally Posted by *diabetes*
> 
> Remount your cooler. The hotspot on a Morpheus should be between 65°C and 75°C from what I've seen. Use an excess of thermal paste and a cross pattern for tightening the screws (see below).
> You had too much mounting force in the middle of the chip and too little at the top. Try screwing in the 4 corner screws first in a cross pattern, starting at a corner that is away from the HBM. Don't screw the first screw in fully when starting; instead, go over each screw multiple times. When the 4 corner screws are mounted securely, you can add the 3 remaining ones.


Thanks for your quick replies, I don't know what to make of this.

For now I'm going to leave it as is, performance is good and I'm happy overall, hopefully it doesn't kill the card.


----------



## GroupB

Quote:


> Originally Posted by *Ark-07*
> 
> So I think my undervolting worked; the image is below. I forgot to use the default HBM memory speed, though it was still a success. Any tips from anyone? My overclocks failed three times, so I'm not gonna bother anymore until the time comes when I need to overclock for some new game.
> 
> Any input?
> 
> https://i.imgur.com/IulH7pq.png
> 
> https://i.imgur.com/LtkFo4D.png
> 
> Update: the system froze after playing Rainbow Six Siege and then trying Firestrike again... I give up, just gonna work my fan to keep temps low.
> 
> Found the fault: running Speccy and GPU-Z at the same time causes a system freeze.


If your Vega is anything like mine, try the same clock but add voltage and you will get a better score. Check my post from today and you will see my results.


----------



## Ark-07

I'm confused, however: after changing my power settings to lower P6/P7 @ 1010/1050 with my custom fan settings, I'm getting a max of 55 degrees under load. And my gameplay feels far more fluid and better; that's normal, right?

I was thinking more power means more performance? Nothing weird with my settings either? Image below. I'm guessing the GPU running in a much cooler state means better performance? Also, someone said to me that undervolting my GPU can damage it in the long run? I should mention that I didn't change the power setting under the fan section, only the P6/P7 part, even though I was told to add 50% power in the fan section. Or are they different from each other?

https://i.imgur.com/uX4hpiH.png

My Firestrike score was lower though, 18k to 17k.


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> It's not normal to be able to just put the power target to +50% and just play?


Ahh, hey! You're the guy! On the AIO BIOS, that's a big nega'teeevo'. You're the first and only one I have heard of that can just +50 without an issue or, at minimum, some selective tuning.
Quote:


> Originally Posted by *springs113*
> 
> For those doing custom loop on their RX cards, what are the temps you all are seeing?


Sig Rig.
rad 1) 397 x 124 x 60mm push/pull 1300 -2300 rpm / 1200 rpm
rad 2) 278 x 124 x 30mm push - decoupled pull 2x 200mm fans 1200 rpm / 600 rpm
~.75 US gal. fluid, DDC 3.2 w/ XSPC Res-Top 18w, 1/2" ID

Typical low/high/average 

Ref64 on AIOLC bios.
Quote:


> Originally Posted by *TrixX*
> 
> TBH I think it's a combination of the BIOS being unoptimised and drivers really not being where they should be by now. Hoping 17.10.1 actually uses the card more.
> 
> I seem to have a bit of a downclocking issue in games at the moment, as it can drop to P3 or P4 quite happily in PUBG, which causes massive delays when encountering players as it drops frames while it tries to ramp back to 100% usage again. Almost like Radeon Chill is active when it's disabled...


I would like to see what Vega does with RPM, like FM Serra.







On that topic I expect a "proper" driver before Far Cry 5 and Wolf 2... since they held my card hostage unless I bought Wolf 2...

Anyone know when that is going to be available to the regular John Q. Public?
Quote:


> Originally Posted by *Ark-07*
> 
> I forgot to ask, and I'm glad I remembered before playing with overclocking. My power supply is a new gold Corsair TX850M. At first I had one cable with dual 6+2 pins. And eventually, a few days later, I had boot-up issues. So today I'm using two separate 6+2 cables for my Vega 64... No issues so far.
> 
> My concern is whether a 6+2-pin cable does the same job as an 8-pin cable? The power supply didn't come with any 8-pin connectors, I think... I do have a 6-to-8-pin extension cable that came with the GPU; not using it though. That said, anyone else having "default Radeon WattMan settings restored" on boot-up once in a while? Had this issue with my old power supply and R9 Fury as well. Always had to redo my WattMan fan settings each time.


Wire gauge, or the equivalent stranded wire bundle gauge, is important here. This is due to the wires offering resistance to the current and heating up. Some impedance as well, but that is minor with a typical PSU's ripple... Additionally, the connectors are sometimes a source for concern, as they can get hot (mixed materials in construction, oddball geometry). With extension cables you're going to have an increase in resistance, especially across the physical junction between the cables. If ya' got 2 cables, use 2. As far as the 6+2 goes, usually the gauge is good 'nuff. Now if it's 1 cable with some janky paralleled 6+2, I would look at offloading that for a better solution. I suspect there is a rail on that 6-pin (12 V) which is going to get taxed pretty heavily. Need to look at that PSU's data sheet for that rail's power delivery capabilities.
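To put rough numbers on that I²R heating, here's a minimal sketch. All figures are my own assumptions, not from the post: ~18 AWG copper at ~0.021 Ω/m, a 0.6 m cable, and 150 W drawn through the three 12 V conductors of one 8-pin connector (each with a ground return wire).

```python
# Rough numbers for I^2*R heating in a PCIe power cable.
# Assumed (not measured): ~18 AWG copper at ~0.021 ohm/m,
# 0.6 m cable, three 12 V conductors sharing the load.
def cable_loss_watts(load_w, volts=12.0, conductors=3,
                     ohms_per_m=0.021, length_m=0.6):
    amps_per_wire = (load_w / volts) / conductors
    # Round trip: 12 V supply wire plus its ground return.
    r_round_trip = ohms_per_m * length_m * 2
    return amps_per_wire ** 2 * r_round_trip * conductors

print(round(cable_loss_watts(150), 2))   # -> 1.31 (watts dissipated as heat)
```

Only a watt or so spread along the cable, which is why a healthy cable stays cool; but note the loss scales with the square of the current, so paralleling everything through one overloaded 6+2 run, or adding junction resistance with extensions, heats things up much faster than the wattage alone suggests.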

If you ever see any carbon on the metal or surrounding plastic, dump that cable... it's overloaded and melting the cheap'o plastic.

WattMan crappin' out is just WattMan being WattMan.








Quote:


> Originally Posted by *JasonMZW20*
> 
> Interesting. I guess AMD will just need to make Vega's dynamic clocks smarter in future (for Vsync). Even with the downclocks, it's smoother than my old R9 280 that ran 1175/1575MHz clocks, so I can deal with it. I can't stand screen tearing, so just waiting on some good 4K Freesync 2 monitors.


That extended sync... I think that's about all I would expect; better for them to push their own technologies to give some separation in the market space. @ 3440 FreeSync is a must; at 4K I can't see even bothering without some sort of frame buffering... FS/GS. I thought 9.2 turned it back on? Extended...?
Quote:


> Originally Posted by *pengs*
> 
> Odd, wattman is telling me the VID is 950mV on the HBM, Samsung btw.
> I've clocked 1100/950mV for quite a few Superposition runs and had a crash right at the end. Otherwise 1050 runs w/o any trouble at all. A few extra mV's may do it for 1100
> 
> Has anyone figured out what the memory strappings are compared to Fiji?
> No doubt one starts at 800 and 945.


I would try going in the other direction, TBH, as you seem to have indicated.

[email protected] is perfectly fine for me... I have benched it in SP4K around the 800 mark just fine... optimal was just below 850... lower and the scores started to bleed out... for gaming closer to 900 was good... there is a relationship between this setting and core frequency... so it'll take some test n' tune to find your sweet spot for your use cases. Timings seem to loosen up in the lower to mid 40s C, so there are trade-offs. As best as I can figure, the HBM voltage is adjusting some sort of TTL-L threshold.
Quote:


> Originally Posted by *GroupB*
> 
> Today I did some runs of Firestrike to tune my watt/perf and I had something weird show up...
> 
> First I started at my usual gaming MHz, 1662/1100 @ 1075 mV, and got 24,718 graphics; then I kept increasing till it crashed. It crashed at [email protected], so I increased my voltage to 1200 mV to see where I could push it; then I wanted to tune down the voltage to find out what MHz requires what voltage. The thing is, after I increased my voltage the score went WAY up, about 25,723 for [email protected], while [email protected] gave me 24,938. So 10 MHz gives me that kind of bump? I don't think so!
> 
> So I decided to redo 1682 at 1075 and at 1200, and the results are kind of weird:
> 
> [email protected] = 24,938
> [email protected] = 25,557
> 
> So a voltage increase boosts my score... I don't understand why. I'm not power throttling at all; I set my power to 142%, and each 10 MHz test bump saw a 10 W increase in the combined test, looking normal.
> 
> So why the hell does a bump in voltage give an fps boost? Same clock, same stability; the clock stays at my target every time, no throttle there (using the 8.1 driver without the overshoot target bug). Power limit was out of the equation, same as temp, since I have a waterblock.
> 
> For me a voltage bump is only to stabilize the GPU if it can't manage the clock... what is happening with Vega?


I've been speculating for a little while now that Vega is utilizing a feedforward control loop (or at least a scheme which is 'model-able' by one). Voltage and power seem to be positional terms, with frequency being a velocity term; to reduce system lag, outputs are predicted rather than explicitly fed back. Such that if you have available power and voltage, then a ramp can be set and fed forward to the clock multiplier term, hence why in some cases one can get an "overboost". As such, freqs set by the user are a bias, not explicit; I suspect it's referenced proportionally to a set of lookup tables... could be wrong, but systems like this have had a lot of published post-grad work done over the past 10 years or so. Also, I think in 9.2 they put a hook in the drivers to try to catch such events and dump the driver rather than crash the card outright.

Just speculation... as I don't have any way to test the idea with anything remotely approaching certainty or predictability. Black box guess work.
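For what it's worth, that speculation can be sketched as a toy model. Every number here is invented for illustration (the gain, the voltage-to-frequency ceiling); this is emphatically not AMD's actual control scheme, just what a "predict from headroom instead of waiting for feedback" loop could look like:

```python
# Toy feedforward clock model (pure speculation, numbers invented):
# the controller predicts the next clock from power headroom
# instead of measured feedback, so with headroom it can
# "overboost" past the user's P7 bias; voltage caps the result.
def next_clock(user_p7_mhz, millivolts, power_w, power_limit_w, gain=2.0):
    headroom_w = power_limit_w - power_w
    # Feedforward term: ramp in proportion to predicted headroom.
    predicted = user_p7_mhz + gain * headroom_w
    # Crude stand-in for a voltage/frequency lookup table.
    v_ceiling_mhz = 1000.0 + (millivolts - 800) * 2.5
    return min(predicted, v_ceiling_mhz)

print(next_clock(1630, 1100, 250, 290))   # 40 W headroom -> overboost past 1630
print(next_clock(1630, 950, 200, 290))    # undervolted -> capped by the V ceiling
```

It would also explain why GroupB's extra voltage raised his score at the same reported clock: a higher voltage ceiling lets the predictor sit at its target instead of shaving it, without anything ever registering as a "throttle".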


----------



## Ark-07

Quote:


> Originally Posted by *Soggysilicon*
> 
> Ahh, hey! You're the guy! On the AIO BIOS, that's a big nega'teeevo'. You're the first and only one I have heard of that can just +50 without an issue or, at minimum, some selective tuning.
> Sig Rig.
> rad 1) 397 x 124 x 60mm push/pull 1300 -2300 rpm / 1200 rpm
> rad 2) 278 x 124 x 30mm push - decoupled pull 2x 200mm fans 1200 rpm / 600 rpm
> ~.75 US gal. fluid, DDC 3.2 w/ XSPC Res-Top 18w, 1/2" ID
> 
> Typical low/high/average
> 
> Ref64 on AIOLC bios.
> I would like to see what Vega does with RPM, like FM Serra.
> 
> 
> 
> 
> 
> 
> 
> On that topic I expect a "proper" driver before Far Cry 5 and Wolf 2... since they held my card hostage unless I bought Wolf 2...
> 
> Anyone know when that is going to be available to the regular John Q. Public?
> Wire gauge, or the equivalent stranded wire bundle gauge, is important here. This is due to the wires offering resistance to the current and heating up. Some impedance as well, but that is minor with a typical PSU's ripple... Additionally, the connectors are sometimes a source for concern, as they can get hot (mixed materials in construction, oddball geometry). With extension cables you're going to have an increase in resistance, especially across the physical junction between the cables. If ya' got 2 cables, use 2. As far as the 6+2 goes, usually the gauge is good 'nuff. Now if it's 1 cable with some janky paralleled 6+2, I would look at offloading that for a better solution. I suspect there is a rail on that 6-pin (12 V) which is going to get taxed pretty heavily. Need to look at that PSU's data sheet for that rail's power delivery capabilities.
> 
> If you ever see any carbon on the metal or surrounding plastic, dump that cable... it's overloaded and melting the cheap'o plastic.
> 
> Wattman crappin' out is just wattman being wattman.
> 
> 
> 
> 
> 
> 
> 
> 
> That extended sync... I think that's about all I would expect; better for them to push their own technologies to give some separation in the market space. @ 3440 FreeSync is a must; at 4K I can't see even bothering without some sort of frame buffering... FS/GS. I thought 9.2 turned it back on? Extended...?
> I would try going in the other direction, TBH, as you seem to have indicated.
> 
> [email protected] is perfectly fine for me... I have benched it in SP4K around the 800 mark just fine... optimal was just below 850... lower and the scores started to bleed out... for gaming closer to 900 was good... there is a relationship between this setting and core frequency... so it'll take some test n' tune to find your sweet spot for your use cases. Timings seem to loosen up in the lower to mid 40s C, so there are trade-offs. As best as I can figure, the HBM voltage is adjusting some sort of TTL-L threshold.
> I've been speculating for a little while now that Vega is utilizing a feedforward control loop (or at least a scheme which is 'model-able' by one). Voltage and power seem to be positional terms, with frequency being a velocity term; to reduce system lag, outputs are predicted rather than explicitly fed back. Such that if you have available power and voltage, then a ramp can be set and fed forward to the clock multiplier term, hence why in some cases one can get an "overboost". As such, freqs set by the user are a bias, not explicit; I suspect it's referenced proportionally to a set of lookup tables... could be wrong, but systems like this have had a lot of published post-grad work done over the past 10 years or so. Also, I think in 9.2 they put a hook in the drivers to try to catch such events and dump the driver rather than crash the card outright.
> 
> Just speculation... as I don't have any way to test the idea with anything remotely approaching certainty or predictability. Black box guess work.


Thanks for the input if you could check out my latest post that would be awesome.


----------



## Soggysilicon

Quote:


> Originally Posted by *Ark-07*
> 
> I'm confused, however: after changing my power settings to lower P6/P7 @ 1010/1050 with my custom fan settings, I'm getting a max of 55 degrees under load. And my gameplay feels far more fluid and better; that's normal, right?
> 
> I was thinking more power means more performance? Nothing weird with my settings either? Image below. I'm guessing the GPU running in a much cooler state means better performance? Also, someone said to me that undervolting my GPU can damage it in the long run? I should mention that I didn't change the power setting under the fan section, only the P6/P7 part, even though I was told to add 50% power in the fan section. Or are they different from each other?
> 
> https://i.imgur.com/uX4hpiH.png
> 
> My Firestrike score was lower though, 18k to 17k.


Quote:


> Originally Posted by *Ark-07*
> 
> Thanks for the input if you could check out my latest post that would be awesome.


Sure.

OK, so if I am reading this right, you're using the stock freq. with undervolt (with respect to stock) settings? Additionally, do you have the power limit set to +50? (The picture has you at the stock power limit.)

Just to clear up some definitions: power is a rate, voltage is akin to pressure, with current being closer to flow. For DC, strictly, P = V x A. Soooo.... by decreasing the bias for how much pressure will be applied with a certain amount of power available, current can rise considerably: since P / V = A, as V -> a very small number > 0, A -> P (holding P constant).

So it stands to reason that, as you have said, power = performance. In this sense, more power allows more current and voltage to be present within the circuit at the same time (we can play with this somewhat utilizing phasing).

In your scenario you have reduced the voltage at the stock power target, so hypothetically you have more current available. There is a nuance here: you haven't actually changed the "power" going to the card if you left the power target at +/- 0.

If you think in terms of a battery in a very simple circuit like a flashlight, the battery's voltage squared divided by its internal resistance bounds the power that battery can deliver. So the ability, or rate, of work that can be done is implicit to the battery itself. A smaller battery would usually mean less output from the bulb, given equivalent internal resistance.
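To put numbers on the P = V x A point, here's a two-line sketch; the wattage and voltages are illustrative round figures, not real Vega telemetry.

```python
# Same power budget, lower voltage -> higher current (P = V * A).
# 220 W / 1.20 V / 1.05 V are illustrative, not measured values.
def amps(power_w, volts):
    return power_w / volts

stock_a = amps(220, 1.20)       # at a stock-ish 1.20 V
undervolt_a = amps(220, 1.05)   # at a typical undervolt
print(round(stock_a), round(undervolt_a))   # -> 183 210
```

Same 220 W budget, but the undervolted card moves roughly 27 more amps; that's the "more current available" trade mentioned above, and it lands on the VRMs.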

Now in a transistor scenario, where we have external sources, we are using voltages in a couple of different places and for a variety of reasons and applications.

I suspect your gameplay subjectively feels better because there is far less throttling in your frequencies as the voltage swings to bias the current, and the freq. tables are tighter. A couple of factors can come into play here. Are you using a frame-buffering technology like FreeSync or Vsync?

If not, with increased frame rate, up to the point where "half buffers" are being output to the display, you get screen tearing, which generally is "sh_t" when it comes to feel.

This happens because there are no arbitrary wait states in the output from the card to the monitor; it's just a stream of whatever, and as such we can get some phasing issues where the card and monitor are working, but not working together.

Another potential explanation is that at the reduced frequency, and at your current HBM frequency settings (tying back to freq. throttling), you're not having the HBM throttle as much, which would otherwise cause penalties in the fabric.

I would suggest you set everything to stock and first raise your HBM freq. at stock volts till it flat-out crashes when you run something... Heaven works well for this. Once you get the memory clock set, I think you will find that you can benefit from a little more aggression in your power settings, by upping the power limit at stock volts. You can also work this the other way round as well. Upping the power target will allow the card to take greater liberties when it "boosts" the frequency, and boosting more consistently will deliver more consistent frames at the faster rate (the penalty, of course, is heat).
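That "raise it till it crashes" routine is really just a binary search, which saves a lot of reboots. A sketch, where `run_stability_test` is a hypothetical stand-in for a manual Heaven/Superposition run that returns True if the card survived:

```python
# Binary search for the highest stable clock, to taste.
# run_stability_test(mhz) is a hypothetical stand-in for a
# manual Heaven/Superposition run: True means it survived.
def find_max_stable(lo_mhz, hi_mhz, run_stability_test, step=5):
    # Invariant: lo_mhz is known stable, hi_mhz is assumed unstable.
    while hi_mhz - lo_mhz > step:
        mid = (lo_mhz + hi_mhz) // 2
        if run_stability_test(mid):
            lo_mhz = mid      # stable: push higher
        else:
            hi_mhz = mid      # crashed: back off
    return lo_mhz

# Example with a fake card that falls over above 1100 MHz:
print(find_max_stable(945, 1200, lambda mhz: mhz <= 1100))   # -> 1100
```

In practice you'd then knock a safety margin off the result, since a bench run passing once is weaker evidence than hours of gaming.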

As far as long term damage...

Heat is the mortal enemy of silicon; control the heat and you're going to be fine (no warranty expressed or implied, your mileage may vary). The card, by design, will shut itself off before you get into trouble, so I wouldn't worry about an immediate catastrophe.

As far as "damage" goes, electromigration and "quantum tunneling" are always possibilities... with billions/trillions of transistors, the odds that a couple are crapped out are always high. The effects of this are all but transparent to the end user. These effects will result in latched states on the transistors... it gets complicated quickly how this is handled... but let's say the card is built for a 10+ year service life, using accelerated life testing and assuming a certain number of internal failures building up... you may knock 2 years off those estimates, worst case? And that's not a "broken" card, just one that doesn't quite hit the scores it used to? (Driver improvements over time again would make this transparent.)

The reality is... you'll be onto another card by then... tinkering with the card from "our" standpoint usually means working around the heat to get back performance which is already there, but which for "general use" is too difficult / time-consuming / silicon-lottery to test for and still realize a decent margin.

The average person isn't going to run that blower on the ref card at 3000+ rpm all the time; the noise is unacceptable. But if you can cope with it... sure, crank up the juice.

Consumer stuff is the best possible performance for the least amount of input, to maximize the margins on production and ensure the least amount of defects for the consumer.









It also treats yields as being within a certain spec, so maybe you got a unicorn HBM and core? AMD doesn't care; it gets set to the same settings as the potato card sitting next to it.


----------



## asdkj1740

http://yujihw.com/review/asus-rog-strix-rx-vega-64-oc-8gb-retest
Undervolting really helps in lowering temps, while 1.2 V is too much even for a well-designed air cooler from an AIB.
An AIO is what you should go for.

BTW, changing the testing platform from a Ryzen 1700 to an Intel 5820K, as well as the driver from 17.9.1 to 17.9.2, resulted in freakishly higher graphics scores in Fire Strike Ultra and Time Spy.

ryzen at 17.9.1


intel at 17.9.2


----------



## TrixX

Futuremark stuff always seems to be slightly better optimised for Intel, so it's no surprise you'll get better scores with a good Intel chip. I doubt there's optimisation for Ryzen in Time Spy/Fire Strike yet.


----------



## cez4r

Hi!

A slightly off-topic question:

I've just bought Vega 56 (in Germany) and received the coupon code for the Game Pack to be redeemed here:
https://www.amdrewards.com/amdrewards/

Because I want to sell the pack (Prey + Sniper Elite 4), here is my question:

Do I need to register first at the above site and redeem the pack, and then try to sell it?
Or should I try to sell the code I have in my email without any registering and redeeming?

Forgive me, I'm a newbie on this subject...









TIA


----------



## Ark-07

Quote:


> Originally Posted by *Soggysilicon*
> 
> Sure.
> 
> Ok so if I am reading this right, your using the stock freq. with undervolt (w/ respect to stock) settings? Additionally you have the power limit set to +50? (picture has you at stock power limit).
> 
> Just to clear up some definitions, power is a rate, voltage is akin to pressure, with current being closer to flow. Strictly DC power is P = VxA. Soooo.... by decreasing a bias for how much pressure will be applied with a certain amount of power available, current can rise considerably, as P / V = A, as V -> a very small number > 0, A -> P. We are holding P constant.
> 
> So it stands to reason that, as you have said, power = performance. In this sense, more power allows for more current and voltage to be present within the circuit at the same time (we can play with this some utilizing phasing).
> 
> In your scenario you have reduced the voltage, at the stock power target, so hypothetically you have more current available. There is a nuance here. You haven't actually changed the "power" going to the card if you left the power target = +/- 0.
> 
> If you think in terms of a battery in a very simple circuit like a flash light, the voltage x the IR of the battery ^2 = power potential of that battery. So the ability or rate of work that can be done is implicit to the battery itself. A smaller battery would usually mean less output from the bulb given equivalent internal resistance.
> 
> Now in a transistor scenario, where we have external sources, we are using voltages in a couple of different places and for a variety of reasons and applications.
> 
> I suspect your game play subjectively feels better because there is far less throttling in your frequencies as the voltage swings to bias the current / and freq. tables are tighter. A couple factors can come into play here. Are you using a frame buffering technology like freesync or vsync?
> 
> If not, increased frame-rate, up to, such a time that "half buffers" are being output the display you get screen tearing which general is "sh_t" when it comes to feel.
> 
> This happens because there is no arbitrary wait states in the output from the card to the monitor, its just a stream of whatever, and as such, we can get some phasing issues, where the card and monitor are working but not working together.
> 
> Another potential explanation is that at the reduced frequency and at your current HBM frequency settings (tying back to freq. throttling) your not having the HBM throttle as much, causing case penalties in the fabric.
> 
> I would suggest you set to stock, and first up your HBM freq. at stock volts till it flat out crashes when you run something... heaven works good for this. Once you get the memory clock set I think you will find that you can benefit from a little more aggression in your power settings, by upping the power limit at the stock volts. You can also work this the other way round' as well. Upping the power target will allow for the card to take greater liberties when it "boost" the frequency, boosting more consistently will deliver more consistent frames at the faster rate. (the penalty of course is heat).
> 
> As far as long term damage...
> 
> Heat is the mortal enemy of silicon, control the heat and your going to be fine (no warranty expressed or implied, your mileage may vary). The card, by design, will shut it self off before you get into trouble, so I wouldn't worry an immediate catastrophe.
> 
> As far as "damage", electron migration and "quantum tunneling" are always possibilities... with billions / trillions of transistors the odds that a couple are crapped out is always high. The effects of this are all but transparent to the end user. These effects will result in latched states on the transistor... gets complicated quick how this is handled... but lets say... the card is built with a 10+ year service life using accelerated life testing and a certain number of internal failures building up... you may knock off 2 years from those estimates, worse case? And that's not a "broken" card, just one that doesn't quite hit the scores it use too? (driver improvements over time again would make this transparent).
> 
> The reality is... you'll be onto another card by then... tinkering with the card from "our" standpoint usually means working around the heat to get back performance which is already there, but which for "general use" is too difficult / time-consuming / silicon-lottery-dependent to test for and still realize a decent margin.
> 
> The average person isn't going to run that blower on the ref card at 3000+ rpm all the time; the noise is unacceptable. But if you can cope with it... sure, crank up the juice.
> 
> Consumer stuff is the best possible performance for the least amount of input, to maximize the margins on production and ensure the least amount of defects reach the consumer.
> It also considers yields as being within a certain spec, so maybe you got the unicorn HBM and core? AMD doesn't care; it gets set to the same settings as the potato card sitting next to it.
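To put the service-life hand-waving in the quote above into rough numbers: accelerated life testing typically leans on the Arrhenius model, where wear-out mechanisms speed up exponentially with junction temperature. A minimal sketch, assuming a generic 0.7 eV activation energy (an illustrative figure for silicon wear-out, not an AMD spec):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor: how much faster wear-out mechanisms
    proceed at t_stress_c than at t_use_c (temperatures in Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

# Running the die at 95c instead of 85c nearly doubles the wear rate
# under these assumptions (factor ~1.85):
print(round(acceleration_factor(85, 95), 2))
```

In other words, a card engineered for a 10+ year life at one temperature gives up a proportional chunk of that margin for every sustained 10c you add, which is the intuition behind "control the heat and you're going to be fine".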


Well, talk about going the extra mile. I finally feel comfortable playing with my power settings, which I never did before... Thank you


----------



## madmanmarz

Does anyone else's card not ever go into boost? I flashed right away to aio bios, card is on custom loop with low temps. I'm just running my p6/p7 states the same, setting p7 higher doesn't do anything. Bios related maybe?


----------



## kundica

I hadn't paid any attention to the hot spot temps until last night, after reading several posts here. While my core temp maxes at 40 while gaming, it appears the hot spot peaks over 30 degrees higher, hitting 73. I haven't had any stability issues running my card; in fact, it performs better than the LC 64 I had, but the OCD in me is going nuts now. I really don't want to drain the loop I just built, however I'm pretty sure I did a meh job applying the TIM. I used the included TIM with the EK block. Perhaps I can do a partial drain.

Some benches of my card on 17.9.2, both at +50 power limit, P7 at 1702/1200, HBM 1100.



Firestrike - 18892, GS 25588
https://www.3dmark.com/3dm/22385654


----------



## L36

Anyone else had issues with system hard reboots when setting the Wattman profile to anything over Balanced? My system would crash 20-30 minutes into a gaming session if set to Turbo. The power supply is a Corsair AX860i, so I don't think it's to blame.


----------



## pmc25

Looks like 17.9.3 doesn't fix anything really.

Seriously hope 17.10.1 is a major improvement, and that they start acknowledging more of the major problems in known issues if they don't fix them: Wattman shenanigans, game profiles doing diddly squat, Wattman / driver crashes resulting in permanently gimped performance until reboot, massive boost overshoot and resulting crashes, etc.

Given the timing of Wolfenstein2 and EW2, one has to imagine it will enable RPM and a few other things, even if they're 'alpha'.


----------



## Nuke33

Quote:


> Originally Posted by *cez4r*
> 
> Hi!
> 
> A little bit off-topic question:
> 
> I've just bought Vega 56 (in Germany) and received the coupon code for the Game Pack to be redeemed here:
> https://www.amdrewards.com/amdrewards/
> 
> Because I want to sell the Pack (Prey + Sniper Elite 4), now this is my question:
> 
> Do I need to register first at the above site and redeem the Pack? And then to try sell it?
> Or to try sell the code I have in my email w/o any registering and redeeming?
> 
> Forgive me this, I'm a newbie in this subject ...
> 
> 
> TIA


Redeem the codes and sell the Steam keys afterwards. AMD's codes have an expiration date; Steam keys do not.


----------



## punchmonster

Your hotspot really shouldn't be that high. This is under full stress on both memory and core @ 1080mV after 15 minutes with the Morpheus 2


Quote:


> Originally Posted by *Medusa666*
> 
> What is normal temp for the hotspot?
> 
> I put a Morpheus 2 on my card and both the HBM and core has gone down significantly, like 20-25c improvement.
> 
> But the hotspot is still 80-100c under load.
> 
> Any ideas?


----------



## cez4r

Quote:


> Originally Posted by *Nuke33*
> 
> Redeem the codes and sell the steam keys afterwards. AMDs code has an expiration date, steam keys do not.


Big thx!









+Rep


----------



## biscuittea

Quote:


> Originally Posted by *punchmonster*
> 
> Your hotspot really shouldn't be that high. This is under full stress on both memory and core @ 1080mV after 15 minutes with the Morpheus 2


Damn, your core and hotspot temps are so much lower than mine. I get 60c on HBM and core with 90+ on the hotspot.

What thermal paste did you use?


----------



## Medusa666

Quote:


> Originally Posted by *punchmonster*
> 
> Your hotspot really shouldn't be that high. This is under full stress on both memory and core @ 1080mV after 15 minutes with the Morpheus 2


Thanks for your reply. Do you have any idea as to why this is? Maybe airflow? My card is in an NCase M1; it is small, so airflow is restricted.

I just don't feel like re-seating the cooler again : (


----------



## kundica

Quote:


> Originally Posted by *Medusa666*
> 
> I just don't feel like re seating the cooler again : (


It could be worse. My hotspot is high and I'm running a full loop.


----------



## OMgoo

Quote:


> Originally Posted by *punchmonster*
> 
> Your hotspot really shouldn't be that high. This is under full stress on both memory and core @ 1080mV after 15 minutes with the Morpheus 2


did you use the original or the Morpheus x-brace?


----------



## Medusa666

Quote:


> Originally Posted by *kundica*
> 
> It could be worse. My hotspot is high and I'm running a full loop.


Then this is truly a mystery, same with the guy who has re-seated his Morpheus 2 cooler 4 times. I just can't believe it; it has to be something else.

I have only done 1 placement, but I would like to believe I did it right, as I paid attention to details.

Thing is: is it really a problem, or just OCD? Who knows








Quote:


> Originally Posted by *OMgoo*
> 
> did you use the original or the Morpheus x-brace?


I used the Morpheus X-brace.


----------



## kundica

Quote:


> Originally Posted by *Medusa666*
> 
> Then this is truly a mystery, same with the guy who have re-seated his Morpheus 2 cooler 4 times, I just can't believe it, has to have to do with something else.


Hmm... I just assumed I didn't do a good job with the TIM. I might hold off redoing it then, since my card seems to run fine despite the hotspot being much higher than the core and HBM.


----------



## Medusa666

Same here, it runs superb; the decrease in HBM and core temps has made a huge difference in performance in all the games I play.


----------



## punchmonster

I used the included x-brace. I had weird temp issues when doing the cross method. The second time I used a credit card to spread the TIM properly and had much better results than with the triple-cross method. As for thermal paste, I used the thermal grease that came with the Morpheus 2.
Quote:


> Originally Posted by *biscuittea*
> 
> Damn, your core and hotspot temps are so much lower than mine. I get 60c on HBM and core with 90+ on the hotstpot.
> 
> What thermal paste did you use?


Quote:


> Originally Posted by *Medusa666*
> 
> Thanks for your reply, do you have any idea as to why this is? Maybe airflow? My card is in a Ncase M1, it is small so airflow is restricted.
> 
> I just don't feel like re seating the cooler again : (


Quote:


> Originally Posted by *OMgoo*
> 
> did you use the original or the Morpheus x-brace?


Quote:


> Originally Posted by *Medusa666*
> 
> Then this is truly a mystery, same with the guy who have re-seated his Morpheus 2 cooler 4 times, I just can't believe it, has to have to do with something else.
> 
> I have only done 1 placement, but I would like to believe i did it right as I paid attention to details.
> 
> Thing is; Is it really a problem, or just OCD, who knows
> 
> 
> 
> 
> 
> 
> 
> 
> I used the Morpheus X-brace.


----------



## pengs

Quote:


> Originally Posted by *punchmonster*
> 
> I used the included x-brace. I had weird temp issues when doing the cross method. Second time I used a creditcard to spread the TIM properly and had much better results than with triple cross method. As for thermal paste used I used the thermal grease that came with the Morpheus 2.


Was it the tim spreading method or the x-brace which lowered your hotspot temp?
Is your core epoxied?


----------



## pmatio

Quote:


> Originally Posted by *L36*
> 
> Anyone else had issue with system hard reboots when setting wattman profile anything over balanced? My system would crash at 20-30 minutes into the gaming session if set to turbo. Power supply is a corsair AX 860i so I don't think its to blame.


I have the same PSU and had these crashes with the 17.9.1 driver. I installed a fresh Windows build with the 17.9.2 driver and it works fine now, no crashes anymore.


----------



## Medusa666

I have done a lot of testing in the last few hours, and I have noticed that my card crashes when the hotspot temp goes over 100c; around 105-110c there is always a hard freeze of the system. HBM and core are at 60-70c. This is heavily OC'd with a 50% power limit.

Whenever I decrease the PL and temps of hotspot decreases, the card stops freezing up.

My die is not epoxied.

Edit: It seems to me that HBM speed affects the hotspot temps the most. If I lower it, I get a lower hotspot.


----------



## punchmonster

Molded die. And the TIM spreading method.
Quote:


> Originally Posted by *pengs*
> 
> Was it the tim spreading method or the x-brace which lowered your hotspot temp?
> Is your core epoxied?


----------



## Medusa666

I got an un-molded die, and I used Thermal Grizzly Kryonaut which is a very thin paste.

Maybe I should re-apply the heatsink with a thicker paste and spread it, especially filling out the spaces between the HBM memory and the core?

I used the X method on all three modules, but I did not put any paste in the spaces between them the first time.

I'm seeing 50-60c on HBM and Core, but 90-100c on hot spot.


----------



## twan69666

Quote:


> Originally Posted by *Medusa666*
> 
> I have done alot of testing the last few hours, and I have noticed that my card crashes when hotspot temp goes over 100c, around 105-110c there is always a hard freeze of the system, HBM and core is at 60-70c, this is heavily OC with 50% powerlimit.
> 
> Whenever I decrease the PL and temps of hotspot decreases, the card stops freezing up.
> 
> My die is not epoxied.
> 
> Edit; It seems to me that HBM speed affects the hotspot temps the most. if I lower it, I get lower hotspot.


I think the M1 has a lot to do with it. I also have an NCase M1 as well as a full loop with a cpu block and the EK block for my Vega 64. I just have one 240 rad cooling it all.

After about an hour or 2 of stressing, I end up at a max core of 60 but a hotspot of 86. The NCase is just too small.


----------



## Soggysilicon

Quote:


> Originally Posted by *Ark-07*
> 
> Well talk about going the extra mile, I finally feel comfortable playing with my power settings never did before... Thank you


Hey, no problem... while I was at work today I chuckled, thinking I may have written a little nonsense in the battery analogy... going to write that off as it being late at night.







As long as you got something out of it, good times. Let me or others know if you have a question about something, and share your experiences with your Vega!


----------



## springs113

Your Vegas all seem to be running real hot. My card's core starts around 37c, runs around 45c, and the hot spot is around the same but never really passes 55c.


----------



## prom

Quote:


> Originally Posted by *Medusa666*
> 
> Thanks for your reply, do you have any idea as to why this is? Maybe airflow? My card is in a Ncase M1, it is small so airflow is restricted.
> 
> I just don't feel like re seating the cooler again : (


Wait, Vega + that cooler fits in an NCase?
STOP GIVING ME IDEAS FOR MY RIG!


----------



## Medusa666

Quote:


> Originally Posted by *prom*
> 
> Wait, Vega + that cooler fits in an NCase?
> STOP GIVING ME IDEAS FOR MY RIG!


Yeah it does fit, I was unsure but it did







It is very close though; you have to use two slim fans. I got two Noctua NH-F12x15s that I attached to the bottom of the chassis, as I couldn't fit them on the cooler.


----------



## Medusa666

Quote:


> Originally Posted by *twan69666*
> 
> I think the M1 has a lot to do with it. I also have an NCase M1 as well as a full loop with a cpu block and the EK block for my Vega 64. I just have one 240 rad cooling it all.
> 
> After about an hour or 2 of stressing I end up at a max cor of 60, but a hotspot of 86. The NCase is just too small


I don't know if the case is to blame.

This phenomenon of high hot spot temps is also evident in setups with full-size cases, and for me it occurs after only a few minutes of stress testing; it climbs rapidly and then stays around 95-105c.

I'm going to try and reseat the cooler once, just for the sake of it, to rule out the possibility of a ****ty setting.


----------



## pmc25

Quote:


> Originally Posted by *Medusa666*
> 
> Yeah it does fit, I was unsure but it did
> 
> 
> 
> 
> 
> 
> 
> It is very close though, you have to use two slim fans, I got two Noctua NH-F12x15 that I attached to the bottom of the chassi as I couldn't fit them on the cooler.


You just identified your own problem. You NEED fans mounted on the heatsink for sufficient static pressure.


----------



## Medusa666

Quote:


> Originally Posted by *pmc25*
> 
> You just identified your own problem. You NEED fans mounted on the heatsink for sufficient static pressure.


I hear you, and I considered this a lot before mounting it.

Thing is, it was very close as it is, and the cooler is pressed against the fans' frames without being attached; that is how tight the tolerances are in the NCase M1 with this setup.

I also made it so that the fans' airflow is reversed, blowing air OUT of the case, i.e. sucking hot air from the cooler and pushing it out. I did this because users with the NCase and an Accelero cooler had gotten better temps with that than with traditional airflow, where air is blown into the cooler and over the card.

Either way, if the hot spot is a die temp and it has to do with airflow and mounting of fans, I should also have abysmal HBM and core temps, but I don't; they usually stick around 50-65c after 1-2 hours of gaming, while the hot spot rises to 95-100c.


----------



## pmc25

Quote:


> Originally Posted by *Medusa666*
> 
> I hear you, and I considered this alot before mounting it.
> 
> Thing is, it was very close as it is, and the cooler is pressed against the fans frames without being attached, that is how low tolerances are in the Ncase M1 with this setup.
> 
> I also made it so that the fans airflow is reversed, blowing air OUT of the case, i.e sucking hot air from the cooler and pushing it out. I did this because users with Ncase and Accelero cooler had gotten better temps with that, than with traditional airflow where air is blowing into the cooler and over the card.
> 
> Either way, if hot spot is a die-temp and it has to do with airflow and mounting of fans, I should also have abysmal HBM and core temps, but I don't, they usually stick around 50-65c after 1-2 hours of gaming, while the hot spot rises to 95-100c.


I got those temps on the stock cooler, at high fan speeds, so you do have bad temps.

Do what you want, but you can't expect the Morpheus (or your card) to work properly without proper cooling.


----------



## biscuittea

Quote:


> Originally Posted by *pmc25*
> 
> I got those temps on the stock cooler, at high fan speeds, so you do have bad temps.
> 
> Do what you want, but you can't expect morpheus to work properly (or your card) without proper cooling.


I get the same temps as him and I have a set of ML120s properly attached to the Morpheus.

While I do agree that installing the fans straight to the cooler is obviously the most optimal method, there's definitely a lot of variance when it comes to hotspot temps which I doubt is being caused by his configuration.

Some users get a 10c delta, whereas Medusa666 and I both see 20-30c deltas between the core/HBM and hotspot.


----------



## pmc25

Quote:


> Originally Posted by *biscuittea*
> 
> I get the same temps as him and I have a set of ML120s properly attached to the Morpheus.
> 
> While I do agree that installing the fans straight to the cooler is obviously the most optimal method, there's definitely a lot of variance when it comes to hotspot temps which I doubt is being caused by his configuration.
> 
> Some users get a 10c delta, whereas Medusa666 and I both see 20-30c deltas between the core/HBM and hotspot.


I don't know about hotspot, but he would certainly see large falls in core and HBM temps if he attached a couple of fans with good static pressure. Particularly GTs.


----------



## Medusa666

Quote:


> Originally Posted by *biscuittea*
> 
> I get the same temps as him and I have a set of ML120s properly attached to the Morpheus.
> 
> While I do agree that installing the fans straight to the cooler is obviously the most optimal method, there's definitely a lot of variance when it comes to hotspot temps which I doubt is being caused by his configuration.
> 
> Some users get a 10c delta, whereas Medusa666 and I both see 20-30c deltas between the core/HBM and hotspot.


I did re-apply the cooler.

First I cleaned the old TIM off. It didn't look too good, as it hadn't spread properly in between the HBM and core; I will provide pictures here:

Here is after removing the cooler



I spread it out with a credit card afterwards, and this is how I ended up:



I first put a dot between all three dies and pushed it out into the spaces between them, then I continued to add more paste until I could not make out visually where the HBM or core was. I did not add anything to the cooler's surface. I made sure it covered almost the entire GPU die.

The results are surprising and encouraging; I have about 10-15c less on the hot spot temperature after doing the exact same stress testing as before. Now it maxed out at 87c, core at 55c, and HBM at 61c. Previous temps had been 101c at the hotspot, core at 60c, and HBM at 65-70c.

So it paid off, for me at least, to spread out the paste evenly, and I used quite a lot compared to what I usually do when I install CPUs.

I did try a different paste too; this time it was Arctic MX-4 instead of Thermal Grizzly Kryonaut. The MX-4 has a bit more viscosity, and I figured that the spaces between the HBM and core would be better suited to a thicker paste.

Anyhow, happy times : )
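For the curious, the core-to-hotspot deltas before and after the repaste are easy to tally (figures copied straight from this post):

```python
# Temperatures reported before and after spreading the paste with a card
# (all in Celsius, taken from the post above).
before = {"hotspot": 101, "core": 60}
after = {"hotspot": 87, "core": 55}

delta_before = before["hotspot"] - before["core"]  # 41c gap
delta_after = after["hotspot"] - after["core"]     # 32c gap
print(delta_before, delta_after)  # prints: 41 32
```

So the repaste didn't just drop the absolute hotspot reading; it closed the core-to-hotspot gap by roughly 9c, which points at contact quality rather than airflow.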


----------



## biscuittea

Interesting... I made a post a few days ago with a pic of what the die looks like after I take the cooler off:


This is using the line method with a tiny blob over each HBM module. I saw basically the same thing when I did the cross method as well - the one that EK recommends for people using custom loops.

I'll need to give the spreading method a go, but I might've used up most of the paste since I reseated the cooler so many times. Either way, I'll give an update when I get around to doing it.


----------



## Medusa666

I should also add that I have a non-molded die, and I believe that is why the hot spot is higher; this is just wild speculation though.


----------



## biscuittea

OK - I went and reseated my cooler again using the spread method.

First of all, I was greeted with this when I took the cooler off:


Think it's quite obvious what the 'hotspot' must be.

I did the spread method like this:


Gave the cooler a few twists before I screwed it in.

These are my results after a bit of Witcher 3:


For reference, before, when I used the X method, I would hover at 90c and would peak at 96c on the hotspot when playing TW3. The core and HBM temps are pretty much the same, but as long as I'm not hitting 90c on the hotspot, I'm happy.

Probably worth noting that this is a Vega 56 flashed to the 64 bios.

p.s. thanks Medusa666 for reporting back the results otherwise I would've just left my cooler as it is.


----------



## Papa Emeritus

I sent my card back due to having temp issues even with an EK block.







I got really high temps across the board even when running the GPU in a single 360mm loop. I removed and reapplied the block 3 times. My die was the unmolded one. I'm not sure if I should try a new one, wait for AIB cards, or simply get a 1080/Ti.


----------



## gupsterg

If you have FreeSync, or are used to variable refresh rate, you will miss it with a 1080/Ti. I am.

Shipped my Fury X a few days ago; I had kept to benching only on the 1080, which was sweet. Tried gaming and BAM! Missed FreeSync greatly. Even if FPS is better, variable refresh rate really makes a difference.

Currently scouting for a Vega deal; gonna sell the MSI GTX 1080 Sea Hawk EK X, as I don't wanna swap my MG279Q for a G-Sync screen.


----------



## gamervivek

Quote:


> Originally Posted by *biscuittea*
> 
> OK - I went and reseated my cooler again using the spread method.
> 
> ......
> 
> For reference, before when I used the X method I would hover 90c and would peak at 96c on the hotspot when playing TW3. The core and HBM temps are pretty much the same but as long as I'm not hitting 90c on the hotspot I'm happy.
> 
> Probably worth noting that this is a Vega 56 flashed to the 64 bios.
> 
> p.s. thanks Medusa666 for reporting back the results otherwise I would've just left my cooler as it is.


Quote:


> Originally Posted by *Medusa666*
> 
> It should also add that I have a non molded die, and I believe that is why the hot spot is higher, this is just wild speculation though.
> 
> ...


Have a molded die, same problem, similar-looking paste distribution. Will try the spreading method; it looks like there was too much tightening and pressure in the middle.


----------



## shadowxaero

Quote:


> Originally Posted by *gupsterg*
> 
> If you have FreeSync or used to variable refresh rate you will miss it with 1080/Ti. I am
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Shipped my Fury X a few days ago, I had kept to benching only on 1080, which was sweet. Tried gaming and BAM! missed FreeSync greatly
> 
> 
> 
> 
> 
> 
> 
> . Even if FPS is better variable refresh rate really makes a difference.
> 
> Currently scouting for a VEGA deal, gonna sell the MSI GTX 1080 Sea Hawk EK X, don't wanna swap my MG279Q for a G-Sync screen.


The fact that I own an Acer XF270HU is the only reason I bought Vega as an upgrade to my Fury, lol.

Not paying that 200-dollar premium for G-Sync. Though I do want to sell my Acer monitor to get the new FreeSync 2 Sammy HDR monitor.


----------



## GroupB

I feel lucky with my molded one hovering at 32 core, 50-ish hotspot, 40-ish HBM (won the chip lottery I guess), seeing all the problems you guys are having with temps. Of course, if you're on air you can't expect that, and if you're on water you need more than a 240 rad for it. Since I had to cool 2x 290s and a 1090T OC'd @ 4.4 with 1.56V, I went big back in the day with cooling: 3x140 + 2x140. It's kind of too much for my loop now (i7 6700K and Vega), but I don't regret it, seeing as from idle to 100% the Vega only jumps from 23 to 32. Of course, 23 or even 32 aren't precise, since that's probably outside the sensor's range, and the room temp is more around 26 all the time.

The EK block is really doing a good job on Vega; it's cold to the touch, the backplate too. The LE air cooler, by contrast, was really hot to the touch, and even my XSPC block on the 290 was kinda hot.


----------



## Papa Emeritus

Quote:


> Originally Posted by *gupsterg*
> 
> If you have FreeSync or used to variable refresh rate you will miss it with 1080/Ti. I am
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Shipped my Fury X a few days ago, I had kept to benching only on 1080, which was sweet. Tried gaming and BAM! missed FreeSync greatly
> 
> 
> 
> 
> 
> 
> 
> . Even if FPS is better variable refresh rate really makes a difference.
> 
> Currently scouting for a VEGA deal, gonna sell the MSI GTX 1080 Sea Hawk EK X, don't wanna swap my MG279Q for a G-Sync screen.


I have the MG279Q which has been great with a modified sync range


----------



## Paopawdecarabao

Can I still buy an RX Vega Black Pack, or any of the other packs they offer?


----------



## pengs

Quote:


> Originally Posted by *Medusa666*
> 
> I did re-apply the cooler.
> 
> First I cleaned the old TIM off, didn't look too good as it hadn't spread properly in between the HBM and core, I will provide pictures here;
> 
> Here is after removing the cooler
> 
> 
> 
> I spread it out with a credit card after, and this is how I ended up;
> 
> 
> 
> I first put a dot between all three dies, and pushed it out in the spaces between, then I continued to add more paste until I could not make out visually where the HBM or core was. I did not add anything to the coolers surface. I almost made sure it covered the entire GPU die.
> 
> The results are surprising and encouraging, I have about 10-15c less on the hot spot temperature after doing the exact same stress testing as before. Now it maxed out at 87c, core at 55c, and HBM at 61c. Previous temps had been 101c at hotspot and core at 60c, and HBM at 65-70c.
> 
> So it paid off for me at least to spread out the paste evenly, and I used quite alot compared to what I usually do when I install CPUs.
> 
> I did try a different paste too, this time it was the Arctic MX-4 instead of the Thermal Grizzly Kryonaut, the Kryonaut has a bit more viscosity and I figured that the spaces between the HBM and core would be more suited for a thicker paste.
> 
> Anyhow, happy times : )


Your culinary skills must be fantastic









It's possible that the hotspot is actually on the interposer, or directly between the core and HBM, and the wrong amount of thermal paste is thermally insulating it. In this situation the thermal paste is so thick that it's connecting the interposer and heatsink completely and sealing off the gap. The connection between interposer and heatsink may be dissipating the hotspot's heat, or it may just be the fact that there is absolutely no air trapped to superheat within the gap.

Have you tried the credit card method while keeping the gap completely cleaned?

The guy calling this a nothing burger may be right. I'm starting to think it's either a dummy sensor placed by AMD to get an air reading in the gap, or a sensor related to the HBM located on the empty interposer spot just at the left edge of the die in a PCB shot.
Also, early Vega samples were not epoxied IIRC, and this may have had something to do with finding the air temperature in the board at its hottest point. Would be interesting to find out exactly what it is and whether it should be worried about.


----------



## The EX1

Quote:


> Originally Posted by *Paopawdecarabao*
> 
> can I still buy a rx vega black pack? or any other packs that they offer?


Yes. Newegg.com has lots of packs left.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> If you have FreeSync or used to variable refresh rate you will miss it with 1080/Ti. I am
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Shipped my Fury X a few days ago, I had kept to benching only on 1080, which was sweet. Tried gaming and BAM! missed FreeSync greatly
> 
> 
> 
> 
> 
> 
> 
> 
> . Even if FPS is better variable refresh rate really makes a difference.
> 
> Currently scouting for a VEGA deal, gonna sell the MSI GTX 1080 Sea Hawk EK X, don't wanna swap my MG279Q for a G-Sync screen.


We knew you'd come around ;-)

Seriously though, I was gaming with a 980Ti at 1440p without variable refresh rate until I bought the Nixeus 27 EDG and used an RX 470 I had in my server while I waited for Vega. I was very surprised by the difference in gaming experience. I had to turn down some settings for the 470, but the overall gaming experience was more enjoyable.


----------



## lowdog

Just for reference, here is my 56 flashed with the Air 64 BIOS. (I tried the 64 AIO BIOS, but it was not stable at Balanced when stressing with Fire Strike, even though it would happily do 1720MHz core; the power draw was too high for my liking, plus the heat as well.)

My card is in a custom loop with an 1800X @ 3.9GHz. It has an EK block, and I used the credit card method to spread the thermal grease (Arctic Silver's Arctic Alumina ceramic) on it (molded die). It's a Sapphire card with Samsung RAM, but it doesn't like 1100MHz HBM, so I have to settle for 1050MHz... oh well







...and I have an MG279Q as well









3dMark stress test - Time Spy

Anyway here is solid 1610MHz core - S6 @ 1542 / 1025mv and S7 @ 1637 / 1075mv with HBM @ 1050MHz with 950mv



And here is solid 1660MHz core - S6 @ 1577 / 1090mv and S7 @ 1672 / 1140mv with HBM @ 1050MHz with 1000mv
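The gap between those two P7 states is a neat illustration of why Vega responds so well to undervolting: dynamic power scales roughly with V²·f, so the voltage bump costs far more than the clock bump buys. A back-of-the-envelope sketch (the generic CMOS P ≈ C·V²·f relation with the constant cancelled out, not an AMD power model):

```python
def relative_power(v_mv: float, f_mhz: float,
                   v_ref_mv: float, f_ref_mhz: float) -> float:
    """Dynamic power at (v, f) relative to a reference operating point,
    using the rough CMOS scaling P ~ C * V^2 * f (C cancels in the ratio)."""
    return (v_mv / v_ref_mv) ** 2 * (f_mhz / f_ref_mhz)

# lowdog's two P7 states: 1672 MHz @ 1140 mV vs 1637 MHz @ 1075 mV.
ratio = relative_power(1140, 1672, 1075, 1637)
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power for ~2% more clock")
```

That extra ~15% of power has to go somewhere, which is mostly the heat people are fighting in this thread.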


----------



## Paopawdecarabao

Quote:


> Originally Posted by *The EX1*
> 
> Yes. Newegg.com has lots of packs left.


I see them, but the prices are inflated. I'll probably wait for BF to build a new system.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *pengs*
> 
> Your culinary skills must be fantastic
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's possible that the hotspot is actually on the interposer or directly between the core and HBM and the wrong amount of thermal paste is thermally insulating it. In this situation the thermal paste is so thick that it's connecting the interposer and heatsink completely and sealing off the gap. The connecting of interposer and heatsink may be dissipating the hotspots heat or it may just be the fact that there is absolutely no air trap and heat within the gap.
> 
> Have you tried the credit card method while keeping the gap completely cleaned?
> 
> The guy calling this a nothing burger may be right. I'm starting the think it's either a dummy sensor placed by AMD to get a air reading in the gap or a sensor for related to the HBM located on the empty interposer spot just at the left edge of the die in a pcb shot.
> Also, early Vega samples were not epoxied iirc and this may had been something to do with finding air temperature in the board at it's hottest point. Would be interesting to find out exactly what it is and if it should be worried about.


Maybe the "socket" temp?


----------



## ICDP

I modded my Vega 64 (moulded core) AC card with a Kraken G10 and a Corsair H110. The Kraken G10 had to have new holes drilled with 64mm spacing. Originally I had used the X method on the core and HBM. At 1000mV, after a long warm-up period, core temps were ~47c, HBM ~54c and hot spot ~75c. At 1200mV the hotspot shot up to 105c.

At first I was OK-ish with this, because I run my voltage at 1000mV and this gave me a ~30c delta: 47c core and 77c hot spot, which is much lower and quieter than stock. Though after seeing the images in this thread showing poor thermal paste spread, I decided to redo it using the credit card method. I'm glad I did, because the new hot spot temps are much improved. Core and HBM temps are identical, but the hotspot is now ~64c, a ~17c delta. Even at 1200mV with +50% PL, the hotspot sits at ~80c.

Thanks for the tips and info on thermal paste application guys, much happier now.


----------



## Worldwin

Where is this hotspot you guys are referring to?


----------



## ICDP

It's clearly located on the core, because I made no modifications other than re-applying the thermal paste in a more efficient manner.


----------



## pengs

Quote:


> Originally Posted by *ICDP*
> 
> It's clearly located on the core, because I made no modifications other than re-applying the thermal past in a more efficient manner.


That's true, which is probably why the X method is failing and the smoothed-out application is winning. In hindsight, I probably overthought the whole thing.

On another note:


Quote:


> Nvidia confirms the deficit
> The ranking in Forza 7 is very unusual. However, Nvidia has confirmed to ComputerBase that the results are correct, so there is no problem with the editorial team's test system as far as GeForce is concerned.





Spoiler: Warning: Spoiler!







https://www.computerbase.de/2017-09/forza-7-benchmark/2/#diagramm-forza-7-1920-1080


----------



## madmanmarz

reposting this









Does anyone else's card not ever go into boost? I flashed right away to aio bios, card is on custom loop with low temps. I'm just running my p6/p7 states the same, setting p7 higher doesn't do anything. Bios related maybe?


----------



## punchmonster

Quote:


> Originally Posted by *Medusa666*
> 
> I hear you, and I considered this alot before mounting it.
> 
> Thing is, it was very close as it is, and the cooler is pressed against the fans frames without being attached, that is how low tolerances are in the Ncase M1 with this setup.
> 
> I also made it so that the fans airflow is reversed, blowing air OUT of the case, i.e sucking hot air from the cooler and pushing it out. I did this because users with Ncase and Accelero cooler had gotten better temps with that, than with traditional airflow where air is blowing into the cooler and over the card.
> 
> Either way, if hot spot is a die-temp and it has to do with airflow and mounting of fans, I should also have abysmal HBM and core temps, but I don't, they usually stick around 50-65c after 1-2 hours of gaming, while the hot spot rises to 95-100c.


Blowing air out is an AWFUL idea.

You are getting ZERO airflow over your power delivery.

Also, since I had better luck with temperatures by spreading rather than using an X, I think it's safe to say equal distribution is more important than avoiding air bubbles?


----------



## Medusa666

Quote:


> Originally Posted by *punchmonster*
> 
> blowing air out is an AWFUL idea.
> 
> You are getting ZERO airflow over your powerdelivery
> 
> Also since I also had better luck with temperatures with spreading rather than using an X I think it's safe to say equal distribution is more important than no chance at airbubbles?


Seems to work fine though, I have not noticed anything. I could do a simple test with my IR thermometer and measure the temperature of the VRM heatsink during stress testing : P


----------



## gupsterg

Quote:


> Originally Posted by *shadowxaero*
> 
> The fact that I own an Acer XF270HU is the only reason I bought Vega as an upgrade to my Fury lol.
> 
> Not paying that 200 dollar premium for G-Sync, though I do want to sell my Acer monitor to get the new FreeSync 2 Sammy HDR monitor.


I guess this is probably the main reason AMD users wanted Vega to be released. On OCuk a recent discussion was started with the slant that, if FreeSync was the reason to stick with AMD, why not go G-Sync with nVidia. Basically :-

i) FreeSync has a wider range of panels
ii) Cost is lower.

Which, obviously, many AMD users would already know







.
Quote:


> Originally Posted by *Papa Emeritus*
> 
> I have the MG279Q which has been great with a modified sync range


Yeah they are nice. If I swapped, say, from a MG279Q to a PG279Q, the cost is a ~£155 to £200 difference; that's today's prices from the UK source I compared. If I compare against what I paid for my panel ~1yr ago it is greater.
Quote:


> Originally Posted by *kundica*
> 
> We knew you'd come around ;-)
> 
> Seriously though, I was gaming with a 980Ti at 1440p non variable refresh rate until I bought the Nixeus 27 EDG and used a RX 470 I had in my server while I waited for Vega. I was very surprised by the difference in gaming experience. I had to turn down some settings for the 470 but the overall gaming experience was more enjoyable.


LOL







.

So true







.

SWBF I have ~150hrs clocked up at 1440p Ultra using the Fury X with FreeSync and an 88 FPS FRTC cap. The GTX 1080, even with a ~1975MHz clock, no V-Sync, 144Hz on screen, had certain moments where I could tell variable refresh rate would have given a smoother experience.

Then I tried Lords of the Fallen, AFAIK an nVidia title; I have completed it, so I have fair experience of how the Fury X was with it. The GTX 1080, again without VRR, failed to impress.

The driver panel, like dagget warned, is pretty much the same as I recall from when I had a GTX 280







. I couldn't even set an FPS cap within it; I have been playing with other settings and still feel like I'm losing out by not using VRR.

I missed out on an open-box PowerColor Vega 64 today; I should have bought it yesterday. I hesitated as it wasn't from Amazon UK. I know from past experience that on their invoice they don't state open box, etc, so the manufacturer sees the buyer as first owner and there's no chance of a warranty claim not being honoured.

The card was with OCuk; when speaking to them on the phone and querying what they state on the invoice as the item description, they said "we state it's used B-stock". I felt it wasn't worth the risk for a £60 difference between new/used.


----------



## Medusa666

Quote:


> Originally Posted by *punchmonster*
> 
> blowing air out is an AWFUL idea.
> 
> You are getting ZERO airflow over your powerdelivery
> 
> Also since I also had better luck with temperatures with spreading rather than using an X I think it's safe to say equal distribution is more important than no chance at airbubbles?


I have now measured the temperatures during two loops of the Unigine Superposition 4K Optimized benchmark using an IR thermometer, with a +50% power limit (not my daily driver setting) just to make sure I'm stressing the VRM.

Temperatures measured at the heatsinks on the highlighted components gave a maximum of 66.1c, which is much lower than I expected. That shows it works with the fans blowing air out (sucking from the cooler). If I had a regular case, mATX or ATX, I would use the fans in a normal orientation, but in the Ncase M1 that would put a lot of heat on my motherboard, hard drives, and CPU.

I did not come up with it myself btw, I read about several people running an Accelero cooler on GPUs in the Ncase, and the reversed fan orientation actually gave them better temps.


----------



## GroupB

IR thermometers don't work well on metal, especially reflective surfaces; unless you have a high-end unit they will pretty often give wrong results, just saying... A multimeter thermocouple is more accurate if you have one, and it also lets you see how far off the IR thermometer is on metal.
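To put a rough number on why bare metal fools IR guns: a low-emissivity surface mostly reflects the room, so a gun left at its default ~0.95 emissivity setting can read far too low. The sketch below applies a textbook gray-body correction; it assumes the gun compensates reflections using its own emissivity setting and that the surroundings are a uniform blackbody at room temperature, so treat the numbers as illustrative, not as any specific thermometer's behavior.

```python
# Simplified gray-body correction for an IR gun used on bare metal.
# Assumptions: gun compensates reflection using its own emissivity
# setting; surroundings are a uniform blackbody at room temperature.

def true_temp_c(reading_c, ambient_c, eps_true, eps_set=0.95):
    """Correct a reading taken with the gun set to eps_set on a
    surface whose real emissivity is eps_true (Kelvin math inside)."""
    k = 273.15
    t_read4 = (reading_c + k) ** 4
    t_amb4 = (ambient_c + k) ** 4
    t_true4 = t_amb4 + (eps_set / eps_true) * (t_read4 - t_amb4)
    return t_true4 ** 0.25 - k

# A shiny aluminum heatsink (emissivity maybe ~0.1) that "reads" 35 C
# at default settings in a 23 C room could really be around 100 C:
print(round(true_temp_c(35.0, 23.0, eps_true=0.10), 1))
```

Which is exactly why a contact thermocouple is the safer tool on a VRM heatsink.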


----------



## jehovah3003

Well, I'm ending up sending back my RX Vega 64 Liquid Edition once again; the noise is so loud it gives me headaches, and I've never heard any fan make that much noise. Low noise was supposed to be the point of this liquid edition. Good job AMD, another failed AIO card just like the Fury X.


----------



## Soggysilicon

So the hotspot...









It scales with gpu core. Letting my loop temp rise, then driving it back down by re-enabling fans (as I had previously screenshotted) clearly demonstrates this... (again), as it drives the "hot spot" right back down under the same stress test. I highly suspect the sense is being taken on the core near the interposer, somewhere between the HBM and core IHS (on the edge of the core), as this would be a quick-n'-dirty way to capture the temp gradient... this could be useful due to HBM stacks being at different temps (speculating). A physical consequence of stack'n.

The Setup:

A used, couple-year-old tube of Arctic Silver 5 to mount the EK block...

Cleaned out all the stock junk... took ~2 hours, including getting it off the "z" packs (the ceramic pu/pd resistors circling the die / decoupling caps...). Magnifying glass... helping hands... q-tips... 2-part Arctic Silver cleaning solution.

"Spread" method... which I refer to as "razor" method... this is a very very "thin" coating on the IHS(s). Not cake mix, not frosting a cake... this is just to fill the fissures, imperfections, and machining slop from the factory... this "dopes or tins" the surface. Followed by a "thin" X on the three parts. (The X is for my slop during installation).

Evenly place the block on the CCA, engage by hand 2 screws opposing each other, engage by hand 2 more screws opposing each other.

Using the X (cross) tightening method, gently engage the threads. This is the same technique used on automobiles and propellers. Allow the thread engagement to "swage" the TIM. At the slightest resistance on a screw, move to an opposing screw. At about 3/4 of final tightness engage all the other screws around the PCB to help stiffen the card, then final-tighten to what I would guess to be about 4-6 inch-lbs (take something that's about a pound, hold it out about 4-6 inches, and that's your engagement force).

Had I thought about it, I would have used a torque tool and taken the original thread torque before swapping the ref cooler for the EK block...
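For anyone wanting to set that "pound held out 4-6 inches" estimate on an actual torque driver: torque is just force times lever arm, so 1 lb on a 4-6 inch arm is literally 4-6 in-lb. A minimal conversion sketch (the values are the poster's guess, not a manufacturer spec):

```python
# Convert the guessed 4-6 in-lb engagement torque to metric for
# anyone whose torque driver reads in N-m.

LB_TO_N = 4.44822      # 1 lbf in newtons
IN_TO_M = 0.0254       # 1 inch in metres

def in_lb_to_nm(in_lb):
    """Inch-pounds (force) to newton-metres."""
    return in_lb * LB_TO_N * IN_TO_M

for t in (4, 5, 6):
    print(f"{t} in-lb = {in_lb_to_nm(t):.2f} N-m")
```

So the range works out to roughly 0.45-0.68 N-m, i.e. very gentle.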

Some of these temps I see posted leave me to conclude...

1) Very poor installation practices
2) Terrible case ventilation
3) Too much TIM! (the best TIM is next to no TIM... heat has to drive through the material to get to the block...)
4) Bootleg TIM
5) Weak low flow loop
6) Misaligned mounting due to the 40um offset on the package... (guess you just have to TIM through that... sorry 4 ur luck).









Beyond that, I still think it's a nuthin' burger... (although 90s and 100s is awfully high).
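On the "weak low flow loop" item in that list, a quick energy-balance sketch (example numbers, not measurements) shows why coolant flow is rarely the limiting factor; water's heat capacity is huge:

```python
# Q = m_dot * c_p * dT: the power a given coolant flow can carry
# for a given coolant temperature rise. Flow rate and delta below
# are example numbers only.

C_P = 4186.0                 # J/(kg*K), water
RHO = 1000.0                 # kg/m^3, water

def carried_watts(lpm, delta_t_c):
    """Heat carried by the coolant at a given flow and temp rise."""
    m_dot = lpm / 60.0 / 1000.0 * RHO   # L/min -> kg/s
    return m_dot * C_P * delta_t_c

# Even a sluggish 2 L/min loop needs only ~2 C of coolant rise to
# move roughly a Vega's worth of heat to the radiators:
print(round(carried_watts(2.0, 2.0)))
```

The bottleneck in a loop is almost always block contact and radiator area, not flow.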


----------



## Roboyto

Quote:


> Originally Posted by *Soggysilicon*
> 
> Some of these temps I see posted leave me to conclude...
> 
> *1) Very poor installation practices*
> *2) Terrible case ventilation*
> *3) Too much TIM! (the best TIM is next to no TIM... heat has to drive through the material to get to the block...)*
> 4) Boot leg TIM
> 5) Weak low flow loop
> 6) Misaligned mounting due to the 40um offset on the package... (guess you just have to TIM through that... sorry 4 ur luck).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Beyond that, I still think it's a nuthin' burger... (although 90s and 100s is awfully high).


I would say some combination of 1/2/3 personally, as my tiny loop with fans barely spinning yields more than acceptable temperatures with the GPU running at the brink.

The 40um difference may have some effect... a very, very small one IMO... that's .04 millimeters or about .0016 inches. I went searching for some information on AnandTech's forums from a member named IDontCare, as he did some in-depth testing on Intel CPU delidding way back in 2012. The conclusion he came to was that the gap created by the glue holding on the IHS was approximately 0.14mm, and this was/is the primary cause for poor temperatures on Intel CPUs with that chalky thermal 'paste' under the lid. At one point he reused the stock Intel paste after delidding and its performance surpassed Noctua NT-H1. It is unfortunate that all the photos no longer show up in his posts.

If you want to read it can be found here: https://forums.anandtech.com/threads/delidded-my-i7-3770k-loaded-temperatures-drop-by-20%C2%B0c-at-4-7ghz.2261855/

Considering his information that a CPU with a ~.14mm gap can still, mostly, be cooled appropriately with a decent cooler, this leads me to believe that the .04mm gap present on some Vega cards is going to do approximately jack and squat as far as contributing to poor temperatures.
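That intuition can be sanity-checked with one-dimensional conduction through a TIM layer: dT = P * t / (k * A). The die area, power and paste conductivity below are rough assumptions (Vega 10 is about 486 mm²; ~8.5 W/mK is typical of a good modern paste), so the result is a ballpark, not a measurement:

```python
# Temperature drop across a uniform TIM layer, 1-D conduction.
# All inputs are assumed ballpark values for illustration.

def delta_t(power_w, thickness_m, k_w_mk, area_m2):
    """dT = P * t / (k * A) across the paste layer."""
    return power_w * thickness_m / (k_w_mk * area_m2)

POWER = 250.0          # W through the die face (assumed)
AREA = 486e-6          # m^2, ~Vega 10 die area
K_TIM = 8.5            # W/mK, decent paste

for gap_um in (40, 140):
    dt = delta_t(POWER, gap_um * 1e-6, K_TIM, AREA)
    print(f"{gap_um} um of paste -> ~{dt:.1f} C across the layer")
```

Under these assumptions a fully paste-filled 40 µm step costs only a couple of degrees, versus roughly 8-9 °C for the 0.14 mm Intel-style gap, which lines up with the "jack and squat" conclusion.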

Personal experience...If you just put a heatsink, of any kind, onto something and the temperatures are not what are to be expected...you probably did something wrong.


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> Some of these temps I see posted leave me to conclude...
> 
> 1) Very poor installation practices
> 2) Terrible case ventilation
> 3) Too much TIM! (the best TIM is next to no TIM... heat has to drive through the material to get to the block...)
> 4) Boot leg TIM
> 5) Weak low flow loop
> 6) Misaligned mounting due to the 40um offset on the package... (guess you just have to TIM through that... sorry 4 ur luck).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Beyond that, I still think it's a nuthin' burger... (although 90s and 100s is awfully high).


IDK, I did a pretty good job on mine and my hotspot is higher than I'd like. I've mentioned here before that there's a possibility I didn't do the greatest job with the TIM, but my temps are pretty good for my setup. I have a 280 and 240 rad with Silent Wings 3 fans. Right now my ambient is 23 while my card idles at 24 with HBM at 24. Under extended gaming load the core hits 41 while my HBM is just 2 higher at 43, but my hotspot reaches high 60s. I'll probably reapply the TIM at some point in the next few weeks, but I currently don't feel like draining my loop to redo the card, especially since I don't seem to have any issues with it.

I find it interesting that your HBM is so much higher than your core while mine is almost the same.


----------



## bill1971

I own a Vega 56, stock, but when I undervolt I still get noise and temps up to 80 degrees.


----------



## Medusa666

Quote:


> Originally Posted by *Soggysilicon*
> 
> 
> 
> So the hotspot...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It scales with gpu core. Letting my loop temp up then driving it back down by re-enabling fans (as I had previously sshot) clearly demonstrates this... (again). As it drives the "hot spot" right back down under the same stress test. I highly suspect the sense is being taken on the core near the interposer somewhere between the HBM and core IHS (on the edge of the core), as this will present a quick n' dirty way to capture the temp gradient... this could be useful due to HBM stacks being at different temps (speculating). A physical consequence of stack'n.
> 
> The Setup:
> 
> A used - couple year old tube of artic silver 5 to mount the EK block...
> 
> Cleaned out all the stock junk... took ~2 hours. Including getting it up from the "z" packs (the ceramic pu/pd resistors circling the die / decoupling caps...). Magnifying. glass... helping hands... qtips... 2 part artic silver cleaning solutions.
> 
> "Spread" method... which I refer to as "razor" method... this is a very very "thin" coating on the IHS(s). Not cake mix, not frosting a cake... this is just to fill the fissures, imperfections, and machining slop from the factory... this "dopes or tins" the surface. Followed by a "thin" X on the three parts. (The X is for my slop during installation).
> 
> Evenly place the block on the CCA, engage by hand 2 screws opposing each other, engage by hand 2 more screws opposing each other.
> 
> Using the X(cross) tighten method, gently engage threads. This is the same technique used on automobiles and propellers. Allow the thread engagement to "swedge" the TIM. At the "slightest" resistance on a screw, move to an opposing screw. At about 3/4 of the final tightness engage all the other screws around the pcb to help stiffen the card, final tighten to what I would "guess" to be about 4-6 ish' inch lbs. (take something that's about a pound, hold it out about 4-6 inches, and that's your engagement force).
> 
> Had I had thought about it I would of used a torque tool and taken the original thread torque before swappin' the ref cooler with the EK block...
> 
> Some of these temps I see posted leave me to conclude...
> 
> 1) Very poor installation practices
> 2) Terrible case ventilation
> 3) Too much TIM! (the best TIM is next to no TIM... heat has to drive through the material to get to the block...)
> 4) Boot leg TIM
> 5) Weak low flow loop
> 6) Misaligned mounting due to the 40um offset on the package... (guess you just have to TIM through that... sorry 4 ur luck).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Beyond that, I still think it's a nuthin' burger... (although 90s and 100s is awfully high).


Is your die molded?


----------



## Trender07

Any of you getting ur UV settings restored on Windows startup sometimes?


----------



## diabetes

Quote:


> Originally Posted by *Trender07*
> 
> Any of you getting ur UV settings restored on Windows startup sometimes?


Yes, although it is not my UV but just a higher HBM clock and more power limit. It seems like Wattman sometimes mistakes shutting down the computer for a crash.


----------



## laczarus

From what I gather there is still no certainty about the hotspot.
It's either on the die, or it's the doublers above it, the hottest components.


----------



## OMgoo

I'm pretty pissed.

Reseated my Morpheus II several times; TIM overload, TIM medium, low TIM; screws really tight, medium, and loose. Nothing helped getting the hotspot lower... -.-


----------



## Tgrove

Quote:


> Originally Posted by *madmanmarz*
> 
> reposting this
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Does anyone else's card not ever go into boost? I flashed right away to aio bios, card is on custom loop with low temps. I'm just running my p6/p7 states the same, setting p7 higher doesn't do anything. Bios related maybe?


Sapphire 64 liquid version, Windows 7; I've never seen this behavior. Make sure your temps are in check, as performance degrades with voltage and heat (timings, core speeds, etc). I wouldn't go over 65c
Quote:


> Originally Posted by *gupsterg*
> 
> I guess this is probably the main reason for AMD users wanting VEGA to have been released. On OCuk a recent discussion was started with the slant that if FreeSync was the reason to stick to AMD why not go G-Sync with nVidia. Basically :-
> 
> i) FreeSync wider range of panels
> ii) Cost is lower.
> 
> Which is obvious many AMD users would already know
> 
> 
> 
> 
> 
> 
> 
> .


I'm right there with you. G-Sync has an absolutely horrible range of panels. The biggest 4K 16:9 is 32"; you can get all the way up to 65" with FreeSync. I've been on a 49" 4K for 2 years and there's still no G-Sync equivalent. So coming from Fury X Crossfire to one liquid-cooled Vega has been like a dream. FreeSync was definitely a huge play in the AMD book. I honestly think G-Sync is done once HDMI 2.1 TVs start coming out.

I even ate the $100 upcharge @ $800 because I sold both Fury Xs for more than that. Miner-sponsored upgrade cost me nothing in the end lol


----------



## kundica

Quote:


> Originally Posted by *madmanmarz*
> 
> reposting this
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Does anyone else's card not ever go into boost? I flashed right away to aio bios, card is on custom loop with low temps. I'm just running my p6/p7 states the same, setting p7 higher doesn't do anything. Bios related maybe?


Can you elaborate? The way Vega is designed, it doesn't necessarily run at P7. In addition to heat and voltage, the type of work/load impacts the highest clock it's able to reach. Running certain tests or rendering a video, for example, my card will run at P7 or even slightly higher, while gaming it tends to run somewhere between P6 and P7. This isn't a fault, just how the card works. An AMD rep over at OCUK posted about this a while back; if I find the post again I'll share what he wrote here.


----------



## gupsterg

Quote:


> Originally Posted by *Tgrove*
> 
> Im right there with you. G sync has an absolutely horrible range of panels. The biggest 4k 16:9 is 32", you can get all the way up to 65" with freesync. Ive been on a 49" 4k for 2 years and still no g sync equivalent. So coming from fury x crossfire to 1 liquid cooled vega has been like a dream. Freesync was definitely a huge play in the amd book. I honestly think g sync is done once hdmi 2.1 tvs start coming out.
> 
> I even ate the $100 upcharge @ $800 because i sold both fury x for more than that. Miner sponsored upgrade cost me nothing in the end lol










.
Quote:


> Originally Posted by *kundica*
> 
> Can you elaborate? The way Vega is designed, it doesn't necessarily run at P7. In addition to heat and voltage, the type of work/load impacts the highest clock it's able to reach. Running certain tests or rendering a video, for example, my card will run at P7 or even slightly higher, while gaming it tends to run somewhere between P6 and P7. This isn't a fault, just how the card works. An AMD rep over at OCUK posted about this a while back; if I find the post again I'll share what he wrote here.


Is this the post? link.


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> IDK, I did a pretty good job of mine and my hotspot is higher than I'd like. I've mention here before that there's a possibility I didn't do the greatest job with the tim, but my temps are pretty good for my setup. I have a 280 and 240 rad with Silent Wings 3 fans. Right now my ambient is 23 while my card idles at 24 with HBM at 24. Under extended gaming load the core hits 41 while my HBM is just 2 higher at 43, but my hotspot reaches high 60s. I'll probably reapply the tim at some point in the next few weeks but I currently don't feel like draining my loop to redo the card, especially since I don't seem to have any issues with it.
> 
> I find it interesting that your HBM is so much higher than your core while mine is almost the same.


Your loop is a similar size to mine and I'm getting the same temps. I would say you have nothing to worry about. For any component on these cards, I would assume high 60s Celsius shouldn't be a concern.

What kind of temps does the hotspot hit with the reference blower? I had the blower off before GPU-Z was able to read it.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> Is this the post? link.


That's it.

Quote:


> Originally Posted by *Roboyto*
> 
> Your loop similar size to mine and I'm getting same temps. I would say you have nothing to worry about. For any component on these cards, I would assume that high 60s Celsius shouldn't be a concern.
> 
> What kind of temps does hotspot hit with reference blower? I had blower off before GPU-Z was able to read it.


I didn't bother to check since it really wasn't on my radar at the time.

The road to get to a place where I'm pleased with my setup has been a long one. I initially bought an Air 64 card but I wasn't super happy with the performance/noise so I sold it and bought a LC 64. That card ended up being faulty so I returned it but since it couldn't be replaced due to stock I bought a 2nd one which had issues too just not as bad. After returning that one as well I decided it was time to go custom loop so I bought an Air 64. At that point things would've been great except that Newegg sent me a used card. How used, I'm not sure, but it arrived with the seal broken and fingerprints all over the card. They issued an advance RMA on the card which brings me to where I'm at now.

I'm quite pleased with the performance of my current card in the loop, it outdoes my LC 64s when they weren't crashing. I'll probably forget about the hotspot for now and redo the TIM when I decide to go hard tubing some months down the road.


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> That's it.
> I didn't bother to check since it really wasn't on my radar at the time.
> 
> The road to get to a place where I'm pleased with my setup has been a long one. I initially bought an Air 64 card but I wasn't super happy with the performance/noise so I sold it and bought a LC 64. That card ended up being faulty so I returned it but since it couldn't be replaced due to stock I bought a 2nd one which had issues too just not as bad. After returning that one as well I decided it was time to go custom loop so I bought an Air 64. At that point things would've been great except that Newegg sent me a used card. How used, I'm not sure, but it arrived with the seal broken and fingerprints all over the card. They issued an advance RMA on the card which brings me to where I'm at now.
> 
> I'm quite pleased with the performance of my current card in the loop, it outdoes my LC 64s when they weren't crashing. I'll probably forget about the hotspot for now and redo the TIM when I decide to go hard tubing some months down the road.


I think we're putting too much thought/concern into the hotspot... I'm sure its actual source will come to light at some point. But we do know, without a doubt, that once there is a FC block in place, everything runs at a fraction of the temperature it did with the stock blower. If it is fine running with the stock blower, then anyone who has upgraded to something besides the leaf blower should have little to worry about.

My thoughts on the situation anyway. I'm just going to try and enjoy the card as I'm not having many problems in games. Doom runs great, Quake Champions runs without a hitch. Got invited to beta test Project 1v1 and that was fine for several hours last night. Interesting concept for a shooter: all matches are 1-on-1 and you have a deck of cards that gives you your weapons and special abilities for that match. Collect and level up your cards, and swap them out according to who you may be facing or the map you're on.

Gonna load up Prey tonight and see how that goes.


----------



## Soggysilicon

Quote:


> Originally Posted by *Roboyto*
> 
> I would say some combination of 1/2/3 personally, as my tiny loop with fans barely spinning yields more than acceptable temperatures with the GPU running at the brink.


You're right here; it has been my experience that if one thing is off... there are a couple of things that are "off".
Quote:


> The 40um difference may have some effect....a very, very small one IMO...that's .04 millimeters or .0015 inches....
> Considering his information on a CPU having ~.14mm gap that are still able to be cooled, mostly, appropriately with a decent cooler. This would lead me to believe that the .04mm gap that is there on some Vega cards, is going to do approximately jack and squat as far as contributing to poor temperatures.
> 
> Personal experience...If you just put a heatsink, of any kind, onto something and the temperatures are not what are to be expected...you probably did something wrong.


I don't disagree here either; that gap is hardly worth mentioning. I do so simply from an "idealized" perspective... ideally, I would want metal-to-metal contact on all surfaces, but the gap is the gap, and as such requires an interface material. For the most part these materials are somewhere between "pretty good" and "really good" depending on your conductivity (electric) tolerance. Excess Arctic Silver on one of those cap packs (for example) is undesirable, so... cleanliness is next to overclocki'ness.

Additionally, because there is some spacing of the dies in the arrangement, I could see misalignment and PCB warping being a headache for all of us. I do not know how "fragile" the chip's ecology is, or how temperamental the dies are to thermal expansion. My card is of the unmolded variety, so it is something I took into consideration when I first set up the card.

Last thing I want to do is fracture a ball on the array.









Throw it down and hope for the best?








Quote:


> Originally Posted by *kundica*
> 
> IDK, I did a pretty good job of mine and my hotspot is higher than I'd like. I've mention here before that there's a possibility I didn't do the greatest job with the tim, but my temps are pretty good for my setup. I have a 280 and 240 rad with Silent Wings 3 fans. Right now my ambient is 23 while my card idles at 24 with HBM at 24. Under extended gaming load the core hits 41 while my HBM is just 2 higher at 43, but my hotspot reaches high 60s. I'll probably reapply the tim at some point in the next few weeks but I currently don't feel like draining my loop to redo the card, especially since I don't seem to have any issues with it.
> 
> I find it interesting that your HBM is so much higher than your core while mine is almost the same.


On that test the GPU, while it is clocking up and holding, I can't really say is doing all that much in the way of "work". That test seems to put the hurt on the memory/controller. I used the test because it's very stable, tends to keep the frequency very consistent, and I knew I could get the loop to thermal equilibrium. From there I could ramp the fans up (which I had calculated some time ago for the physical wattage dissipation increase) and see if I could drive the HS temp down under the same loaded conditions.

The point... to get around to making one, is that the HS directly corresponds to the GPU core load, tit for tat. This tells me that it is electrically bound to the same circuitry, and is a reference current-leak device.
Quote:


> Originally Posted by *Medusa666*
> 
> Is your die molded?


From the reference material floating around... un-molded, it looks very much like the engineering sample boards. There is no molding between the HBM stacks and GPU. I could not detect a "height" difference between the GPU and HBM, but that doesn't mean there isn't one; just not one that I found to be "measurable" with my gear.
Quote:


> Originally Posted by *Trender07*
> 
> Any of you getting ur UV settings restored on Windows startup sometimes?


Before I game / bench / or process with the GPU I reset settings and reapply. Wattman is simply not reliable in this regard. I have had these "sorts" of problems in the past with the AMD drivers with specific settings for games "not sticking", between reboots. The issue seems to come and go and come right back with driver iterations.
Quote:


> Originally Posted by *laczarus*
> 
> from what I gather there is still no certainty about the hotspot
> Its either on the die or its the doublers above it, the hottest components


It is most assuredly not in the voltage doublers or around them; there is no reason to take a measurement there, not in a production retail device.

Going forward, rather than trying to disprove something that isn't, the folks making this claim need to provide the circuit schematic for how this so-called measurement is taking place, and give at least a reasonable hypothesis as to why it was implemented.

Additionally, that hypothesis needs to be testable and generally reproducible by myself and others. Yes, I "can" measure the doublers using IR or thermocouples (I do use the OEM plastic-fantastic back plate)... no, that measurement does not correspond to the HS temp as generated or read from the sensor.

There is no hunny in this pot.








Quote:


> Originally Posted by *OMgoo*
> 
> i'm pretty pissed
> 
> reseated my morpheus II several times, TIM overload TIM medium , low TIM, screws really tight, medium and loose, nothing helped getting the hotspot lower ... -.-


Could have just damaged it, and you now have excess current flowing through the device reading back a temp that isn't accurate. Could have damaged a resistor... could have a short between multiple resistors, and with them in parallel you have more current flowing through the device again... who knows... going to have to dig deeper, or just stop looking at it... like a scratch on your Ferrari's paint... nibbles away at your mind!


----------



## madmanmarz

Quote:


> Originally Posted by *tgrove*
> 
> Sapphire 64 liquid version, Windows 7; I've never seen this behavior. Make sure your temps are in check, as performance degrades with voltage and heat (timings, core speeds, etc). I wouldn't go over 65c.


Quote:


> Originally Posted by *kundica*
> 
> Can you elaborate? The way Vega is designed, it doesn't necessarily run at P7. In addition to heat and voltage, the type of work/load impacts the highest clock it's able to reach. Running certain tests or rendering a video, for example, my card will run at P7 or even slightly higher, while gaming it tends to run somewhere between P6 and P7. This isn't a fault, just how the card works. An AMD rep over at OCUK posted about this a while back; if I find the post again I'll share what he wrote here.


My fault, it does appear to be working; I forget that some combinations of clocks/voltages for whatever reason don't change anything sometimes. I dropped my voltage to 1000mV and was still able to keep my clocks around 1550 (1700 in Wattman) and got the boost to work.


----------



## Caldeio

I need some help! Crossfire borked my PC!!

Once I added the second card, everything broke. It won't boot or display.

The keyboard lights up but there's just a black screen, with the display turning on and off.

My oldest card, the XFX, doesn't boot at all. Neither does the PowerColor.









now nothing boots again. sometimes it'll boot to a black screen or blue screen. a single card alone doesn't work either

ok, a single card in the top slot is working! but i get a BSOD, critical process died, and that's it. Reinstalling windows again!


----------



## rancor

Quote:


> Originally Posted by *Caldeio*
> 
> I need some help! crossfire borked my pc!!
> 
> nope, once i added the second card everything broke. it won't boot or display.
> 
> keyboard lights up, but just a black screen with the display turning on and off.
> 
> my oldest card, the XFX, doesn't boot at all. neither does the powercolor
> 
> 
> 
> 
> 
> 
> 
> 
> 
> now nothing boots again. sometimes it'll boot to a black screen or blue screen. a single card alone doesn't work either
> 
> ok, a single card in the top slot is working! but i get a BSOD, critical process died, and that's it. Reinstalling windows again!


While your power supply shouldn't cause booting issues, a 750W unit is not realistically large enough for two of these cards.
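For some rough context on the 750W concern, here's a back-of-the-envelope sketch. The board-power figure is AMD's stock spec for an air-cooled Vega 64; the transient factor and system draw are loose assumptions, not measurements of anyone's actual rig:

```python
# Rough PSU headroom estimate for two Vega 64s on a 750 W unit.
VEGA64_BOARD_POWER_W = 295   # AMD's stock board-power spec for an air 64
TRANSIENT_FACTOR = 1.3       # assumed: short spikes exceed the average draw
CPU_AND_SYSTEM_W = 150       # assumed: CPU, board, drives, fans

def sustained_draw(n_cards):
    """Steady-state system draw with n_cards at stock board power."""
    return n_cards * VEGA64_BOARD_POWER_W + CPU_AND_SYSTEM_W

def worst_case_draw(n_cards):
    """Same, but with the cards spiking above their average draw."""
    return n_cards * VEGA64_BOARD_POWER_W * TRANSIENT_FACTOR + CPU_AND_SYSTEM_W

print(sustained_draw(2))   # 740 -- already at the 750 W label
print(worst_case_draw(2))  # 917.0 -- over spec during spikes
```

Undervolted or power-limited cards change this picture completely, which is why a 750W can still hold up in practice.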


----------



## Reikoji

Quote:


> Originally Posted by *rancor*
> 
> While your power supply shouldn't cause booting issues, a 750W unit is not realistically large enough for two of these cards.


Two Vegas with a 750w? I'd say the PSU is behind all of his booting issues right now :3. Those Vegas probably melted that thing. Time to order that 1600w.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *andreyb*
> 
> Back to my Hot Spot temperatures problem. Today I re-installed the waterblock once again. Here is what I found after the waterblock was unmounted:
> 
> 
> 
> I am not an expert in analysing thermal paste application quality, but it doesn't look good to me. Looks like the chip is slightly cambered. I decided to change the thermal paste and application method: the Zalman ZM-STG2 was switched to TG Kryonaut (damn, it's pricey), spread in a thick layer across the whole chip. Nevertheless, none of this helped. I got exactly the same thermals under load.
> I have the idea to replace EK's backplate with the original backplate with the X-shaped bracket. Maybe it works better in terms of avoiding chip "bending".






thanks for making me feel better.
i have the same Samsung molded chip and
could not figure out for the life of me why my temps were so hot, especially the hotspot.
i changed the flow of the water cooler... drained, bled, blah.
using the stock backplate (and frig does that thing get hot, 70 degrees in a couple of places), but yeah.

but now, after reading a few pages, i will not bother removing and reapplying. i did the x method, but with slightly thicker x's, using kryonaut.
now my theory: the molded ones are getting the extra heat from the hbm (which generally runs around 4/5 degrees hotter than the gpu, and even more when overclocked) and spreading it all over. my brain tells me this is a good thing, but then i think 3 heat sources separated would probably, on the whole, be cooler than 3 heat sources combined when cooled by the same surface.

so, theories?

apart from lowering the voltages, which does work, i honestly think there is no thermal paste application etc that will better cool the molded chips; they just seem to crank out more heat.


----------



## Irev

what overclocking utility works well?

Sapphire Trixx drops my clocks down to the low 1300s at stock settings... who knows why
Asus GPU Tweak does some strange crap to my pc and causes it to black screen and reboot
MSI AB doesn't allow manual fan control
Wattman doesn't save profiles or apply them automatically

another thing that bugs me: if I set the target fan speed in wattman to, say, 2600 rpm, it just does what it wants after a while and goes back down to 2400 rpm. what gives????


----------



## laczarus

Quote:


> Originally Posted by *Irev*
> 
> what overclocking utility works well?
> 
> Sapphire Trixx drops my clocks down to the low 1300s at stock settings... who knows why
> Asus GPU Tweak does some strange crap to my pc and causes it to black screen and reboot
> MSI AB doesn't allow manual fan control
> Wattman doesn't save profiles or apply them automatically
> 
> another thing that bugs me: if I set the target fan speed in wattman to, say, 2600 rpm, it just does what it wants after a while and goes back down to 2400 rpm. what gives????


OverdriveNTool worked fine on my Vega 56. It also supports saving profiles:
https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/


----------



## Tame

Someone needs to develop direct die water cooling. Seal the area around the die and inside metal frame with epoxy. Put a block on with rubber seal that goes against the metal frame. Water runs over the die, height difference doesn't matter x)


----------



## dagget3450

Man, look at all the defense of Nvidia in Forza 7. Vega comes in faster than the 1080 Ti and they go nuts: above 60fps doesn't matter, Nvidia optimizations must be the cause, the 1080 Ti wins at 4K blowing the doors off so 1080p and 1440p don't matter. Except at 4K Vega has better minimum fps, so the Ti isn't really winning there either.

Anyways, how's everyone doing with their Vegas? Anyone unhappy, or happy, with their purchase?


----------



## TrixX

Yeah, the Forza benches are causing a lot of green salt. There may be something going on with the core locking, but it's still hilarious. Almost like they are emotionally invested in a corporation that likes to just rape their wallets.


----------



## Reikoji

Quote:


> Originally Posted by *dagget3450*
> 
> Man, look at all the defense of Nvidia in Forza 7. Vega comes in faster than the 1080 Ti and they go nuts: above 60fps doesn't matter, Nvidia optimizations must be the cause, the 1080 Ti wins at 4K blowing the doors off so 1080p and 1440p don't matter. Except at 4K Vega has better minimum fps, so the Ti isn't really winning there either.
> 
> Anyways, how's everyone doing with their Vegas? Anyone unhappy, or happy, with their purchase?


Quite satisfied, can't wait to get another.
Quote:


> Originally Posted by *TrixX*
> 
> Yeah the Forza benches are causing a lot of green salt. There maybe something with the core locking going on, but it's still hilarious. Almost like they are emotionally invested in a corporation that likes to just rape their wallets.


Yea, it's pretty funny. All kinds of hardware excuses being thrown around for why Vega outperformed the 1080 Ti in something, other than blaming the 1080 Ti itself.

I bet if that 64 had some tinkering done to it, which it probably didn't, especially memory-wise, it would probably have stomped in 4K too. They didn't mention any fine tuning on any of the cards, so I assume it was just a stock air 64.


----------



## Reikoji

Microcrap won't let you download the Forza 7 demo unless you are on the Creators Update. Sneaky bastages...

and microsoft hijacked my windows login


----------



## Nuke33

Quote:


> Originally Posted by *Tame*
> 
> Someone needs to develop direct die water cooling. Seal the area around the die and inside metal frame with epoxy. Put a block on with rubber seal that goes against the metal frame. Water runs over the die, height difference doesn't matter x)


That would be an interesting project indeed.

But I highly doubt any manufacturer would consider building such a thing due to the difficulty of maintaining the seal.
Also, I am not sure if it would really be better, since you can't control the flow direction anymore without some sort of tunneling.


----------



## Soggysilicon

Quote:


> Originally Posted by *dagget3450*
> 
> Man, look at all the defense of Nvidia in Forza 7. Vega comes in faster than the 1080 Ti and they go nuts: above 60fps doesn't matter, Nvidia optimizations must be the cause, the 1080 Ti wins at 4K blowing the doors off so 1080p and 1440p don't matter. Except at 4K Vega has better minimum fps, so the Ti isn't really winning there either.
> 
> Anyways, how's everyone doing with their Vegas? Anyone unhappy, or happy, with their purchase?


As others have said... this assumes it's the stock out-of-the-box card; a well-tuned card would have been very interesting indeed.









However, these results are not unexpected. This trend will continue going forward so long as title developers are working with AMD support to deliver a product which benefits from more cores on a CPU and takes advantage of the low-level APIs available to Vega; with the caveat that this is at higher resolutions.

Higher resolutions clearly diminish the advantages of raw CPU frequency.

Consoles will continue to have more, less powerful cores working across interconnect fabrics, and I strongly "believe" that Pascal and Vega are going to be the last monolithic dies generally commercially available, primarily due to yields. Electronics manufacturers are going to continue to push a 4k narrative, as well as VR.

I strongly suspect the Vega upset will continue for some years to come as, again, others have said; it is clearly the "pipecleaner" or placeholder / test and development platform for the "real" forward move: glued-together chips on a single PCB.

Nvidia is still "at least" a generation beyond Radeon and will continue to be very much competitive (if not the out-and-out de facto standard) going forward with their new and improved architectures. Pascal has been around for some years now... _the me-toos would eventually catch up to it... eventually._ If only because people "defecting" from one job to the other take some of the juicy IP with them.









However, AMD has done very well this last year, which is a solid boon to the consumer: prices have already been chipped away in the CPU market, more PCs have been built and upgraded in the last year than in previous years, and there is "at least" some interest in graphics benchmarks again for newer titles.

Companies will jump up and down eagerly telling you about innovation and "new" designs, but behind that hype machine most, if not ALL, of them hate it, hate change, don't wanna bother... they just want to make .05% improvements from year to year. In a large, heavily bureaucratic corporation all this change makes for endless meetings, mountains of paperwork, very close micromanagement... and quite frankly, it's exhausting.

Look at Raja... 3 months off... doesn't surprise me in the least! Nvidia, I am sure, will respond to this "outrage" by requiring mandatory overtime for the software team to drop a new driver that brings the frames up in this use case...




This video captures the next conversation over at N. land Friday afternoon... poor guy.









As far as Vega goes, I like it. It's "something different", a nice alternative to the nVidia offerings. I'd like it even better if Bannerlord would hurry up and come out!


----------



## Nuke33

Quote:


> Originally Posted by *dagget3450*
> 
> Man, look at all the defense of Nvidia in Forza 7. Vega comes in faster than the 1080 Ti and they go nuts: above 60fps doesn't matter, Nvidia optimizations must be the cause, the 1080 Ti wins at 4K blowing the doors off so 1080p and 1440p don't matter. Except at 4K Vega has better minimum fps, so the Ti isn't really winning there either.
> 
> Anyways, how's everyone doing with their Vegas? Anyone unhappy, or happy, with their purchase?


Hehe finally they sweat a little, serves them right.









Vega is the best graphics card purchase in years.
All my Nvidia cards before were crap compared to it. Gaming is way smoother and more responsive than with any Nvidia card. I tested a GTX 1080 Ti, and while it had higher frames it never felt as smooth as Vega does, even without any sync tech.

Only downside is the occasional boost of death. I really think AMD should include a switch in Wattman custom to cap the max boost MHz to a user-configurable value.

PS: MinerGate on Ethereum kills my card 100% of the time with boost spikes to 1800+, although I set P7 to 1752. In every other possible scenario those clocks were stable. MinerGate has become my final stability test now, since it delivers reproducible boost spikes.


----------



## Tame

Quote:


> Originally Posted by *Nuke33*
> 
> That would be an interesting project indeed.
> 
> But I highly doubt any manufacturer would consider building such a thing due to difficulty of maintaining the seal.
> Also I am not sure if it would really be better since you can´t control the flow direction any more without some sort of tunneling.


Haha yeah, I guess it would be impractical because of the possibility of the water making its way to the wrong places. But just as a physics thought experiment, it's a cool idea.

That way we wouldn't have the copper and the contact surfaces in the way of the heat transfer. Say, put the inflow port at the left and the outflow at the right, with a channel in the middle which goes over the die(s). Make the channel height low over the die area for increased flow velocity over the dies.
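The low-channel idea is just the continuity equation: at a fixed pump flow rate, a smaller cross-section means a higher local velocity. A quick sketch with made-up dimensions (none of these numbers come from a real block design):

```python
# Mean flow velocity = volumetric flow rate / cross-sectional area.
def velocity_m_s(flow_lpm, width_mm, height_mm):
    flow_m3s = flow_lpm / 1000 / 60                  # L/min -> m^3/s
    area_m2 = (width_mm / 1000) * (height_mm / 1000)
    return flow_m3s / area_m2

wide = velocity_m_s(4.0, 30, 4)  # 4 L/min through a 30 x 4 mm channel
low = velocity_m_s(4.0, 30, 2)   # same flow, half the channel height
print(round(wide, 2), round(low, 2))  # halving the height doubles the velocity
```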

I wonder how well thin oil works as a coolant









On another note: using traditional blocks, I think too much TIM is better than too little. When you tighten the block down properly, the excess TIM gets squeezed out. You always end up with about the same amount of TIM between the die and the block, unless you put on too little.


----------



## Tgrove

Quote:


> Originally Posted by *dagget3450*
> 
> Man, look at all the defense of Nvidia in Forza 7. Vega comes in faster than the 1080 Ti and they go nuts: above 60fps doesn't matter, Nvidia optimizations must be the cause, the 1080 Ti wins at 4K blowing the doors off so 1080p and 1440p don't matter. Except at 4K Vega has better minimum fps, so the Ti isn't really winning there either.
> 
> Anyways, how's everyone doing with their Vegas? Anyone unhappy, or happy, with their purchase?


Absolutely loving it; undervolting and the hbm oc changed the card completely. Being able to drop crossfire, have enough vram, and still stay well within freesync range in all games (33-60hz) has been a dream so far. I'm only now realizing how much of a headache mgpu was.


----------



## Nuke33

Quote:


> Originally Posted by *Tame*
> 
> Haha yeah, I guess it would be unpractical because of the possibility of the water making its way to wrong places. But just as a physics mind experiment, it's cool idea.
> 
> That way we wouldn't have the copper and the contact surfaces in the way of the heat transfer. Say put inflow port at left and outflow at right, channel at middle which goes over the die(s). Make channel height over the die area low for increased flow rate over the dies.
> 
> I wonder how well thin oil works as a coolant


Hmm yeah that actually sounds quite doable, nice idea.









It wouldn't be that hard to machine a simple copper block with tubing terminals, I assume. The really hard part will be finding a seal material that does not become porous from excessive heat over time and has the same stretch factor as copper.

Oil does not transfer heat fast enough; water is a better medium for rapid heat exchange. I think some sort of cryogenic coolant would be interesting, although that would complicate sealing even more.








Quote:


> On another note, using traditional blocks I think too much tim is better than too little tim. When you tighten the block down properly, excess tim will be squeezed out. You always get about the same amount of tim between the die and block, unless you put too little.


Unless you use liquid metal TIM, you are probably right.


----------



## Azazil1190

Guys, which drivers are better for OC stability and performance?

I'm on 17.9.1 and I'm fully stable in anything: gaming, OCing, benching.

I gave 17.9.2 a try and had better fps in benches, but not very good OC results and stability; even on the balanced profile I had a crash at the beginning of Firestrike, though I was stable in games.

Same results for 17.9.3.

Thanks in advance.


----------



## Caldeio

Quote:


> Originally Posted by *rancor*
> 
> While your power supply shouldn't cause you to have booting issues a 750W is not realistically large enough for two of thease cards.


Quote:


> Originally Posted by *Reikoji*
> 
> 2 vegas with a 750w? I'd say the PSU is all of his booting issues right now :3. Those vegas probably melted that thing. time to order that 1600w.


Sorry, got it working just fine!! The 750w is fine lol. The cards use 130w and 160w mining, or about 220 each benchmarking.

I can almost hit 11k in 4K Superposition.







Got 6880 on the xfx before, so it's a bit low?? 17.9.3 driver, now I'm on blockchain. I'll test again soon.


----------



## TrixX

Quote:


> Originally Posted by *Caldeio*
> 
> Sorry, got it working just fine!! The 750w is fine lol. The cards use 130w and 160w mining, or about 220 each benchmarking.
> 
> I can almost hit 11k in 4K Superposition.
> 
> 
> 
> 
> 
> 
> 
> Got 6880 on the xfx before, so it's a bit low?? 17.9.3 driver, now I'm on blockchain. I'll test again soon.


Ya wot :O

11k in 4k Superposition? I'm just about pushing 6900 with mine on Air, what settings are you using to get 11k???


----------



## Caldeio

Quote:


> Originally Posted by *TrixX*
> 
> Ya wot :O
> 
> 11k in 4k Superposition? I'm just about pushing 6900 with mine on Air, what settings are you using to get 11k???


With two cards!! Crossfire 1x1 setup with no frame pacing. Afterburner and gpu-z running for each gpu (this did not help scores lol), hwinfo64 running too.


I got 6880, i think, with my XFX 56. Never tested my powercolor, but together this is what they get.

The next run will be with nothing running.







Also gonna run my fans higher than stock lol and turn on my BIG exhaust fan for the room.


----------



## TrixX

Thank Fark for that. I was utterly worried I had lost the ability to OC correctly there









Quite happy with this then, running ~1620MHz for the run.


----------



## Azazil1190

Here is mine!
Does Superposition not read the core clock right?



And one run on 1080p Extreme. Really heavy at these settings.



A much better score undervolting the card.


----------



## pengs

Quote:


> Originally Posted by *dagget3450*
> 
> Man, look at all the defense of Nvidia in Forza 7.


It is funny. The cognition is not too strong with these ones.
Quote:


> Originally Posted by *Reikoji*
> 
> I bet if that 64 had some tinkering done to it, which it probably didnt especially memory wise, it would probably have stomped in 4k too. They didn't mention any fine tuning on any of the cards so I assume it was just a stock air 64.


That's also what I'm thinking, and who knows if it was throttling. I'd like to know if the double-rate fp16 feature and primitive discard are being used. A properly cooled Vega 64 alone would probably have matched the Ti at 4K.
Quote:


> Originally Posted by *Nuke33*
> 
> Only downside is the occasional boost of death. I really think AMD should include a switch in Wattman custom to cap the max boost MHz to a user-configurable value.


Are you using 17.9.3? I've read that people are having an easier time recovering from the runaway boost clock. It seems to dump the driver before locking up.


----------



## springs113

All this hotspot talk led me to do some research of my own.
Here's one of my cards right after looping Valley and Heaven for 30 minutes.


----------



## Azazil1190

https://videocardz.net/gigabyte-radeon-rx-vega-64-8gb-gaming-oc/


----------



## Reikoji

Quote:


> Originally Posted by *Azazil1190*
> 
> https://videocardz.net/gigabyte-radeon-rx-vega-64-8gb-gaming-oc/


----------



## PontiacGTX

Quote:


> Originally Posted by *Azazil1190*
> 
> https://videocardz.net/gigabyte-radeon-rx-vega-64-8gb-gaming-oc/


The 390 didn't overheat with a lower TDP?


----------



## Soggysilicon

Quote:


> Originally Posted by *Azazil1190*
> 
> Guys, which drivers are better for OC stability and performance?
> 
> I'm on 17.9.1 and I'm fully stable in anything: gaming, OCing, benching.
> 
> I gave 17.9.2 a try and had better fps in benches, but not very good OC results and stability; even on the balanced profile I had a crash at the beginning of Firestrike, though I was stable in games.
> 
> Same results for 17.9.3.
> 
> Thanks in advance.


9.2 and 9.3 may as well be the same driver as best I can tell. 9.2 seems to have a little more (for lack of a better description) IPC, but the OC'ing doesn't seem as strong.

That said, once I came off my OC a touch with 9.2, the benches were just as strong in some cases and outright better per clock than 9.1. Soooo... I am sticking with 9.2; I HOPE the next set of drivers opens up RPM and that we get the Serra benchmark before FC5/Wolf2 drop.
Quote:


> Originally Posted by *TrixX*
> 
> Ya wot :O
> 
> 11k in 4k Superposition? I'm just about pushing 6900 with mine on Air, what settings are you using to get 11k???


mGPU







A solid performance for 64cu on fluids is 7-7.1k, maybe 7.2 if the gods of silicon lottery smiled upon you.








Quote:


> Originally Posted by *Azazil1190*
> 
> https://videocardz.net/gigabyte-radeon-rx-vega-64-8gb-gaming-oc/


I LIKE that backplate, wish they would just sell me that!








Quote:


> Originally Posted by *Azazil1190*
> 
> Guys, which drivers are better for OC stability and performance?
> 
> I'm on 17.9.1 and I'm fully stable in anything: gaming, OCing, benching.
> 
> I gave 17.9.2 a try and had better fps in benches, but not very good OC results and stability; even on the balanced profile I had a crash at the beginning of Firestrike, though I was stable in games.
> 
> Same results for 17.9.3.
> 
> Thanks in advance.


Same advice as above, except I will add that I can successfully utilize HBCC on 9.2 and not get crippled in some applications. I came off the OC a touch and score as well as, or better than, on 9.1. Overboosting is an issue, so you'll need to tinker. Be sure to reload your driver or flat-out reboot your machine before getting into comparison benches... the drivers (all flavors) are a little suspect.


----------



## Azazil1190

Quote:


> Originally Posted by *PontiacGTX*
> 
> 390 didnt overheat with lower TDP?


hahaha, agreed, but i hope for better


----------



## Azazil1190

Quote:


> Originally Posted by *Soggysilicon*
> 
> 9.2 and 9.3 may as well be the same driver as best as I can tell. 9.2 there seems to be a little more (for lack of a better description) IPC, but the OC'n doesn't seem as strong.
> 
> That said, the benches once I came off my OC' a touch with 9.2 where just as strong in some cases and outright better per clock than 9.1. Soooo... I am sticking with 9.2; I HOPE the next set of drivers open up RPM and that we get the serra benchmark before FC5/Wolf2 drop.
> mGPU
> 
> 
> 
> 
> 
> 
> 
> A solid performance for 64cu on fluids is 7-7.1k, maybe 7.2 if the gods of silicon lottery smiled upon you.
> 
> 
> 
> 
> 
> 
> 
> 
> I LIKE that backplate, wish they would just sell me that!
> 
> 
> 
> 
> 
> 
> 
> 
> Same advice as above, except I will add that I can successfully utilize HBCC on 9.2 and not get crippled in some applications. Came off the OC' a touch and score as well or better than 9.1, overboosting is an issue, so you'll need to tinker. Be sure to reload your driver or flat out reboot your machine before getting into comparison benches... drivers (all flavors) are a little suspect.


I like the Giga's backplate too, but truth be told I prefer the bp of my LC.








Btw, I think I'm gonna stay on 9.1 until amd gives us good stable-performance drivers.

thanks for your answers bro


----------



## Soggysilicon

Quote:


> Originally Posted by *Azazil1190*
> 
> I like the backplate of giga too but i prefer the bp of my lc the truth is
> 
> 
> 
> 
> 
> 
> 
> 
> Btw im gonna stay with the 9.1 i think untill amd give us a good stable-perf. drivers
> 
> thanks for your answers bro


Np man, I am seriously thinking about rolling back myself... played Prey the other night for about 2 hours and change without a hitch... now tonight... crash city... must be a slightly more intense area with more effects. I went to 9.3 just to see, and it's the same ***** show...







feels like the overboost issue; settings are not sticking.


----------



## Reikoji

1440p forza.





8k upscale no MSAA





8k upscale 8x MSAA

Demo framerate lock :|


----------



## Soggysilicon

Prey was crashing back to back... went to test my OC on Heaven... no issues, ran Prey again... played for hours... no issues...

Leads me to conclude that the card gets "set" somehow when something like Heaven is executed; perhaps it's an issue with DP / Freesync? Anyone else have a similar experience?

So, boot up, set OC, run Prey... crash-ville... usually within 3-5 minutes....

Boot up, set OC, run Heaven... then run Prey... no problems... flawless...


----------



## asdkj1740

Quote:


> Originally Posted by *Azazil1190*
> 
> https://videocardz.net/gigabyte-radeon-rx-vega-64-8gb-gaming-oc/


there are thermal pads between the pcb and the backplate, as well as between the pcb and the heatsink, which are generally used to cool gddr5 chips.

and i really doubt the cooling performance of this cooler given its size.


----------



## Azazil1190

Quote:


> Originally Posted by *Soggysilicon*
> 
> Prey was crashing back to back... went to test my OC on Heaven... no issues, ran Prey again... played for hours... no issues...
> 
> Leads me to conclude that the card gets "set" somehow when something like Heaven is executed; perhaps it's an issue with DP / Freesync? Anyone else have a similar experience?
> 
> So, boot up, set OC, run Prey... crash-ville... usually within 3-5 minutes....
> 
> Boot up, set OC, run Heaven... then run Prey... no problems... flawless...


I think it's just the unstable drivers, 9.2 and 9.3.
I'm on 9.1 and I'm 100% stable.









if you have a c.f setup, i know the only way is 9.2, but if not, roll back to 9.1


----------



## lowdog

Quote:


> Originally Posted by *Soggysilicon*
> 
> Prey was crashing back to back... went to test my OC on Heaven... no issues, ran Prey again... played for hours... no issues...
> 
> Leads me to conclude that the card gets "set" somehow when something like Heaven is executed; perhaps it's an issue with DP / Freesync? Anyone else have a similar experience?
> 
> So, boot up, set OC, run Prey... crash-ville... usually within 3-5 minutes....
> 
> Boot up, set OC, run Heaven... then run Prey... no problems... flawless...


I get the same; sometimes just firing up 3DMark Fire Strike with a specific overclock on my 56 @ 64 bios will cause a system freeze / vid driver reset / reboot, with the same overclock settings I had just previously gamed on for hours.

The same scenario happens with 17.9.1, 9.2, and 9.3... I reckon it's Wattman; it's flaky, full stop!... Sometimes oc settings seem to "STICK" and other times they only appear to, but something is flaky. Perhaps it has to do with the whole architecture of Vega and its boosting... I dunno.


----------



## Rootax

Same sort of thing here. My OC works 24h+ in Futuremark, Heaven, etc... I can play XCOM 2 for hours. Then I stop, start again, and the driver will crash (Direct3D device lost) a few minutes in. I think in the end it's a driver issue, because some weird stuff like that happened at stock clock speeds too.


----------



## TrixX

Interesting, I've had a couple of crashes from the CoD WW2 Beta (crap game BTW







) though none with games I use regularly.

Moved over to OverdriveNTool, as it's more stable applying OC/UV to the system, has profiles, and reloads them at startup, unlike Wattfail.

One thing I have noticed, though, is that my most stable clocks come when I limit them via voltage rather than via the clocks directly. I know, for instance, that I'll get a consistent ~1580MHz at 950mv on the P7 state with the actual clock value at 1752. Up the mv to 1000 and it'll be a solid ~1600MHz.

If I leave the voltage at the stock 1200mv on P7 and try to set clocks, it just throttles them hard every time the heat exceeds the crappy air cooler's capabilities.
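A rough first-order take on why limiting via voltage works so well: dynamic power scales roughly with frequency × voltage², so the 1200mv → 950mv drop saves far more power than the clock loss costs. A sketch using only the numbers quoted above (this ignores leakage; it's an estimate, not a measurement):

```python
# Dynamic power relative to the stock P7 state, P ~ f * V^2.
def relative_power(freq_mhz, volt_mv, ref_freq_mhz=1752, ref_volt_mv=1200):
    return (freq_mhz / ref_freq_mhz) * (volt_mv / ref_volt_mv) ** 2

print(round(relative_power(1580, 950), 2))  # ~0.57: roughly 43% less power...
print(round(1580 / 1752, 2))                # ~0.9: ...for only ~10% less clock
```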


----------



## Azazil1190

Im gonna give a try to the new beta

4.4.0 beta 18

https://forums.guru3d.com/threads/rtss-6-7-0-beta-1.412822/page-40#post-5477649

Changes list includes:

- Partial AMD Vega support:
+ Fan control for Vega is supported now. Please take a note that Vega doesn't provide native support for traditional PWM duty cycle based fan control (i.e. it doesn't allow setting desired fixed fan speed in % directly). So fan speed percent scale in MSI Afterburner is internally linearly mapped to RPM scale.
+ Low-level voltage control via direct access to SMC is currently not implemented for Vega, so voltage is controlled via AMD ADL API. Which means that you cannot set voltages higher than allowed by Wattman's. Honestly I'm not sure if it worth implementing low-level voltage control for those cards, they are really power hungry and power limited even on default clocks/voltages.
- Now drag and drop is supported for multiple selected graphs in active hardware monitoring graphs list in "Monitoring" tab. So it is a bit easier to rearrange the graphs list now.
- Now you may right click active hardware monitoring graphs list in "Monitoring" tab and select "Reset order" command from the context menu to reset default active graphs order.
- Added Prolog and Epilog properties to "Separators" property node in OSD layout editor. Prolog and epilog allow you to display any custom text info above and below OSD (e.g. branding text, URL, your system specs etc). Both prolog and epilog support macro variables, allowing you to insert desired system specs automatically. The list of supported macro variables includes: %CPU%, %FullCPU%, %RAM%, %GPU%, %FullGPU%, %Driver% and %Time%.
- Added new "Group separator" property node to OSD layout editor. Group separators can be used to vertically split the groups if necessary.
- Added new "modern web" OSD layout to OSD. The layout is using new prolog and epilog to render branding text and system specs in OSD and group separators to split GPU, CPU/RAM and 3D application related statistics in OSD.
- Both MSI Afterburner and RTSS installers are preserving installation path now.
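The fan note in the changelog just describes a linear interpolation from the % slider onto the card's RPM range. A sketch of that mapping (the min/max RPM values here are placeholders, not the card's real limits):

```python
# Linear map of a 0-100% fan slider onto an RPM range.
def percent_to_rpm(percent, min_rpm=400, max_rpm=4900):
    percent = max(0.0, min(100.0, percent))  # clamp out-of-range input
    return round(min_rpm + (max_rpm - min_rpm) * percent / 100)

print(percent_to_rpm(0), percent_to_rpm(50), percent_to_rpm(100))
# 400 2650 4900
```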


----------



## maxpowers1122

Quote:


> Originally Posted by *Caldeio*
> 
> With two cards!! Crossfire 1x1 setup with no frame pacing. Afterburner and gpu-z running for each gpu (this did not help scores lol), hwinfo64 running too.
> 
> 
> I got 6880, i think, with my XFX 56. Never tested my powercolor, but together this is what they get.
> 
> The next run will be with nothing running.
> 
> 
> 
> 
> 
> 
> 
> Also gonna run my fans higher than stock lol and turn on my BIG exhaust fan for the room.


How did you get it to recognize crossfire? When I run mine in crossfire only 1 gpu is working, and it gets the same score as a single card. On the results screen it says Vega x2 instead of listing 2 gpus like yours shows. Crossfire works fine in Time Spy and Fire Strike.

Here are my scores for Time Spy and Fire Strike Ultra.

https://www.3dmark.com/spy/2470135

https://www.3dmark.com/fs/13758068

Dual Vega 56 running 64 bios.


----------



## Nuke33

Quote:


> Originally Posted by *pengs*
> 
> Are you using 17.9.3? I've read that people are having an easier time recovering from the runaway boost clock. It seems to dump the driver before locking up.


Yes, I am using 17.9.3, but unfortunately it does not prevent my card from doing the occasional boost that is completely above the set limits.
I am using a fully unlocked PT. The card never throttles from power or from temperature.


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> Yes I am using 17.9.3, but unfortunately it does not prevent my card from doing the occasional boost that is completely above set limits.
> I am using a fully unlocked PT. The card never throttles from power nor from temperature.


I never get that overboost, but then I'm throttling the Core MHz using power rather than setting core and using higher or stock power. It can't overboost if it hasn't got the power to do so









P7 - 1752MHz - 950mv
P6 - 1667MHz - 900mv
HBM - 1050MHz - 900mv


----------



## The EX1

Quote:


> Originally Posted by *jehovah3003*
> 
> Well, I'm ending up sending back my RX Vega 64 Liquid Edition once again; the noise is just so loud it gives me headaches, and I've never heard any fan make that much noise. NO noise was supposed to be the point of this liquid edition. Good job AMD, another failed AIO card, just like the Fury X.


That is coil whine, not the fan. Having a near silent cooling solution will let you hear what is normally masked by the sound of an air cooler.


----------



## Nuke33

Quote:


> Originally Posted by *The EX1*
> 
> That is coil whine, not the fan. Having a near silent cooling solution will let you hear what is normally masked by the sound of an air cooler.


I second that, coil whine is very noticeable on LC.


----------



## Nuke33

Quote:


> Originally Posted by *TrixX*
> 
> I never get that overboost, but then I'm throttling the Core MHz using power rather than setting core and using higher or stock power. It can't overboost if it hasn't got the power to do so
> 
> 
> 
> 
> 
> 
> 
> 
> 
> P7 - 1752MHz - 950mv
> P6 - 1667MHz - 900mv
> HBM - 1050MHz - 900mv


Yeah that would probably work, but I kind of dislike the idea of not having the full potential of this card all the time if temps are reasonable enough.









I have finished a powerplay mod which seems to have alleviated the overboost problem, but I have not tested it enough. Is someone else willing to give it a shot?

It is basically as follows:

p0&p1 untouched
p2 - 950mv - 1302mhz
p3 - 950mv - 1348mhz
p4 - 965mv - 1408mhz
p5 - 980mv - 1667mhz
p6&p7 - 1065mv - 1792mhz
PT 265W
HBM clocks untouched

Powerplay reg file -->> https://ufile.io/apmxh

By setting p5 to a higher clock and p6&p7 to the same clocks and voltage, the overall clock distribution seems more predictable now. On 17.9.3 at least.
The bug that lets the driver jump to p5 when setting p6&p7 to the same clock value does not happen with this mod, at least not on 17.9.3.

1065mv results in 1704MHz max clocks on very low load.
Max GPU-only power consumption with +50% PT is roughly 280W in a Firestrike Ultra graphics test 1 loop at 4K. Real consumption is probably around 360W.

Firestrike Ultra with that profile gets me 6009 graphics score.
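Since the mod is distributed as a .reg export, anyone wanting to sanity-check the table before importing it can pull the binary blob out of the file first. Below is a minimal Python sketch, assuming a standard regedit export with a REG_BINARY `hex:` value; the value name `PP_PhmSoftPowerPlayTable` is the one soft-PowerPlay mods typically target, and the toy bytes in the example are illustrative, not the actual table:

```python
import re

def reg_binary_value(reg_text, value_name):
    """Extract a REG_BINARY value ("name"=hex:aa,bb,...) from a .reg export.

    Handles the line continuations (trailing backslash) that regedit
    inserts when exporting long binary values.
    """
    pattern = re.escape(f'"{value_name}"') + r'=hex:([0-9a-fA-F,\s\\]+)'
    m = re.search(pattern, reg_text)
    if m is None:
        raise KeyError(f"{value_name} not found in .reg text")
    # The reg format always writes bytes as two hex digits separated by commas.
    pairs = re.findall(r'[0-9a-fA-F]{2}', m.group(1))
    return bytes(int(p, 16) for p in pairs)

# Example with a toy export (real PowerPlay tables are several hundred bytes):
sample = (
    'Windows Registry Editor Version 5.00\n\n'
    r'[HKEY_LOCAL_MACHINE\SYSTEM\...\0000]' '\n'
    '"PP_PhmSoftPowerPlayTable"=hex:01,00,5e,\\\n  02,ff\n'
)
blob = reg_binary_value(sample, "PP_PhmSoftPowerPlayTable")
print(len(blob), blob.hex())  # prints: 5 01005e02ff
```

Comparing the extracted blob's length and a hexdump against a known-good export is a cheap way to spot a truncated or mangled download before it ever touches the registry.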


----------



## tlblight

After watching Gamers Nexus' new video on 56 vs 64 clocks: he says the 56 can clock higher than the 64 and be more stable during gaming at higher clocks, because the 56 lacks the additional shaders the 64 has to power. So I was wondering if it's possible to BIOS-flash a 64 with a 56 BIOS to find this out. Wondering if anyone can test it out?


----------



## The EX1

Quote:


> Originally Posted by *tlblight*
> 
> After watching Gamers Nexus new video on clock 56 Vs 64, he says that the 56 can clock higher than 64 and be more stable during gaming with higher clocks due to the lack of additional shaders that the 56 has to power. So I was wondering if it's possible to bios flash a 64 with a 56 bios to find this out. Wondering if anyone can test it out?


You may be hard pressed to find someone who would do that. I watched the same video and the 64 was usually 2-3% ahead of the 56. This is probably around what you would get with another 40mhz of overclock headroom anyway (assuming your 64 clocks badly). Moving to the 56 BIOS will also limit the HBM clock you can attain so that will hamper performance further.

It has also been shown that extra shaders are not enabled when flashing a 56 to 64, so I doubt a BIOS flash would change shader count going from a 64 to a 56 anyway.


----------



## tlblight

I'm just wondering, can't you edit the registry for the HBM to compensate for the memory? I know the 56 has Hynix memory and the 64 has Samsung memory. Man, I really want to know... My 64 is water-cooled and it hits 24989 on Firestrike and 121ö6 on Extreme, at 1727/1100.


----------



## Nuke33

Quote:


> Originally Posted by *tlblight*
> 
> I'm just wondering, can't edit the registry for the HBM to compensate for that? I know the 56 has hynix memory and the 64 has Samsung memory. Man I really want to know... My64 is water-cooled and it hits 24989 on firestrike and 121ö6 on extreme. At 1727/1100


IIRC the HBM voltage is predetermined by the BIOS. So modding the registry will not be beneficial, as you are limited to the 1.25v HBM of the Vega 56 BIOS.


----------



## tlblight

Man... I'm really annoyed by all this... I'm just brainstorming at this point... Thanks for your replies, guys.


----------



## Azazil1190

26K graphics score niceeee
















https://www.3dmark.com/3dm/22470767?


----------



## pmc25

Btw to add to data about GPU Hot Spot:

RX 64 Air with EKWB block.
Stock card's plastic backplate.
Stock EK heatpads for VRMs etc to block contact.
Liquid Metal Ultra on HBM and GPU.
Non-moulded die.
EK Predator 240 'AIO' (not realllyyy an AIO) on a dedicated loop.

3 runs of Metro Redux, with GPU core mostly oscillating between 1670 and 1685MHz. HBM2 locked to 1090MHz.

Hot Spot's Max was 56C for a fraction of a second, but spent most of the time in the 42-48C range (1st run no real difference to last), and would drop to ~30C in a fraction of a second in the half second or so between the end of 1 run and the beginning of another.
GPU Max temp was 33C.
HBM2 Max temp was 35C.

Whatever the Hot Spot is, heat dissipates extremely quickly when load is backed off.

Also, I suspect my results for it are better than most due to LMU, despite a smaller loop, no metal backplate, and no Fujipoly. Thereby pretty much confirming Hot Spot is somewhere on the die or memory, or around them.


----------



## ducegt

Do you guys think Vega will have HDMI 2.1 support? It enables 4K120hz and FreeSync to be enjoyed on large TVs. DP to HDMI 2.1 adapter a possibility? I'm gunna wait for the new TVs, and Vega has made me good at waiting.


----------



## rancor

Quote:


> Originally Posted by *tlblight*
> 
> After watching Gamers Nexus new video on clock 56 Vs 64, he says that the 56 can clock higher than 64 and be more stable during gaming with higher clocks due to the lack of additional shaders that the 56 has to power. So I was wondering if it's possible to bios flash a 64 with a 56 bios to find this out. Wondering if anyone can test it out?


The 56 bios has a lower HBM voltage 1.25V vs 1.35V so you will sacrifice HBM clocks probably forcing you down to the 950 MHz range.


----------



## pmc25

Quote:


> Originally Posted by *ducegt*
> 
> Do you guys think Vega will have HDMI 2.1 support? It enables 4K120hz and FreeSync to be enjoyed on large TVs. DP to HDMI 2.1 adapter a possibility? I'm gunna wait for the new TVs, and Vega has made me good at waiting.


I highly doubt 2018 TVs will support this, even if they 'support' HDMI 2.1.

At a guess, some of the high end 2018 LG OLEDs *might* support it with later firmware upgrade.


----------



## Nuke33

Quote:


> Originally Posted by *Nuke33*
> 
> Yeah that would probably work, but I kind of dislike the idea of not having the full potential of this card all the time if temps are reasonable enough.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have finished a powerplay mod which seems to have alleviated the overboost problem. But I have not tested it enough. Is someone else willing to give it a shot ?
> 
> It is basically as follows:
> 
> p0&p1 untouched
> p2 - 950mv - 1302mhz
> p3 - 950mv - 1348mhz
> p4 - 965mv - 1408mhz
> p5 - 980mv - 1667mhz
> p6&p7 - 1065mv - 1792mhz
> PT 265W
> HBM clocks untouched
> 
> By setting p5 to a higher clock and p6&p7 to the same clocks and voltage, the overall clock distribution seems more predictable now. On 17.9.3 at least.
> The bug that lets the driver jump to p5 when setting p6&p7 to the same clockvalue does not happen with this mod, at least not on 17.9.3.
> 
> 1065mv results in 1704mhz max clocks on very low load.
> Max GPU only powerconsumption with +50%PT is at roughly 280W in Firestrike Ultra grpahicstest 1 loop in 4K. Real consumption is probably around 360W.
> 
> Firestrike Ultra with that profile gets me 6009 graphics score.


Here is my modified PowerPlay table. It would be great if someone would give it a try and let me know if they encounter any problems, especially in regard to overboosting.

https://ufile.io/apmxh

PS: I recommend to only use it with watercoolers or AIOs.


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> IIRC the HBM voltage is predetermined by the bios. So modding the registry will not be beneficial as you are limited to 1.25v HBM of the vega56 bios.


Isn't it 1.15v for the 56, 1.2v for the 64 and 1.25v for the LC 64??


----------



## pmc25

Quote:


> Originally Posted by *TrixX*
> 
> Isn't it 1.15v for the 56, 1.2v for the 64 and 1.25v for the LC 64??


No. All 64 and 64LC are 1.35V.

I don't know what 56 is, but it has lower bin SK Hynix HBM2 anyway.


----------



## Nuke33

Quote:


> Originally Posted by *TrixX*
> 
> Isn't it 1.15v for the 56, 1.2v for the 64 and 1.25v for the LC 64??


I believe those are the core voltages. HBM are 1.35v for Vega64 and 1.25v for Vega56.

Edit: PMC25 was faster


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> Yeah that would probably work, but I kind of dislike the idea of not having the full potential of this card all the time if temps are reasonable enough.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have finished a powerplay mod which seems to have alleviated the overboost problem. But I have not tested it enough. Is someone else willing to give it a shot ?
> 
> It is basically as follows:
> 
> p0&p1 untouched
> p2 - 950mv - 1302mhz
> p3 - 950mv - 1348mhz
> p4 - 965mv - 1408mhz
> p5 - 980mv - 1667mhz
> p6&p7 - 1065mv - 1792mhz
> PT 265W
> HBM clocks untouched
> 
> By setting p5 to a higher clock and p6&p7 to the same clocks and voltage, the overall clock distribution seems more predictable now. On 17.9.3 at least.
> The bug that lets the driver jump to p5 when setting p6&p7 to the same clockvalue does not happen with this mod, at least not on 17.9.3.
> 
> 1065mv results in 1704mhz max clocks on very low load.
> Max GPU only powerconsumption with +50%PT is at roughly 280W in Firestrike Ultra grpahicstest 1 loop in 4K. Real consumption is probably around 360W.
> 
> Firestrike Ultra with that profile gets me 6009 graphics score.


Well I can't run Firestrike on this machine. It had a crap out just after install and won't re-install correctly.

Will test your clocks now and see how they go, but I don't think my card can cool that well to run 1700+ clocks.


----------



## Nuke33

Quote:


> Originally Posted by *TrixX*
> 
> Well I can't run Firestrike on this machine. It had a crap out just after install and won't re-install correctly.
> 
> Will test your clocks now and see how they go, but I don't think my card can cool that well to run 1700+ clocks.


Cool, can't wait to hear your results









Please use the powerplaytable I posted though. --> https://ufile.io/apmxh

Most of the time you will not reach 1704mhz. More like 1620mhz.


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> Cool, can´t wait to hear your results
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Please use the powerplaytable I posted though. --> https://ufile.io/apmxh
> 
> Most of the time you will not reach 1704mhz. More like 1620mhz.


Initial run crashes my machine, unfortunately. I have a feeling that's more to do with instability in the CPU though. It doesn't like power draw above 240ish for the GPU and tends to just hard crash and forget its BIOS profiles









May have to wait for the RAM to arrive for my TR4 build before going further


----------



## Nuke33

Quote:


> Originally Posted by *TrixX*
> 
> Initial run crashes my machine unfortunately. I have a feeling that's more to do with instability in the CPU though. It doesn't like power draw above 240ish for the GPU and tends to just hard crash and forget it's BIOS profiles
> 
> 
> 
> 
> 
> 
> 
> 
> 
> May have to wait for the RAM to arrive for my TR4 build before going further


Oh okay, sorry to hear that. You could try lowering P6&P7 to 980mv. That would give you around 70W less power draw; clocks get reduced to 1500-1550MHz, topping out at around 1632MHz.
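As a sanity check on that ~70W estimate: to first order, core dynamic power scales with f·V², so plugging in the clocks and voltages quoted in these posts (roughly 280W at 1065mv targeting ~1792MHz, vs. topping out around 1632MHz at 980mv) lands in the same ballpark. The constant factor cancels, so this is only a back-of-envelope sketch, not a measurement:

```python
# Rough dynamic-power scaling: P ~ k * f * V^2 for the GPU core.
p_high = 280.0                 # W, reported GPU-only draw at 1065mv
f_high, v_high = 1792, 1.065   # target clock (MHz) and voltage at the high point
f_low,  v_low  = 1632, 0.980   # reported top clock and voltage after the undervolt

# k cancels when taking the ratio of the two operating points.
p_low = p_high * (f_low / f_high) * (v_low / v_high) ** 2
print(round(p_high - p_low))   # prints 64
```

~64W predicted vs. ~70W quoted; close enough given that leakage and the fixed (non-core) draw are ignored here.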


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> Oh okay sorry to hear that. You could try lowering P6&P7 to 980mv. That would give you around 70W less powerdraw and clocks get reduced to 1500-1550mhz and topping out at around 1632mhz.


With my current settings for just normal usage I get around ~1580MHz solid in Superposition at 950mv. When I push for a bit more, I don't adjust the clocks, I just up the mv, and I can hit ~1620 solid with 1100mv, maybe a bit less. But seeing as ambient temps here are usually 30C at the moment, it's a bit tough to do repeat testing. The TR4 with a water loop should solve that nicely though









Will give it another go tomorrow night.


----------



## Nuke33

Quote:


> Originally Posted by *TrixX*
> 
> With my current settings for just normal usage I get around ~1580MHz solid in Superposition at 950mv. When I push for a bit more, I don't adjust the clocks I just up the mv and I can hit ~1620 solid with 1100mv, maybe a bit less. but seeing as ambient temps here are usually 30 at the moment it's a bit tough to do repeat testing. The TR4 with a water loop should solve that nicely though
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Will give it another go 2mo night.


I do not use Superposition so I don't know how high the boost clocks would be there. I generally use Firestrike Ultra graphics test 1 at 4K in an endless loop to determine stability.

1580MHz @ 950mv is very good. If that is also the case for high loads, you have a very good ASIC









Thanks for giving it a shot


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> With my current settings for just normal usage I get around ~1580MHz solid in Superposition at 950mv. When I push for a bit more, I don't adjust the clocks I just up the mv and I can hit ~1620 solid with 1100mv, maybe a bit less. but seeing as ambient temps here are usually 30 at the moment it's a bit tough to do repeat testing. The TR4 with a water loop should solve that nicely though
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Will give it another go 2mo night.


Hoho nice, tried your clocks and now running a super undervolt at [email protected] mV!


----------



## lowdog

Quote:


> Originally Posted by *pmc25*
> 
> No. All 64 and 64LC are 1.35V.
> 
> I don't know what 56 is, but it has lower bin SK Hynix HBM2 anyway.


No it doesn't. I have a Sapphire 56 that has Samsung HBM on it.


----------



## tlblight

tell me more! what is this hot spot you speak of sir?


----------



## diabetes

Quote:


> Originally Posted by *lowdog*
> 
> No it doesn't. I have a Sapphire 56 that has Samsung HBM on it.


Same here, Sapphire Vega 56 bought from Caseking Germany on launch day. The card can do 1000MHz HBM on the stock BIOS with an EKWB block attached (haven't tested higher clocks yet).


----------



## diabetes

Quote:


> Originally Posted by *tlblight*
> 
> tell me more! what is this hot spot you speak of sir?


Hotspot is a sensor reading that is shown in GPU-Z on Vega cards. Speculation is going on about what exactly is measured. The most common theories are that it is part of the GPU die (L2 cache, IMC) or the HBM logic layer, whereas the actual HBM temp is taken from the top/middle of the HBM stack. It could also be the temp between GPU die and interposer because there have been reports that there are insulating air pockets in this area.

It has been ruled out that this is part of the PCB or the VRMs.
https://www.techpowerup.com/forums/threads/devs-what-is-gpu-temperature-hot-spot-on-rx-vega.236843/
https://cgit.freedesktop.org/~agd5f...hwmgr/vega10_thermal.h?h=amd-staging-drm-next

On some stock cards or when a custom cooler is not mounted properly this can hit 105C. When this happens, the card throttles. What concerns me about it is that AMD does not provide any information on this. This might lead to dead cards in the future, caused by a partly burnt out chip.

Even with custom cooling, the hotspot runs warmer than GPU and HBM. From what I have seen, the lowest delta anyone could get under load was 10C. Typical ranges are a 15-25C delta compared to GPU core temp with custom cooling.

Please reply to the AMD community forums with an "I also want to know this", and ask Steve from Gamers Nexus if he could cover this topic in his next Ask GN by leaving a short comment on his video.
https://community.amd.com/thread/220022


----------



## tlblight

Here's my setup:
Vega 64
EKWB block with EKWB 360 slim kit (GPU only in that loop)
Regedit power table: I'm getting a 100% power limit,
and a 1250 edit as well, even though I'm only running 1200mv on voltage control.
1100 mem frequency, 950 voltage control.
Here are some links to Firestrike:
https://www.3dmark.com/fs/13769830
https://www.3dmark.com/fs/13769737

I'm just looking to improve anything I can. If you guys have any suggestions, I'm all ears.


----------



## laczarus

Quote:


> Originally Posted by *tlblight*
> 
> and 1250 edit as well even though im only running with 1200 mv on voltage control.


If you're looking to increase the graphics score, I don't think that much voltage is needed tbh.
You should be able to break 25k in graphics score, considering my 56 with the 64 BIOS got 24412 at about 1150mV.
The combined score goes down though, leading to a lower overall score.
I found this to be something of a sweet spot regarding FS graphics scores. For me at least.
I don't have a power table edit applied either.
https://www.3dmark.com/fs/13624182


----------



## tarot

Can anyone with the nice shiny overclock and undervolt etc. run the Firestrike stress test for me and show the results with the settings?

Getting this Vega setup right is trickier than I imagined


----------



## tlblight

With those settings my 64 gets 23497.


----------



## raysheri

Haven't heard of any Vega 56 with Hynix memory. My 56 has Samsung.


----------



## tlblight

https://videocardz.com/72173/there-are-at-least-three-variants-of-vega-10-gpu-packages

This is what I'm talking about. I thought the Vega 56 had different memory...


----------



## Ipak

Just bought the Vega 64 Limited Edition from Sapphire for 2600 PLN (that's around 700 USD, VAT included), which is cheaper than some Vega 56s around here LOL


----------



## Soggysilicon

Quote:


> Originally Posted by *Azazil1190*
> 
> Im gonna give a try to the new beta
> 
> 4.4.0 beta 18
> 
> https://forums.guru3d.com/threads/rtss-6-7-0-beta-1.412822/page-40#post-5477649
> 
> Changes list includes...


REALLY interested in how this works out for you! I miss AB and my RTSS on my G15...







Been banished from my Rig since Vega.








Quote:


> Originally Posted by *tlblight*
> 
> After watching Gamers Nexus new video on clock 56 Vs 64, he says that the 56 can clock higher than 64 and be more stable during gaming with higher clocks due to the lack of additional shaders that the 56 has to power. So I was wondering if it's possible to bios flash a 64 with a 56 bios to find this out. Wondering if anyone can test it out?


Yeaaahhh... I saw that video myself... a couple of folks seem to have already chimed in on it. The short of it is a trade-off: more CUs (which at present are not being fully utilized by much of anything), which themselves incur a stability (or driver) cost, or fewer CUs, more stability, and a higher clock... from a value perspective.

The 56 is certainly the better "buy" if you're looking at a strictly cost-to-performance metric, today; out of the box, purchase vs. purchase.

I would point out that the 64 cards have "generally" shown better HBM clocks. Properly tuned, I have "yet" to see a 56 bench "competitively" with 64 cards on fluids and proper coolers, best tuned vs. best tuned. I certainly haven't seen a large number of 56s shaming 64s either.

There is certainly some lottery here, and I suspect a 56 is a 64 with CUs deactivated from lot sample screening. In a sense, it's Ryzen all over again... you want 4GHz... 1800X... it's not that a 1700 can't do it, but AMD isn't going to guarantee it through XFR. I digress though; the bottom line is that the HBM certainly appears to be binned to the more expensive cards... so even if the core performance is the same, there is that; and HBM either works at the set clocks or crashes, it does not respond well to "pour on the volts".

"If" this is the case, then a 56 will rarely if ever catch a 64, as the higher frequency would put demand on the memory bandwidth; so sure, it's hypothetically maybe even testably faster... but that memory clock... can't feed it, bottleneck...









Now THAT all said, IF (and it's a big IF) 64s could disable some CUs to get a higher stable frequency AND keep the better-binned memory... that would be remarkable... but I don't think we have that capability; I do not believe "down flashing" would have this result. Additionally, you would lose voltage/wattage potential, wrecking the up clocks... sooo... can't win for losing.
Quote:


> Originally Posted by *tlblight*
> 
> tell me more! what is this hot spot you speak of sir?


HAHAHA! This topic just won't die!


----------



## kundica

Quote:


> Originally Posted by *tlblight*
> 
> https://videocardz.com/72173/there-are-at-least-three-variants-of-vega-10-gpu-packages
> 
> this is what im talking about i thought vega 56 had different memory...


Just stop referring to Videocardz as a legitimate source. WhyCry posted that Gigabyte wasn't going to make a Vega card despite the Gigabyte rep on reddit saying that wasn't true.

https://www.reddit.com/r/6t5g57/aorus_x_rx_vega/

Here's WhyCry trying to save face yesterday after news of Gigabyte making one surfaced again.
Quote:


> Contrary to previous reports (which by the way were true, but Gigabyte changed their story after they received a new batch of Vega chips) the company is indeed making a custom Radeon RX Vega 64.


----------



## Azazil1190

Quote:


> Originally Posted by *Soggysilicon*
> 
> REALLY interested in how this works out for you! I miss AB and my RTSS on my G15...
> 
> 
> 
> 
> 
> 
> 
> Been banished from my Rig since Vega.


So far AB isn't ready yet for Vega.
I don't know if it only happens to me, but the power target doesn't work right via AB.
I don't know if it matters that I'm on the 9.1 drivers; anyway, I didn't do more tests.
So back to OverdriveNTool


----------



## TrixX

Quote:


> Originally Posted by *Nuke33*
> 
> Cool, can´t wait to hear your results
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Please use the powerplaytable I posted though. --> https://ufile.io/apmxh
> 
> Most of the time you will not reach 1704mhz. More like 1620mhz.


Right, well after more tinkering I cannot replicate your sub-P6/P7 setup, as OverdriveNTool doesn't allow me to keep the edits to P0-P5.

I also didn't see an improvement over my settings, as with the LC BIOS I've already got an increased power table and +25% already hits more than required. Going above 1100mv on mine currently can cause CPU instability (for whatever reason; the CPU is slightly degraded, so the fact I have some stable clocks is quite cool anyway!). I don't think I can get better results without an improved cooling solution, as even with the fan at 4900RPM I can't keep it below 70C above ~1620MHz. With the stock BIOS I would have headroom to 85C, but I'm not really willing to push the HBM that hot.


----------



## pmc25

Quote:


> Originally Posted by *lowdog*
> 
> No it doesn't. I have a Sapphire 56 that has Samsung HBM on it.


It's still lower bin.


----------



## gupsterg

Quote:


> Originally Posted by *pmc25*
> 
> No. All 64 and 64LC are 1.35V.
> 
> I don't know what 56 is, but it has lower bin SK Hynix HBM2 anyway.
> Quote:
> 
> 
> 
> Originally Posted by *lowdog*
> 
> No it doesn't. I have a Sapphire 56 that has Samsung HBM on it.
> Quote:
> 
> 
> 
> Originally Posted by *pmc25*
> 
> It's still lower bin.
> 
> 
> 
> 
> 

Don't know where this info has come from, but VRAM_Info of all RX VEGA variants shows only 1 IC type supported, Samsung KHA843801B.

1.25V is what PowerPlay of RX VEGA 56 highlights as HBM voltage. All other VEGA cards are 1.35V.

I believe "yield/quality" was not as good and to reach higher speeds they had to pump extra voltage to HBM2, just like HBM1.


----------



## TrixX

Quote:


> Originally Posted by *gupsterg*
> 
> Don't know where this info has come from, but VRAM_Info of all RX VEGA variants shows only 1 IC type supported, Samsung KHA843801B.
> 
> 1.25V is what PowerPlay of RX VEGA 56 highlights as HBM voltage. All other VEGA cards are 1.35V.
> 
> I believe "yield/quality" was not as good and to reach higher speeds they had to pump extra voltage to HBM2, just like HBM1.


GPU-z shows my friend's one to be a Micron HBM card (Vega64 and terrible OC capabilities) and I've seen another Hynix based card around too.


----------



## kundica

Quote:


> Originally Posted by *TrixX*
> 
> GPU-z shows my friend's one to be a Micron HBM card (Vega64 and terrible OC capabilities) and I've seen another Hynix based card around too.


He needs to update to the newest GPU-Z. They all said Micron before it was updated to actually work with Vega.


----------



## TrixX

Ah that makes a bit more sense, will let him know.


----------



## tlblight

Since some people don't see Videocardz as a reputable source, here's something from Tom's Hardware. Scroll all the way to the bottom of the page to see the article about the Vega 56 possibly having SK Hynix memory. I'm not saying either way; at bare minimum it's a good read and something to learn about your video card.


----------



## kundica

Quote:


> Originally Posted by *tlblight*
> 
> Since some ppl ddon'tsee videocardz as a reputable source, here's something from Tom's Hardware. scroll all the way to the bottom of the page to see the article about Vega 56 being possible SK hynix memory. I'm not saying either way, at bare minimal this is a good read something to learn about your video card.


There are supposed to be cards with Hynix coming they're just not available yet.


----------



## tlblight

http://www.tomshardware.com/news/amd-vega-package-problem,35281.html Sry I totally forgot to link. Lol


----------



## gupsterg

Anybody thinking they have anything but Samsung can check very easily.

i) Dump a copy of your card's VBIOS.

ii) Download AtomBiosReader (linked in the OP here). Open the VBIOS in the program and a txt listing the tables will be created at the VBIOS's location. Within the txt is the VRAM_INFO table offset. Use HxD to view that area of the VBIOS.





Other than VEGA FE (again Samsung) I have seen no VBIOS with any other HBM IC support.



Members can also view the details in the TechPowerUp GPU VBIOS database; last time I checked, again only Samsung.
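For those who would rather not eyeball hex in HxD, a short script can surface the printable strings in a dump, since memory IC part numbers like the Samsung one mentioned above tend to appear as plain ASCII. This is an illustrative sketch only, not a parser for the actual ATOM VRAM_Info structure, and the `vega64.rom` filename is a placeholder:

```python
def ascii_strings(blob: bytes, min_len: int = 6):
    """Yield printable-ASCII runs of at least min_len bytes from a binary blob."""
    run = bytearray()
    for b in blob:
        if 0x20 <= b < 0x7F:          # printable ASCII range
            run.append(b)
        else:
            if len(run) >= min_len:
                yield run.decode("ascii")
            run.clear()
    if len(run) >= min_len:           # flush a run that reaches end of blob
        yield run.decode("ascii")

# Usage: point it at your dumped VBIOS and look for memory vendors.
# with open("vega64.rom", "rb") as f:
#     for s in ascii_strings(f.read()):
#         if any(v in s.upper() for v in ("SAMSUNG", "HYNIX", "MICRON")):
#             print(s)
```

This won't prove which IC the card actually shipped with (strings can list every supported part), so treat a hit only as a pointer to the VRAM_Info region worth inspecting properly.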


----------



## milan616

Quote:


> Originally Posted by *TrixX*
> 
> Right well after more tinkering I cannot replicate your sub P6/P7 setup as OverdriveNTool doesn't allow me to keep the edits to P0-P5.


Are people able to edit their P0-P5 settings? Also is there a way to disconnect voltage from clockspeed in OverdriveNTool? Right now I'm stuck at 1500-ish MHz if I have 1000 mV set and I'd like to see what, if any, more headroom is left.


----------



## TrixX

Quote:


> Originally Posted by *milan616*
> 
> Are people able to edit their P0-P5 settings? Also is there a way to disconnect voltage from clockspeed in OverdriveNTool? Right now I'm stuck at 1500-ish MHz if I have 1000 mV set and I'd like to see what, if any, more headroom is left.


Not entirely sure it's needed.

I'm running:

P6 - 1667MHz - 900mv
P7 - 1752MHz - 950mv

HBM - 1050MHz - 900mv (here adjust the HBM MHz to what's stable for your sample)

Power is at +10% for daily use and +25% when benchmarking

Fans I set to 500 - 3800 RPM for daily and 1200 - 4900 RPM for benchmarking.

My target temps are Max 70C and Target 65C as I'm on the Liquid BIOS.

During bench runs I usually get ~1588MHz sustained (no fluctuations or downclocking).
If I want to get higher I can up the P7 mv to 1100mv and I'll get a sustained ~1620MHz.

This is on air. I'm heavily limited by temps as anything more than above and the cooler cannot deal with the heat produced and causes driver crashes.

Waiting to do some testing on my water loop when the CPU block finally arrives


----------



## owntecx

Quote:


> Originally Posted by *TrixX*
> 
> Not entirely sure it's needed.
> 
> I'm running:
> 
> P6 - 1667MHz - 900mv
> P7 - 1752MHz - 950mv
> 
> HBM - 1050MHz - 900mv (here adjust the HBM MHz to what's stable for your sample)
> 
> Power is at +10% for daily use and +25% when benchmarking
> 
> Fans I set to 500 - 3800 RPM for daily and 1200 - 4900 RPM for benchmarking.
> 
> My target temps are Max 70C and Target 65C as I'm on the Liquid BIOS.
> 
> During bench runs I usually get ~1588MHz sustained (no fluctuations or downclocking).
> If I want to get higher I can up the P7 mv to 1100mv and I'll get a sustained ~1620MHz.
> 
> This is on air. I'm heavily limited by temps as anything more than above and the cooler cannot deal with the heat produced and causes driver crashes.
> 
> Waiting to do some testing on my water loop when the CPU block finally arrives


That's really strange. How can you go so low on voltages while setting P7 so high? I can set P7 above stock to get more MHz, but it's not always stable in my testing. But that's not it: how does 150mv get you only a 30MHz increase, from 1588 to 1620? I get almost 50% scaling from voltage. For example, setting P7 to 1642 with 1000mv gives me about 1520; setting it to 1100 I get about 1600MHz without changing P7. Another thing I found out: if I leave the HBM voltage at 950mv, I get a lower score and a lower core clock (same settings); matching it with the P7 voltage I get about a 30MHz boost


----------



## TrixX

Quote:


> Originally Posted by *owntecx*
> 
> Thats realy strange, How u can go so low on voltages while u set p7 so high, I can set p7 above to get more mhz, but not always stable from my testing. But thats not it. How 150mv get u only 30mhz increase from 1588 to 1620. I can get almost 50% scale from voltage. Like setting p7 to 1642 with 1000mv gives me about 1520. settings it to 1100 i have about 1600mhz without change to p7. Other thing i find out is, if i let hbm voltage to 950mv, i get less score, and less coreclock (same settings). matching it with p7 voltage i get like 30 mhz boost


Basically I'm using the voltage as the clock throttle. There's a weird relationship between P6 and P7 where the card seems to average between them for the clocks it wants to use. Using the MSI 8774 Liquid Cooled BIOS, P7 defaults to 1752MHz stock and P6 to 1667MHz. I then just run the minimum voltage I can to get the lowest heat through the core/HBM. If I go much below 900 for P6, it sometimes gets stuck in P5, as it's got 1100mv to play with there.

If I ramp P6 to 1700MHz then my speeds go up dramatically, but power draw goes up just as dramatically and the cooler can't cope. Its happy place is ~1600MHz, though in PUBG for example (using ClockBlocker, as otherwise it just gets bored and hangs about in P4) I get roughly ~1640MHz most of the time, and temps are never an issue (64C max, with the fan ramping to around 2800-3200RPM in load scenarios).


----------



## owntecx

Quote:


> Originally Posted by *TrixX*
> 
> Basically I'm using the voltage as the clock throttle. There's a weird relationship with P6 and P7 where it seems to avg between them for the clocks it wants to use. Using the MSI 8774 Liquid Cooled BIOS the P7 defaults to 1752MHz stock and P6 to 1667MHz. I then just run the minimum voltage I can to get the lowest heat through the core/HBM. If I go much below 900 for P6 is sometimes gets stuck in P5 as it's got 1100mv to play with there.
> 
> If I ramp the P6 to 1700MHz then my speeds go up dramatically however power draw goes up just as dramatically and the cooler can't cope. It's happy place is ~1600MHz, though in PUBG for example (using clockblocker as otherwise it just get's bored and hangs about in P4) I get roughly ~1640MHz most of the time and temps are never an issue (64C max and fan ramps to around 2800-3200RPM in load scenario's).


Yes, I'm doing the same. The only thing I find funny is you having P7 at 1752 and it somehow not crashing. I have my P7 at 1652; I had it at 1662 for some time, but it crashed randomly at 975mv. These Vegas are like wild beasts, and you've got to tame them somehow


----------



## milan616

@TrixX are you sure you're getting those speeds? It looks like I can set whatever the hell I want in OverdriveNTool for clock speed and it won't go over ~1500 MHz when I have it set to 1000 mV. If I go up in voltage the clock is allowed to go higher.


----------



## TrixX

Here are my current settings exactly.

Though I am using the bios from here:
MSI RX Vega 64 Liquid Cooled BIOS

I don't reach 1752MHz with the current cooler and voltage as the voltage is the regulator of the clocks. So I sit around ~1620MHz for the most part in games, though it can run up to ~1688MHz if the temps are low.

I use the P6 and P7 states as a high point for the clock speed if temp and power allow and let the voltage control the temp and the power.

I'll up an Afterburner graph image in a min...

EDIT: And here it is. The fan speed reads zero because the BIOS registers 3300RPM as the card's max fan speed, whereas it's set to 3800RPM in OverdriveNTool, which uses the stock fan speed settings (it can be set up to 4900RPM). About a 20 minute run of pCARS 2 is shown from start to finish.


----------



## raysheri

What driver are you using? Also, I can't read that AB graph.


----------



## Chaoz

Quote:


> Originally Posted by *pmc25*
> 
> Btw to add to data about GPU Hot Spot:
> 
> RX 64 Air with EKWB block.
> Stock card's plastic backplate.
> Stock EK heatpads for VRMs etc to block contact.
> Liquid Metal Ultra on HBM and GPU.
> Non-moulded die.
> EK Predator 240 'AIO' (not realllyyy an AIO) on a dedicated loop.
> 
> 3 runs of Metro Redux, with GPU core oscillating between 1670 and 1685Mhz mostly. HBM2 locked to 1090Mhz.
> 
> Hot Spot's Max was 56C for a fraction of a second, but spent most of the time in the 42-48C range (1st run no real difference to last), and would drop to ~30C in a fraction of a second in the half second or so between the end of 1 run and the beginning of another.
> GPU Max temp was 33C.
> HBM2 Max temp was 35C.
> 
> Whatever the Hot Spot is, heat dissipates extremely quickly when the load is backed off.
> 
> Also, I suspect my results for it are better than most due to LMU, despite a smaller loop, no metal backplate, and no Fujipoly. Thereby pretty much confirming Hot Spot is somewhere on the die or memory, or around them.


Stock backplate isn't plastic, fyi.


----------



## raysheri

pmc25, yeah, that looks like a max 20-25°C delta from GPU to hot spot. I get that also; it seems to be fairly common.


----------



## owntecx

I need someone to test something. Whatever P7 or P6 is set to, without powerplay tables (somehow I get fewer MHz, because of less voltage on P4/P5): with all voltages (P6/P7 and HBM) at 1001mv I get 1540MHz. But if I set both P6/P7 and HBM to 1000mv (yes, just 1mv lower) I get 1490MHz; almost 50MHz lost from a single millivolt.


----------



## IvantheDugtrio

I just got my Vega FE air on a waterblock and am doing initial testing. It looks like with the August 13th drivers it still throttles to ~1200 MHz with occasional spikes to ~1600 MHz when running FireStrike. Temperatures seem to peak at 35C when measured with WattTool 0.92.

I tried the powerplay regex mod from a month ago but it looks like it doesn't work anymore. Since it's a Vega FE air it looks like I'm out of luck on BIOS mods and any driver tweaks. I'll try downgrading to the July driver to see if it's any more stable.


----------



## Zero4549

For all of you having massive throttling on liquid cooled cards, something seems wrong. I'm mining on my Vega 64 with the dinky black air cooler at 100% GPU and VRAM usage 24/7. Temps never exceed 70C, and it never throttles. The card is basically stock; the only adjustments made were undervolting and custom fan profiles in Wattman.

This is in an enclosed MATX case with only moderate airflow at best. The only issue I ever have is the fan on the Vega occasionally getting mildly annoying when it goes above 2500RPM or so, typically only if my room is warmer than 78F.

If you guys haven't tried undervolting yet, you probably should. These cards have some absurd headroom in that department.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Zero4549*
> 
> For all of you having massive throttling on liquid cooled cards, something seems wrong. I'm mining on my vega 64 with the dinky black air cooler with 100% GPU and VRAM usage 24/7. Temps never exceed 70c, never throttles. The card is basically stock, with the only adjustments made being undervolting and custom fan profiles in wattman.
> 
> This is in an enclosed MATX case with only moderate airflow at best. Only issue I ever have is the fan in the vega occasionally getting mildly annoying when it gets above 2500RPM or so. Typically only if my room is warmer than 78f.
> 
> If you guys haven't tried undervolting yet, you probably should. These cards have some absurd headroom in that department.


I've tried undervolting but 1150 mV was the lower limit that FireStrike would run stable. Maybe I just have a really bad bin. Also with the state of the drivers I don't know if I can trust the reported clock speeds or voltages. Only performance can tell if there is an improvement.


----------



## Zero4549

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> I've tried undervolting but 1150 mV was the lower limit that FireStrike would run stable. Maybe I just have a really bad bin. Also with the state of the drivers I don't know if I can trust the reported clock speeds or voltages. Only performance can tell if there is an improvement.


That's fair.

I know my undervolt works simply because it shows an obvious reduction in temps, and similarly a reduction in power being drawn from my metered UPS.

According to Wattman, I'm running at 1040mv. Who knows what that figure actually translates to; all I know is it's lower than stock, as evidenced above, and it has had no ill effect on my mining performance.

Could be you have a bad card (or perhaps mine is golden), or our numbers are just reporting VERY differently, or maybe mine isn't even firestrike stable but none of the mining algorithms I use care.
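A back-of-envelope check on why an undervolt shows up on a metered UPS: dynamic power scales roughly with f * V^2. The 1040mv figure is from the post above; the 1200mv stock value is an assumption used purely for illustration.

```python
# Rough dynamic-power scaling at constant clock, P ~ f * V^2.
# 1040mv is the Wattman reading from the post; ~1200mv stock is an
# assumed reference value, not a measured one.

def dynamic_power_ratio(v_new_mv: float, v_old_mv: float) -> float:
    """Ratio of dynamic power at the same clock frequency."""
    return (v_new_mv / v_old_mv) ** 2

ratio = dynamic_power_ratio(1040, 1200)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic core power")  # ~25%
```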


----------



## Soggysilicon

Quote:


> Originally Posted by *Azazil1190*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Soggysilicon*
> 
> REALLY interested in how this works out for you! I miss AB and my RTSS on my G15...
> 
> 
> 
> 
> 
> 
> 
> Been banished from my Rig since Vega.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So far a.b isnt ready yet for vega
> I dont know if only happen to me but the power target dosnt work right via a.b.
> I dont know if matter that im on 9.1 drivers anyway i didnt made more tests
> So back to overdriventool
Click to expand...

Blah, well thanks for getting back to me on it. I was reaaallly hoping AB had this worked out. Looks like I am going to roll back to 9.1; the 9.2/9.3 drivers are janky'Mc Jank. Setting P7 to 1776 at 1250mv with +50% power and benching... scores, of course, were going down, down, down until the driver would completely hang... complete nonsense.

I am "highly" suspicious at this point that executing an older DirectX build somehow "locks" the card, or modifies how it functions. This behavior (if it was present) was not nearly as bad in 9.1.

Hence why I am able to run Prey, for example, with certain OCs only "after" having launched Heaven.
Quote:


> Originally Posted by *Zero4549*
> 
> For all of you having massive throttling on liquid cooled cards, something seems wrong. I'm mining on my vega 64 with the dinky black air cooler with 100% GPU and VRAM usage 24/7. Temps never exceed 70c, never throttles. The card is basically stock, with the only adjustments made being undervolting and custom fan profiles in wattman.
> 
> This is in an enclosed MATX case with only moderate airflow at best. Only issue I ever have is the fan in the vega occasionally getting mildly annoying when it gets above 2500RPM or so. Typically only if my room is warmer than 78f.
> 
> If you guys haven't tried undervolting yet, you probably should. These cards have some absurd headroom in that department.


Card behavior "seems" very much dictated by the DirectX APIs from which the card is executing instructions, the driver revision, and the power settings. If your program isn't attempting to access certain functions, you may never see the overboost issue... I assure you, it does happen, and in some cases I can reliably repeat the event.

Below is an example where I can reliably recreate the overboost event in Heaven at a precise moment, at will. The problem (or problems) is in the driver.


----------



## Zero4549

Quote:


> Originally Posted by *Soggysilicon*
> 
> Blah, well thanks for getting back to me on it. I was reaaallly hoping AB had this worked out. Looks like I am going to roll-back to 9.1, the 9.2/9.3 drivers are janky'Mc Jank. Setting p7 to 1776 1250 mv +50pwr and benching... scores, of course, where going down down down until the driver would completely hang... complete nonsense.
> 
> I am "highly" suspicious at this point that executing an older direct X build somehow "locks" the card, or modifies how it functions. This behavior (if it was present) was not nearly as bad in 9.1.
> 
> Hence why I am able to run Prey for example with certain OC's only "after" having of launched Heaven, for example.
> Card behavior "seems" very much dictated by the direct X api's from which the card is executing instructions, driver rev, and power settings. If your program isn't attempted to access certain functions, you may never see the overboost issue... I assure you, it does happen, and in some cases can reliably repeat the event.
> 
> Below is an example where I can reliably recreate the overboost event in Heaven at a precise moment, at will. The problem (or problems) is in the driver.


Makes sense. Hopefully that gets fixed soon for you guys. I wanted to play around with it on my gaming rig but couldn't really justify it since I also have a 1080TI. I guess this is more reason to hold off.


----------



## TrixX

Quote:


> Originally Posted by *owntecx*
> 
> I need someone to test something. Whatever p7 or p6 set. without powerplay tables.( somehow i get less mhz, cause less voltage on p4/5), all voltages, p6/7 and hbm, 1001mv i get 1540mhz. well. if i set both p6/7 andhbm to 1000, yes, 1 mv, i get 1490 mhz, almost 50mhz just from 1mv.


Interesting. I've noticed some janky results with certain numbers used. Wattman has jumps that it moves to; OverdriveNTool doesn't, so I tend to mirror those jumps when using OverdriveNTool.

For instance, instead of setting just any number I tend to use a 2 or a 7 at the end, so 1752 or 1667. Wattman used to adjust mine all the time, so I'm just copying that behaviour in case there is a multiplier restriction or something like that. I don't know why this occurred in Wattman.
Quote:


> Originally Posted by *IvantheDugtrio*
> 
> I've tried undervolting but 1150 mV was the lower limit that FireStrike would run stable. Maybe I just have a really bad bin. Also with the state of the drivers I don't know if I can trust the reported clock speeds or voltages. Only performance can tell if there is an improvement.


Maybe, a friend who got the same card as me can't OC above the stock clocks at all (instant crash) and when undervolting has mixed and uneven results.

Sounds like you are running the stock BIOS too, so maybe worth trying the MSI 8774 BIOS linked in my earlier post as when I moved to that I got a fairly substantial stability increase.


----------



## Dolk

Anyone ever try this? Results are interesting.


----------



## TrixX

Quote:


> Originally Posted by *Dolk*
> 
> 
> 
> Anyone ever try this? Results are interesting.


Is it actually taking, though? It could be misreporting. I always test my OC/UV changes with a Superposition run at 4K or 1080p Extreme and see if I get the expected numbers for that speed. If not, then the program is likely misreporting.


----------



## poisson21

I tried to set the minimum to different P-states and it seems to work. I use AIDA64 combined with Logitech Arx Control to monitor it on my phone, and I can see a direct change in the clock when I move the minimum setting.

Edit: with that you can also see that the HBM P-states are directly associated with certain core P-states.

Core P7/P6/P5/P4/P3 are associated with HBM P3 (1105MHz in my case),

P2 with the 800MHz one (P2),

P1 with the 167MHz one (P0),

and HBM P1 (500MHz) is not used in any case.
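The observed core-to-HBM P-state association can be written down as a small lookup table. A minimal sketch, assuming the HBM clocks from poisson21's card (yours may differ):

```python
# Core P-state -> HBM P-state association as observed above.
# HBM P1 (500MHz) never appeared in any pairing.

CORE_TO_HBM_PSTATE = {
    "P7": 3, "P6": 3, "P5": 3, "P4": 3, "P3": 3,  # HBM P3 = 1105MHz
    "P2": 2,                                       # HBM P2 = 800MHz
    "P1": 0,                                       # HBM P0 = 167MHz
}

HBM_CLOCK_MHZ = {0: 167, 1: 500, 2: 800, 3: 1105}

def hbm_clock_for_core_state(core_state: str) -> int:
    """Return the HBM clock paired with a given core P-state."""
    return HBM_CLOCK_MHZ[CORE_TO_HBM_PSTATE[core_state]]

print(hbm_clock_for_core_state("P4"))  # 1105
```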


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *TrixX*
> 
> 
> 
> Here's my current settings exactly.
> 
> Though I am using the bios from here:
> MSI RX Vega 64 Liquid Cooled BIOS
> 
> I don't reach 1752MHz with the current cooler and voltage as the voltage is the regulator of the clocks. So I sit around ~1620MHz for the most part in games, though it can run up to ~1688MHz if the temps are low.
> 
> I use the P6 and P7 states as a high point for the clock speed if temp and power allow and let the voltage control the temp and the power.
> 
> I'll up an Afterburner graph image in a min...
> 
> EDIT: And here it is. The Fan speed reads zero as the BIOS registers 3300 as the max fan speed for the card, whereas it's set to 3800 in OverdriveNTool which uses the stock fan speed settings (can set to 4900). About a 20 min run of pCARS 2 is shown from start to finish.





Can you throw in a Firestrike stress test with temps for me?
I can undervolt/overclock to my heart's content and in most games all is good, but as soon as you do the stress test: boom.

I just want to see if that app is working better than old Wattman.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *gupsterg*
> 
> Anybody thinking they have anything but Samsung can check very easily.
> 
> i) Dump a copy of your card's VBIOS.
> 
> ii) Download AtomBiosReader (linked in OP here). Open the VBIOS in program and at the location of VBIOS there will be a txt with tables list created. Within the txt will be a VRAM_INFO table offset location. Use HxD to view that area of VBIOS.
> 
> 
> 
> 
> 
> Other than VEGA FE (again Samsung) I have seen no VBIOS with any other HBM IC support.
> 
> 
> 
> Members can also view the details in TechPowerUp GPU VBIOS database as well, last time I checked again only Samsung.






or... you could use GPU-Z.








unless I'm slow and someone else suggested it...


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> can you throw in a firestrike stress test with temps for me.
> i can undervolt overclock to my hearts content and in most games all is good but as soon as you do the stress test boom.
> 
> i just want to see if that app is working better than old wattman


Firestrike is fubar on my machine. Will have to do that when I re-install again, as I can't seem to clean it fully even with CCleaner (unless someone can point me to a Futuremark removal tool).

I have stress tests with Superposition at 1080p Extreme and 4K optimised...

Here's a 1080p Extreme I just did with these settings (Idle temp is 30C):



Solid ~1580MHz at 950mv using Clockblocker to keep it in P7 as scene 10 can cause it to drop P states for a split second.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *TrixX*
> 
> Firestrike is fubar on my machine. Will have to do that when I re-install again, as I can't seem to clean it fully even with CCleaner (unless someone can point me to a Futuremark removal tool).
> 
> I have stress tests with Superpostition on 1080p Extreme and 4K optimised...
> 
> Here's a 1080p Extreme I just did with these settings (Idle temp is 30C):
> 
> 
> 
> Solid ~1580MHz at 950mv using Clockblocker to keep it in P7 as scene 10 can cause it to drop P states for a split second.






Thanks for that, and yeah, I'm beginning to think 3DMark is half the problem; it locks up, or locks up the computer, every time I change a setting.
Wattman is not helping me either.

I forgot to ask or look: is that under water or air? If it's air, those are some pretty low temps.

Problem with mine is, if I try anything like those settings, it will lock up, jump out of the case and try to strangle me dressed like Pennywise the freakin' clown... very scary.

Might have to try that tool and see how I go... again, thanks.


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> thanks for that and yeah i, m beginning to think 3dmark is half the problem it locks up or locks up the computer every time i change a setting.
> wattman is not helping me either


I haven't had much fun with 3DMark being stable at the moment. It didn't matter whether it was Timespy or Firestrike (though it happened more frequently with FS); it would eventually just lock up or crash the driver. Each time it occurred, FS in particular became less stable. Eventually it just wouldn't start and wouldn't uninstall. So I cleaned it out, but now it won't re-install correctly (hangs at 95%).
Quote:


> Originally Posted by *tarot*
> 
> i forgot to ask or look is that under water or air? if its air that is some pretty low temps.


On air. Running the Liquid Cooled BIOS (8774) at the moment, so max temp is 70C and target 65C by default. I did notice a stability increase with this BIOS over the stock BIOS (8730) for this card when running lower voltage. Seeing as there's a performance loss over 60C anyway, I thought it not a bad compromise to run at the current settings. The low temps are purely from lowered voltage for both core and HBM (though the "HBM" setting is not technically HBM voltage, and I usually keep it the lowest of the 3 voltage settings) and from an aggressive fan profile to counter the clock speed fluctuations.
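An "aggressive" linear fan ramp like the one described can be sketched as below. The 500 and 3800RPM endpoints are TrixX's daily fan settings quoted elsewhere in the thread; the 40C ramp-start temperature is an assumption, and real Wattman/OverdriveNTool curves are configured in the tools themselves. This just illustrates the shape.

```python
# Minimal linear fan-curve sketch: floor RPM below t_lo, ceiling RPM above
# t_hi, linear interpolation in between. Endpoint RPMs are from the thread;
# the 40C ramp start is an assumed value.

def fan_rpm(temp_c: float, t_lo: float = 40.0, t_hi: float = 65.0,
            rpm_lo: int = 500, rpm_hi: int = 3800) -> int:
    if temp_c <= t_lo:
        return rpm_lo
    if temp_c >= t_hi:
        return rpm_hi
    frac = (temp_c - t_lo) / (t_hi - t_lo)
    return round(rpm_lo + frac * (rpm_hi - rpm_lo))

print(fan_rpm(55))  # 2480
```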
Quote:


> Originally Posted by *tarot*
> 
> problem with mine is if i try anything like those settings it will lock up jump out of the case and try and strangle me dressed like pennywise the freakin clown...very scary
> 
> 
> 
> 
> 
> 
> 
> 
> 
> might have to try that tool and see how i go.. again thanks


Wattman is utter crap currently. It doesn't apply things correctly, crashes a lot and causes more problems than it's worth. I just set HBCC to on, set Wattman to custom, and then close it.

OverdriveNTool does seem to work a lot more reliably and doesn't crash after 5 changes.


----------



## SlushPuppy007

Just checking in on all the Vega folks!

What is the highest sustained core clock and HBM clock achieved so far?

Does anyone know if the current VEGA 10 GPU will also get the 12nm GloFo refresh in 2018?


----------



## pmc25

Quote:


> Originally Posted by *SlushPuppy007*
> 
> Just checking in on all the Vega folks!
> 
> Does anyone know if the current VEGA 10 GPU will also get the 12nm GloFo refresh in 2018?


Likely depends largely on how soon NAVI will be ready on 7nm, which I suspect will depend more on HBM3 / faster & cheaper HBM2 availability than 7nm.

Vega's going to be very short lived IMO.


----------



## Paul17041993

Quote:


> Originally Posted by *Zero4549*
> 
> For all of you having massive throttling on liquid cooled cards, something seems wrong. I'm mining on my vega 64 with the dinky black air cooler with 100% GPU and VRAM usage 24/7. Temps never exceed 70c, never throttles. The card is basically stock, with the only adjustments made being undervolting and custom fan profiles in wattman.
> 
> This is in an enclosed MATX case with only moderate airflow at best. Only issue I ever have is the fan in the vega occasionally getting mildly annoying when it gets above 2500RPM or so. Typically only if my room is warmer than 78f.
> 
> If you guys haven't tried undervolting yet, you probably should. These cards have some absurd headroom in that department.


Vega dynamically determines the active clocks it should use based on the active shader code in use, so it doesn't 'throttle' purely based on power and temps. However, raising the power limit or undervolting will allow it to run at higher clocks for longer periods, more reliably, on shaders where it normally wouldn't.

In my case, however, undervolting makes the card extremely unstable unless I force it to remain below 1600MHz on the core...

Quote:


> Originally Posted by *tarot*
> 
> or....you could use gpuz.
> 
> 
> 
> 
> 
> 
> 
> 
> unless i,m slow and someone else suggested it...


Huh, I've got a more recent BIOS than you...

Pretty sure my card's from the second major batch, after the first batch hit stock limits...

Edit: oh, and Samsung dies as well. Does your Vega have the full moulded and polished package?


----------



## elox

Hi all,

Just installed the Morpheus 2 on my XFX Vega 64.
Can someone tell me if heatsinks are needed in the red square?
I was able to put heatsinks everywhere (blue square) and of course on the MOSFETs, but not on those next to the chip.
I've got one case fan blowing directly from the side onto the VRM, plus two Noiseblockers on the cooler itself. Will that be enough?
I will, however, try to get some nuts that fit so I can use the backplate and cool the VRM via the backplate.


----------



## springs113

Quote:


> Originally Posted by *elox*
> 
> Hi all,
> 
> Just installed the morpheus 2 on my xfx vega 64.
> Can someone tell me if heatsinks are needed at the red square?
> I was able to put heatsinks everywhere (blue square) and of course on the mosfets but not on those next to the chip.
> I've one case fan blowing directly from the side onto the vrm plus two noisblockers on the cooler itself. Will that be enough?
> However will try to get some nuts that fit so i can use the backplate and cool the vrm via the backplate.


Is there a reason a lot of ppl are going the Morpheus route?


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> Not entirely sure it's needed.
> 
> I'm running:
> 
> P6 - 1667MHz - 900mv
> P7 - 1752MHz - 950mv
> 
> HBM - 1050MHz - 900mv (here adjust the HBM MHz to what's stable for your sample)
> 
> Power is at +10% for daily use and +25% when benchmarking
> 
> Fans I set to 500 - 3800 RPM for daily and 1200 - 4900 RPM for benchmarking.
> 
> My target temps are Max 70C and Target 65C as I'm on the Liquid BIOS.
> 
> During bench runs I usually get ~1588MHz sustained (no fluctuations or downclocking).
> If I want to get higher I can up the P7 mv to 1100mv and I'll get a sustained ~1620MHz.
> 
> This is on air. I'm heavily limited by temps as anything more than above and the cooler cannot deal with the heat produced and causes driver crashes.
> 
> Waiting to do some testing on my water loop when the CPU block finally arrives


*** lol? I can't even do 1632MHz at the stock 1000mv setting and you do 900mv? And 1700 at 950mv, lol.


----------



## TrixX

Quote:


> Originally Posted by *Trender07*
> 
> *** lol? I can't even 1632 stock setting at 1000 mV and u do 900 mV lol? And 1700 @950 lool












Though to clarify, those are just the values set for P6 and P7 (which are stock for the Liquid Cooled Vega 64 BIOS I use). I tend to look at them as the maximum possible for those power states. During benchmarking with my daily setup I get a steady ~1580MHz actual core frequency, which only fluctuates mildly. If I run more aggressive power settings, so 1000mv+, then I tend to draw more power than the cooler can cope with and end up with core and HBM throttling, which is kinda pointless.

But those settings will run day in, day out on here, and using ClockBlocker to make sure I don't drop to lower P-states when gaming, it hangs around the ~1620-1680MHz range in games, which are far less harsh than Superposition.


----------



## pmc25

Btw I belatedly tried BF1 again.

17.9.3 no longer seems to cause overboost and resulting crashes.

Getting a stable 120-160FPS, averaging just over 140FPS at 3200x1800, all Ultra except motion blur, ambient occlusion and anti-aliasing, which are all set to off.

4K doesn't hurt average frame rates much at all with these settings; only about 10FPS, though it does dip below 100FPS significantly more frequently.

Ruling out AA and upping resolution, Vega is very, very fast in some games, of which BF1 is one.
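The arithmetic behind that observation: going from 3200x1800 to 4K is a 44% increase in pixels, yet the reported average only drops about 10FPS (~140 down to ~130):

```python
# Pixel-count comparison for the two resolutions mentioned above; the FPS
# figures are the rough averages reported in the post.

pixels_1800p = 3200 * 1800   # 5,760,000
pixels_4k = 3840 * 2160      # 8,294,400
pixel_ratio = pixels_4k / pixels_1800p
fps_drop_pct = (140 - 130) / 140 * 100

print(f"{pixel_ratio:.2f}x pixels, ~{fps_drop_pct:.0f}% FPS drop")
```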


----------



## laczarus

Quote:


> Originally Posted by *springs113*
> 
> Is there a reason a lot of ppl are going the morphe Us route?


The Morpheus II seems to be the only one compatible with Vega, due to its 64x64 mount support.
That's why I got it.


----------



## Reikoji

Hard to tell, but the 17.9.3 drivers were re-added as WHQL signed, Oct 2.

Probably doesn't matter. I'm new.


----------



## Ipak

Add me in







Sapphire RX Vega 64 Limited Edition



that's some nice sexy piece of aluminium


----------



## Reikoji

Quote:


> Originally Posted by *Ipak*
> 
> Add me in
> 
> 
> 
> 
> 
> 
> 
> Sapphire RX Vega 64 Limited Edition
> 
> 
> 
> that's some nice sexy piece of aluminium


Congrats. Now to remove that sexy aluminum for a water block !


----------



## springs113

Quote:


> Originally Posted by *Ipak*
> 
> Add me in
> 
> 
> 
> 
> 
> 
> 
> Sapphire RX Vega 64 Limited Edition
> 
> 
> 
> that's some nice sexy piece of aluminium


I'd love to have it, but I was going to strip it anyway to put a block on it.


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Though to clarify, those are just the values set for P6 and P7 (which are the stock for the Liquid Cooled Vega 64 BIOS I use). I tend to look at them as the maximum possible for those Power States. During benchmarking with my daily setup I get a steady ~1580MHz actual core frequency which only fluctuates mildly. If I run more aggressive power settings so 1000mv+ then I tend to draw more power than the cooler can cope with and end up with core and HBM throttling which is kinda pointless.
> 
> But those settings will run day in, day out on here and using ClockBlocker to make sure I don't drop to lower P states when gaming it's hanging around the ~1620-1680MHz range in games which are far less harsh than Superposition.


That's still an impressive UV. Have you tried Time Spy and/or Fire Strike? Sometimes mine didn't crash in Superposition but did in Time Spy or Fire Strike. Btw, could it be because you're using OverdriveNTool? Just saw it, but are your P1-P5 set with volts higher than P6/P7?


----------



## TrixX

My P1-P5 are indeed higher mv; however, I have confirmed the voltages are working correctly. It's easy to check, actually, as my max bench run was done at 1100mv pushing ~1630MHz stable, though the heat was too much for anything longer than Superposition, resulting in downclocking to maintain target/max temps (still 65 and 70).

Unfortunately I can't run Timespy. After the Firestrike crash fest it failed to uninstall correctly, and even after using CCleaner it won't re-install properly. The Windows install is only a week old now.

P1-P5 still cannot be edited with OverdriveNTool. You can change the values, but they don't get applied.


----------



## dosenfisch

Today I replaced the stock cooler of my Vega FE (one of the first available at the end of June) with a Morpheus 2 and was really surprised. It's using a Korean package, so it's not molded; according to Tom's Hardware, the FE should normally use a molded one from Taiwan. After testing for a few hours and changing the thermal paste several times, it's a bit disappointing. The GPU core and the HBM stay ~30°C cooler, but the GPU Hotspot is hardly changing at all: it sits in the 85-95°C range instead of the 90-100°C seen with the stock cooler. I tried everything from the perfect amount of thermal paste to a way too thick layer, and the core and HBM temperatures changed by more than 10°C, but the hotspot temperature barely moved.


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> My P1-P5 are indeed higher mv however I have confirmed the voltages are working correctly. Easy to check actually as my max bench run was done at 1100mv pushing ~1630 stable though heat was too much for anything longer than Superposition resulting in downclocking to maintain target/max temps (still 65 and 70).
> 
> Unfortunately I can't run Timespy. After the Firestrike crash fest it failed to uninstall correctly and even after using CCleaner it won't re-install properly. Windows install is only a week old now
> 
> 
> 
> 
> 
> 
> 
> 
> 
> P1-P5 are still not able to be edited with OverdriveNTool. You can change the values but they don't get applied.


Yeah man, had that same problem with 17.9.3. I always tested with Superposition and TimeSpy/FS on 17.9.2, but after installing 17.9.3 it broke the whole of 3DMark, lol, and I had a crash fest on everything plus a bugged uninstall. I finally ended up fixing it with Revo Uninstaller.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Paul17041993*
> 
> Vega dynamically determines the active clocks it should use based on the active shader code in use, so it doesn't 'throttle' purely based on power and temps, however raising the power limit or undervolting will allow it to run at higher clocks for longer periods of time more reliably on shaders that it normally doesn't.
> 
> In my case however, undervolting makes the card extremely unstable unless I were to force it to remain below 1600Mhz on the core...
> huh, I've got a more recent BIOS than you...
> 
> 
> 
> pretty sure my card's from the second major batch after the first batch hit stock limits...
> 
> edit; oh and sammy dies as well, does your vega have the full moulded and polished package?






Yes, mine does. I can get higher, but the volts and the temps go up the same way and it's really not worth it... upping the memory, though, is definitely a plus.

I tried the liquid BIOS (it ups the power as well) but it was flaky at best, so I went back to stock.
Lots more fiddling to do.









OK, seems the 17.9.3 drivers were updated to WHQL:

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.9.3-Release-Notes.aspx

and also they are 6560 meg, twice the size of the 27th Sept ones... what the...
Grabbing now (well, as soon as I can, due to too many daughters and too many Netflix accounts).

In other news, just tested 1537/1025, 1632/1075, 1050/1000 at 25/50 percent power and it worked... in Superposition, but I doubt it will work in Firestrike.
Problem is, Superposition, or any Unigine engine title, is just not ballsy enough to push it.

I will have to find an evil game.


----------



## Tgrove

Dying Light. That game will expose any coil whine and crash any unstable settings for sure. I don't think I've seen another game beat on any cards I've owned like Dying Light at 4K.


----------



## Reikoji

It would be nice if there were versions sold without the air coolers attached. It would save people a lot of trouble taking them off for a water block.
Quote:


> Originally Posted by *dosenfisch*
> 
> Today, I replaced the stock cooler of my Vega FE (one of the first available at the end of June) with a Morpheus 2 and was really surprised. It's using an Korean package, so it's not molded. According to Tom's Hardware, the FE should normally use a molded one from Taiwan. After testing for a few hours and changing the thermal paste several times, it's a bit disappointing. The GPU core and the HBM stays ~30°C cooler, but the GPU Hotspot is hardly changing at all. The temperature is in the 85-95°C range instead of 90-100°C with the stock cooler. I tried everything, from the perfect amount of thermal paste to a way to thick layer an the Core and HBM temperatures changed by more then 10°C, but the hotspot temperature didn't even increase.


Wouldn't the hotspot be on the back of the card?


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> Firestrike is fubar on my machine. Will have to do that when I re-install again as I can't seem to clean it fully even with CCleaner (unless someone can point me to a Futuremark removal tool
> 
> 
> 
> 
> 
> 
> 
> ).
> 
> I have stress tests with Superpostition on 1080p Extreme and 4K optimised...
> 
> Here's a 1080p Extreme I just did with these settings (Idle temp is 30C):
> 
> 
> 
> Solid ~1580MHz at 950mv using Clockblocker to keep it in P7 as scene 10 can cause it to drop P states for a split second.


Just tried your clocks and volts and it instantly crashes, even with OverdriveNTool lol, you've got crazy undervolts there x) you must have a golden chip


----------



## Tgrove

I can't even do 1752 core at 1.2 V with the AIO 64


----------



## Paul17041993

Quote:


> Originally Posted by *Reikoji*
> 
> It would be nice if there were versions sold without the air coolers attached. Would save people a lot of trouble taking them off for a water block.
> Wouldn't the hotspot be the back of the card?


The hotspot is the Infinity Fabric crossbar, from what its behaviour suggests; if it runs hot, you need to re-apply your paste (I use a generous, even spread across the whole surface, Thermal Grizzly Aeronaut).

Also, if a company sold bare PCBs, they'd have to deal with the dolts that have no idea how the hardware works and either run it without proper cooling or without cooling entirely, then complain about it catching fire...


----------



## Trender07

Quote:


> Originally Posted by *Tgrove*
> 
> I can't even do 1752 core at 1.2 V with the AIO 64


I'll just stick with my 1592 [email protected] mV; I prefer lower temps as I've got the air-cooled one. That guy got a golden chip lol


----------



## Reikoji

Quote:


> Originally Posted by *Paul17041993*
> 
> The hotspot is the Infinity Fabric crossbar, from what its behaviour suggests; if it runs hot, you need to re-apply your paste (I use a generous, even spread across the whole surface, Thermal Grizzly Aeronaut).
> 
> Also, if a company sold bare PCBs, they'd have to deal with the dolts that have no idea how the hardware works and either run it without proper cooling or without cooling entirely, then complain about it catching fire...


As long as it's marked as a coolerless PCB on the packaging as well, I don't think it would be a big deal. Would be a definite buy for card WB'ers.


----------



## tarot

Quote:


> Originally Posted by *Paul17041993*
> 
> The hotspot is the Infinity Fabric crossbar, from what its behaviour suggests; if it runs hot, you need to re-apply your paste (I use a generous, even spread across the whole surface, Thermal Grizzly Aeronaut).
> 
> Also, if a company sold bare PCBs, they'd have to deal with the dolts that have no idea how the hardware works and either run it without proper cooling or without cooling entirely, then complain about it catching fire...


Yeah, I think that is half my problem (too small a rad being the other). I just did the 3 crosses like in the instructions, not really thinking about the molded bit, so when I strip it down to put in the bigger rad I will reapply it.
Does anyone run thermal pads on the back of the card under the backplate? I cooked bacon and eggs on it today.








Quote:


> Originally Posted by *Trender07*
> 
> I'll just stick with my 1592 [email protected] mV; I prefer lower temps as I've got the air-cooled one. That guy got a golden chip lol


Same here, I'm sticking with lowdog's settings: 1537/1025, 1632/1075, and 1050 on the RAM gives me a nice balance.

I wonder if the better ones are the ones without the molded chip...


----------



## Soggysilicon

Quote:


> Originally Posted by *TrixX*
> 
> Interesting, I've noticed some janky results with certain numbers used. Wattman has jumps that it moves to, OverdriveNTool doesn't, so I tend to mirror those jumps when using OverdriveNTool.
> 
> For instance, instead of setting just any number I tend to use a 2 or a 7 at the end, so 1752 or 1667 for instance. Wattman used to adjust mine all the time, so I'm just copying the behaviour in case there was a multiplier restriction or something like that. I don't know why this occurred in Wattman.
> Maybe, a friend who got the same card as me can't OC above the stock clocks at all (instant crash) and when undervolting has mixed and uneven results.
> 
> Sounds like you are running the stock BIOS too, so maybe worth trying the MSI 8774 BIOS linked in my earlier post as when I moved to that I got a fairly substantial stability increase.


The 1-2 digit "stickiness" has been a point of speculation for some time now... my money is that it's an edge setting which references a table with frequencies and/or voltages under different use cases. Being right on the edge, you're on the high side of one table or the low side of another... additionally, I think it's a bias setting rather than a hard fixed value, certainly not when you're on the bottom of the performance table at any rate.

Now you've got me wondering if that table, or some element of the feedforward control loop, is part of the BIOS... has anyone done a compare on the file contents?
Quote:


> Originally Posted by *Tgrove*
> 
> I can't even do 1752 core at 1.2 V with the AIO 64


That's Reikoji's golden chip... he is the only one I have seen that can run P7 1752 at stock core with +50 power and bench. Think he hit 7.2k-ish in SP4K. With a ton of tweaking I got into the mid 7.1s... 7.2 "I think" is doable... but not with WattMan / current release drivers... way too many issues and anomalies... the chip is either golden bin... or not... hard to tweak through it...









Been working on uplifting P6 and holding back P7 on the 9.1 driver to see if I can get a sustained 1700+ MHz average over 10 mins. In the 1690s ATM; on 9.3 I was in the 1700s, but that overboost monkey... never could shake it off my back. Not with the card behavior changing (and, from what another poster wrote) dependent on shader access... but that does not explain why running Heaven before another title locks it to whatever Heaven was doing... or does it... but then we're just back to driver issues.


----------



## IvantheDugtrio

Did a bit of testing with MSI Afterburner's 4.4.0 Beta 12, and it seems maxing out the power limit to 50% solved all of my throttling issues. It's a shame AMD removed WattMan from the Frontier Edition drivers, otherwise I wouldn't have had all these throttling issues.



I got a FireStrike graphics score of 23694 with a core speed of 1670 MHz and an HBM speed of 1050 MHz. All voltages were left stock.

Should I be concerned about the delta between GPU temperature and Hot Spot temperature being 25-30C? At idle they are within 1-2C.


----------



## elox

Quote:


> Originally Posted by *springs113*
> 
> Is there a reason a lot of ppl are going the Morpheus route?


Quote:


> Originally Posted by *dosenfisch*
> 
> Today, I replaced the stock cooler of my Vega FE (one of the first available at the end of June) with a Morpheus 2 and was really surprised. It's using a Korean package, so it's not molded. According to Tom's Hardware, the FE should normally use a molded one from Taiwan. After testing for a few hours and changing the thermal paste several times, it's a bit disappointing. The GPU core and the HBM stay ~30°C cooler, but the GPU hotspot is hardly changing at all. The temperature is in the 85-95°C range instead of 90-100°C with the stock cooler. I tried everything, from the perfect amount of thermal paste to a way too thick layer, and the core and HBM temperatures changed by more than 10°C, but the hotspot temperature didn't even increase.


Did the same yesterday. Morpheus 2 with the backplate. GPU hotspot and HBM2 are two degrees above room temp at idle. Under heavy load, and even in PUBG, the hotspot goes up to 86 degrees max while the GPU/HBM temp is at 54 degrees.


----------



## TrixX

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Did a bit of testing with MSI Afterburner's 4.4.0 Beta 12 and it seems maxing out the power limit to 50% solved all of my throttling issues. It's a shame AMD removed WattMan from the Frontier Edition drivers otherwise I wouldn't have had all these throttling issues.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I got a FireStrike graphics score of 23694 with the following core speed of 1670 MHz and HBM speed of 1050 MHz. All voltages were left stock.
> 
> Should I be concerned about the delta between GPU temperature and Hot Spot temperature being 25-30C? At idle they are within 1-2C.
> 
> 


Do you have HBCC on? Instant ~400 extra score in Superposition 1080p Extreme.

I'd use OverdriveNTool instead of AB. There's a beta 18 version out with partial Vega support but still not full, so OverdriveNTool is probably better for messing with Vegas currently.

Are you WC or on Air?


----------



## cephelix

Hey there fellas,

Just have a little question: since the ref Vega 56 comes with 3x DP and 1x HDMI and my monitors both use DVI-D, could I just use a DP-to-DVI-D cable straight, or would it not work? BTW, this is for gaming at 60 fps, 1080p.

Thanks a bunch.


----------



## Newbie2009

Quote:


> Originally Posted by *cephelix*
> 
> Hey there fellas,
> 
> Just have a little question, since the ref vega 56 comes with 3 x DP and 1 x HDMI and my monitors both use DVI-D, could I just use a DP - DVI-D cable straight or would it not work? BTW, this is for gaming at 60fps, 1080p.
> 
> Thanks a bunch.


Not sure about regular DVI, but dual-link DVI-D I would avoid. The adapters are crazy expensive (active) and hit-and-miss with regards to performance. (I returned mine.)

I would wait for a partner card with a DVI connector, after my experience. I ended up getting a newer model of my monitor with a DisplayPort and left my old 290X in, running the DVI-D monitor.

1600p/60 needs active. 1080p/60 might be OK with just a cable.
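The single-link vs dual-link distinction above comes down to pixel clock: single-link DVI (which is all a passive DP cable can feed) tops out around a 165 MHz pixel clock. A rough back-of-the-envelope check (the flat 20% blanking padding is my approximation, not an exact CVT timing calculation):

```python
# Rough pixel-clock check for passive DP -> DVI (single-link) cables.
# Single-link DVI tops out at a 165 MHz pixel clock, and a passive DP++
# cable can only carry a single-link signal.

SINGLE_LINK_LIMIT_MHZ = 165

def approx_pixel_clock_mhz(h_active, v_active, refresh_hz, blanking=1.2):
    # active pixels * refresh rate, padded ~20% for blanking intervals
    return h_active * v_active * refresh_hz * blanking / 1e6

for name, (h, v) in {"1080p": (1920, 1080), "1600p": (2560, 1600)}.items():
    clk = approx_pixel_clock_mhz(h, v, 60)
    verdict = "passive cable OK" if clk <= SINGLE_LINK_LIMIT_MHZ else "needs dual-link / active"
    print(f"{name}@60Hz: ~{clk:.0f} MHz -> {verdict}")
```

1080p60 lands around 149 MHz, under the single-link limit, which is why a cheap passive cable works there; 1600p60 needs roughly 295 MHz, hence dual-link and an active adapter.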


----------



## pmc25

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> It's a shame AMD removed WattMan from the Frontier Edition drivers otherwise I wouldn't have had all these throttling issues.


Literally the sole reason to even have Radeon Settings / WattMan installed at all is so you can use Virtual Super Resolution ... which is a pretty big thing, since increasing resolution both outperforms MSAA and SSAA in image quality, and is far less costly in terms of FPS, on Vega. OverdriveNTool and the beta Afterburner both work better than WattMan, and the only toggle besides VSR which actually works in Radeon Settings for games, for Vega, is HBCC, which at this stage seems to do very little.

If someone could find a way to enable VSR without Radeon Settings, that would make me pretty happy. I guess external enablement of HBCC for those on W10 (HBCC is only on W10) would be good too.

If you don't use VSR, you're essentially missing nothing by uninstalling Radeon Settings. That may change if game settings (global and individual) begin working, but I'm not holding my breath given that we're 6 drivers in since Vega launch, 8 if FE drivers are counted, and they don't work at all yet.


----------



## LionS7

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Did a bit of testing with MSI Afterburner's 4.4.0 Beta 12 and it seems maxing out the power limit to 50% solved all of my throttling issues. It's a shame AMD removed WattMan from the Frontier Edition drivers otherwise I wouldn't have had all these throttling issues.


Can you test more games to see if there's really no throttling in all cases?


----------



## GroupB

Quote:


> Originally Posted by *cephelix*
> 
> Hey there fellas,
> 
> Just have a little question, since the ref vega 56 comes with 3 x DP and 1 x HDMI and my monitors both use DVI-D, could I just use a DP - DVI-D cable straight or would it not work? BTW, this is for gaming at 60fps, 1080p.
> 
> Thanks a bunch.


Got 3 cheapo DisplayPort-to-DVI cables, under $6-8 each, for my Vega and they're working no problem. Can't say the same for the active adapter I got way back with my 6970 for my Eyefinity setup; it started to go bad last year.

60 Hz 1080p, no problem.


----------



## Reikoji

Forza Motorsport 7 seems to not want to play nice with a power target above 0. At 50% it eventually crashes, and it also does not draw as much power as you would expect. The game will fail to load past the press-enter screen as long as the power target remains above the balanced level. It can bring the GPU core above 1700 even at just 0 power target, and still delivers excellent framerates. Haven't tested at -50.

On an LC edition Vega 64.


----------



## kundica

Quote:


> Originally Posted by *Reikoji*
> 
> Forza Motorsport 7 seems to not want to play nice with a power target above 0. At 50% it eventually crashes, and it also does not draw as much power as you would expect. The game will fail to load past the press-enter screen as long as the power target remains above the balanced level. It can bring the GPU core above 1700 even at just 0 power target, and still delivers excellent framerates. Haven't tested at -50.
> 
> On an LC edition Vega 64.


I had that issue with the beta, but only when running an overlay like RivaTuner. Disabling overlays fixed it for me. You can also try launching and then maximizing the window (it will remain in windowed mode) until the hit-enter-to-start part. Other than that, it runs fine on my watercooled 64 with an EK block, +50 power limit, core 1702, HBM 1075.


----------



## Reikoji

Quote:


> Originally Posted by *kundica*
> 
> I had that issue with the beta, but only when running an overlay like RivaTuner. Disabling overlays fixed it for me. You can also try launching and then maximizing the window (it will remain in windowed mode) until the hit-enter-to-start part. Other than that, it runs fine on my watercooled 64 with an EK block, +50 power limit, core 1702, HBM 1075.


Does it give you fairly consistent GPU usage levels?


----------



## kundica

Quote:


> Originally Posted by *Reikoji*
> 
> Does it give you fairly consistent GPU usage levels?


Yeah. As long as you set the graphics settings to something that will push the card.


----------



## Reikoji

Quote:


> Originally Posted by *kundica*
> 
> Yeah. As long as you set the graphics settings to something that will push the card.


It's weird for me. Even with unlocked FPS, at 1440p with everything maxed except MSAA (off), the card will vary its usage all the way down to 40% and never actually hold anything, definitely not 100%, with power draw anywhere between 40 W and 270 W, all while maintaining above 110 fps. I would expect it to hold 100% usage and give me more fps, but not so much. Turn 8x MSAA on and the usage levels increase, but I get around the same fps average with a lower max. Was hoping to see 160+ averages, but the card won't push like I expect it to.

At 4K, MSAA off or on, the card will manage to stay above 90% utilized, but with MSAA off I get the same average as 1440p MSAA off.


----------



## kundica

Quote:


> Originally Posted by *Reikoji*
> 
> It's weird for me. Even with unlocked FPS, at 1440p with everything maxed except MSAA (off), the card will vary its usage all the way down to 40% and never actually hold anything, definitely not 100%, with power draw anywhere between 40 W and 270 W, all while maintaining above 110 fps. I would expect it to hold 100% usage and give me more fps, but not so much. Turn 8x MSAA on and the usage levels increase, but I get around the same fps average with a lower max. Was hoping to see 160+ averages, but the card won't push like I expect it to.
> 
> At 4K, MSAA off or on, the card will manage to stay above 90% utilized, but with MSAA off I get the same average as 1440p MSAA off.


Hmm... I could be wrong. I'll have to do more testing once I get my system put back together. I decided I needed to run 360 + 280 rads instead of 280 + 240 so I'm in the process of installing the new rad.


----------



## cephelix

Quote:


> Originally Posted by *Newbie2009*
> 
> Not sure on the normal dvi, but dvi-d (dual link) I would avoid. The adapters are crazy expensive (active) and hit and miss with regards to performance. (I returned mine)
> 
> I would wait for a partner card with a dvi connector, after my experience. I ended up getting a newer model of my monitor with a display port and left my old 290x in running the dvi-d monitor.
> 
> 1600p/60 needs active. 1080/60 might be ok with just a cable.


Quote:


> Originally Posted by *GroupB*
> 
> Got 3 cheapo display to dvi cable under 6-8$ each here for my vega and its working no problem, Cant say the same of my active adaptor I got way back with my 6970 for my eyefinity setup, it start to go bad last year.
> 
> 60hz 1080p no problem


Thanks guys for the reply! hoping I'll get mine soon. In the meantime I'll need to order a waterblock and sleeve some cables.


----------



## tarot




Quote:


> Originally Posted by *TrixX*
> 
> Do you have HBCC on? Instant ~400 score in Superposition 1080p Extreme.
> 
> I'd use OverdriveNTool instead of AB. There's a beta 18 version out with partial Vega support but still not full, so OverdriveNTool is probably better for messing with Vega's currently.
> 
> Are you WC or on Air?





Wow, you're right, popped up to 4926 in Extreme.
Going to try a few more now.
I'm also interested to see if that is air or water, as my water temps sort of suck.


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> Wow, you're right, popped up to 4926 in Extreme.
> Going to try a few more now.
> I'm also interested to see if that is air or water, as my water temps sort of suck.


Mine is running on air at the moment, and my highest 1080p Extreme score is 5031 with an 1100 mV P7 and 4900 RPM on the fan. Currently anything higher and I get thermal throttling. I think I averaged around 1630 MHz during the run too, even though P6 was at 1667 and P7 was 1752.

Highest registered watercooled score is in the ~5200s.


----------



## tarot

Yeah, I have something messed up, but the rad and the pump tubes etc. all get very warm, so my guess is the block is on OK, there's just not enough rad/fan to cool it.

I forgot, you are on the LC BIOS, aren't you? And the power slider is at 50 percent?

I tried the Sapphire one from TechPowerUp and it just wants to boost to 1780, instant boom.


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> I forgot, you are on the LC BIOS, aren't you? And the power slider is at 50 percent?
> 
> I tried the Sapphire one from TechPowerUp and it just wants to boost to 1780, instant boom.


I'm using this MSI LC BIOS as it's the newest BIOS revision. Much more stable than my stock 8730 Air BIOS.

My power is set to 10% for normal use and 25% for benchmarking. I didn't get any benefit from higher power settings in benching, as I get thermally throttled first.


----------



## IvantheDugtrio

Looks like I should still have some overclocking headroom. I just enabled HBCC with the same settings as before.

I'm using a Vega FE Air card with an EKWB block and 3x120mm rad. Considering how my GPU hotspot temperature would max out at 75C I could probably take it up to 85-90C with overvolting. That is unless of course I reach the +50% power limit.





Not bad: 11,473 GPU score FireStrike Extreme


----------



## TrixX

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> 
> 
> Looks like I should still have some overclocking headroom. I just enabled HBCC with the same settings as before.
> 
> I'm using a Vega FE Air card with an EKWB block and 3x120mm rad. Considering how my GPU hotspot temperature would max out at 75C I could probably take it up to 85-90C with overvolting. That is unless of course I reach the +50% power limit.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Not bad: 11,473 GPU score FireStrike Extreme
> 
> 
> 
> 
> 


Have you tried undervolting P6 and P7 to 1000mv or 900mv to see what the results are like?

The HBM voltage acts as a floor for the voltages to start from for the Core/HBM so setting that as low as possible (I run 900mv stable) can do wonders.
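A minimal sketch of what that floor means in practice (my simplified model of the behavior described above, not AMD's actual voltage logic):

```python
# Simplified model: the HBM/memory voltage acts as a floor below which
# the core P-state voltage won't drop, so lowering the HBM voltage is a
# prerequisite for deep core undervolts.

def effective_core_mv(p_state_mv, hbm_mv):
    # core voltage is clamped to at least the memory rail's setting
    return max(p_state_mv, hbm_mv)

# With HBM at 950 mV, a 900 mV P-state request still runs at 950 mV:
print(effective_core_mv(900, 950))   # 950
# Lowering HBM to 900 mV lets the 900 mV request take effect:
print(effective_core_mv(900, 900))   # 900
```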

I still think you are throttling, though, if you are getting ~4700 in Superposition. Does the core frequency stay solid in the top right during Superposition, or does it fluctuate down?

TBH I'd be testing something like this:


As a caveat I'm basing this off your settings, not 100% my own. Personally I'd be running 25% power as even then you shouldn't be hitting any barriers and lower temp limits due to the loss of performance over 60C.


----------



## tarot




Quote:


> Originally Posted by *IvantheDugtrio*
> 
> 
> 
> Looks like I should still have some overclocking headroom. I just enabled HBCC with the same settings as before.
> 
> I'm using a Vega FE Air card with an EKWB block and 3x120mm rad. Considering how my GPU hotspot temperature would max out at 75C I could probably take it up to 85-90C with overvolting. That is unless of course I reach the +50% power limit.
> 
> 
> 
> 
> 
> Not bad: 11,473 GPU score FireStrike Extreme






Well, that makes me feel a little better. I get similar temps (hotspot) on an RX 64 with a single 140 mm rad and the EK block, but my usual bench temps get to around 45 to 58 for both GPU and HBM. I have seen slightly higher hotspot temps, so maybe a bigger rad will do.

Quote:


> Originally Posted by *TrixX*
> 
> I'm using this MSI LC BIOS as it's the newest BIOS revision. Much more stable than my stock 8730 Air BIOS.
> 
> My Power is set to 10% for normal use and 25% for Benchmarking. I didn't get any benefit for higher Power settings in benching as I get thermal throttled first.


Yes, it looks to be the same as the BIOS I tried, but Sapphire, though of course newer... details seem the same though.

I might give that a whirl and see how I do.

Now, as for the HBCC memory thing: I tried it on a few other things, and in some cases Firestrike got a nudge, but things like CPU benches (Cinebench, CPU-Z, wPrime) would take a hit, so it seems to work very well for some things, not so much for others.
Didn't get around to testing in an actual game.








As for power limits, I find if I don't stick to 50 I generally will get a crash in a long session, but I'll give that a go as well when I do the MSI BIOS.

Thanks for all that.


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> Yes, it looks to be the same as the BIOS I tried, but Sapphire, though of course newer... details seem the same though.
> 
> I might give that a whirl and see how I do.
> 
> Now, as for the HBCC memory thing: I tried it on a few other things, and in some cases Firestrike got a nudge, but things like CPU benches (Cinebench, CPU-Z, wPrime) would take a hit, so it seems to work very well for some things, not so much for others.
> Didn't get around to testing in an actual game.
> 
> As for power limits, I find if I don't stick to 50 I generally will get a crash in a long session, but I'll give that a go as well when I do the MSI BIOS.
> 
> Thanks for all that.


Yeah, as always YMMV. Mine is happy with really low power and voltage, though I've seen some that aren't stable without power up at +50%, so working out the card's idiosyncrasies is half the fun


----------



## elox

Planning to flash the LC 64 BIOS on my Vega 64 (Morpheus 2) today. Do I have to disable the GPU in Device Manager or uninstall the driver before flashing?
Or just flash and reboot?


----------



## TrixX

Quote:


> Originally Posted by *elox*
> 
> Planning to flash the LC 64 BIOS on my Vega 64 (Morpheus 2) today. Do I have to disable the GPU in Device Manager or uninstall the driver before flashing?
> Or just flash and reboot?


Just use WinFlash 2.77; it'll flash the BIOS in Windows, no need to disable anything.


----------



## punchmonster

Don't. Just use the powerplay tables mod.
Quote:


> Originally Posted by *elox*
> 
> Planning to flash the LC 64 BIOS on my Vega 64 (Morpheus 2) today. Do I have to disable the GPU in Device Manager or uninstall the driver before flashing?
> Or just flash and reboot?


----------



## elox

Quote:


> Originally Posted by *punchmonster*
> 
> Don't. Just use powerplay tables mod.


Uhm, with the powerplay table mod you mean the registry "hack" to get a higher power limit, is that correct?
Is there a benefit to using the reg hack for a higher power limit instead of just flashing the LC BIOS?


----------



## TrixX

Depends on the card. I found mine more stable on the LC BIOS than the stock 8730 AC BIOS, though it limits your upper temps to 70C instead of 85C. The accidental benefit of that is that you tend to stay in the high-performance zone of the card more easily than with the 8730 BIOS.

The powertables mod does allow more power, but on air there's literally no need for more power, as you are likely going to be thermal throttling anyway at the stock 1200 mV on the P7 state.

You can test the undervolts without changing to the LC BIOS, though as I mentioned, for me it was more stable, for my friend's card it made no difference, and others have reported it was more unstable. Vega has a massive ASIC variance, it seems
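For anyone unsure what the mod actually touches: as far as I know it's a soft powerplay table imported into the registry rather than a BIOS flash, along these lines (the `0000` adapter subkey index varies per machine, and the byte payload here is a placeholder, not a working table):

```
Windows Registry Editor Version 5.00

; The adapter subkey (0000, 0001, ...) varies per system -- match it to
; your Vega via Device Manager -> Details -> "Driver key" before importing.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
; Placeholder bytes only -- substitute a soft powerplay table built for
; your card (e.g. from one of the undervolting threads), then reboot.
"PP_PhmSoftPowerPlayTable"=hex:00,00
```

Deleting the `PP_PhmSoftPowerPlayTable` value and rebooting reverts to the BIOS defaults, which is the main appeal over flashing.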


----------



## elox

Quote:


> Originally Posted by *TrixX*
> 
> Depends on the card. I found mine more stable on the LC BIOS than the stock 8730 AC BIOS, though it limits your upper temps to 70C instead of 85C. The accidental benefit of that is that you tend to stay in the high-performance zone of the card more easily than with the 8730 BIOS.
> 
> The powertables mod does allow more power, but on air there's literally no need for more power, as you are likely going to be thermal throttling anyway at the stock 1200 mV on the P7 state.
> 
> You can test the undervolts without changing to the LC BIOS, though as I mentioned, for me it was more stable, for my friend's card it made no difference, and others have reported it was more unstable. Vega has a massive ASIC variance, it seems


Will give the LC BIOS a shot. I think the limited upper temps won't be a problem, as long as thermal throttling refers to the GPU temp and not the hotspot. I use the Morpheus 2 with the original backplate (gets hot as f***) and a 1500 RPM NB eLoop. When I do one hour of stress testing, the GPU and HBM temps are around 50-60 degrees while the hotspot can reach 89 max. I know we have seen much worse hotspot temps with Vega + Morpheus, but also better ones. I've already reapplied the thermal paste and tightened the screws, but no real difference. Currently my settings are P6 1570mhz\1040mv, P7 1670\1080, [email protected]\940mV.
Please excuse my English, I'm from Europe.


----------



## Soggysilicon

Quote:


> Originally Posted by *tarot*
> 
> wow your right popped up to 4926 in extreme.
> going to trya few more now.
> i,m also interested to see if that is air or water as my water temps sort of suck




Weeeeee...


----------



## TrixX

Noice, what settings Soggy?


----------



## raysheri

Maybe someone can help me with some insight.
I can run (for example) P7 1652 (set) / 1625 (actual) @ 1100 mV, HBM 1080, +50% PL, on a Vega 56 (64 air BIOS) with an EK waterblock.
Superposition all day, but 3DMark Timespy or Firestrike will freeze or crash with those settings, and I have to push the volts up to 1150 or more to get 3DMark to run.
Is there some bug with 3DMark and Vega, or what??


----------



## 113802

Also enabled HBCC and scores skyrocketed


----------



## gamervivek

Tried using ClockBlocker; it can put it at P7 (actually a bit higher), but it drops once Superposition starts. Also, for some reason it keeps resetting the P6 state.


----------



## bill1971

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Also enabled HBCC and scores skyrocketed


How much RAM did you add?


----------



## elox

I see similar issues with Firestrike. It seems like more voltage is needed than in any other game or bench. With 17.9.3 it seems more stable for me and I can run Firestrike with these settings: P7 1670 at 1150 mV, P6 at 1570\1085 mV, HBM [email protected] Core clock of 1622-1640 MHz during the bench, 290 W power draw, 24,400 points graphics score.
@Trixx: the LC BIOS was not stable for me :/


----------



## raysheri

I went ahead and purchased 3DMark Advanced, as I was running the free basic edition. The program runs better and more stable, and I don't have to put up with the demo.
Was able to run Timespy at lower volts but not Firestrike. Similar results to elox, and as per usual.
What I noticed with Superposition is that if you lower your P7 volts, then it runs at a lower MHz clock speed, as if to compensate. Now, I don't think it's Vega doing this but the Superposition program, and if not, then 3DMark certainly isn't doing the same thing.


----------



## GroupB

My best Firestrike is 26,112, but Superposition runs poorly with the same clocks; not even 6567 in the 4K test. I don't know why I can't hit those 7k scores like you guys if I can get this much in Firestrike. Something is weird with the Superposition benchmark.


----------



## TrixX

Quote:


> Originally Posted by *GroupB*
> 
> My best Firestrike is 26,112, but Superposition runs poorly with the same clocks; not even 6567 in the 4K test. I don't know why I can't hit those 7k scores like you guys if I can get this much in Firestrike. Something is weird with the Superposition benchmark.


HBCC on for Superposition = ~10% extra score.

I'll grab the paid version of Timespy/Firestrike soon and give that a good run to see if it's more stable. TBH though I don't think Superposition is odd, but I think Firestrike is outdated. Timespy seems to run smoother than FS and ultimately more stable with the demo version at least.


----------



## gamervivek

Superposition seems to stress the card more, but Firestrike Ultra stresses it even more. The best test of stability I use is one of the levels in Crysis 3 which Digital Foundry use in their reviews; it crashes pretty quickly there even if it has passed those two benchmarks easily.

The card probably gets stuck at lower P-states if it doesn't get enough power for the clocks at the given voltage; I get better results with auto settings on clocks and voltage once that happens. This is almost always the case with Superposition once you get past the initial scene.


----------



## TrixX

Quote:


> Originally Posted by *gamervivek*
> 
> Superposition seems to stress the card more, but Firestrike Ultra stresses it even more. The best test of stability I use is one of the levels in Crysis 3 which Digital Foundry use in their reviews; it crashes pretty quickly there even if it has passed those two benchmarks easily.
> 
> The card probably gets stuck at lower P-states if it doesn't get enough power for the clocks at the given voltage; I get better results with auto settings on clocks and voltage once that happens. This is almost always the case with Superposition once you get past the initial scene.


The P5 sticking problem is annoying, I've resorted to using ClockBlocker to force a P7 state in testing. Gives me more accurate results to work with.

What level of Crysis 3 is it that gives you the crash? I'd be interested to test that out


----------



## gamervivek

It's the jungle stage where you ram the train through a wall; it loads the card well enough. Earlier stages are more CPU-limited for me. Another reason, I think, for the instability is varying loads: Superposition loads the card constantly, but games are more variable and more likely to crash.


----------



## TrixX

Quote:


> Originally Posted by *gamervivek*
> 
> It's the jungle stage where you ram the train through a wall, it loads the card well enough, earlier stages are more cpu limited for me. Another reason I think for the instability is varying loads, superposition loads the card constantly but games are more variable and likely to crash.


Well, after pissing about for a couple of hours, I can't get that to crash mine with my current settings. Though after watching DF's video on Vega 64, and having done tons of testing, I think my card is either just insane or an outlier, because I'm not seeing the low performance they did. Admittedly I'm not running stock clocks, and I'm running a very undervolted 24/7 setup as well as ClockBlocker to eliminate the P5 locking and scene-switch frame dropping (very noticeable when switching to scene 10 in Superposition).

Running ClockBlocker may be why I'm not seeing the instability with P-state switches too.


----------



## 113802

Quote:


> Originally Posted by *bill1971*
> 
> How much ram did you add?


I kept the default 11792

My highest FireStrike score so far

https://www.3dmark.com/fs/13805929


----------



## gamervivek

I'm not seeing ClockBlocker work that way: it sets the clocks at idle, but once Superposition loads the chip it's back to lower clock speeds with the same power usage, which is pretty frustrating.


----------



## TrixX

Quote:


> Originally Posted by *gamervivek*
> 
> I'm not seeing clockblocker work that way, it sets the clocks on idle but once superposition loads the chip, it's back to lower clockspeeds with the same power usage which is pretty frustrating.


Set the rules up properly and it'll be using P7 even if the clock speed isn't P7's stated speed. The ACG limits the clocks to a usable range based on the available voltage. Basically, once you have set the clocks they act like a target clock rather than a set clock. This way you can confirm voltage settings and then mess with the power, voltage and cooling settings properly.


----------



## Nuke33

@TrixX
ClockBlocker is a nice solution.

I opened a new thread for everyone interested in ultra low undervolting with a bunch of powerplay soft tables ranging from 800mv to 950mv.

http://www.overclock.net/t/1639595/rx-vega-undervolting-efficiency-thread
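As a rough aside on why undervolting pays off so well on Vega: dynamic power scales roughly with f·V², so even a modest voltage drop at similar clocks cuts power noticeably. A minimal back-of-the-envelope sketch - the 1200 mV and 950 mV figures are just example values from the range discussed in these threads, and static leakage is ignored:

```python
# Back-of-the-envelope undervolting estimate: dynamic power ~ f * V^2.
# Illustrative only: real Vega power includes static leakage, and clocks
# usually have to drop along with voltage, so treat this as an upper bound.

def dynamic_power_ratio(v_new_mv, v_old_mv, f_new=1.0, f_old=1.0):
    """Ratio of dynamic power after/before a voltage (and clock) change."""
    return (f_new / f_old) * (v_new_mv / v_old_mv) ** 2

# Dropping from 1200 mV to 950 mV at the same clock:
ratio = dynamic_power_ratio(950, 1200)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # -> ~37% less dynamic power
```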


----------



## pmc25

Perhaps of interest to a few people: I was doing some tests with the Ashes: Escalation in-game benchmark, mainly in Vulkan, to see how well it holds up without AA - the expectation being little performance loss at higher resolutions once AA is turned off. It holds up pretty easily.

Bear in mind the Vulkan path isn't particularly optimised yet, DX11 gets hammered on the heavier batches, and I can't use DX12.

1920*1080 crazy 4*msaa Vulkan
Avg. 63.3FPS 15.8ms
Small 71.7FPS 14ms
Medium 65FPS 15.4ms
Heavy 55.4FPS 18.1ms

1920*1080 crazy no aa Vulkan
Avg. 62FPS 16.1ms
Small 70.2FPS 14.2ms
Medium 62FPS 16.1ms
Heavy 55.6FPS 18ms

2560*1440 crazy 4*aa Vulkan
Avg. 53.6FPS 18.7ms
Small 62.5FPS 16ms
Medium 54FPS 18.5ms
Heavy 46.6FPS 21.4ms

2560*1440 crazy no aa Vulkan
Avg. 56FPS 17.9ms
Small 63.9FPS 15.6ms
Medium 57.9FPS 17.3ms
Heavy 48.3FPS 20.7ms

3200*1800 crazy no aa Vulkan
Avg. 55.2FPS 18.1ms
Small 63.7FPS 15.7ms
Medium 55.6FPS 18ms
Heavy 48.3FPS 20.7ms

3840*2160 crazy no aa Vulkan
Avg. 51.9FPS 19.3ms
Small 61.5FPS 16.3ms
Medium 52.8FPS 19ms
Heavy 44.2FPS 22.6ms

3840*2160 crazy no aa DX11
Avg. 38.4FPS 26ms
Small 64.9FPS 15.4ms
Medium 45.8FPS 21.8ms
Heavy 24.5FPS 40.9ms

Performance drop-off is surprisingly minimal, even in a game as tough as Ashes. Weirdly though, in Ashes Vulkan, 4*MSAA @1920*1080 is faster than no AA. No idea what causes this anomaly.

There's virtually no performance drop between 2560*1440 and 3200*1800 (no AA), which seems to be a consistent picture across most of the games that I've tested. A clear sweet spot, unless you're on a huge low-DPI monitor (requiring some AA).

W7 x64, 3770k @4.8Ghz, 16GB 1866Mhz.
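Since frame time in ms is just 1000/FPS, the numbers above are easy to sanity-check, and the resolution-scaling claim can be put in percentage terms. A quick Python check using the Vulkan no-AA averages from the post:

```python
# Sanity check on the benchmark numbers above: frame time (ms) = 1000 / FPS,
# and resolution scaling expressed as a % drop from 1440p.
# Figures are copied from the Vulkan no-AA runs in the post.

def frame_time_ms(fps):
    return round(1000 / fps, 1)

avg_fps = {"2560x1440": 56.0, "3200x1800": 55.2, "3840x2160": 51.9}

assert frame_time_ms(avg_fps["3200x1800"]) == 18.1  # matches the reported 18.1ms

drop_1800p = (1 - avg_fps["3200x1800"] / avg_fps["2560x1440"]) * 100
drop_2160p = (1 - avg_fps["3840x2160"] / avg_fps["2560x1440"]) * 100
print(f"1440p -> 1800p: {drop_1800p:.1f}% slower")  # ~1.4%
print(f"1440p -> 2160p: {drop_2160p:.1f}% slower")  # ~7.3%
```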


----------



## Nuke33

Quote:


> Originally Posted by *pmc25*
> 
> Perhaps of interest to a few people. Was doing some tests on Ashes Escalation in game bench, mainly in Vulkan, and seeing how easy it is to do without AA - due to probably little performance loss at higher resolutions once AA is turned off. Pretty easy.
> 
> Bear in mind Vulkan still isn't particularly optimised yet, but DX11 gets hammered on the heavier batches, and I can't use DX12.
> 
> 1920*1080 crazy 4*msaa Vulkan
> Avg. 63.3FPS 15.8ms
> Small 71.7FPS 14ms
> Medium 65FPS 15.4ms
> Heavy 55.4FPS 18.1ms
> 
> 1920*1080 crazy no aa Vulkan
> Avg. 62FPS 16.1ms
> Small 70.2FPS 14.2ms
> Medium 62FPS 16.1ms
> Heavy 55.6FPS 18ms
> 
> 2560*1440 crazy 4*aa Vulkan
> Avg. 53.6FPS 18.7ms
> Small 62.5FPS 16ms
> Medium 54FPS 18.5ms
> Heavy 46.6FPS 21.4ms
> 
> 2560*1440 crazy no aa Vulkan
> Avg. 56FPS 17.9ms
> Small 63.9FPS 15.6ms
> Medium 57.9FPS 17.3ms
> Heavy 48.3FPS 20.7ms
> 
> 3200*1800 crazy no aa Vulkan
> Avg. 55.2FPS 18.1ms
> Small 63.7FPS 15.7ms
> Medium 55.6FPS 18ms
> Heavy 48.3FPS 20.7ms
> 
> 3840*2160 crazy no aa Vulkan
> Avg. 51.9FPS 19.3ms
> Small 61.5FPS 16.3ms
> Medium 52.8FPS 19ms
> Heavy 44.2FPS 22.6ms
> 
> 3840*2160 crazy no aa DX11
> Avg. 38.4FPS 26ms
> Small 64.9FPS 15.4ms
> Medium 45.8FPS 21.8ms
> Heavy 24.5FPS 40.9ms
> 
> Performance drop off is surprisingly minimal, even in a game as tough as Ashes. Weirdly though, in Ashes Vulkan, 4*MSAA @1920*1080 is faster than no AA. No idea what causes this anomaly.
> 
> The virtually no performance drop between 2560*1440 and 3200*1800 (no AA) seems to be a consistent picture across most of the games that I've tested. Clear sweetspot, unless you're on a huge low DPI monitor (requiring some AA).
> 
> W7 x64, 3770k @4.8Ghz, 16GB 1866Mhz.


Nice info, thanks

The performance drop at 1080p with no AA is probably due to a switch to a lower P-state because of the lower GPU load.


----------



## pmc25

Quote:


> Originally Posted by *Nuke33*
> 
> Nice info, thanks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Performance drop of NoAA is probably due to switch to a lower P-state because of lower GPU load.


No, I lock it at max P-state for both core and HBM2. Neither shifts.

Could just be the Vulkan renderer being a bit wobbly. 2.4 was their first pass, and subsequent updates haven't polished it.

I think in 2.6 or 2.7 they're aiming to improve it significantly.


----------



## Nuke33

Quote:


> Originally Posted by *pmc25*
> 
> No, I lock it at max Pstate for both core and HBM2. Neither shifts.
> 
> Could just be the Vulkan renderer being a bit wobbly. 2.4 was their first pass, and subsequent updates haven't polished it.
> 
> I think 2.6 or 2.7 they are aiming to improve it significantly.


Okay, in that case probably software issue.


----------



## madmanmarz

Well, I remounted my Nexxxos GPX block, and when I took it off I could see that part of the HBM was not really covered with paste. This time I used Kryonaut and the included spreader, and whereas my GPU hotspot previously reached 75-80C at 1000mv 1500/1100, it now reaches 60C.

yay


----------



## tarot

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I kept the default 11792
> 
> My highest FireStrike score so far
> 
> https://www.3dmark.com/fs/13805929


What settings are you using for that 1700-odd GPU clock?

I'm sort of happy with mine - 1652/1100, 1537/1075, and 1050 on the ram with 1050 on the lower volt settings - stable in everything.
So far the most heat I have generated is in Diablo 3... yeah, I know, right?

But it had a few dips down to some very low fps, so I'm still looking into whether it's Enhanced Sync or FreeSync, or a combination, or something else.

The biggest torture test, and the one that seems to bring it all crashing down so far, is the Firestrike stress test - but again I'm not sure I trust it, as so many people have lockups there but no lockups anywhere else.


Spoiler: Warning: Spoiler!



.
Quote:


> Originally Posted by *madmanmarz*
> 
> Well I remounted my Nexxxos GPX block, and when I took it off I could see that part of the HBM was not really covered with paste. This time I used Kryonaut and the included spreader, and where as my GPU Hot Spot was previously reaching 75-80c at 1000mv 1500/1100, it is now reaching 60c.
> 
> yay






Thanks for that. I believe that's one reason for my high temps, so when I rip the loops apart tomorrow I will redo mine as well.


----------



## 113802

Quote:


> Originally Posted by *tarot*
> 
> what settings are you using for the 1700 odd gpu
> 
> i,m sort of happy with mine 1652/1100 1537 1075 and 1050 on the ram 1050 on the lower volt settings stable in everything.
> so far the most heat i have generated is diablo 3...yeah i know right
> 
> 
> 
> 
> 
> 
> 
> but it had a few dumps down to some very low fps so still looking into that whether it is enhanced vsync or freesync or a combination or somehtig else.
> 
> the biggest torture test and the one that seems to bring it all crashning down so far is firestrike stress test but again i, m not sure i trust it as so many people have lockups etc but no lockups anywhere else
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> .
> 
> 
> 
> thanks for that i believe that s one reason for my high scores so when i rip the loops apart tomorrow i will redo mine as well


1770/1106 @ 1.2v with 50% power target. It hovers around 1700-1730Mhz.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *madmanmarz*
> 
> Well I remounted my Nexxxos GPX block, and when I took it off I could see that part of the HBM was not really covered with paste. This time I used Kryonaut and the included spreader, and where as my GPU Hot Spot was previously reaching 75-80c at 1000mv 1500/1100, it is now reaching 60c.
> 
> yay


Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1770/1106 @ 1.2v with 50% power target. It hovers around 1700-1730Mhz.






I may have missed it - is that an LC bios or stock air? And is that 1.2v on P7 and P6 or just P7, and what voltage on the HBM/other states?

And lastly, what's the power draw in HWiNFO/GPU-Z... or maybe don't tell me, I might faint.


----------



## Reikoji

Quote:


> Originally Posted by *TrixX*
> 
> Set the rules up properly and it'll be using P7 even if the clock speed isn't P7's stated speeds. The ACG limits the Clocks to usable range based off the available voltage. Basically once you have have set the clocks they are like a target clock rather than a set clock. This way you can confirm voltage settings and then mess with the power, voltage and cooling settings properly.


Does Clockblocker perform the state locking better than setting the minimum state from wattman?


----------



## Soggysilicon

Quote:


> Originally Posted by *TrixX*
> 
> Noice, what settings Soggy?


Ref. 64cu Air, LC-AIO bios (sapphire) EK-WB

wattman...

[email protected] [email protected] 1105/882 +50, HBCC ON, 9.3 whql release

UE/ freesync ON

R7 1800X XFR (Ryzen Balanced, latest rev.), 1600mhz (3200) on the fabric for testing.

P7 seems to be the leading cause of the overboost with high volts - it happens right around 1742. I'm thinking the "sweet spot" is to get a card hovering between 1710-1730 (10 min. average). I like what some folks have been doing with locking down the P5 state for transitions, keeping the card from throttling down; @3440 on my monitor I never want to drop below 48 fps, ideally never below 52-55 fps. I can game on the above successfully in:

Prey, TW: Warhammer DX12 maxed (FXAA) @~58-59 fps, SPG: Warlords, and a couple of others. If a desktop app allows for it, I leave GPU acceleration "off" for stability and to avoid auto-tuning (shader) frequency shenanigans.

HBCC impacts my CPU score in Firestrike (freebie edition), but seems negligible on gfx scores (not that it matters, Ryzen procs don't report right). These settings bench fine, although I have found that once you run an older DX title, you're probably going to need to reboot / relaunch the drivers to "reset" the auto frequency / tuning.


----------



## madmanmarz

For some reason my clocks aren't at full speed in some games (Xenoverse 2, for instance). The HBM is at like 500MHz, and it fluctuates sometimes and causes the game to sort of lock up (if I ctrl+alt+del and go back in, it keeps running).


----------



## Soggysilicon

Quote:


> Originally Posted by *madmanmarz*
> 
> For some reason my clocks aren't at full speed on some games (Xenoverse 2 for instance). The HBM is at like 500mhz and it fluctuates sometimes and causes the game to sort of lock up (if i alt/ctrl/del and go back in it keeps running).


Sounds like it's stuck at a low P-state. Have you tried setting a frame rate target for the game and just playing on Balanced or Turbo and seeing what you get? Also, do you have browsers open in another window, or any other applications with GPU accelerated access? As I mentioned in my previous post, some older titles can monkey with the card, causing the driver to "latch" into certain predefined states (for lack of a better way to describe it). Let us know how it turns out.


----------



## Reikoji

Quote:


> Originally Posted by *madmanmarz*
> 
> For some reason my clocks aren't at full speed on some games (Xenoverse 2 for instance). The HBM is at like 500mhz and it fluctuates sometimes and causes the game to sort of lock up (if i alt/ctrl/del and go back in it keeps running).


Same - in my case it's Wolfenstein: The New Order. Old Blood ran beautifully, but in New Order the GPU will not spool up and the game eventually crashes to Windows. I haven't yet tried it with the minimum GPU state set to P5 or higher. It may survive, but idk.


----------



## TrixX

Quote:


> Originally Posted by *Reikoji*
> 
> Does Clockblocker perform the state locking better than setting the minimum state from wattman?


Yes. I haven't found a time when Wattman doesn't bug out with something.
Quote:


> Originally Posted by *Reikoji*
> 
> Same. in my case it is Wolfenstein: New Order. Old blood ran beautifully, but new order the GPU will not spool up and game eventually crash to windows. I haven't yet tried it with minimum gpu state set to p5 or higher. It may survive, but idk.


Clockblocker should prevent that. I haven't got that particular game, but I've tested with some older and newer games and haven't had it downclock crash in any of them since CB has been active and the rules set correctly.
Quote:


> Originally Posted by *madmanmarz*
> 
> For some reason my clocks aren't at full speed on some games (Xenoverse 2 for instance). The HBM is at like 500mhz and it fluctuates sometimes and causes the game to sort of lock up (if i alt/ctrl/del and go back in it keeps running).


Yup, happens with Wattman - it's almost like it's got 5 uses and then needs a restart.

As Reikoji mentioned above, I use ClockBlocker (on Guru3D) to force the P7 state. Set up the rules and you won't get the random downclocking. Something I also set is a max of 300FPS in Global settings, as in some games (the pCARS2 menus, for instance) the FPS can run away into the 1000s, inducing coil whine.

Also, another note on CB: if you plan to use borderless window mode, be sure to set a rule for that game (usually just use the exe rather than the full path for the rule) to block with a higher status than any of the downclock rules. Otherwise it can downclock when you click outside the border of the window - that caused a crash for me (downclocked to the P0 state) in iRacing. Easy fix though.
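ClockBlocker's real configuration lives in its own UI, so the following is only a toy sketch of the "higher status wins" rule ordering described above - the rule format, field names and priority numbers are all made up for illustration:

```python
# Toy illustration of the rule-priority idea: give the game's exe a "block"
# rule with a higher priority than any downclock rule, so losing window focus
# can't drop the card to a low P-state. ClockBlocker's actual config format
# differs; names and priorities here are invented.

def resolve_action(process, rules):
    """Return the action of the highest-priority rule matching the process."""
    matches = [r for r in rules if r["exe"] == process]
    if not matches:
        return "default"
    return max(matches, key=lambda r: r["priority"])["action"]

rules = [
    {"exe": "iracing.exe", "action": "block_p7", "priority": 10},
    {"exe": "iracing.exe", "action": "allow_downclock", "priority": 1},
]

print(resolve_action("iracing.exe", rules))  # -> block_p7
```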


----------



## madmanmarz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Sounds like its stuck at a low P state, have you tried setting a frame rate target for the game and just playing balanced or turbo and seeing what you get? Also, do you have browsers open in another window or any other applications which have GPU accelerated access? Mentioning my previous post, some older titles can monkey with the card causing the driver to "latch" into certain predefined states (for lack of a better way to describe it). Let us know how it turns out.


I'll mess around with that stuff

Quote:


> Originally Posted by *TrixX*
> 
> Yes. I haven't found a time when Wattman doesn't bug out with something.
> Clockblocker should prevent that. I haven't got that particular game, but I've tested with some older and newer games and haven't had it downclock crash in any of them since CB has been active and the rules set correctly.
> Yup happens with Wattman, it's almost like it's got 5 uses then needs a restart
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As Reikoji mentioned above, I use Clockblocker (on Guru3D) to force P7 state. Setup the rules and you won't get the random downclocking. Something I also set is a max of 300FPS in Global settings as in some games (pCARS2 menu for instance) the FPS can run away into the 1000's inducing coil whine.
> 
> Also another note on CB, if you plan to use the borderless window mode, be sure to set a rule for that game (usually just use the exe rather than full path for the rule) to block with a higher status than any of the downclock rules. Otherwise you can have it downclock when you click outside the border of the window. The caused a crash for me (downclocked to P0 state) in iRacing. Easy fix though


copy that copy that

Quote:


> Originally Posted by *Reikoji*
> 
> Same. in my case it is Wolfenstein: New Order. Old blood ran beautifully, but new order the GPU will not spool up and game eventually crash to windows. I haven't yet tried it with minimum gpu state set to p5 or higher. It may survive, but idk.


nice to know im not the only one


----------



## madmanmarz

Decided to see how high I can go on this thing. I already knew my core clocks were not going to be that great, but anyway: at 1200mv I can get 1630MHz, but the hotspot approaches 100C.

I'll stick to my 1500/1100 1000mv setup for 24/7 =)


----------



## aylan1196

Asus Zenith Extreme, 2x RX Vega LC in CrossFire: the screen goes black and signs out randomly, even with only one GPU, on the 17.9.3 driver.
Weird - one GPU was fine before, and if I swap in a 1080 Ti nothing happens; it's only with Vega and this driver. Any tips?
GPUs are seated properly.
Screen is a Samsung CHG70 with the latest firmware.
DP cable 1.4a.
HDMI 2.0.
I am going crazy with this.
Hope AMD polishes their drivers soon.


----------



## owntecx

Quote:


> Originally Posted by *madmanmarz*
> 
> 
> 
> Decided to see how high I can go on this thing. Along the way I already knew my core clocks were not going to be that great, but anyway at 1200mv I can get 1630mhz but hotspot approaches 100c.
> 
> I'll stick to my 1500/1100 1000mv setup for 24/7 =)


Well, you're within the margin of clocks - you set 1650, you got 1630... Try setting P7 to 1700 with 1.2v and you will likely get 1670s. And a hotspot that high may get in the way of being stable.


----------



## madmanmarz

Quote:


> Originally Posted by *owntecx*
> 
> well u are within the margin of clocks, u set 1650 u got 1630... Try set p7 to 1700 with 1.2v and u will get likely 1670s. And hotspot that high may get in the way of being stable.


Nah, this one won't clock that high. I can barely get 1600 @ 1150mv, and 1650 would crash at 1200mv. At least the HBM hits 1100 easy peasy!

I would like to make a custom bios for it eventually, because running P6/P7 @ 1000mv, the states below run at 1050/1100mv, and that's totally unnecessary. Plus the default is 1150/1200mv, and I'd rather set that just for benchies instead of it being the default.


----------



## owntecx

For stability, did you try upping the HBM voltage? In my case, setting it to the same as the P6 voltage gives me about 30/40MHz more and makes it stable. For the rest, make a powerplay table. I have mine maxing at 1005mv, giving me a 1540 core clock under load with a 1632 P7. Just so you see: I set everything equal - if the voltage is 1000mv I get 1510MHz, with 1001 I get 1540/50 xD. That's a strange step.


----------



## owntecx

I didn't test whether it's stable, because of the stock cooler, but with a 1680 P7 and 1.055v on everything (HBM must be too) I get around 1600 core; HWiNFO reports about 1.051v.


----------



## madmanmarz

Quote:


> Originally Posted by *owntecx*
> 
> for being stable, did you try up the hbm voltage? in my case, seting it to same p6 voltage, gives me about 30/40mhz and make ir stable, for the rest, Make a powerplaytable. i have mine maxing at 1005mv giving me 1540coreclock on load with 1632p7. Just for you to see, i set all equal, if voltage is 1000mv, i get 1510mhz, with 1001 i get 1540/50 xD. Thats a strange step.


I'll give it a try but I read others saying to set the HBM voltage lower so I set it to 900mv.

EDIT: Just tried it, makes a HUGE difference! 1100mv doesn't lock in but 1050mv does.


----------



## owntecx

Quote:


> Originally Posted by *madmanmarz*
> 
> I'll give it a try but I read others saying to set the HBM voltage lower so I set it to 900mv.


I did that too, early on, but try it for yourself - there are steps in the voltage. Like setting 1.06v and you get 1.01v during load (check GPU-Z or HWiNFO); you set the HBM voltage to 1.05 - same thing. You set it to 1.051 and boom, 1.051v under load. I wasted days reinstalling Windows and drivers (because I'd installed an SSD) trying to get my old Firestrike scores - 22,000 with 1530 core - applying the same settings over and over, just to get 21,600. When I was getting tired, I set the HBM higher, to 1001, and boom... 22,100... I even dialed P7 down, because I was hitting 1560 with 1.013 volts. And after some testing I found that there are steps once you go over 1v, 1.05v and 1.1v - set the HBM voltage 1mv over and boom, extra MHz.


----------



## yraith

I just bought the Vega 64 and did a benchmark on Steam. I can't really tell if my numbers are good - I can't seem to get mega-fps in several of my games, like SWTOR. Here are the results:


----------



## owntecx

Quote:


> Originally Posted by *yraith*
> 
> I just currently bought the Vega 64, and did a benchmark on Steam.. I can't really tell if my numbers are good. I can't seem to get mega-fps on several of my games like SWTOR.. Here are
> the results:


They're not bad for a start. Have you already upped the power limit to +50? Do that, set the P6, P7 and HBM voltage to 1.051v, then up the HBM to 1050MHz to start, and report back. Meanwhile, see what clocks you are getting during load.


----------



## madmanmarz

What's the real difference between the 64 AC and LC bios? I wouldn't mind a bios with lower stock voltage.


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> What's the real difference between the 64 AC and LC bios? I wouldn't mind a bios with lower stock voltage.


It's actually the same voltages, with higher clocks and the ability to go up to 1.25v on the core. When OCing you do need to push the HBM voltage setting a bit to keep it stable and get the most out of your clock settings. Power draw goes up quite a lot though.


----------



## madmanmarz

Quote:


> Originally Posted by *TrixX*
> 
> It's actually same voltages, higher clocks and the ability to go up to 1.25v on core. When OC'ing you do need to push the HBM voltage setting a bit to keep it stable and get the most out of your clock settings. Power draw goes up quite a lot though


Damn, so aside from powerplay table modding, is there any way to have a 64 bios with default 1000mv voltage in p6/p7 or be able to change the voltage in lower states?


----------



## TrixX

Nope not currently, though using the LC BIOS combined with Clockblocker and OverdriveNTool you can get close


----------



## Reikoji

Quote:


> Originally Posted by *TrixX*
> 
> Yes. I haven't found a time when Wattman doesn't bug out with something.
> Clockblocker should prevent that. I haven't got that particular game, but I've tested with some older and newer games and haven't had it downclock crash in any of them since CB has been active and the rules set correctly.
> Yup happens with Wattman, it's almost like it's got 5 uses then needs a restart
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As Reikoji mentioned above, I use Clockblocker (on Guru3D) to force P7 state. Setup the rules and you won't get the random downclocking. Something I also set is a max of 300FPS in Global settings as in some games (pCARS2 menu for instance) the FPS can run away into the 1000's inducing coil whine.
> 
> Also another note on CB, if you plan to use the borderless window mode, be sure to set a rule for that game (usually just use the exe rather than full path for the rule) to block with a higher status than any of the downclock rules. Otherwise you can have it downclock when you click outside the border of the window. The caused a crash for me (downclocked to P0 state) in iRacing. Easy fix though


Nah, it's not helping. There might be something else wrong with the game, as it is crashing in the exact same place no matter what.
Quote:


> Originally Posted by *TrixX*
> 
> It's actually same voltages, higher clocks and the ability to go up to 1.25v on core. When OC'ing you do need to push the HBM voltage setting a bit to keep it stable and get the most out of your clock settings. Power draw goes up quite a lot though


Aren't the voltages/frequencies from P2 to P5 also different between LC and air bios?


----------



## raysheri

I don't see the point in running an LC bios on an air cooled card.
The LC bios gives an increased voltage range and power range, both of which are useful if you're going for the highest possible clocks. It also comes with a 75C temp limit which, if reached, will shut the card down, because it assumes the water cooling system has failed and it wants to prevent damage to the card.
@TrixX, if you believe the LC bios is more stable for you then maybe you're right, but if I were you I would be looking at other explanations for the instability you're seeing. Your experimentation and research is interesting though.


----------



## surfinchina

Quote:


> Originally Posted by *raysheri*
> 
> I don't see the point in running a LC bios on an air cooled card.
> The LC bios gives increased voltage range and power range, both of which are needed if you're going for the highest possible mhz clocks. It also comes with a temp limit of 75c deg where if reached will shut the card down because it assumes the water cooling system has failed for some reason and thus to prevent damage to the card.
> @TrixX, if you believe the LC bios is more stable for you then maybe you're right but I would be looking at other reasons to explain your feelings of instability if I were you. Your experimentation and research is interesting though.


I run the LC bios because I have a waterblock for my air cooled card. Temps are not a problem.


----------



## raysheri

Quote:


> Originally Posted by *surfinchina*
> 
> I run the LC bios because I have a waterblock for my air cooled card. Temps are not a problem.


Your card is no longer air cooled then, of course, and so it can make some sense to run the LC bios.


----------



## TrixX

Quote:


> Originally Posted by *Reikoji*
> 
> Nah its not helping. There might be something else wrong with the game, as it is crashing in the exact same place no matter what.


Looks like some other factor is involved then.
Quote:


> Originally Posted by *Reikoji*
> 
> Aren't the voltages/frequencies from P2 to P5 also different between LC and air bios?


Frequencies yes, voltages seem to be the same ramping from 800mv for P0 to 1100mv for P5.
Quote:


> Originally Posted by *raysheri*
> 
> Your card is no longer air cooled then, of course, and so it can make some sense to run the LC bios.


Kinda, though keeping Vega under 60C is the target whether air or water cooled. It's just nowhere near as achievable on air.


----------



## JS17

Quote:


> Originally Posted by *Reikoji*
> 
> Nah its not helping. There might be something else wrong with the game, as it is crashing in the exact same place no matter what....


Yeah, unfortunately Wolfenstein: The New Order also keeps crashing for me. Oddly enough I was able to play a couple hours of it with only a few crashes, but now I'm at a point where I can't advance any further. ClockBlocker was a good idea, but no dice. Hoping a future driver somehow fixes it.


----------



## rv8000

Has anyone with a 4K monitor and/or a high refresh rate 1440p+ monitor had any random black screens at idle on the desktop? I've noticed they occur far more often upon booting my PC and logging into Windows; I wonder if this is due to clock/P-state changes when the ReLive/Crimson drivers load.

It's hard to narrow it down to a GPU, monitor, cable, or software problem.


----------



## Reikoji

:|


----------



## 113802

Quote:


> Originally Posted by *rv8000*
> 
> Has anyone with a 4k monitor and or high refresh rate 1440p+ monitor have any random blackscreens at idle on the desktop? I've noticed they occur far more often upon booting my pc and logging into windows, I wonder if this is due to clock/pstate changes when loading relive/crimson drivers.
> 
> Hard to narrow it down to a GPU, monitor, cable, or software problem


No random black screens using a 144Hz 1440p monitor. Is that at stock or HBM overclocked?


----------



## kril89

So I have a Vega 64 air card that I put an EK waterblock on; I could run 1050MHz RAM on the air bios all I wanted. I thought I'd flash the liquid bios to get some more MHz out of the card, and now I can't add any without every game crashing on me or locking up. Any ideas?


----------



## gamervivek

Playing around with ClockBlocker, and I don't think it helps much. The newer drivers don't crash, but the card gets stuck at the P7 clocks while performance takes a nosedive - so it shows boost clocks, but the performance is much worse than stock.

The card overshoots on clocks if there is less load, so it might be a problem in games where usage can be very variable; in GTA V I get full utilisation one moment, and the next I'm looking at <100W power draw in HWiNFO.


----------



## TrixX

Quote:


> Originally Posted by *raysheri*
> 
> Your card is no longer air cooled then, of course, and so it can make some sense to run the LC bios.


Quote:


> Originally Posted by *gamervivek*
> 
> Playing around with clockblocker and don't think it helps much. The newer drivers don't crash but get stuck at the p7 clocks while performance takes a nosedive. So it shows the card clocked at boost clocks but the performance is much worse than stock.
> 
> The card overshoots on clocks if there is less load so it might be a problem in games where usage can be very variable, in GTA V I get full utilisation one moment, the next I'm looking at <100W power draw on hwinfo.


The perf should alter based on the load on the GPU. At 100% load you should see close to benchmark performance and clock speeds; as the load drops I've seen the actual MHz go up, but the card only processes the load at hand rather than running at maximum efficiency (hence the constant P-state drops in gaming), so you may not see greater performance despite higher clocks as GPU load falls. I haven't seen performance drop below stock, though, and it removed the P-state change stutter and the slow reaction when going from low-load to high-load scenarios.

Ran Warhammer 2 for 6hrs straight at 1440p last night with around 50-60 FPS on ultra settings (FPS would be higher with a better CPU than mine) and the card was running 65C the whole time with no P-state changes (they used to happen every scene switch, every menu opening, etc.) and smooth, consistent FPS. Something I couldn't get with it at stock (though thermal/power throttling didn't help with direct comparison testing there).


----------



## Soggysilicon

Quote:


> Originally Posted by *rv8000*
> 
> Has anyone with a 4k monitor and or high refresh rate 1440p+ monitor have any random blackscreens at idle on the desktop? I've noticed they occur far more often upon booting my pc and logging into windows, I wonder if this is due to clock/pstate changes when loading relive/crimson drivers.
> 
> Hard to narrow it down to a GPU, monitor, cable, or software problem


Yes. I have had issues with DP between Vega and my Sammy CF791 (3440x1440) 100Hz / UE FS. The issue has existed since I took the card out of the box. When the card is OC'd (heavily OC'd?), the chance that the display will go blank in some titles is relatively high. This can be "corrected" by "rebooting" the monitor. In Superposition, if I launch in 1080p and then switch to 4K, the issue doesn't manifest.

The issue is the card and how it is interacting with the monitor. Once loaded, the issue is drivers; the DP link, the monitor and the card may not be interfacing correctly. As with all things Vega, if you see some oddball behavior it's time to reboot the driver (close it, reboot the comp). Hoping drivers going forward address this and other issues.


----------



## Tgrove

No black screens on 4k here


----------



## jearly410

My old 390x would blackscreen my XF270hu when too much voltage was applied.


----------



## Sickened1

I wish the AIBs would announce something about custom cards already - I just want a custom Vega 56.


----------



## kundica

Another afterburner update with more Vega control. Beta 19: http://www.guru3d.com/files-details/msi-afterburner-beta-download.html


----------



## The EX1

Quote:


> Originally Posted by *kundica*
> 
> Another afterburner update with more Vega control. Beta 19: http://www.guru3d.com/files-details/msi-afterburner-beta-download.html


I think I missed it, but what does Beta 19 have over Beta 18 for Vega?


----------



## kundica

Quote:


> Originally Posted by *The EX1*
> 
> I think I missed it, but what does Beta 19 have over Beta 18 for Vega?


From reading the notes it seems control over all states not just p6 and p7.


----------



## owntecx

Anyone else seeing voltage steps? Like from 960 to 990: any mV I set always ends up at ~970mV under load, give or take. The same happens between 1001 and 1040 +-.
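One possible (purely speculative) explanation sketch for requested voltages snapping away from what you set: voltage regulators program VIDs in discrete steps, so a requested mV value lands on the nearest step rather than being applied exactly. The 6.25 mV step below is an assumption, not a confirmed figure for Vega's controller, and quantization alone wouldn't explain a ~30 mV plateau; load-line droop and telemetry averaging likely also play a role.

```python
# Hypothetical model: snap a requested voltage to the nearest regulator
# VID step. step_mv=6.25 is an assumed value, not AMD-documented.
def snap_to_vid(requested_mv: float, step_mv: float = 6.25) -> float:
    """Round a requested voltage (mV) to the nearest regulator step."""
    return round(requested_mv / step_mv) * step_mv
```

With this model, 963 mV would be programmed as 962.5 mV and 970 mV as 968.75 mV, so small WattMan changes can read back identically.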


----------



## Reikoji

Quote:


> Originally Posted by *Soggysilicon*
> 
> Yes. I have had issues with DP between Vega and my Sammy CF791 (3440x1440) 100hz / UE FS. This issue has existed since I took the card out of the box. When the card is OC'd, (heavily OC'd?) the chances that the display will go blank on some titles is relatively high. This can be "corrected" by "rebooting" the monitor. In super position if I launch in 1080, and then switch to 4k the issue doesn't manifest.
> 
> The issue is the card and how it is interacting with the monitor. Once loaded the issue is drivers; the DP, the monitor and the card may not be interfacing correctly. As with all things Vega if you see some oddball behavior, its time to reboot the driver (close it, reboot the comp). Hoping going forward drivers address this and other issues.


I've noticed screen blackouts when Radeon WattMan/drivers act up.

For instance, if I close down the game and take a look at my GPU readings, the core clock will be above 1800MHz, the mem running at 1105, and chip power draw above 100W. At IDLE. The GPU tach will also be maxed out.

Ending Radeon WattMan in task manager reverts it back to true idle and the in-game blackouts cease.


----------



## poisson21

As for Afterburner, I do not trust it: if I look at the monitoring graph, the mem temp on my first Vega goes up to 1631°C with a power draw for that card of 1626W.


----------



## pmc25

Quote:


> Originally Posted by *Sickened1*
> 
> I wish that AIB's would announce something about custom cards already..I just want a custom Vega56 already.


Why? The board itself is already heavily over-engineered on the AMD-designed cards.

You won't find OEM coolers with better cooling than either the Morpheus or water.


----------



## The EX1

Quote:


> Originally Posted by *pmc25*
> 
> Why? The board itself is already heavily over engineered on the AMD designed cards.
> 
> You won't find better cooling than either the Morpheus or water on OEM coolers.


Because for the price of a Vega card plus the Morpheus you can likely just buy an AIB model with a good cooler while retaining your warranty. Some people also put a lot of weight on the cosmetics of the cards.


----------



## diabetes

Quote:


> Originally Posted by *poisson21*
> 
> For Afterburner, i do not trust it, if i look at the monitoring graph, my temp for the mem on my first vega goes up to 1631°c with a power draw for that card of 1626 W


That is not just Afterburner. It is either the firmware or the driver having a short moment of craziness. I sometimes get such temperature and wattage readings in GPU-Z, HWiNFO64, HWMonitor and also in Afterburner. I think this is something that just happens on AMD cards. The R9 290 I had before my Vega sometimes reported that it was being fed 16.7 million volts by the power supply xD
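Glitch readings like those are usually single bogus samples, so they can be screened out before trusting a monitoring log. A minimal sketch, assuming a 0-120°C plausibility window for GPU temperatures (the window and the 3-sample median smoothing are my choices, not anything the monitoring tools do):

```python
from statistics import median

def clean_temps(samples, lo=0.0, hi=120.0):
    """Drop physically impossible readings, then smooth what's left
    with a 3-sample sliding median to damp any remaining spikes."""
    ok = [s for s in samples if lo <= s <= hi]
    return [median(ok[max(0, i - 1):i + 2]) for i in range(len(ok))]
```

Feeding it a log with one 1631°C outlier simply drops that sample and leaves the plausible readings.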


----------



## owntecx

Does anyone get a random "default Radeon WattMan settings restored due to unexpected system failure" message when starting Windows? It happens to me sometimes, very randomly. The settings are fully stable, and even if they weren't, I don't think Windows puts the card under 100% load at startup to make it crash.


----------



## Reikoji

Quote:


> Originally Posted by *owntecx*
> 
> Does anyone have some random "default-radeon-wattman-settings-restored-due-unexpected-system-failure" on starting windows? It happens to me sometimes, very randomly, settings are fully stable, and even if they dont, i dont think windows goes 100% load on the startup to making it crash.


Randomly, yes. If you've installed the speaker on your mobo, sometimes when rebooting or shutting down it will give a long beep, and WattMan settings will be reset when you start Windows back up again.


----------



## owntecx

Quote:


> Originally Posted by *Reikoji*
> 
> randomly yes. If you've installed the speaker on your mobo, randomly when rebooting or shutting down it will have a long beep, and wattman settings will be reset when you start windows back up again.


Any fix for those bugs?... I saw that I had a kernel-power critical error after the boot that reset the settings, but the shutdown before was clean I guess, and I didn't have a speaker plugged in. It's just OCD about having to set the settings again.


----------



## Roboyto

Quote:


> Originally Posted by *pmc25*
> 
> Why? The board itself is already heavily over engineered on the AMD designed cards.
> 
> You won't find better cooling than either the Morpheus or water on OEM coolers.


Even though I'm an enthusiast, I would have to say:

Size of the card...the Morpheus makes it a BEAST
Ease of installation for those who won't block it
Performance...gaming- and noise-wise, for those who don't want water and don't want the inadequate blower. Assuming an AIB is going to figure out that the voltages are SOOO much higher than necessary to sustain factory clocks.
Warranty
Aesthetics

Not necessarily in that order...but that is the gist of it, I reckon...


----------



## Soggysilicon

Quote:


> Originally Posted by *owntecx*
> 
> Does anyone have some random "default-radeon-wattman-settings-restored-due-unexpected-system-failure" on starting windows? It happens to me sometimes, very randomly, settings are fully stable, and even if they dont, i dont think windows goes 100% load on the startup to making it crash.


Yes.

I "suspect" but cannot prove... that Windows 10, and _especially_ Windows 10 C, have implemented fairly strict driver execution policies, as well as having some issues with software tasked with loading at startup not fully loading, or getting hung threads. Developers seem to be required to pay more attention to their software under W10.

Additionally, I never trust that any settings from a previous session are saved/loaded by Radeon Settings/WattMan, and "always" reapply them before launching a game or bench. Maybe one day... but today is not that day.


----------



## rv8000

Quote:


> Originally Posted by *WannaBeOCer*
> 
> No random black screens using a 144Hz 1440p monitor. Is that at stock or HBM overclocked?


The card and HBM are completely stock. I haven't had any issues in a 3D load at stock settings.

Quote:


> Originally Posted by *Soggysilicon*
> 
> Yes. I have had issues with DP between Vega and my Sammy CF791 (3440x1440) 100hz / UE FS. This issue has existed since I took the card out of the box. When the card is OC'd, (heavily OC'd?) the chances that the display will go blank on some titles is relatively high. This can be "corrected" by "rebooting" the monitor. In super position if I launch in 1080, and then switch to 4k the issue doesn't manifest.
> 
> The issue is the card and how it is interacting with the monitor. Once loaded the issue is drivers; the DP, the monitor and the card may not be interfacing correctly. As with all things Vega if you see some oddball behavior, its time to reboot the driver (close it, reboot the comp). Hoping going forward drivers address this and other issues.


Currently using an ACER XR342CK over a certified DP cable @ 3440x1440 75Hz, but my issue seems to happen more at idle/desktop/web surfing. Also, last night for the first time, when the display blackscreened I received a weird DisplayPort error message, and afterwards the monitor could no longer run, recognize, or even select its native resolution (had up to 2560x1080 selectable).

Have either of you tried using your monitors over HDMI to try and replicate these problems (if possible)?


----------



## Reikoji

Quote:


> Originally Posted by *owntecx*
> 
> Any fix for those bugs?&#8230; I saw that i had a kernel-power critical error, after the boot that reseted the settings, but the shutdown before was clean i guess, i didnt had a speaker plugged. Its just an ocd about having to set the settings again.


I've no clue what triggers it. Everything will seem fine until you reboot or shut down. I hardly ever shutdown or reboot my PC unless i'm messing with things.


----------



## Soggysilicon

Quote:


> Originally Posted by *rv8000*
> 
> The card and HBM are completely stock. I haven't had any issues in a 3D load at stock settings.
> Currently using an ACER XR342CK over a certified DP cable @ 3440x1440 75hz, but my issue seems to happen more at idle/desktop/websurfing. Also, last night for the first time, when the display blackscreened I received some weird error displaying some display port error and the monitor could no longer run, recognize, or even select it's native resolution (had up to 2560x1080 selectable).
> 
> Have either of you tried using your monitors over HDMI to try and replicate these problems (if possible)?


Funny you mention that, because I ended up troubleshooting Vega "right out of the box": I had to use the HDMI to verify that it was even working. The card seemed to have an issue with UEFI. That being said, Vega replaced a 280X, and captain hindsight being what it is, I would have done a driver wipe before swapping cards.

There is a reason Vega has two BIOSes, I suppose. Once I came to grips with the card's behavior I retired the HDMI. I don't recall the specs off the top of my head, but I am pretty sure HDMI 1.4 isn't capable of my monitor's refresh at native resolution, so no 100Hz without DP. Additionally, I am pretty sure that to use "ultimate engine" FreeSync (48-100Hz), the monitor must be on DP. In answer to your question, the HDMI is perfectly fine; I don't have it physically connected because Vega sees it as a "second" monitor, as the monitor is capable of PiP.

In the "early, early" days, if Vega dumped, I had to go as far as reseating the card in the PCIe slot, and using DP it was hit or miss whether the display would ever turn back on. W10 UEFI and drivers seem to function at a very low level... but someone that deals with that stuff would have to comment; I can only speak to my sample size of one and limited experience with it.

https://www.samsung.com/us/computing/monitors/curved/34--cf791-wqhd-monitor-lc34f791wqnxza/

The monitor is quite frankly... awesome... buttttttttt one has to take pause and wonder how Samsung developed for it, considering there wasn't a video card for it for nearly a year after they were selling it...







I have to assume they used arbitrary signal generation and best guesses. I suppose it has some speakers as well... never use em...

The monitor needs new drivers... but poo in one hand and wishes in the other...
Quote:


> Originally Posted by *Reikoji*
> 
> I've no clue what triggers it. Everything will seem fine until you reboot or shut down. I hardly ever shutdown or reboot my PC unless i'm messing with things.


Power saving on the monitor, or in your desktop preferences... orrrrr in your power plan/options: anything which turns off the monitor or puts it to sleep can cause the driver to crap out if you are using a frame-buffering technology, i.e. adaptive sync or FreeSync... additionally, you want to be absolutely sure Vega is pointed at your monitor's drivers and not some generic driver...


----------



## geriatricpollywog

Has anybody tried Vega and Fiji in crossfire? Or does Vega not support crossfire?


----------



## aylan1196

Quote:


> Originally Posted by *0451*
> 
> Has anybody tried Vega and Fiji in crossfire? Or does Vega not support crossfire?


Only Vega 64 and 56 will crossfire with each other; otherwise, I don't think so.


----------



## tarot

Quote:


> Originally Posted by *owntecx*
> 
> Does anyone have some random "default-radeon-wattman-settings-restored-due-unexpected-system-failure" on starting windows? It happens to me sometimes, very randomly, settings are fully stable, and even if they dont, i dont think windows goes 100% load on the startup to making it crash.


yes quite a few times, but half of those can be the cpu overclock locking up, which seems to reset wattman as well (at least on my Kelly rip...err...Threadripper).

check event viewer next time it happens.

i have also noticed that you need to either reboot or at least log out to get it to go again properly.

i changed to a 280 rad and redid the TIM on mine and it's a bit happier (molded gpu = nice even layer). max i have seen the hot spot now is 72 after the firestrike stress test, and the temps were, i think, 55 tops.

not too bad for 305 watts


----------



## Trender07

Guys any of you already tried new drivers? https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.10.1-Release-Notes.aspx


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> Guys any of you already tried new drivers? https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.10.1-Release-Notes.aspx


Not yet, but it seems to offer some major improvements for Shadow of War.
http://www.pcgameshardware.de/Mittelerde-Schatten-des-Krieges-Spiel-60746/Specials/Mittelerde-Schatten-des-Krieges-Benchmarks-1240756/

Here's the relevant part of their review, you can also look at the graphs to see the difference. I used Google translate for the following:
Quote:


> We've measured AMD's RX Vega 64 again with the new driver and, lo and behold, the Radeon GPU gains performance massively. Averaged across resolutions, the performance gain is over 20 percent, enough to place it ahead of the overclocked GTX 1080. In some cases this has to do with memory allocation: according to the in-game readout, an Nvidia card at Ultra details uses just over 8 GiByte of video memory, while Vega 64 with Radeon Software 17.10.1 stays just below 8 GiByte. In any case, the result is exciting and a good point to interrupt our measurements for today; tomorrow we are planning further measurements and graphics card benchmarks.


----------



## tarot

Quote:


> Originally Posted by *kundica*
> 
> Not yet, but it seems to offer some major improvements for Shadow of War.
> http://www.pcgameshardware.de/Mittelerde-Schatten-des-Krieges-Spiel-60746/Specials/Mittelerde-Schatten-des-Krieges-Benchmarks-1240756/
> 
> Here's the relevant part of their review, you can also look at the graphs to see the difference. I used Google translate for the following:


i read that, that is interesting...well the mitga bitz mitgoy whatever it said

my only pain is i would lose whql...i have been waiting to do valid 3dmark runs for so long


----------



## kundica

Quote:


> Originally Posted by *tarot*
> 
> i read that that is interesting...well the mitga bitz mitgoy whatever it said
> 
> 
> 
> 
> 
> 
> 
> 
> my only pain is i would loose whql...i have been waiting to do valid 3dmark runs for so long


I can't recommend 17.10.1 at this point. It introduces some weird frame dips which are noticeable while gaming. They are also reflected in synthetic benches, look at the minimum in the images I posted. I cleaned and reinstalled numerous times to confirm. I also ran the Shadow of War in-game benchmark and while the FPS was higher overall, there were terrible dips(stutter) there as well.

17.9.3


17.10.1


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *kundica*
> 
> I can't recommend 17.10.1 at this point. It introduces some weird frame dips which are noticeable while gaming. They are also reflected in synthetic benches, look at the minimum in the images I posted. I cleaned and reinstalled numerous times to confirm. I also ran the Shadow of War in-game benchmark and while the FPS was higher overall, there were terrible dips(stutter) there as well.
> 
> 17.9.3
> 
> 
> 17.10.1






are those scores with hbcc on and what clocks cpu/vga


----------



## kundica

Quote:


> Originally Posted by *tarot*
> 
> 
> are those scores with hbcc on and what clocks cpu/vga


HBCC on, I always run it. It's the Air 64 with the LC bios. +50% power limit, P7 at 1722 and HBM 1100. Stock LC bios voltage.

Edit: Forgot to add power limit.


----------



## tarot

Quote:


> Originally Posted by *kundica*
> 
> HBCC on, I always run it. It's the Air 64 with the LC bios. P7 at 1722 and HBM 1100. Stock LC bios voltage.


i'm going to have to try the lc bios again, last time it overclocked it to 1780 and things went bad

so that's power limit 0%, just stock volts?


----------



## dagget3450

Are any Vega FE owners able to use the Crimson ReLive RX drivers? Every time I try to load the new drivers with my Vega FEs, nothing happens. It is so annoying to be stuck on the 17.6 Vega FE launch drivers.


----------



## dagget3450

Quote:


> Originally Posted by *0451*
> 
> Has anybody tried Vega and Fiji in crossfire? Or does Vega not support crossfire?


Answered this a while back in this thread, at least the best I could. I tried Fiji and Vega FE together but found the driver wouldn't load for the FE and the Fiji at the same time, or vice versa, so I couldn't even try it. I don't think it will matter with RX Vega either, as they are too different to crossfire.


----------



## Roboyto

Quote:


> Originally Posted by *kundica*
> 
> HBCC on, I always run it. It's the Air 64 with the LC bios. P7 at 1722 and HBM 1100. Stock LC bios voltage.


Haven't cracked into 7K territory yet, but still running stock air BIOS...

6897 best run on 17.9.1.

P6 at 1597 with 950mV

P7 at 1727 with 1125mV

This gets the core ~1685 +/- ~7MHz throughout the SP4K run

HBM at 1100 with 800mV

R7 1700 @ 3725 & DDR4 3200...should push this further, but I'm pleased with that core speed @ stock volts

does 17.9.3 give some boosts to SuperPosition?

I was doing some extensive benchmarking with SuperPosition over the weekend....ran bench with, I think, 75 different voltage/clock combinations...

Couple interesting things I came across when tuning with WattMan on 17.9.1 in SuperPosition 4K:

1. Keeping stock clocks on my Vega 64 air I am able to undervolt all the way down to 800/800mv for P6/P7, with 50% power, and it still outperforms auto/balance settings by 269 points/4.5%

2. I found best UV performance at 825/875mv with stock clocks & 50% power which scored 6301. This was merely 37 points lower than all stock voltages and 50% power.

2a. I then decided to see if I could UV core and OC the HBM. My HBM is rock solid at 1100. With the core voltage at 825/875, HBM at 1100 MHz stock volts, I scored 6553; 9% improvement over bone stock settings.

3. Started to undervolt the HBM. 1025mv to HBM and score increased by 5..margin of error I reckon. Dropped HBM to 1000mv and score dropped ~100 points. While watching the core clocks in WattMan, it averaged about 30 MHz less overall. HBM at 1025mv or 1050mv and a steady ~1580 MHz. Once HBM voltage hit 1000mv or 975mv it was averaging ~1550 MHz. 950mv for HBM and core clocks dropped again averaging ~1500 MHz and score dropped again by well over 100 points.

4. Started to alter core/HBM voltages to see what would happen. If core voltage was brought up, then HBM voltage could go down and vice versa. For example core voltages 950/1000mv and HBM 800mv netted 6458. Drop core voltages to 825/875 and bring HBM to 1000mv and score was 6459. Average core clocks essentially identical with the 2 different voltage configurations.

5. I then started looking closer at VDDC in HWInfo. With the core voltages nearly bottomed out, reducing HBM voltage was reducing VDDC...which now makes sense why core clocks are dropping. (had to re-run benches at aforementioned settings tonight as I wasn't screenshotting results over the weekend until I started with core overclocking. Interestingly enough scores are a little higher tonight than they were over the weekend, BUT the same odd principle with voltages is present.)

6. No matter how low HBM voltage goes, it sticks/holds 1100 MHz...but the core clock will suffer due to VDDC dropping with reduced HBM voltage.

*All 3 screenshots below have the same core clocks, voltages and power settings of P6/P7 1537/1632MHz, 825/875mv, with 50% power. Only 'HBM voltage' was altered:*

*HBM voltage at 1025mV - VDDC average ~1.005V - Core clock average ~1580*



*HBM voltage at 1000mV - VDDC average ~0.961V - Core clock average ~1550*



*HBM voltage at 950mV - VDDC average ~0.912V and VDDC max down to 1.000V - Core clock average ~1500*












Anyone else experience anything similar to this? Or know if this maybe is a 17.9.1 specific issue?

Always thought it was odd that HBM voltage was never affected in HWInfo...that HBM voltage control in WattMan appears to have no effect on HBM speed/stability, at least for me...so what is it controlling, if it is impacting VDDC and core clocks?

Another strange phenomenon for me in SP4K when overclocking the GPU core: setting P6 clocks above 1597, or P7 clocks above 1727, will reliably produce an overboost causing SP to crash.

Once SP crashes, no matter what settings are used in WattMan after the crash, it runs unbelievably choppy with a ridiculous fluctuation of load/speeds on the card: from maxed out, to 0, and it keeps doing this at a ~3 second interval. The only fix is a system reboot.
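For anyone repeating this kind of voltage sweep, averaging the logged core clock per run can be scripted rather than eyeballed in WattMan. A minimal sketch over a CSV monitoring log; the column names ("run", "GPU Clock [MHz]") are hypothetical placeholders, not HWiNFO's actual headers:

```python
import csv
import io
from collections import defaultdict

def avg_core_clock_by_run(csv_text, clock_col="GPU Clock [MHz]", run_col="run"):
    """Average the logged core clock for each labelled run/setting."""
    sums = defaultdict(lambda: [0.0, 0])  # run label -> [clock sum, sample count]
    for row in csv.DictReader(io.StringIO(csv_text)):
        entry = sums[row[run_col]]
        entry[0] += float(row[clock_col])
        entry[1] += 1
    return {run: total / count for run, (total, count) in sums.items()}
```

Tagging each bench run with its HBM-voltage setting in the log makes the ~30 MHz average-clock differences between settings directly comparable.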


----------



## milan616

n/m


----------



## lowdog

Quote:


> Originally Posted by *Roboyto*
> 
> Haven't cracked into 7K territory yet, but still running stock air BIOS...
> 
> 6897 best run on 17.9.1.
> P6 at 1597 with 950mV
> P7 at 1727 with 1125mV
> This gets the core ~1685 +/- ~7MHz throughout the SP4K run
> HBM at 1100 with 800mV
> R7 1700 @ 3725 & DDR4 3200...should push this further, but I'm pleased with that core speed @ stock volts
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> does 17.9.3 give some boosts to SuperPosition?
> 
> I was doing some extensive benchmarking with SuperPosition over the weekend....ran bench with, I think, 75 different voltage/clock combinations...
> 
> Couple interesting things I came across when tuning with WattMan on 17.9.1 in SuperPosition 4K:
> 
> 1. Keeping stock clocks on my Vega 64 air I am able to undervolt all the way down to 800/800mv for P6/P7, with 50% power, and it still outperforms auto/balance settings by 269 points/4.5%
> 
> 2. I found best UV performance at 825/875mv with stock clocks & 50% power which scored 6301. This was merely 37 points lower than all stock voltages and 50% power.
> 
> 2a. I then decided to see if I could UV core and OC the HBM. My HBM is rock solid at 1100. With the core voltage at 825/875, HBM at 1100 MHz stock volts, I scored 6553; 9% improvement over bone stock settings.
> 
> 3. Started to undervolt the HBM. 1025mv to HBM and score increased by 5..margin of error I reckon. Dropped HBM to 1000mv and score dropped ~100 points. While watching the core clocks in WattMan, it averaged about 30 MHz less overall. HBM at 1025mv or 1050mv and a steady ~1580 MHz. Once HBM voltage hit 1000mv or 975mv it was averaging ~1550 MHz. 950mv for HBM and core clocks dropped again averaging ~1500 MHz and score dropped again by well over 100 points.
> 
> 4. Started to alter core/HBM voltages to see what would happen. If core voltage was brought up, then HBM voltage could go down and vice versa. For example core voltages 950/1000mv and HBM 800mv netted 6458. Drop core voltages to 825/875 and bring HBM to 1000mv and score was 6459. Average core clocks essentially identical with the 2 different voltage configurations.
> 
> 5. I then started looking closer at VDDC in HWInfo. With the core voltages nearly bottomed out, reducing HBM voltage was reducing VDDC...which now makes sense why core clocks are dropping. (had to re-run benches at aforementioned settings tonight as I wasn't screenshotting results over the weekend until I started with core overclocking. Interestingly enough scores are a little higher tonight than they were over the weekend, BUT the same odd principle with voltages is present.)
> 
> 6. No matter how low HBM voltage goes, it sticks/holds 1100 MHz...but the core clock will suffer due to VDDC dropping with reduced HBM voltage.
> 
> *All 3 screenshots below have the same core clocks, voltages and power settings of P6/P7 1537/1632MHz, 825/875mv, with 50% power. Only 'HBM voltage' was altered:*
> 
> *HBM voltage at 1025mV - VDDC average ~1.005V - Core clock average ~1580*
> 
> 
> 
> *HBM voltage at 1000mV - VDDC average ~0.961V - Core clock average ~1550*
> 
> 
> 
> *HBM voltage at 950mV - VDDC average ~0.912V and VDDC max down to 1.000V - Core clock average ~1500*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Anyone else experience anything similar to this? Or know if this maybe is a 17.9.1 specific issue?
> 
> Always thought it was odd HBM voltage was never effected in HWInfo...that HBM voltage control in WattMan appears to have no effect on HBM speed/stability..at least for me...what is it controlling then if it is impacting VDDC and core clocks?
> 
> Another strange phenomenon for me in SP4K when overclocking GPU core...settings P6 clocks above 1597, or p7 clocks above 1727, will reliably produce an overboost causing SP to crash.
> 
> Once SP crashes, no matter what settings are used in WattMan thereafter the crash, it runs unbelievably choppy with a ridiculous fluctuation of load/speeds on the card. From maxed out, to 0, and it keeps doing this in a ~3 second interval. Only fix is a system reboot.


HBM voltage is a floor voltage for the GPU core. I.e., if HBM is set to 1050mv then, regardless of whether you set S6/S7 to 900, 950 or 1000mv, they will not go below 1050mv. If you set HBM to 900mv and then set S6/S7 to 900mv, the GPU core mv will be able to go down to 900mv. The lowest you set HBM mv to will be the lowest the GPU core mv will go to... this is what seems to be the general consensus regarding HBM voltage and its effect when altered.


----------



## Roboyto

Quote:


> Originally Posted by *lowdog*
> 
> HBM voltage is floor voltage for gpu core ie; if HBM is set to 1050mv then regardless of whether you set S6/S7 to 900 or 950 or 1000mv they will not got below 1050mv. If you set HBM to 900mv then set S6/S7 to 900mv then gpu core mv will be able to go down to 900mv...lowest you set HBM mv to will be lowest gpu core mv will go to.........this is what seems to be the general consensus regarding HBM voltage and it's affect when altered.


If it is setting a cap for how low the GPU voltage will go, why is it reducing VDDC and thus inhibiting clock speeds?

So if "HBM Voltage" is set to 1000mV, then GPU Core Voltage (VDDC) should not dip below 1.000 volts?


----------



## tarot

Quote:


> Originally Posted by *Roboyto*
> 
> If it is setting a cap for how low the GPU voltage will go, why is it reducing VDDC and thus inhibiting clock speeds?
> 
> So if "HBM Voltage" is set to 1000mV, then GPU Core Voltage (VDDC) should not dip below 1.000 volts?


yep
don't think of it as hbm voltage, think of it as lower core voltage
mine is set to 1000, anything below that and it goes hinky
the problem with over 1727: i have had mine overboost to like 1780, not only in superposition but also in firestrike, and boom

it is a lot trickier and finickier...that's not a word...oh well...than i first thought

lowering voltage makes it run at a more stable higher clock, my goal is to keep it under 300 watts or as close as possible for long term gaming


----------



## kundica

Quote:


> Originally Posted by *tarot*
> 
> i,m going too have to try the lc bios again last time it over clocked it to 1780 and things went bad
> 
> 
> 
> 
> 
> 
> 
> 
> so that's power limit 0% just stock volts?


Oops. +50% power limit too. I always run with +50% even if I'm downclocking or attempting to undervolt.
Quote:


> Originally Posted by *Roboyto*
> 
> does 17.9.3 give some boosts to SuperPosition?


Not that I'm aware of. I get the same results with 17.9.2 give or take a few points.


----------



## Skinnered

I'm pulling my hair out over why I can't get my two RX Vega 64 Liquids to work in CF on Windows 10 64, while W7 64 works fine.

Since driver 17.9.2, the first one that enables CF on Vega, the issues are as follows: the driver won't install completely, probably at the point of initializing CF, and a black screen follows.

Rebooting causes a black screen or one GPU disabled in device manager.

I really looked forward to Vega CF, but for now it's a nightmare in W10. I'm probably flogging a dead horse, because I'm the only one in the world with this config and problem, and it seems as stubborn as gravity.

Is there anyone in the world who is using Vega CF with an X99 sytem, (maybe with a 5K mst screen ) who has this working in W10??


----------



## poisson21

I have a Ryzen system (1800X with a Crosshair VI Hero), Win 64 Pro, and no problem running my two RX Vega 64s in Crossfire. I had some problems the first time installing the 17.9.2 driver, but I redownloaded it and had no problem after.

So my only advice is that, for an unknown reason, your download was corrupted and you have to redownload it.


----------



## Skinnered

Quote:


> Originally Posted by *poisson21*
> 
> I have a ryzen system (1800X with crosshair 6 hero), win 64 pro, and no problem to run my two rx vega 64 on crossfire setting, i've had some problem the first time to install the 17.9.2 driver, but i redownload them and no problem after.
> So my only advice is that for a unknow reason your download was corrupted and you have to re download it.


I have done this multiple times, downloaded drivers from different sources etc., and even reinstalled my OS, but all the drivers from 17.9.2 on have this issue (3 in a row).


----------



## Newbie2009

Some new drivers
Quote:


> Originally Posted by *Skinnered*
> 
> I have done this multiple times, downloaded drivers from different sources etc. , even reinstalled my OS, but al the drivers from 1.7.9.2 have this issue. (3 in a row)


Did you use DDU?


----------



## PontiacGTX

edited the chart to include HBCC








Source


----------



## owntecx

What do you guys use to check for HBM artifacts? In Tomb Raider: The Dagger Of Xian I get 0 artifacts at 1080MHz; the same happens in Rise of the Tomb Raider with FXAA and SMAA, and with SMAA x2. But with SMAA x4 it's only artifact-free under 1025. Can someone test SMAA x4 with HBM at 1100, at 1080p, and report back?


----------



## Skinnered

Quote:


> Originally Posted by *Newbie2009*
> 
> Some new drivers
> You use DDU


Yep, also the AMD uninstall utility and a fresh OS. Sometimes I manage to get both cards installed, but CF won't enable...


----------



## IvantheDugtrio

So my Vega FE might be dead. My loop sprang a leak while I was folding overnight and it dripped onto the GPU. It looks like 1 vrm has damage. I rinsed the area with distilled water followed by 91% isopropanol to remove any remaining water. There's still residual Mayhems UV blue coolant around. I hope there's a way to fix this by replacing some components or doing an ultrasonic wash but at this point I don't know.

If I were to get a replacement GPU I'm debating on it being an RX Vega 56 or a 1080 ti with a new block.


----------



## toxick

https://www.3dmark.com/spy/2517301


----------



## toxick

https://www.3dmark.com/3dm11/12421384


----------



## rancor

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> So my Vega FE might be dead. My loop sprang a leak while I was folding overnight and it dripped onto the GPU. It looks like 1 vrm has damage. I rinsed the area with distilled water followed by 91% isopropanol to remove any remaining water. There's still residual Mayhems UV blue coolant around. I hope there's a way to fix this by replacing some components or doing an ultrasonic wash but at this point I don't know.
> 
> If I were to get a replacement GPU I'm debating on it being an RX Vega 56 or a 1080 ti with a new block.


Try doing an ultrasonic wash with something that can remove the rest of the blue coolant. The hardest part will be cleaning under the QFNs, and that may not be possible without removal, but it doesn't hurt to try at this point.


----------



## tarot

I'll see that and raise you a Time Spy Extreme









http://www.3dmark.com/spy/2525403
Quote:


> Originally Posted by *toxick*
> 
> https://www.3dmark.com/spy/2517301


As for the water leak, stop scaring me... it's one of the main reasons I only have 4 points of failure in the case; everything else is outside.

Have you got a UV light? Have a look with that; I would also try a heat gun on it.

If it is just those components that are blown, how hard would they be to replace?
The whole thing sucks and I feel for you.

As for artifacts, I'll give Tomb Raider a run, but I haven't seen any yet. Then again, I'm only running 1055.

Quote:


> Originally Posted by *owntecx*
> 
> What do you guys use to check for HBM artifacts? In Tomb Raider: The Dagger of Xian I get 0 artifacts at 1080MHz, and the same goes for Rise of the Tomb Raider with FXAA and SMAA. With SMAA x2 it's the same, but with SMAA x4 it's only artifact-free under 1025MHz. Can someone test SMAA x4 at 1100MHz, 1080p and report back?


----------



## Manya3084

Finally, Vega FE has a new driver, 17.10. I can now stop using my Frankenstein mix of RX Vega and FE drivers.


----------



## Chaoz

Did some undervolting today.
It's damn stable at these clocks and temps don't even go over 30°C. I'm happy











Might tweak a bit more later on.


----------



## gamervivek

Best configuration I've found is by disabling the p1-p4 states and undervolting the p5-p7 states. 1450-1470Mhz with power usage on average 10-15W over stock.


----------



## Chaoz

Meh, if it ain't broke ... . It works fine on those settings, so I'm leaving it at that for now.


----------



## IvantheDugtrio

Are you guys familiar with board repair companies that could offer ultrasonic cleaning service?


----------



## TrixX

Quote:


> Originally Posted by *gamervivek*
> 
> Best configuration I've found is by disabling the p1-p4 states and undervolting the p5-p7 states. 1450-1470Mhz with power usage on average 10-15W over stock.


Not sure how you are disabling P1-4. Every time I try to do so it doesn't apply it in OverdriveNTool. Unless Afterburner now allows you to?


----------



## punchmonster

This is at 1v under full synthetic load.
As you can see it's perfectly possible to get hotspot temperature under control.


----------



## dagget3450

Quote:


> Originally Posted by *Manya3084*
> 
> Finally, Vega FE has a new driver, 17.10. I can now stop using my Frankenstein mix of RX Vega and FE drivers.


I wish they would merge the drivers... anyway, thanks for the heads-up, I'll try it tonight.


----------



## PontiacGTX

Quote:


> Originally Posted by *gamervivek*
> 
> Best configuration I've found is by disabling the p1-p4 states and undervolting the p5-p7 states. 1450-1470Mhz with power usage on average 10-15W over stock.


how did you do it?


----------



## gamervivek

Quote:


> Originally Posted by *TrixX*
> 
> Not sure how you are disabling P1-4. Every time I try to do so it doesn't apply it in OverdriveNTool. Unless Afterburner now allows you to?


It doesn't seem to apply it but it works better than whatever else I've tried. It does drop in clocks if the power limit is low but I've had the best results with it and the least power usage. 1.5Ghz and above requires more voltage and creates too much heat.
Quote:


> Originally Posted by *punchmonster*
> 
> This is at 1v under full synthetic load.
> As you can see it's perfectly possible to get hotspot temperature under control.


The delta is still 20C, so even at a 70C GPU temp it goes to 90C on the hotspot.
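That rule of thumb is easy to sanity-check. A throwaway Python snippet (the ~20C edge-to-hotspot delta is just the figure reported in this thread, not an official spec, and it varies card to card with mount quality):

```python
# Rough hotspot estimate from the edge (GPU) temperature, using the
# ~20 C delta people in this thread have observed. Purely anecdotal.
def estimated_hotspot(edge_temp_c, delta_c=20):
    return edge_temp_c + delta_c

print(estimated_hotspot(70))  # -> 90, the 70C -> 90C example above
```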


----------



## tarot




Quote:


> Originally Posted by *gamervivek*
> 
> It doesn't seem to apply it but it works better than whatever else I've tried. It does drop in clocks if the power limit is low but I've had the best results with it and the least power usage. 1.5Ghz and above requires more voltage and creates too much heat.
> The delta is still 20C so even at 70C gpu temp, it goes to 90C on hotspot.





They are out of order; it should be HBM temp 48 and hotspot 62. I think your sensors are off.

But that said, if you have a molded GPU/HBM package, redo the thermal paste to cover the whole thing (I did an even spread with a very small dot on the 3 chips). I also added heat tape to some of the rear components before I put the backplate on, and it helped quite a bit.

Ambient is around maybe 15 degrees (temp near the case, that is), so of course once summer-type temps crank up that is going to change. Just have to wait and see.


----------



## madmanmarz

Quote:


> Originally Posted by *Chaoz*
> 
> Did some undervolting today.
> It's damn stable at these clocks and temps don't even go over 30°C. I'm happy
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Might tweak a bit more later on.


I run a similar setup as I'm on 2560x1080 and have no need for more speed outside of benchmarks. It's already overkill LOL. Anyway, I found that by bumping my memory voltage to 1000mv I was able to push P7 to 1700mhz/1000mv, which comes out to about 1540-1550mhz.


----------



## cephelix

Phew, finally finished reading the whole thread. Vega sure is interesting/frustrating to OC/UV. Already ordered my Sapphire 56, but my guy says it's gonna take some time before it arrives. Till then, I'll just sit around twiddling my thumbs and staring at my non-WC 290, lol.

After reading through, I do have a few questions that I hope you guys are able to answer.

1. Best OC/UV tool. Wattman is frustrating to work with, what with all the bugs present. What should I use then? OverdriveNTool, WattTool? Or the usual AB/Trixx?
2. Flashing BIOS, Vega 56 to 64. Should I bother with the LC version or just the air one? My card would be under water, though, and flashing does give me a higher HBM voltage which allows for higher HBM clocks.
3. TIM application. X method as per EK instructions? I've read that the razor/spread method is better at controlling hotspot temps. Should I also fill the gaps between the core and HBM with TIM? And are temps improved by using CLU? I have some left over.
4. Power supply cable. My Seasonic X750 comes with 2 x 6+2 GPU cables that end in a 12-pin connector on the PSU side. Is it safe to connect both 6+2 pins to one PSU header, or should I use 2 separate PSU headers? I've seen buildzoid's video that says it should not matter, but that it's preferable to not use daisy-chain cables.
5. EK block. Planning to purchase the EK FC block for my 56 and wondering if I should get the backplate as well? Does it help in any way beyond aesthetics? Anyone know if the Sapphire 56 backplate is metal or plastic?
6. Thermal pads. I have spare Fujipoly Extreme and Ultra Extreme. Use those or the stock EK pads included with the block? Anyone know the thickness of the stock pads as well?

Those are all the questions I can think of for now. I'm sure I'll have more when I actually receive my card. Thanks in advance for all the help guys


----------



## dagget3450

Really starting to regret getting the Vega FE; still unable to get game mode even on the newer Vega Pro drivers... Good grief, Charlie Brown...


----------



## madmanmarz

Quote:


> Originally Posted by *cephelix*
> 
> Phew, finally finished reading the whole thread. Vega sure is interesting/frustrating to OC/UV. Already ordered my Sapphire 56 but my guy says it's gonna take some time before it arrives. Till then, I'll just sit around twiddling my thumbs and staring at my non-wc 290.lol
> 
> After reading through, I do have a few questions that I hope you guys are able to answer.
> 
> 1. Best OC/UV tool. Wattman is frustrating to work with what with all the bugs present. What should I use then? OverdriveNtool, wattool? Or the usual AB/trixx?
> *I have found wattman to be just fine, but you NEED to monitor your clocks/voltages/temps through GPU-Z, as some settings don't stick and sometimes the clocks jump up and down depending what voltage you set (ie 1700mhz/1000mv nets me 1550mhz, but increase voltage to 1100mv or more and clocks jump up significantly.) I have tried all of them and keep going back to wattman.*
> 2. Flashing BIOS. Vega 56 to 64. Should I bother with the LC version or just the air one? My card would be under water though and flashing does give me a higher HBM voltage which allows for higher HBM clocks.
> *Yeah definitely run the latest LC 64 bios. It will give you the same default voltages as the 64 air cooled anyway, and the voltage bump for HBM will definitely help increase those clocks*
> 3. TIM application. X method as per EK instructions? I've read that the razor/spread method is better at controlling hot spot temps. Should I also fill the gaps between the core and HBM with TIM? Is there any improved temps by using CLU since I have some left over.
> *All I can say is that I tried my usual method of putting a big dot in the middle with 4 little dots around it (and did the same for HBM), and my hotspot temps were high. upon remounting I noticed part of the hbm's weren't fully covered. I did the spread it out method and it helped a bunch, although I imagine the X method works as well and both are recommended.*
> 4. Power supply cable. My Seasonic X750 comes with 2 x 6+2 GPU cables that end in a 12-pin connector on the PSU side. Is it safe to connect both 6+2 pins to one PSU header, or should I use 2 separate PSU headers? I've seen buildzoid's video that says it should not matter, but that it's preferable to not use daisy-chain cables.
> *I have the focus 850 with the cables daisy chained (noticed that as well) and haven't had any problems.*
> 5. EK block. Planning to purchase the EK FC block for my 56 and wondering if I should get the backplate as well? Does it help in any way beyond aesthetics? Anyone know if the Sapphire 56 backplate is metal or plastic?
> *dunno*
> 6. Thermal pads. I have spare Fujipoly Extreme and Ultra Extreme. Use that or stock EK pads included with the block? Anyone know the thickness of the stock pads to use as well?
> *I believe there are several different thicknesses, just use what it comes with. My Nexxxos GPX has 4 different thicknesses.*
> 
> Those are all the qns I can think of now. I'm sure I'll have more when I actually receive my card. Thanks in advance for all the help guys


----------



## cephelix

@madmanmarz Thanks for the reply. Helps answer most of my questions. + rep for you kind sir.

Quite excited to mess around with a Vega. It's been quite a while since I bought my 290, and that card is still serving me well; it plays everything I want at decent frame rates. I could possibly extend its life if only I had a FreeSync display.


----------



## Rootax

Quote:


> Originally Posted by *dagget3450*
> 
> Really starting to regret getting vega FE, still unable to get Game mode even on newer Vega pro drivers... Good grief charlie brown...


Does the AMD software stop working when you switch? If so, it's because of RTSS; you need to exclude the process AMD uses to switch drivers. I'm not in front of my PC right now, but you can find the name in Task Manager or the event log when it crashes.


----------



## gamervivek

Quote:


> Originally Posted by *tarot*
> 
> they are out of order should be hbm temps 48 and hotspot 62 think your sensors are off.
> 
> but saying that if you have a molded gpu block/hbm redo the thermal paste to cover the whole thing (i did an even spread with a very small dot on the 3 chips) also i added heat tape to some of the rear components before i put the backplate on and it helped quite a bit.
> 
> ambient is around maybe 15 degrees (temp near the case that is) so of course once it cranks up summer type temps that is going toio change just have to wait and see


My bad, I didn't notice, but I've never had that much of a difference with HBM. I've put on Cooler Master paste, but it's too viscous and doesn't spread easily; I'll try with MX-4 later.

I tried a fan on the components on the back but it did not change temperatures.


----------



## tarot

Quote:


> Originally Posted by *gamervivek*
> 
> My bad, didn't notice but I've never had that much difference with hbm. I've put on cooler master paste but it's too viscous and doesn't spread easily,will try with mx-4 later.
> 
> I tried a fan on the components on the back but it did not change temperatures.


No, the thermal pads haven't done much, if anything, for the temps, but they do seem to cool the backplate down on mine... it was fry-an-egg hot.


----------



## TrixX

Quote:


> Originally Posted by *cephelix*
> 
> Phew, finally finished reading the whole thread. Vega sure is interesting/frustrating to OC/UV. Already ordered my Sapphire 56 but my guy says it's gonna take some time before it arrives. Till then, I'll just sit around twiddling my thumbs and staring at my non-wc 290.lol
> 
> After reading through, I do have a few questions that I hope you guys are able to answer.
> 
> 1. Best OC/UV tool. Wattman is frustrating to work with what with all the bugs present. What should I use then? OverdriveNtool, wattool? Or the usual AB/trixx?
> 2. Flashing BIOS. Vega 56 to 64. Should I bother with the LC version or just the air one? My card would be under water though and flashing does give me a higher HBM voltage which allows for higher HBM clocks.
> 3. TIM application. X method as per EK instructions? I've read that the razor/spread method is better at controlling hot spot temps. Should I also fill the gaps between the core and HBM with TIM? Is there any improved temps by using CLU since I have some left over.
> 4. Power supply cable. My Seasonic X750 comes with 2 x 6+2 GPU cables that end in a 12-pin connector on the PSU side. Is it safe to connect both 6+2 pins to one PSU header, or should I use 2 separate PSU headers? I've seen buildzoid's video that says it should not matter, but that it's preferable to not use daisy-chain cables.
> 5. EK block. Planning to purchase the EK FC block for my 56 and wondering if I should get the backplate as well? Does it help in any way beyond aesthetics? Anyone know if the Sapphire 56 backplate is metal or plastic?
> 6. Thermal pads. I have spare Fujipoly Extreme and Ultra Extreme. Use that or stock EK pads included with the block? Anyone know the thickness of the stock pads to use as well?
> 
> Those are all the qns I can think of now. I'm sure I'll have more when I actually receive my card. Thanks in advance for all the help guys


1) Personally I use OverdriveNTool, as it reliably applies the settings. Eventually the unstable drivers will kick back, but nowhere near as fast as with Wattman. Wattman usually fails after 4-5 applies; OverdriveNTool has only failed to apply settings once for me so far. If you set the mv too low, you'll find it may not apply correctly.

2) First test an air BIOS (8730), then move to the MSI LC one (8774) if you are water-blocking it. You do gain some OC headroom on the LC BIOS, but at the cost of lower max temps. However, OCs pushing HBM above 60C already lose out on some compute performance.

3) Can't answer yet, will comment when I get to have a play with mine.

4) With the amount of power this card can pull I'd use separate headers; otherwise the cables might get hot.

5) & 6) Can't answer yet until I mess with mine. Though for 6, I have some Thermal Grizzly pads I'll be using.


----------



## lowdog

FFS, some people post crap. The LC 64 BIOS has a completely different power table to the 64 air BIOS. If you flash your 56 to the 64 air BIOS it will more than likely be stable, but it may NOT be stable on the LC 64 BIOS because of the differing power table settings.

From what I gather, practically all 56s should handle the ~1640MHz boost of the 64 air BIOS at the default 1200mv.....BUT!!!.....they may not necessarily handle the 1750MHz boost of the 64 LC BIOS, even with its extended 1250mv range. You may be lucky and it may handle it, but then again it may not.


----------



## cephelix

Quote:


> Originally Posted by *TrixX*
> 
> 1) Personally I use OverdriveNTool as it reliably applies the settings. Eventually the unstable drivers will kick back but nowhere near as fast as Wattman. Wattman usually fails after 4-5 applies. OverdriveNTool only failed to apply settings once for me so far. If you set mv too low then you'll find it may not apply correctly.
> 
> 2) First test an air one (8730) then move to the MSI LC (8774) one if you are water blocking it. You do gain some OC headroom on the LC BIOS, but a the cost of lower max temps. However OC's pushing HBM above 60C already lose out on some compute performance.
> 
> 3) Can't answer yet, will comment when I get to have a play with mine.
> 
> 4) With the amount of power this card can pull I'd use separate otherwise cables might get hot.
> 
> 5) & 6) Can't answer yet until I mess with mine. Though for 6 I have some Thermal Grizzly pads I'll be using.


Thanks for that! I will check out the various OC/UV tools.
Quote:


> Originally Posted by *lowdog*
> 
> FFS some people post crap, the LC 64 bios has a completely different power table to the 64 AIR bios. If you flash your 56 to the 64 AIR bios it will more than likely be stable but it may NOT be stable on the LC 64 bios because of the differing power table settings.
> 
> From what I gather practically all 56 should handle the 1640MHz or so boost with the 64 AIR bios with 1200mv which is default.....BUT!!!.....may not necessarily handle the 1750MHz boost of the 64 LC bios even with it's extended 1250mv range. You may be lucky and it may handle it but then again it may not.


Ok, I'm asking a sincere question here because I really know nothing about power tables and the like. If a 56 can't handle the 1750MHz boost, as per your example, couldn't I just back off on the voltage / downclock P7 to reduce the boost range? Or is that not how any of this works?


----------



## Chaoz

Quote:


> Originally Posted by *madmanmarz*
> 
> I run a similar setup as I'm on 2560x1080 and have no need for more speed outside of benchmarks. It's already overkill LOL. Anyway I found by bumping my memory voltage to 1000mv, that I was able to push P7 to 1700mhz/1000mv, which comes out to about 1540-1550mhz.


It seems my P7 already boosts up to 1580MHz at 1000mv in-game. I use the FreeSync option, so even at those undervolted settings and with maxed-out BF1 settings, my 64 still doesn't reach 100% usage at 75fps.

So as long as it works, I'm happy.

Quote:


> Originally Posted by *cephelix*
> 
> Phew, finally finished reading the whole thread. Vega sure is interesting/frustrating to OC/UV. Already ordered my Sapphire 56 but my guy says it's gonna take some time before it arrives. Till then, I'll just sit around twiddling my thumbs and staring at my non-wc 290.lol
> 
> After reading through, I do have a few questions that I hope you guys are able to answer.
> 
> 1. Best OC/UV tool. Wattman is frustrating to work with what with all the bugs present. What should I use then? OverdriveNtool, wattool? Or the usual AB/trixx?
> *I also use OverdriveNTool, as WattTool/Wattman and such don't save my settings for some reason. No issues with OverdriveNTool, tho.*
> 
> 3. TIM application. X method as per EK instructions? I've read that the razor/spread method is better at controlling hot spot temps. Should I also fill the gaps between the core and HBM with TIM? Is there any improved temps by using CLU since I have some left over.
> *I used the X method like EKWB suggests and it works fine. Your Sapphire will most likely have a molded package like my Sapphire 64 has.*
> 
> 5. EK block. Planning to purchase the EK FC block for my 56 and wondering if I should get the backplate as well? Does it help in any way beyond aesthetics? Anyone know if the Sapphire 56 backplate is metal or plastic?
> *The backplate is metal. I use the stock backplate; it works perfectly fine. The EKWB backplate costs quite a bit considering it's only a backplate, and it won't let you use the DIP switches either, as the EK backplate covers them.*
> 
> 6. Thermal pads. I have spare Fujipoly Extreme and Ultra Extreme. Use that or stock EK pads included with the block? Anyone know the thickness of the stock pads to use as well?
> *The stock thermal pads are good enough, imho. Don't bother. Stock pads are 1mm and 0.5mm.*
> 
> Those are all the qns I can think of now. I'm sure I'll have more when I actually receive my card. Thanks in advance for all the help guys


Answered the questions that didn't get answered plus some extra feedback.


----------



## TrixX

Quote:


> Originally Posted by *lowdog*
> 
> FFS some people post crap, the LC 64 bios has a completely different power table to the 64 AIR bios. If you flash your 56 to the 64 AIR bios it will more than likely be stable but it may NOT be stable on the LC 64 bios because of the differing power table settings.
> 
> From what I gather practically all 56 should handle the 1640MHz or so boost with the 64 AIR bios with 1200mv which is default.....BUT!!!.....may not necessarily handle the 1750MHz boost of the 64 LC bios even with it's extended 1250mv range. You may be lucky and it may handle it but then again it may not.


That's exactly why I suggested to *test* the 8730 air BIOS before the MSI LC 8774 BIOS. If it works, great; if not, then flash back. There's also the option of downclocking the LC BIOS if needed. No need to get so upset about it.

EDIT: Keyword being test. No point running something that's not going to work...


----------



## IvantheDugtrio

Welp I pulled the trigger again and have a Powercolor RX Vega 56 on its way. It'll be watercooled with the EK block as well. At least now I know to leak-test under load for at least several days.


----------



## cephelix

Quote:


> Originally Posted by *Chaoz*
> 
> Answered the questions that didn't get answered plus some extra feedback.


Was wondering about the moulded vs unmoulded package and how it would affect temps as well. As for the backplate, I do agree that the EK backplate is pricey for being just a backplate. Good to know about the thermal pads. The only reason I had the Fujipoly ones was to keep the temps on my MSI R9 290 under control.


----------



## Chaoz

Quote:


> Originally Posted by *cephelix*
> 
> Was wondering about the moulded vs unmoulded package and how it would affect temps as well. As to the backplate, I do agree that the EK backplate is pricey, for being just a backplate. Good to know about the thermal pads. Only reason I had the Fujipoly ones were to keep the temps on my MSI R9 290 under control.


It doesn't affect the temps that much. You just need to use a bit more TIM in between the dies if it's unmolded. You're using a waterblock anyway, so I doubt it would affect temps enough to be noticeable. I have no problems with mine at all, even tho I have a molded package.
Undervolted it doesn't even go over 35°C most of the time.

The stock EKWB thermal pads are actually pretty good and are not as stiff as the Fujipoly ones, so it's easier to mount the block with them.
The stock backplate is pretty basic and only has 1 sticker on it, right next to where the RAM is located when it's in your mobo. Unlike XFX and Powercolor, which have sticker next to sticker.

Fingers crossed you don't get bad coil whine. Mine whines quite a bit, but only in-game and at 100% usage. So while I'm using FreeSync it doesn't whine at all, even at 1000mv.


----------



## TrixX

Also, with the unmoulded die you can't use Conductonaut or other liquid metal/conductive TIM.


----------



## cephelix

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Welp I pulled the trigger again and have a Powercolor RX Vega 56 on its way. It'll be watercooled with the EK block as well. At least now I know to leak-test under load for at least several days.


Several days? Ain't nobody got time for that! lol

Quote:


> Originally Posted by *Chaoz*
> 
> It doesn't affect the temps that much. You just need to use a bit more TIM for in between the dies if it's unmolded. You're using a waterblock anyways, so I doubt that it would affect temps that much that it's noticable. I have no problems with mine at all even tho I have a molded package.
> Undervolted it doesn't even go over 35°C most of the times.
> 
> The stock EKWB thermal pads are pretty good actually and are not so stiff like the Fujipoly ones.So it's easier to mount the block on it.
> The stock backplate is pretty basic and only has 1 sticker on it right next to where the RAM is located when it's in your mobo. Unlike XFX and Powercolor that have sticker next to sticker.
> 
> Fingers crossed you don't get bad coilwhine. Mine whines quite a bit, but only in-game and at 100%. So while I'm using FreeSync it doesn't whine at all even at 1000mv.


By in between the dies, you mean the channels? I cross my fingers all the time that I don't get coil whine. So far I've been lucky; my 290 only whines when stopping a Heaven/Valley run. Other than that it's been good to me.

Edit
@TrixX is it because of the difference in height as well as the thinness of the applied LM ?


----------



## TrixX

Quote:


> Originally Posted by *cephelix*
> 
> Edit
> @TrixX is it because of the difference in height as well as the thinness of the applied LM ?


It's because the substrate is open to conductive damage. The height difference is an issue too, but the open substrate is far more of an issue!

EDIT: Technically you can make it safe, but you need to use nail varnish to protect the open spaces which would serve the same purpose as the moulding.


----------



## Chaoz

Quote:


> Originally Posted by *cephelix*
> 
> Several days? Ain't nobody got time for that! lol
> By in between the dies you mean the channels? I cross my fingers all the time that I don't get coilwhine. So far I've been lucky. My 290 only whines when stopping a heaven/valley run. Other than that it's been good to me.
> 
> Edit
> @TrixX is it because of the difference in height as well as the thinness of the applied LM ?


Lol @ several days of leak testing. All in all I let mine run for half an hour max to check for leaks. That's it.

Yeah, the space between the 3 chips.

None of my previous GPUs had coil whine, but my 64 has it quite bad. Luckily it's only at 100%, so it doesn't bother me that much. Just sucks.


----------



## cephelix

Quote:


> Originally Posted by *TrixX*
> 
> It's because the substrate is open to conductive damage. The height difference is an issue too, but the open substrate is far more of an issue!


Ahhhh.. so many new things to take note of! I'll keep it in mind. Any more tips and tricks?
Quote:


> Originally Posted by *Chaoz*
> 
> Yeah the space between the 3 chips.
> 
> All my previous GPU's never had coilwhine. But my 64 has it quite bad. Luckily it's only at 100% so it doesn't bother me that much. Just sucks.


Ok... I usually try to keep TIM out of crevices, but it seems that's not the case this time around.


----------



## TrixX

Quote:


> Originally Posted by *cephelix*
> 
> Ahhhh..so many new things to take note of! I'll keep it in mind. Anymore tips and tricks?
> Ok...i usually try to keep TIM out from crevices but it seems like that's not the case this time around.


These cards seem to have a lot more OC variance than normal GPU releases (R9 390 / GTX 1080, etc.). Some undervolt like champions, others don't do that well. Some OC like champions and some can't move from stock clocks. First find the minimum voltages at which your card can run Superposition 1080p Extreme and Firestrike happily, then move on to OC'ing.

I generally work my way up until I'm either thermally throttled or power throttled. So far it's been mostly thermal throttling in my case, though I know a few people who hit power limits before thermals. There's also a registry hack that allows 142% power if you have the right cooling solution; with the stock blower you won't need more than 50% anyway.

To give an idea of the power draw: with my daily clocks I pull around 220W on the core and about 280W for the full package. That's P7 set to 1752MHz at 950mv, HBM at 1000MHz and 920mv. With max bench settings on air (1752MHz at 1100mv, HBM 1000MHz at 950mv) I can pull 300W core and 360W full package, and I'm only on +25% power.

However, those settings are what works for me; it's not going to be that way for every card, and I do have to run my fan at 4900 RPM to cope with the bench settings.
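For anyone wanting to reproduce the arithmetic: the gap between the "core" and "full package" readings gives a rough estimate of everything that isn't the GPU core (HBM, VRM conversion losses, fan and so on). A quick Python sketch using the numbers above; the breakdown is just subtraction, not a measured split:

```python
# Estimate non-core board power as (full package) - (core only).
# Wattages are the figures quoted in the post above.
daily = {"core_w": 220, "package_w": 280}
bench = {"core_w": 300, "package_w": 360}

for name, reading in (("daily", daily), ("bench", bench)):
    rest = reading["package_w"] - reading["core_w"]
    print(f"{name}: non-core draw ~{rest} W")
```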


----------



## Chaoz

Quote:


> Originally Posted by *cephelix*
> 
> Ahhhh..so many new things to take note of! I'll keep it in mind. Anymore tips and tricks?
> Ok...i usually try to keep TIM out from crevices but it seems like that's not the case this time around.


Yeah, it's different from other cards, as the HBM stacks sit on the same package as the GPU die.


----------



## cephelix

@TrixX
Your card seems amazing! Can I just have yours? Lol
@Chaoz
Yeah, when Fury came out I wondered how the HBM was going to be cooled, but it seems to be OK. Now, with Vega and its moulded/unmoulded variants, I was thinking the height difference would be an issue, but it seems it isn't.

Another question, since I'm new to P-states: how do you guys start OC or UV? Do I just UV P7 till I get crashes in benchmarks? Or must I adjust P6 downwards at the same time, so P6 is always lower than P7? I read in the earlier part of the thread that if P6 and P7 were equal, the card would stay in P5 (don't know if that's still an issue now). Once I find a stable mv/MHz for P6/P7, do I then reset it back to stock and start on the HBM?


----------



## Chaoz

Quote:


> Originally Posted by *cephelix*
> 
> @TrixX
> Your card seems amazing! Can I just have yours? Lol
> @Chaoz
> Yeah, when the fury came out I was thinking how the HBM was to be cooled. But they seem to be ok. Now with the vega and it's moulded/unmoulded variants I was thinking the height difference would be an issue but it seems like it isn't.
> 
> Another question, since I'm new to P-states, how do u guys start OC or UV? Do I just uv P7 till i get crashes in benchmarks? Or must I adjust P6 downwards at the same time to P6 is always lower than P7. Read in the earlier part of the thread that if P6 and P7 were equal that the card would stay in P5(don't know if that's still an issue now). Once I find a stable mv/mhz for P6/P7, i then reset it back to stock and start on the HBM?


I undervolted mine to this:



So it's running stable on 1580MHz core and 1000MHz HBM +50% Power target.

But in general, yes, P6 and P7 should be different, with P7 being the highest; then it should work out. Just try whatever settings you want and see if it's stable. I haven't really OC'ed mine that much; it's not necessary for me, imho.


----------



## TrixX

Quote:


> Originally Posted by *cephelix*
> 
> @TrixX
> Your card seems amazing! Can I just have yours? Lol
> @Chaoz
> Yeah, when the fury came out I was thinking how the HBM was to be cooled. But they seem to be ok. Now with the vega and it's moulded/unmoulded variants I was thinking the height difference would be an issue but it seems like it isn't.
> 
> Another question, since I'm new to P-states, how do u guys start OC or UV? Do I just uv P7 till i get crashes in benchmarks? Or must I adjust P6 downwards at the same time to P6 is always lower than P7. Read in the earlier part of the thread that if P6 and P7 were equal that the card would stay in P5(don't know if that's still an issue now). Once I find a stable mv/mhz for P6/P7, i then reset it back to stock and start on the HBM?


Well first up is neither P6 nor P7 it's the HBM Voltage which isn't. It's actually the GPU Floor Voltage (min voltage supplied to the GPU) so setting that above either P6 or P7 renders any of the values there irrelevant.

So first, try lowering the HBM voltage while keeping stability in at least Superposition. Personally I then set the P6 state to the same voltage as the HBM, and adjust the P7 state as required for the results desired, or until throttling/crashing ensues.

The best way to think about the core MHz settings is to use them as approximate targets similar to the temp target vs temp max. I have my P7 set to 1752MHz (stock for the MSI 8774 LC BIOS) but I rarely get above 1680MHz actual. In Superposition due to the 100% GPU load I only get 1580MHz with the same settings on my Main setup (P7 1752MHz, 950mv, HBM 1000MHz, 920mv). In games due to their more variable GPU loading I usually get well into the 1600's.

Also, due to bugs in Wattman, I avoid using identical MHz values for P6 and P7, and I avoid identical mV values too, as they can prevent the card from changing P-state. I also use ClockBlocker to confirm I'm in P7 for testing, otherwise the card can get stuck in P5 due to the higher available voltage there.
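To make the floor-voltage behaviour concrete, here's a tiny Python sketch of the rule as described in this thread. This is an assumed model of the behaviour, not AMD's actual firmware logic: each P-state's effective core voltage gets clamped up to the HBM voltage, which is why undervolting P6/P7 below the HBM voltage achieves nothing.

```python
# Assumed model of the Vega voltage-floor rule discussed above,
# not AMD's actual firmware logic.

def effective_core_mv(pstate_mv: int, hbm_mv: int) -> int:
    """Voltage the core actually receives for a given P-state:
    anything set below the HBM voltage is raised to that floor."""
    return max(pstate_mv, hbm_mv)

# With HBM at 950mV, P6/P7 undervolted below 950mV are clamped:
hbm_mv = 950
for name, mv in [("P6", 900), ("P7", 925), ("P7 alt", 1000)]:
    print(name, effective_core_mv(mv, hbm_mv))
```

That's the reasoning behind lowering the HBM voltage first, then matching P6 to it and keeping P7 a little higher.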



EDIT: PS it's MY Precious









EDIT 2: Uploaded correct Main settings image!


----------



## cephelix

@TrixX @Chaoz
I didn't know that P6 could be lower than P7. Tempted to just get a 64 now since they're in stock but that would cost me an additional USD133. Again, thanks so much guys! You've been a big help. Now all I have to do is wait.


----------



## TrixX

Quote:


> Originally Posted by *cephelix*
> 
> @TrixX@Chaoz
> I didn't know that P6 could be lower than P7. Tempted to just get a 64 now since they're in stock but that would cost me an additional USD133. Again, thanks so much guys! You've been a big help. Now all I have to do is wait.


Only real difference between 64 and 56 at the moment is likely to be binning. Even then there are some lemons in both cards out there, though less likely in the 64. There's no real benefit to the extra CU's at the moment as the card seems to be HBM bandwidth limited. That extends to the FE edition too apparently.

Happy to help. Hopefully I'll be putting up a Vega UV/OC testing vid to Youtube in the next week or so. Just getting to grips with OBS and so on


----------



## kundica

Quote:


> Originally Posted by *kundica*
> 
> I can't recommend 17.10.1 at this point. It introduces some weird frame dips which are noticeable while gaming. They are also reflected in synthetic benches, look at the minimum in the images I posted. I cleaned and reinstalled numerous times to confirm. I also ran the Shadow of War in-game benchmark and while the FPS was higher overall, there were terrible dips(stutter) there as well.
> 
> 17.9.3
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 17.10.1
> 
> 
> Spoiler: Warning: Spoiler!


I want to make a correction to this previous claim I made regarding the new drivers. After several more days of testing, I believe the issue is actually related to the BIOS version of my mobo. I run a Ryzen 1800X on the Crosshair VI Hero and we've had several BIOS revisions drop in the past month. I updated to the newest one and noticed I was getting different benchmark scores with the same settings and driver version (17.9.3) than with the previous BIOS. I decided to clean and reinstall the new driver and, to my surprise, I was no longer getting dips. I switched back and forth between drivers to run more tests and sure enough my results were repeatable.

I'm currently running 17.10.1 with my mobo's newest BIOS. While this BIOS produces lower benchmark scores than the previous one (which didn't show the dips under 17.9.3), I no longer experience the issue.


----------



## cephelix

Quote:


> Originally Posted by *TrixX*
> 
> Only real difference between 64 and 56 at the moment is likely to be binning. Even then there are some lemons in both cards out there, though less likely in the 64. There's no real benefit to the extra CU's at the moment as the card seems to be HBM bandwidth limited. That extends to the FE edition too apparently.
> 
> Happy to help. Hopefully I'll be putting up a Vega UV/OC testing vid to Youtube in the next week or so. Just getting to grips with OBS and so on


Ahh, well, a video would definitely be helpful. Send the link my way when you do get it up.


----------



## pmc25

Quote:


> Originally Posted by *TrixX*
> 
> Also with the Unmoulded Die you can't use Conductonaut or other Liquid Metal/Conductive TIM.


Um, why?

I did with both my Fiji Nanos and have once more with my RX Vega. CL LM Ultra.

If you're careful there's no risk. If you're patient and have a steady hand and good eyesight, masking during application isn't even necessary.


----------



## madmanmarz

Quote:


> Originally Posted by *cephelix*
> 
> @madmanmarz Thanks for the reply. Helps answer most of my questions. + rep for you kind sir.
> 
> Quite excited to mess around with a Vega. It's been quite a while since I bought my 290 and that card is still serving me well. Plays everything I want with decent frame rates. I could possibly extend its life if only I had a FreeSync display.


Yeah, I'm coming from an unlocked 290 as well, but I bought into a FreeSync display early. I felt the 290 was fine, but now I can truly have all settings maxed (supersampled AA, HairWorks, doesn't matter) while also being at my max refresh rate or higher with Enhanced Sync, and it's fantastic. Performance has to be about double after tweaking the card. The 290X was overvolted 200mv vs the Vega with a 200mv undervolt, so I'm using way less power as well.


----------



## TrixX

Quote:


> Originally Posted by *pmc25*
> 
> Um, why?
> 
> I did with both my Fiji Nanos and have once more with my RX Vega. CL LM Ultra.
> 
> If you're careful there's no risk. If you're patient and have a steady hand and good eyesight, masking during application isn't even necessary.


I did make an edit to say it was possible but risky


----------



## cephelix

Quote:


> Originally Posted by *madmanmarz*
> 
> Yeah I'm coming from an unlocked 290 as well, but I bought into a freesync display early. I felt the 290 was fine but now I can truly have all settings maxed (supersampled aa, Hairworks, doesn't matter), while also being at my Max refresh rate or higher with enhanced sync and it's fantastic. Performance has to be about double after tweaking the card. 290x was overvolted 200mv vs Vega with 200mv undervolt, so I'm using way less power as well


Nice, I could only do a 50mV undervolt before temps became too high. Maybe I'll get myself a FreeSync monitor as a Christmas present.


----------



## madmanmarz

Quote:


> Originally Posted by *TrixX*
> 
> Only real difference between 64 and 56 at the moment is likely to be binning. Even then there are some lemons in both cards out there, though less likely in the 64. There's no real benefit to the extra CU's at the moment as the card seems to be HBM bandwidth limited. That extends to the FE edition too apparently.
> 
> Happy to help. Hopefully I'll be putting up a Vega UV/OC testing vid to Youtube in the next week or so. Just getting to grips with OBS and so on


At least in my case, from what I'm seeing, my 56 seems to overclock about 50MHz less than the 64 cards. Getting the most out of these cards requires quite a bit more knowledge and tweaking than usual, which I'm sure is holding some people back.

Hopefully in a few months there will be a better solution for overclocking whether it's through a newer wattman or software or bios editing, as it's far from perfect right now.


----------



## rancor

Quote:


> Originally Posted by *cephelix*
> 
> 3. TIM application. X method as per EK instructions? I've read that the razor/spread method is better at controlling hot spot temps. Should I also fill the gaps between the core and HBM with TIM? Is there any improved temps by using CLU since I have some left over.
> 
> 5. EK block. Planning to purchase the EK FC block for my 56 and wondering if I should get the backplate as well? Does it help in any way beyond aesthetics? Anyone know if the Sapphire 56 backplate is metal or plastic?


3) I got fine hotspot temps, 5-10C higher, with liberal dots in the center of the GPU and HBM with a thin thermal paste, MX-4. Seems like the credit card method works well for thicker pastes. I wouldn't worry about CLU if you can keep your water temps low.

5) I have a 64 and the backplate is metal as is the cooler shroud. The cooler should be identical between the 56 and 64 and I would just use the backplate that came with the GPU.


----------



## cephelix

Quote:


> Originally Posted by *rancor*
> 
> 3) I got fine hotspot temps, 5-10C higher, with liberal dots in the center of the GPU and HBM with a thin thermal paste, MX-4. Seems like the credit card method works well for thicker pastes. I wouldn't worry about CLU if you can keep your water temps low.
> 
> 5) I have a 64 and the backplate is metal as is the cooler shroud. The cooler should be identical between the 56 and 64 and I would just use the backplate that came with the GPU.


Thanks for the input! With that I have decided to not get the EK backplate.


----------



## Chaoz

Quote:


> Originally Posted by *cephelix*
> 
> @TrixX@Chaoz
> I didn't know that P6 could be lower than P7. Tempted to just get a 64 now since they're in stock but that would cost me an additional USD133. Again, thanks so much guys! You've been a big help. Now all I have to do is wait.


Np, yeah you can put the P-states pretty low for undervolting. So far everything is running really stable and the temps are amazing.

Imho, it's not really worth it to upgrade to a 64 from a 56. The BIOS flash with a 64 BIOS gives it a boost as well.

I just got the 64 because the 56 wasn't out at the time and I really needed a decent AMD GPU to use my FreeSync. If I could've waited I would've gotten a 56, had they come out on the same date.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Chaoz*
> 
> Lol @several days leak testing. All in all I let it run for half an hour max to check for leaks. That's it.
> 
> Yeah the space between the 3 chips.
> 
> All my previous GPU's never had coilwhine. But my 64 has it quite bad. Luckily it's only at 100% so it doesn't bother me that much. Just sucks.


I did the 30 minute leak test and found nothing. The leak that killed my Vega FE happened slowly overnight while folding. I'm not sure if the additional heat from folding caused the leak but it may have played a role.

I'll take the Vega FE to a shop this weekend and see if anything can be done. I've cleaned up the VRM with a toothbrush and 99% isopropyl alcohol but it still won't boot. The P0 LED flashes for a millisecond before going dark when I try to power it on now. I wonder if there's a way to bypass the affected VRM. Maybe there's a circuit diagram for the card out there somewhere.


----------



## cephelix

Quote:


> Originally Posted by *madmanmarz*
> 
> At least in my case from what I'm seeing my 56 seems to overclock about 50 mhz less than the 64 cards. Getting the most out of these cards requires quite a bit more knowledge and tweaking than usual that I'm sure is holding some people back.
> 
> Hopefully in a few months there will be a better solution for overclocking whether it's through a newer wattman or software or bios editing, as it's far from perfect right now.


Hopefully. I do like the longevity of AMD cards, but their execution could definitely be better.


----------



## Chaoz

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> I did the 30 minute leak test and found nothing. The leak that killed my Vega FE happened slowly overnight while folding. I'm not sure if the additional heat from folding caused the leak but it may have played a role.
> 
> I'll take the Vega FE to a shop this weekend and see if anything can be done. I've cleaned up the VRM with a toothbrush and 99% isopropyl alcohol but it still won't boot. The P0 LED flashes for a millisecond before going dark when I try to power it on now. I wonder if there's a way to bypass the affected VRM. Maybe there's a circuit diagram for the card out there somewhere.


That's something I would never do: leave my PC running overnight or when I'm not at home. You never know, imho, it could indeed leak.


----------



## madmanmarz

I dunno, I've been water cooling for like 10 years and never had a problem. **** happens, but generally if it doesn't leak, it doesn't leak. I use normal barbed fittings and zip ties to secure tubing. There are many fancier ways now, but I've had one of those rotating fittings fail, so I don't trust the fancy stuff. Usually stuff happens on a new install.
Quote:


> Originally Posted by *Chaoz*
> 
> That's something I would never do, leave my PC running overnight or when I'm not at home. You never know, imho, it could indeed leak.


Quote:


> Originally Posted by *TrixX*
> 
> Only real difference between 64 and 56 at the moment is likely to be binning. Even then there are some lemons in both cards out there, though less likely in the 64. There's no real benefit to the extra CU's at the moment as the card seems to be HBM bandwidth limited. That extends to the FE edition too apparently.
> 
> Happy to help. Hopefully I'll be putting up a Vega UV/OC testing vid to Youtube in the next week or so. Just getting to grips with OBS and so on


----------



## IvantheDugtrio

Quote:


> Originally Posted by *madmanmarz*
> 
> I dunno been water cooling for like 10 years never had a problem. **** happens but generally if it doesn't leak, it doesn't leak. I use normal barbed fittings and use zip ties to secure tubing. There are many fancier ways now but I've had one of those rotating fittings fail so I don't trust the fancy stuff. Usually stuff happens on a new install.
> 
> At least in my case from what I'm seeing my 56 seems to overclock about 50 megahertz less than the 64 cards. Getting the most out of these cards requires a little bit more knowledge and tweaking than usual that I'm sure is holding some people back.
> 
> Plus surely in a few months there will be a better solution for overclocking whether it's through a newer wattman or software or custom bios.


That's kind of my situation. I just put the system together a couple days before I noticed the leak. I'm using PETG hardline and bitspower compression fittings for hardline. The double o-rings can be a bit tricky if the outer one snags on the end of the tubing but I know to check for that. I haven't figured out where the leak started from so I have some troubleshooting to do. This is also only my second watercooled build.

I just got a 56 to replace my FE. I'm hoping I can get the 56 up to at least 1700 MHz on the core. That alone will make it perform better than the FE. Combined with the newer drivers and easier BIOS replacement it should be much nicer to work with.

I saw on another thread that HBCC was borked on the 56. Is this true?


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> Amd software stop working when you switch ? If so, it's because of rtss, you need to exclude the process amd is using to switch drivers. I'm not in front of my pc right now, but you can find the name in the task manager or event log when it's crashing.


For me the driver loads fine and I get Pro mode with no issues, but my option for Game mode is no longer there. This keeps me from using CrossFire and Wattman for undervolting. Makes me sad, but if there is a way to just use the Game mode ReLive UI I would love to do that. I was tempted to try a fresh load of Windows 10 to see if it fixes my issue, but google-fu says it's the same issue for everyone.


----------



## Rootax

Quote:


> Originally Posted by *dagget3450*
> 
> For me the driver loads fine, and i get the pro mode no issues, but my option for game mode is no longer there. This keeps me from using crossfire,wattman for undervolting. Makes me sad but if there is a way to just use game mode relive ui i would love to do that. I was tempted to try a fresh load of windows 10 to see if it fixes my issue. But it looks like google fu says its same issue for everyone.


Well, it's working for me. But it's not called Game mode anymore, it's called "driver options", and you have to do a custom installation, not an express one, to enable it.

It's all explained here : https://www2.ati.com/relnotes/changing-your-driver-options-with-radeon-pro-settings-user-guide.pdf

And BTW, even with pro drivers, OverdriveNTtool is working.


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> Well it's working for me. But It's not game mode anymore, it's called "driver option", and you have to make a custom installation, not an express one, to enable it
> 
> It's all explained here : https://www2.ati.com/relnotes/changing-your-driver-options-with-radeon-pro-settings-user-guide.pdf
> 
> And BTW, even with pro drivers, OverdriveNTtool is working.


Thank you for the link, +rep. However, when I try to follow these instructions I do not get the prompt about loading multiple drivers. I am trying a fresh DDU and driver install now to see if it will do it.


----------



## dagget3450

Yep, okay, so I am not getting this prompt when following the instructions:



I never get this prompt... I wonder if I have the wrong Windows version or something.

I am on Windows 10 Pro Creators Update. They show Windows 10 Anniversary 64 as supported... hmm. I am at a loss, I cannot get the prompt no matter what.

Rootax, what version of Windows are you using?


----------



## Rootax

Quote:


> Originally Posted by *dagget3450*
> 
> Yep, okay, so I am not getting this prompt when following the instructions:
> 
> 
> 
> I never get this prompt... I wonder if I have the wrong Windows version or something.
> 
> I am on Windows 10 Pro Creators Update. They show Windows 10 Anniversary 64 as supported... hmm. I am at a loss, I cannot get the prompt no matter what.
> 
> Rootax, what version of Windows are you using?


Are you doing a custom install? It's needed to get the prompt.

I'm using an Insider preview build right now, but it should work the same. Do you have multiple GPUs? If so, maybe it's not compatible with that?


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> Are you doing a custom install? It's needed to get the prompt.
> 
> I'm using an Insider preview build right now, but it should work the same. Do you have multiple GPUs? If so, maybe it's not compatible with that?


Yes, I've tried a custom install. Everything is as the directions show, except I never get that prompt screen to say yes.

I have 2 Vega FE in this system; they work fine with the 17.6 driver and I get Game mode / Pro mode.

When I try to load the 17.10 Pro software, or even the RX 17.9.3 driver after that, I never get the prompt. I don't see what I am doing wrong.


----------



## milan616

Any opinions on using CLU or another liquid metal type TIM, or tips/tricks from people who have?


----------



## Rootax

Quote:


> Originally Posted by *dagget3450*
> 
> Yes, I've tried a custom install. Everything is as the directions show, except I never get that prompt screen to say yes.
> 
> I have 2 Vega FE in this system; they work fine with the 17.6 driver and I get Game mode / Pro mode.
> 
> When I try to load the 17.10 Pro software, or even the RX 17.9.3 driver after that, I never get the prompt. I don't see what I am doing wrong.


Can you try with only one FE in the system? My guess is the gaming driver doesn't support CrossFire, so it doesn't show you the choice.


----------



## The EX1

Quote:


> Originally Posted by *milan616*
> 
> Any opinions on using CLU or another liquid metal type TIM, or tips/tricks from people who have?


Depends really on if you have a molded or non-molded die. Certain Vega chips have parts of the silicon exposed and you don't want CLU, which is conductive, to touch those areas. I've seen people use nail varnish to seal those areas up to provide a barrier. Others are just really, really careful on application.

On some packages the HBM stacks also sit at a different height than the GPU die. This can cause a bigger gap between the cooler plate and the memory, so a thicker thermal compound is needed to fill the gap (liquid metal is too thin).

I personally haven't used it. I just used a really good TIM, GC Extreme, and called it a day.


----------



## Irev

I'm yet to find OC software that actually works correctly.

Sapphire TriXX, ASUS GPU Tweak and MSI AB all have issues.

Why can't AMD add saving profiles that are automatically applied on startup? Wattman would actually be more useful.

OverdriveNTool is detected as a virus on my machine, which is why I don't use it.


----------



## Chaoz

Quote:


> Originally Posted by *The EX1*
> 
> On some packages the HBM stacks also sit at a different height than the GPU die. This can cause a bigger gap between the cooler plate and the memory, so a thicker thermal compound is needed to fill the gap (liquid metal is too thin).


Not necessarily. If you use liquid metal like Conductonaut, you apply it on both the die and the cooler, so any gap that might exist will be filled. You should be fine.


----------



## TrixX

Quote:


> Originally Posted by *Irev*
> 
> I'm yet to find OC software that actually works correctly,
> 
> sapphire trix, asus gpu tweak and msi ab all have issues -
> 
> why cant AMD add in saving profiles and automatic applied on startup? wattman would actually be more useful.
> 
> Overdrive NT Tool is detected as a virus on my machine which is why I dont use it.


It's not a virus, it's a false positive. Whitelist it and OC/UV to one's heart's content, as it's the only tool that works right. If you don't want to do that, there's not a lot we can do to help you.

P.S. It's not a virus.


----------



## Chaoz

I use OverdriveNTool as well; my antivirus doesn't flag it at all, because it's not a virus. It works great, a lot better than Wattman/WattTool, which always reset my settings after a reboot, which OverdriveNTool does not.


----------



## Irev

Quote:


> Originally Posted by *Chaoz*
> 
> I use OverdriveNTool as well; my antivirus doesn't flag it at all, because it's not a virus. It works great, a lot better than Wattman/WattTool, which always reset my settings after a reboot, which OverdriveNTool does not.


I may have to whitelist it. What is the official website to download the tool? I'll think about giving it another try.


----------



## Chaoz

Quote:


> Originally Posted by *Irev*
> 
> I may have to whitelist it. What is the official website to download the tool? I'll think about giving it another try.


I got it from Guru3D forums. I used the zippyshare link.

https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/


----------



## pmc25

Quote:


> Originally Posted by *Chaoz*
> 
> Not particularly, if you use Liquid Metal, like Conductonaut, you need to apply it on both the die and the cooler. So any gap that might exist will be filled. So you should be fine.


I've never used Conductonaut (which is significantly less liquid I think?), but in all the applications I've done of Liquid Metal Ultra, I haven't applied it to the cooler.


----------



## laczarus

question regarding conductonaut:
I recall reading that it damages the die slowly over time. Is this accurate?


----------



## pmc25

Quote:


> Originally Posted by *laczarus*
> 
> question regarding conductonaut:
> I recall reading that it damages the die slowly over time. Is this accurate?


Wouldn't surprise me if there's very minor corrosion, but doubt there's anything major.

CL Liquid Pro had major issues, and they kept it on the market way too long IMO.

Ultra I've had zero issues with. All it does is eat the laser etching on the heat spreaders a little bit after 3 or 4 years.


----------



## PontiacGTX

So far the games that benefit from HBCC are:

CoD IW (and newer)
Mirror's Edge Catalyst
Middle-earth: Shadow of Mordor
Gears of War 4
Middle-earth: Shadow of War
Titanfall 2
Rise of the Tomb Raider
Watch Dogs 2
Quantum Break


----------



## Trender07

Quote:


> Originally Posted by *Chaoz*
> 
> Did some undervolting today.
> It's damn stable at these clocks and temps don't even go over 30°C. I'm happy
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Might tweak a bit more later on.


How can you maintain that XD
Running atm P6 [email protected] mV and P7 [email protected] mV.
But I can't run stock clocks like you (1630) even at 1000 mV, I need like 1012 mV, so for only 30MHz more I prefer to UV to 985 mV.


----------



## Chaoz

Quote:


> Originally Posted by *Trender07*
> 
> How can you maintain that XD
> Runnning atm p6 [email protected] mV and p7 [email protected] mV
> But can't stock clocks like u (1630) even at 1000 mv, need like 1012 mV, so for 30 Mhz more only I prefer UV to 985 mV


No clue, tbh. Don't even have stability issues. Might be because I watercooled my GPU? Or just lucky, I guess.


----------



## PontiacGTX

Quote:


> Originally Posted by *Trender07*
> 
> How can you maintain that XD
> Runnning atm p6 [email protected] mV and p7 [email protected] mV
> But can't stock clocks like u (1630) even at 1000 mv, need like 1012 mV, so for 30 Mhz more only I prefer UV to 985 mV


it drops to P-State5?


----------



## Paul17041993

Are any Vega owners currently getting weird system lock-ups or reboots, occasionally or rarely, when launching games? Just curious if anyone has encountered stability issues due to the drivers, or if it's simply my motherboard (which is going to be replaced soon). All stock clocks, FYI.


----------



## TrixX

Quote:


> Originally Posted by *Paul17041993*
> 
> Have or are any vega owners currently getting weird system lock-ups or reboots on occasion or rarely when launching games? just curious if anyone has encountered stability issues due to the drivers or if it's simply my motherboard (which is going to be replaced soon), all stock clocks fyi.


Actually, I'm not surprised if you get driver crashes at stock clocks. The stock P6/P7 voltages are likely too high for the card to run within its thermal/power limits, so you'll likely see some heavy core throttling in high-load situations.

Have you tried undervolting to find the settings your card runs smoothly at? I'd also suggest using ClockBlocker to maintain the P7 state for testing and 3D applications; otherwise, when undervolting, the card can get stuck in P5, as it has a higher mV than an undervolted P6/P7 in some cases.

Here's my UV setup using OverdriveNTool for comparison:



I'm using the LC BIOS hence the higher clocks and lower temp limits. Just use the stock clocks for your card and work out the low point of the voltages for the P6/P7 and HBM voltage settings.

Also, the HBM voltage is the voltage floor for the GPU, so setting it higher than P6/P7 negates the settings in those areas. I usually match P6 to the HBM voltage and have P7 a little higher than P6.
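The "work out the low point of the voltages" step can be sketched as a simple walk-down loop. This is a minimal sketch of the workflow, not a real tool: `run_stress_test` is a hypothetical stand-in for whatever stability check you actually use (a Superposition run, for instance).

```python
# Minimal sketch of the undervolting workflow described above.
# run_stress_test is a hypothetical callback: it returns True if the
# card survives a stress run at the given voltage.

def find_lowest_stable_mv(start_mv, floor_mv, step, run_stress_test):
    """Walk the voltage down in `step` increments until the stress
    test fails or the floor is reached; return the last stable value."""
    mv = start_mv
    while mv - step >= floor_mv and run_stress_test(mv - step):
        mv -= step
    return mv

# Toy example: pretend the card is stable down to 950mV.
lowest = find_lowest_stable_mv(
    start_mv=1100, floor_mv=900, step=25,
    run_stress_test=lambda mv: mv >= 950,
)
print(lowest)  # 950
```

In practice you'd do the real stress run by hand between steps; the sketch just captures the stop condition (back off one step once a run fails).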


----------



## Rootax

How the hell are you stable at 1750 with 950mv?


----------



## Ne01 OnnA




----------



## TrixX

Quote:


> Originally Posted by *Rootax*
> 
> How the hell are you stable at 1750 with 950mv


With water it'd be happy at 1852MHz with 1200mv, methinks, maybe more. It's a pretty mad card.









It gets to 1680MHz max in games and 1580MHz in Superposition. Though with the crashing due to CPU testing I've borked something in Windows preventing ClockBlocker from working


----------



## Rootax

Quote:


> Originally Posted by *TrixX*
> 
> With water It'd be happy at 1852MHz with 1200mv methinks, maybe more. It's a pretty mad card
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It gets to 1680MHz max in games and 1580MHz in Superposition. Though with the crashing due to CPU testing I've borked something in Windows preventing ClockBlocker from working


My FE isn't 100% stable at 1700 with 1.2v. It's stable in the actual games I play, I can bench everything, it's fine, Heaven for hours... But the 3DMark Time Spy stress test crashes (sometimes 4 minutes in, sometimes 16...). It's OK with 1.235v, but it's not worth it imo. The card is under water, temps are fine. But I prefer 1667 at 1.150v (HBM2 at 1050MHz). It's a little less performant, but it's a "lot" cooler, with less heat dumped into my loop. Haven't tried 1.125v yet. Will do next weekend, it takes time to fully test stability :/


----------



## TrixX

Yeah, the limit for the LC cards is 1250mv, so they can pull some insane wattage at that voltage. Will need to test what's possible under water. I just know that if I run my bench settings, I can't live with the 4900RPM fan.


----------



## Newbie2009

Quote:


> Originally Posted by *TrixX*
> 
> With water It'd be happy at 1852MHz with 1200mv methinks, maybe more. It's a pretty mad card
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It gets to 1680MHz max in games and 1580MHz in Superposition. Though with the crashing due to CPU testing I've borked something in Windows preventing ClockBlocker from working


Quote:


> Originally Posted by *Rootax*
> 
> How the hell are you stable at 1750 with 950mv


Just the target clock. For comparison, 1000mv at 1647MHz (a 1% OC from stock) hits around 1610 in Superposition for me.

It is surprising he doesn't just crash though, setting clocks so high at those volts. (If I set more than a 1% OC at 1000mv it just hard crashes in some programs.)


----------



## gamervivek

The new Afterburner beta allows 200mV changes to all P-states. I've tried locking the min/max to a single P-state in Wattman and then undervolting from Afterburner.

The results are encouraging in that you don't have to push the power limit.


----------



## Paul17041993

Quote:


> Originally Posted by *TrixX*
> 
> Actually not surprised if you get driver crashes with stock clocks. You are likely suffering from the card being too high mv in P6/P7 for the card to run within thermal/power limitations so will likely see some heavy core throttling in high load situations.
> 
> Have you tried Undervolting to find the situation that your card runs smoothly in? Also I'd suggest using clockblocker to maintain P7 state for testing and 3D applications. Otherwise when undervolting it can get stuck in P5 as it has higher mv compared to an undervolted P6/P7 in some cases.
> 
> Here's my UV setup using OverdriveNTool for comparison:
> 
> 
> 
> I'm using the LC BIOS hence the higher clocks and lower temp limits. Just use the stock clocks for your card and work out the low point of the voltages for the P6/P7 and HBM voltage settings.
> 
> Also the HBM is the voltage floor for the GPU so setting that higher than P6/P7 negates the settings in those areas. I usually match P6 to the HBM and have P7 a little higher than P6.


Oh no, the performance and temperatures are perfectly fine; I'm just wondering if anyone's getting random system reboots on occasion.

Though I'm still pretty sure it's the motherboard and not the card: if it were just drivers it should BSOD properly, and likewise if it were the card itself it would usually just black-screen and not reboot...


----------



## kundica

Quote:


> Originally Posted by *Paul17041993*
> 
> Have or are any vega owners currently getting weird system lock-ups or reboots on occasion or rarely when launching games? just curious if anyone has encountered stability issues due to the drivers or if it's simply my motherboard (which is going to be replaced soon), all stock clocks fyi.


I've been getting random black screens since I updated to 17.10.1 but I'm also running the Fall creators update so it could be related to that. It almost always happens when my computer is just sitting idle or occasionally when I launch a benching app.


----------



## madmanmarz

That's weird, wattman doesn't ever reset my settings, clocks or voltages unless it's not stable and crashes.
Quote:


> Originally Posted by *Chaoz*
> 
> I use OverdriveNTool as well; my antivirus doesn't flag it at all, because it's not a virus. It works great, a lot better than Wattman/WattTool, which always reset my settings after a reboot, which OverdriveNTool does not.


----------



## Trender07

Quote:


> Originally Posted by *PontiacGTX*
> 
> it drops to P-State5?


It just crashes at stock clock + 1000 mV, so I'd need 1012 mV; rather than that, I'll take -30 MHz and run it at 1602 MHz @ 985 mV.


----------



## Chaoz

Quote:


> Originally Posted by *madmanmarz*
> 
> That's weird, wattman doesn't ever reset my settings, clocks or voltages unless it's not stable and crashes.


Mine did and they were stable. I had the exact same settings as I had in OverdriveNTool and it kept resetting after a reboot.


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> That's weird, wattman doesn't ever reset my settings, clocks or voltages unless it's not stable and crashes.


I don't think Wattman managed to keep any settings through a reboot even once for me. Something would always change, whether it was the fan profile, voltages, or power target.

Only reliable tool so far has been OverdriveNTool for me. 100% works. Except when I crashed hard and corrupted Windows, then it didn't work, mostly 'cos Windows didn't either.


----------



## Paul17041993

Quote:


> Originally Posted by *kundica*
> 
> I've been getting random black screens since I updated to 17.10.1 but I'm also running the Fall creators update so it could be related to that. It almost always happens when my computer is just sitting idle or occasionally when I launch a benching app.


Do your USB devices black out and the system reboot within seconds when that happens? Or does it take much longer to reboot, or not reboot at all?


----------



## madmanmarz

Quote:


> Originally Posted by *TrixX*
> 
> I don't think Wattman managed to keep any settings through reboot once for me. Always changing something whether it was fan profile, voltages, power target, something would always change.
> 
> Only reliable tool so far has been OverdriveNTool for me. 100% works. Except when I crashed hard and corrupted Windows, then it didn't work, mostly cos Windows didn't


Maybe they fixed it?


----------



## cephelix

I'm back!! Well, talked to my guy yesterday and it seems there's no stock for the Vega 56, but he'll ask around in case someone has it. Some guy online, though, is selling 10x MSI Vega 64 air-cooled for SGD30 more than what a brand-new Vega 56 is going for. Pretty sure they were used for mining, and it might be a crapshoot as to the longevity of the cards depending on the conditions he mined them under. Anyone here have the MSI Vega 64? Component-quality-wise it shouldn't matter since all of them are reference, right?


----------



## steadly2004

Quote:


> Originally Posted by *cephelix*
> 
> I'm back!! Well, talked to my guy yesterday and it seems that there's no stock for the vega 56. But he'll ask around if someone has it in stock. Some guy online though is selling 10xMSI Vega 64 air cooled for SGD30 more than what a brand new Vega 56 is going for. Pretty sure it was used for mining and might be a crapshoot as to the longevity of the card depending on the situation he mined them under. Anyone here have the MSI Vega 64? Quality of components wise it should not matter since all of them are reference right?


Yes, all reference cards should be very comparable. I have a Sapphire and a PowerColor. I think the reference cards are all about the same, silicon lottery and all. My PowerColor is the better overclocker.


----------



## cephelix

Quote:


> Originally Posted by *steadly2004*
> 
> yes all reference should be very comparable. I have a Sapphire and power color. I think the reference are all about the same, silicon lottery and all. My power color is a better over clocker.


Yeah, silicon lottery... and I've never been particularly lucky, though some guy did mention that my 4790K at 4.7GHz was pretty good, so at least I have that going for me. Lol. I've messaged the seller and am awaiting his response.


----------



## astrixx

Picked up myself a MSI RX Vega 64 Wave!


----------



## Soggysilicon

Quote:


> Originally Posted by *TrixX*
> 
> With water It'd be happy at 1852MHz with 1200mv methinks, maybe more. It's a pretty mad card
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It gets to 1680MHz max in games and 1580MHz in Superposition. Though with the crashing due to CPU testing I've borked something in Windows preventing ClockBlocker from working


Absolutely no way!









Vega starts going squirrelly anywhere north of 1750. You don't need water, you need phase change or direct-pot LN2. Going into the age of "packed signals", or simply using an encoding scheme, error recovery becomes ever more nontrivial. Even if the drivers themselves don't flat-out crash or hang, your application will... Prey is a perfect example... had it stop the .exe, had it give me memory exceptions... now just sitting around waiting for some driver tweaks... I don't think there is a whole lot left in the "raw performance" tank when it comes to Vega.


----------



## steadly2004

Quote:


> Originally Posted by *Soggysilicon*
> 
> Absolutely no way!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Vega starts going squirrelly anywhere north of 1750. You don't need water, you need phase change or direct-pot LN2. Going into the age of "packed signals", or simply using an encoding scheme, error recovery becomes ever more nontrivial. Even if the drivers themselves don't flat-out crash or hang, your application will... Prey is a perfect example... had it stop the .exe, had it give me memory exceptions... now just sitting around waiting for some driver tweaks... I don't think there is a whole lot left in the "raw performance" tank when it comes to Vega.


My Vega was holding 1800 or so; didn't write it down or screenshot it... while my son was playing Prey on my Vega via Steam Link. I had to adjust Wattman when he was done, switching to custom (instead of Balanced) to go back to mining when not in use. Maybe I was seeing things. But I'll definitely have to re-test in the future. I'm pretty sure it was 1802 or something like that on the core. Perhaps it was a fluke, or perhaps I am tripping. I don't know.


----------



## IvantheDugtrio

Well I did a more thorough leak test and sure enough I found the leak came from my alphacool radiator. I thought it was weird how the fittings were dry when I found the leak. It's leaking around one of the middle conduits of the rad at the joint between the conduit and the collector.

I hope they have a somewhat decent warranty.


----------



## TrixX

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Well I did a more thorough leak test and sure enough I found the leak came from my alphacool radiator. I thought it was weird how the fittings were dry when I found the leak. It's leaking around one of the middle conduits of the rad at the joint between the conduit and the collector.
> 
> I hope they have a somewhat decent warranty.


Should have a fairly strong case for reimbursement of your Vega FE too...


----------



## Rootax

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Well I did a more thorough leak test and sure enough I found the leak came from my alphacool radiator. I thought it was weird how the fittings were dry when I found the leak. It's leaking around one of the middle conduits of the rad at the joint between the conduit and the collector.
> 
> I hope they have a somewhat decent warranty.


Ah fu** :/

Was it a new rad? Or an "old" one which suddenly leaked?


----------



## nolive721

Sold my RX480 last month to fund an upgrade to complement my 3x1080p monitor setup.

The shortlist would be a 1070 OC or the upcoming 1070 Ti, but I would like to include a Vega card as well, namely a 56, if AIB models become available anytime between now and January.

Is that happening at all?


----------



## TrixX

Quote:


> Originally Posted by *nolive721*
> 
> Sold my RX480 last month to fund an upgrade to complement my 3x1080p monitor setup.
> 
> The shortlist would be a 1070 OC or the upcoming 1070 Ti, but I would like to include a Vega card as well, namely a 56, if AIB models become available anytime between now and January.
> 
> Is that happening at all?


There will be AIB models #soon, Gigabyte, ASUS and MSI are confirmed so far.


----------



## tarot

Spoiler: Warning: Spoiler!






Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *PontiacGTX*
> 
> So far the games benefit from HBCC are
> 
> CoD IW(and newer)
> Mirrors Edge Catalyst
> Middle earth Shadow of Mordor
> Gears of wars 4
> Middle earth Shadow of war
> Titanfall 2
> The rise of the tomb raider
> Watch Dogs 2
> Quantum Break





Any figures with that? I have ROTTR (and Shadow of Mordor) so I can give that a run... and at what resolution? I'm thinking the higher-res games should benefit more.
Quote:


> Originally Posted by *TrixX*
> 
> There will be AIB models #soon, Gigabyte, ASUS and MSI are confirmed so far.






I wouldn't get too excited about AIB cards if the ASUS is anything to go by. I just wish they would get the price to a better point; if they drag it down to 600-odd here I will get another one and another block.









As for the black screen / system reboot etc. questions, the short answer is no,
but
I have not gone to 17.10 yet.
So far the only thing that happens is a driver crash if I clock it wrong.

Now a quick question: I asked a similar one on the Threadripper thread... I have my 2 loops set up to go from pump straight to block, then radiator... I had it the other way around and thought it wasn't doing too well, but after seeing some setups they run pump to rad then block... which is better?


----------



## spyshagg

How do these cards fare with Racing Sims VS nvidia? Assetto, RaceRoom, Pcars2, iRacing?


----------



## tarot

Quote:


> Originally Posted by *spyshagg*
> 
> How do these cards fare with Racing Sims VS nvidia? Assetto, RaceRoom, Pcars2, iRacing?


Yeah... I know, maybe one of those. Project CARS 2?





Haven't looked at it yet, I will edit when I do.
Apart from that, the new Forza is looking good.









https://hothardware.com/news/amd-radeon-rx-vega-64-forza-7-geforce-gtx-1080-ti

After looking at the video, and guessing he is running dead stock and probably Balanced, it seemed to do quite well.

But then you mentioned VR...
different story, I guess... hell, I have a VR setup here, just can't be assed dragging it out of the box.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> How do these cards fare with Racing Sims VS nvidia? Assetto, RaceRoom, Pcars2, iRacing?


OOOOH yes that's a good question









Well, in iRacing with ClockBlocker active I have zero issues, though tbh even without CB active I still get 144FPS+ on the Nordschleife with pretty much max settings at ~700MHz on the core (heavily CPU bound).

It loves pCARS 2, though I strongly recommend limiting FPS in the driver to 300, as in the menus you'll hit 3K FPS+ and instant coil whine. I also strongly suggest using ClockBlocker here, as the card has a tendency to drop down P-states and stutter a bit as it jumps back up. With CB on it's a steady 1600+MHz core and only the GPU load fluctuates.

Haven't tested Assetto or RaceRoom yet, though I tend to stick to iRacing and pCARS 2 at the moment.

As for an Nvidia comparison, well, I can't do that, but the image quality is the usual AMD, so generally better than Nvidia.


----------



## spyshagg

Oh god, this is much worse than I thought.

pcars2 VR performance vs 1080ti

vega 64 LC:





1080ti





I have the Vega 64 in a shopping cart right now, just waiting to confirm the order. But this doesn't look too hot.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> oh god this is much worse than I thought.
> 
> pcars2 VR performance vs 1080ti
> 
> vega 64 LC:
> 
> 
> 
> 
> 
> 1080ti
> 
> 
> 
> 
> 
> I have the Vega 64 in a shopping cart right now, just waiting to confirm the order. But this doesn't look too hot.


Not really a direct comparison; the Ti isn't the card the Vega 64 competes with...

Also, pCARS 2 is Nvidia-biased. I very much doubt the Vega 64 had been tweaked properly for those results, either.


----------



## spyshagg

Quote:


> Originally Posted by *TrixX*
> 
> Not really a direct comparison, the Ti isn't the card the Vega 64 competes with...
> 
> Also pCARS 2 is Nvidia biased. I very very much doubt that Vega64 has been tweaked properly with those results too.


Well, it's the price bracket that counts as a buyer. For 100€ more I can get the Ti, hence my questions. Thanks for the help though.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *spyshagg*
> 
> oh god this is much worse than I thought.
> 
> pcars2 VR performance vs 1080ti
> 
> vega 64 LC:
> 
> 
> 
> 
> 
> 1080ti
> 
> 
> 
> 
> 
> I have the Vega 64 in a shopping cart right now, just waiting to confirm the order. But this doesn't look too hot.






Now, saying that, you are comparing a 12/1400-dollar Ti to an LC Vega, which is way overpriced. I would stick with an air card, put up with the noise, tweak it, and save 4/500 bucks versus the Ti (at least here).
Hell, if you have water-cooling already, a block is 140 here, so add that to the 729 I paid and it's still only the price of a 1080 here.

If I were you and I needed a card right now, I would find the cheapest Ti I could... even second hand... and go that way, especially if the VR is not up to snuff.


----------



## plywood99

Quote:


> Originally Posted by *spyshagg*
> 
> oh god this is much worse than I thought.
> 
> pcars2 VR performance vs 1080ti
> 
> vega 64 LC:
> 
> 
> 
> 
> 
> 1080ti
> 
> 
> 
> 
> 
> I have the Vega 64 in a shopping cart right now, just waiting to confirm the order. But this doesn't look too hot.


If you only play one game......


----------



## spyshagg

Quote:


> Originally Posted by *plywood99*
> 
> If you only play one game......


Mostly VR, yes.

I found a relevant review here: http://www.babeltechreviews.com/rx-vega-64-liquid-10-vr-games-vs-the-gtx-1080-gtx-1080-ti/

I'm not trying to step on everybody's toes here. It's a good card, but I just need to pick the better option for the games I play.


----------



## spyshagg

Quote:


> Originally Posted by *tarot*
> 
> 
> Now, saying that, you are comparing a 12/1400-dollar Ti to an LC Vega, which is way overpriced. I would stick with an air card, put up with the noise, tweak it, and save 4/500 bucks versus the Ti (at least here).
> Hell, if you have water-cooling already, a block is 140 here, so add that to the 729 I paid and it's still only the price of a 1080 here.
> 
> If I were you and I needed a card right now, I would find the cheapest Ti I could... even second hand... and go that way, especially if the VR is not up to snuff.


I'm agreeing with your final statement. On the first part, however: I am comparing a 550€ Vega 64 (Mindfactory) to a 630€ 1080 Ti I can get new here.


----------



## Newbie2009

Quote:


> Originally Posted by *spyshagg*
> 
> Well its the price bracket that counts as a buyer. For 100€ more I can get the Ti, hence my questions. Thanks for the help though


I got my 64 for the price of a 1070, 500€

A 1080ti would have been another 250-300€, so people will view things differently when comparing.


----------



## pmc25

Interestingly, in 17.10.1, in PUBG, the over-tessellation has ceased, the title screen no longer heats the GPU more than FurMark, and AA has stopped being a huge drain on performance.

2560x1440, ultra textures, ultra AA, and everything else very low or off, average FPS is now well over 100. Before, AA absolutely nuked performance, as in most games (PubG uses MSAA), now there's barely a difference between low and ultra, and only a small one from very low (off).

GPU utilisation is also WAY down. 60-80% most of the time. Even lower at 1920x1080. Even at 4K it's only 70-85% most of the time.

It's now behaving like the 1080Ti (in being hugely CPU bound), with low utilisation, and similar if not faster performance at 1920x1080 and 2560x1440 (4K it's significantly slower I think due to the ever present bandwidth issues).

This on Vega64 under an EKWB block.

Aside from general tuning and improvement, it looks to me like the AMD driver team have finally managed to get a handle on non-shader based AA, at least in some games. It's no longer crippling.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *spyshagg*
> 
> I'm agreeing with your final statement. However the first part, I am comparing a 550€ vega64 (mindfactory) to a 630€ 1080ti I can get new here.





100 percent agree, the LC version is way too overpriced for really no big performance gain, and if that's the price difference I see no point getting the Vega (I would, but I hate Nvidia with a fiery passion that consumes my soul) but that's me.









That's really a blanket UE4 issue; that engine is too tuned to Nvidia and Intel and always has been.
They need to open it up to more cores and better tune it for other graphics cards.

A prime example of how to take a dump on a good engine is ARK: Survival Evolved, which has run like crap from day one regardless of what you throw at it, and the DX12 version "coming soon" for like a year or something just irks me.

But it also seems to be a bit of a trend.
Game devs really need to up their game (pun intended).
Hell, look at what Forza 7 is doing.
Not that there is really a lot of info,

but the precursor looks good.
Why can't they all just get moving...

Hell, Doom's another one that seems to buck the trend; I can get 90fps at 4K ultra in that game on the Vega, nothing to sneeze at.

As for tessellation, that has been AMD/ATI's Achilles heel for I don't know how long. I mean really, what does it take to fix that problem...


Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *pmc25*
> 
> Interestingly, in 17.10.1, in PubG, the over-tesselation has ceased, title screen no longer heats the GPU more than Furmark, and AA has stopped being a huge drain on performance.
> 
> 2560x1440, ultra textures, ultra AA, and everything else very low or off, average FPS is now well over 100. Before, AA absolutely nuked performance, as in most games (PubG uses MSAA), now there's barely a difference between low and ultra, and only a small one from very low (off).
> 
> GPU utilisation is also WAY down. 60-80% most of the time. Even lower at 1920x1080. Even at 4K it's only 70-85% most of the time.
> 
> It's now behaving like the 1080Ti (in being hugely CPU bound), with low utilisation, and similar if not faster performance at 1920x1080 and 2560x1440 (4K it's significantly slower I think due to the ever present bandwidth issues).
> 
> This on Vega64 under an EKWB block.
> 
> Aside from general tuning and improvement, it looks to me like the AMD driver team have finally managed to get a handle on non-shader based AA, at least in some games. It's no longer crippling.






Anyway, sorry, had to rant.








As a side note... who here has UT4 and how does it go for you? Another game that could use a multi-core boost,







but oh so pretty and fast


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> I'm agreeing with your final statement. However the first part, I am comparing a 550€ vega64 (mindfactory) to a 630€ 1080ti I can get new here.


Honestly, if you are that close on price, then the Ti is the better card for the short term with current VR. Much better, in fact.

Over the longer term I think Vega will claw some of that back, but unlikely in pCARS 2 with Ian Bell at the helm.

Seeing as I got my V64 for $669 AUD ($530 USD at the time), it was a bargain, as the 1070s were more expensive. The 1080 Ti is over double that here. So it's purely dependent on source price. If you can grab a 56 or 64 Air at close to MSRP (they do pop up now and then) with the reference cooler and put a water block on it, with the correct tweaking it could be awesome. The 1080 Ti is a much more guaranteed result though, especially for VR.

I'm with tarot on getting devs to actually use the new tech and optimise games properly...


----------



## cephelix

So it seems like I can't get a Vega 56, since it's OOS locally and the distributor isn't planning to bring any in because of low sales volume. The 64 is just way out of my price range. I'll just live vicariously through you guys till Navi...


----------



## cplifj

Well.... I got me one after a lot of thinking.

Bought a way overpriced Gigabyte Radeon Vega 64 Liquid Cooled edition, at 819€.

BUT my ears are going to thank me till the day I die, coming from the ASUS R9 290X reference edition. That card has served me well and still isn't performing all that badly, even compared to the new Vega 64.

I am also counting on the fact that this card's full potential has yet to be unleashed, like it was with the 290X before.

Cheers.


----------



## TrixX

Interesting info for Vega









Also interesting that RX Vega PowerPlay tables work for FE and make it better too.


----------



## kundica

Quote:


> Originally Posted by *TrixX*
> 
> Interesting info for Vega
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also interesting that RX Vega PowerPlay tables work for FE and make it better too.
> 
> 
> 
> Spoiler: Warning: Spoiler!


I've been thinking about switching to a powerplay edit instead of using the LC bios. I currently have Linux and macOS on 2 other drives but my particular card is unstable at the default LC settings causing crashes when running anything graphically intensive.


----------



## TrixX

Quote:


> Originally Posted by *kundica*
> 
> I've been thinking about switching to a powerplay edit instead of using the LC bios. I currently have Linux and macOS on 2 other drives but my particular card is unstable at the default LC settings causing crashes when running anything graphically intensive.


Is that even after downclocking and undervolting to suit?


----------



## kundica

Quote:


> Originally Posted by *TrixX*
> 
> Is that even after downclocking and undervolting to suit?


I know of no way to downclock and undervolt Vega in macOS. There might be a way in Linux but I haven't done enough research to know for certain.

If you meant to make it stable in Windows I have the Air 64 card with the LC bios and a full custom loop. I downclock p7 to 1702 for gaming which runs stable in everything. It'll bench fine at 1722 but some games crash even if I bump up the voltage. My card doesn't like undervolting much at all and will crash trying to run at the clocks I currently have set if I lower voltage. HBM runs fine at 1100.

Running the powerplay mod with the Air bios would allow me to push the card to the same levels I currently do in Windows but not sacrifice the stability in other OSes.

I get great performance at my current OC settings. Here's an idea of how my card runs in Windows at 1722 with driver 17.10.1. I could probably do about 20 points better on 17.9.3, but I'd have to flash an older mobo BIOS, because I get stutter depending on the combination of mobo BIOS/GPU driver I use.


----------



## punchmonster

Anyone benchmarked Fall Creator's Beta Update drivers yet?


----------



## TrixX

Sorry, didn't realise you were on macOS. I don't have any Macs so I couldn't help, though without something like OverdriveNTool I'm not sure you can, unless there's Wattman...

I think our cards are quite different in their reaction to voltage and the LC BIOS.


----------



## surfinchina

Quote:


> Originally Posted by *kundica*
> 
> I know of no way to downclock and undervolt Vega in macOS. There might be a way in Linux but I haven't done enough research to know for certain


In macOS there are kexts (5000-series graphics kexts) for Vega that have Wattman information and settings in them. I have no idea about these things, but I'm sure someone clever could figure out how to change them to get an over- or underclock.


----------



## kundica

Quote:


> Originally Posted by *punchmonster*
> 
> Anyone benchmarked Fall Creator's Beta Update drivers yet?


My post above was on the Fall update.

I haven't noticed much difference in Superposition though, the big bench gain was in Time Spy. The Fall update handles the CPU threads more efficiently so I'm seeing higher CPU scores there. I'll post some results later.

Quote:


> Originally Posted by *surfinchina*
> 
> In the Mac OSx there are kexts (5000 series graphics kexts) for the vega that have wattman information and settings in them. I have no idea about these things, but I'm sure someone clever could figure out how to change these to get an over or underclock


Thanks for the info. It'll be interesting to see what comes of this.


----------



## Flaxen Hegemony

Hi all,

Great to find a thread on this card here at Overclock. Currently reading through this thread and learning, but had a few questions, one of them stupid.

Dumb question first: I'm looking all over the place for the "switch" that makes the water-cooled Vega FE use the 350-watt power mode. The only candidate is a tiny DIP switch near the water inlet hoses. Is that it?

Other question concerns a comment I saw on a Youtube video, by "Rick's Performance Computing". He's a rich dude who builds obscenely cool builds for fun. Anyway, in one video he was working with the Vega FE air-cooled version, and made a comment about the "silicon being better" than the water-cooled version. Does he mean it somehow has a lower volt/temp profile because AMD binned their better parts into the FE air-cooled? That seems to make some sense, until you consider that the water-cooled FE has an operating mode the air cooled doesn't have.

In other words, is the Vega FE Water-cooled SKU == Vega FE Air-cooled SKU + added water? Or is the FE water-cooled still a better performing unit than air cooled on water?


----------



## rancor

Quote:


> Originally Posted by *Flaxen Hegemony*
> 
> Hi all,
> 
> Great to find a thread on this card here at Overclock. Currently reading through this thread and learning, but had a few questions, one of them stupid.
> 
> Dumb question first, I'm looking all over the place for the "switch" that makes the Vega FE water-cooled use the 350 watt power mode. The only candidate is a tiny dip switch near the water inlet hoses. Is that it?
> 
> Other question concerns a comment I saw on a Youtube video, by "Rick's Performance Computing". He's a rich dude who builds obscenely cool builds for fun. Anyway, in one video he was working with the Vega FE air-cooled version, and made a comment about the "silicon being better" than the water-cooled version. Does he mean it somehow has a lower volt/temp profile because AMD binned their better parts into the FE air-cooled? That seems to make some sense, until you consider that the water-cooled FE has an operating mode the air cooled doesn't have.
> 
> In other words, is the Vega FE Water-cooled SKU == Vega FE Air-cooled SKU + added water? Or is the FE water-cooled still a better performing unit than air cooled on water?


The BIOS switch is the small switch near the HDMI/DisplayPort outputs. This raises the power limit to 260W, which can then be increased to 390W using the +50% power limit slider. For me the card uses around 350W at 1660MHz.

As for the comment about the silicon being better, I'm not sure it's true. The LC cards come with higher stock clocks, and it seems like most air-cooled cards on water cooling cannot reach those clocks without crashing. That would make me believe the water-cooled cards are using better, higher-clocking chips. It's possible the air-cooled cards have lower-leakage parts to reduce power consumption in power- and temp-limited situations, but the LC cards seem to clock higher.
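The arithmetic behind those numbers (a 260W base limit plus the +50% power slider giving a 390W ceiling) can be sketched as below; the function name is illustrative, not from any driver API:

```python
# Rough sketch of the power-limit math described above: the BIOS sets a
# base board power limit, and the Wattman power slider scales it by a
# percentage. Names and values are illustrative only.

def power_ceiling(base_watts: float, slider_percent: float) -> float:
    """Effective power ceiling after applying the power-limit slider."""
    return base_watts * (1 + slider_percent / 100)

print(power_ceiling(260, 50))  # 390.0 -> matches the quoted 260W + 50% figure
```

The ~350W draw mentioned at 1660MHz sits comfortably under that 390W ceiling, which is consistent with the card not power-throttling at that clock.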


----------



## Flaxen Hegemony

Quote:


> Originally Posted by *rancor*
> 
> The BIOS switch is the small switch near the HDMI/DisplayPort outputs. This raises the power limit to 260W, which can then be increased to 390W using the +50% power limit slider. For me the card uses around 350W at 1660MHz.
> 
> As for the comment about the silicon being better, I'm not sure it's true. The LC cards come with higher stock clocks, and it seems like most air-cooled cards on water cooling cannot reach those clocks without crashing. That would make me believe the water-cooled cards are using better, higher-clocking chips. It's possible the air-cooled cards have lower-leakage parts to reduce power consumption in power- and temp-limited situations, but the LC cards seem to clock higher.


Your explanation of the clocks and benchmarks makes sense to me, which is why the statement struck me as odd. If the air cooled and water cooled were equivalent at chip level except for a cooler, then AMD was literally just selling an AIO for $500. There has to be a performance difference. I mean, even that may not be worth $500, but still.


----------



## Trender07

Quote:


> Originally Posted by *punchmonster*
> 
> Anyone benchmarked Fall Creator's Beta Update drivers yet?


No but for now I have them and:

- It now detects all the games on your PC.
- Can't enable FreeSync now, it says not supported -.-


----------



## rancor

Quote:


> Originally Posted by *Flaxen Hegemony*
> 
> Your explanation of the clocks and benchmarks makes sense to me, which is why the statement struck me as odd. If the air cooled and water cooled were equivalent at chip level except for a cooler, then AMD was literally just selling an AIO for $500. There has to be a performance difference. I mean, even that may not be worth $500, but still.


Well, the AIO at least in the US should only be $200 more, but it still seems overpriced. That being said, you are paying mostly for the cooler, and it allows lower temps and greater power consumption.

If you run a custom loop, buying an air card and flashing an LC BIOS makes the best cost sense, but it might not net you the absolute highest clocks. Don't forget that we watercoolers are oddballs that AMD is not marketing to. The AIO cards are there for people who want the best-performing Vega card in stock form.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Rootax*
> 
> Ah fu** :/
> 
> Was it a new rad ? Or an "old" one which suddently leaked ?


It was a brand-new Alphacool NexXxoS ST30 240mm radiator. From the moment I got it I already had an issue with two of the holes not fitting most of my new fittings. In the end only my Bitspower hardline compression fittings would fit properly without cross-threading.

As for the good news, I'm testing out my new RX Vega 56, and it's stunning what the right BIOS can do for Vega. No thermal throttling with the stock cooler on Turbo Wattman settings when folding, getting around 450k PPD. I'll be putting it under water shortly with a single 360mm radiator for the Ryzen 1700 and Vega 56. The 56 runs noticeably cooler at full load than the FE did.


----------



## Reikoji

Fall update beta drivers power readings


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> Fall update beta drivers power readings






Are you running the actual Fall update, as in Windows?
And yeah, those readings are a tad weird









Did you black out the city block or what?


----------



## Soggysilicon

Quote:


> Originally Posted by *steadly2004*
> 
> My Vega was holding 1800 or so; didn't write it down or screenshot it... while my son was playing Prey on my Vega via Steam Link. I had to adjust Wattman when he was done, switching to custom (instead of Balanced) to go back to mining when not in use. Maybe I was seeing things. But I'll definitely have to re-test in the future. I'm pretty sure it was 1802 or something like that on the core. Perhaps it was a fluke, or perhaps I am tripping. I don't know.


What was doing the reporting? Be sure to update whatever it is to the latest and/or greatest version. There were some issues with false reports of "amazing" clocks which are simply not true. Now, to be fair and to clarify, that a Vega hit some high frequency... once... and didn't hose itself is noteworthy, but it doesn't really mean all that much.

By maximal clock, I mean the "average" sustained clock rate over a 10-minute time span in a repeating bench, including the maxima and minima of the frequency curve. HWiNFO64 is very handy for this. This number is going to vary from application to application, but ultimately the goal for me is to realize the best possible clock for the most performance while minimizing driver/application failures. Now, I am fairly tolerant of a dodgy setup if it gets me at least 2 fps at my native res in my minimums. (For FreeSync purposes @ 3440 @ 100Hz, my minimum acceptable low is 48, ideally 55+.)

The reason I call BS is just from my own experience and hours of tweakin' n' benching on the sig rig with a Vega 64 Ref on the LC BIOS, under an EKWB block; 600mm of rad, 3/4 gal of fluid, 18W pump; in a positive-pressure room with a dedicated LG inverter and 2 air-circulation tower fans.


Spoiler: Warning: Spoiler!







1737 held for 22 mins


Spoiler: Warning: Spoiler!







1796ish application crash

1710-1720s has been (in my experience, your mileage may vary) a physically realizable frequency that is perfectly "game-able". Not sure I would trust it mining or doing significant work for $$$... drop another 10mhz for reliability. To go further is to get into reg. tweaking the core voltage... which looks dodgy.
Quote:


> Originally Posted by *Flaxen Hegemony*
> 
> Your explanation of the clocks and benchmarks makes sense to me, which is why the statement struck me as odd. If the air cooled and water cooled were equivalent at chip level except for a cooler, then AMD was literally just selling an AIO for $500. There has to be a performance difference. I mean, even that may not be worth $500, but still.


Well, even at that, there are not all that many AIOs which can be left at stock settings and still hold up with +50 power... most that I have seen crash. Vega is only going to clock up if there is suitable power and voltage in reserve. At the stock AIO setting you're never going to see that P7 except once in a blue moon... it's near pointless. P6 can help out there, once you find your Vega's P7 limit, or... it finds your patience threshold with all the rebooting.

Frame rates are all!


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> Amd software stop working when you switch ? If so, it's because of rtss, you need to exclude the process amd is using to switch drivers. I'm not in front of my pc right now, but you can find the name in the task manager or event log when it's crashing.


Well, I somehow got it to prompt me now, and I have it working so far. I removed one GPU but still seemed to have issues, then in the middle of trying drivers I got a Windows update. After all that I reloaded 17.10 over 17.10 and it finally prompted me for driver options... thank god...

So anyways, it's late here, but I'll test it with both GPUs tomorrow and see if that has anything to do with it.


----------



## dagget3450

Alright, confirmed: with a 2nd Vega FE installed I lose "driver options", and not only that, it defaults to the Radeon Pro UI even if I am on the gaming driver. I guess I'll try reporting this to them, but I won't hold my breath.

Gah, I knew it was too good to be true. Back to 17.6 I guess, or use my second Vega as a paperweight...


----------



## Reikoji

Quote:


> Originally Posted by *tarot*
> 
> 
> are you running the actual fall update as in windows?
> and yeah those readings are a tad weird
> 
> 
> 
> 
> 
> 
> 
> 
> 
> did you black out the city block or what


Didn't have it, but just finished installing the update, fortunately with no hiccups or bsod bootlooping.

Didn't change anything tho, still got insane power draw measurements.


----------



## JasonMZW20

Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> Fall update beta drivers power readings


Yeah, they're pretty screwy. I've just been dividing them by 100. Seems close enough on mine (which reports 22,016). Yours, probably not.
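
The divide-by-100 workaround above amounts to treating the sensor as if it were reporting centiwatts. A minimal sketch, purely under that assumption (treat the result as a sanity check, not a trusted measurement):

```python
# Rescale a suspicious beta-driver chip-power reading, assuming the
# value is off by a factor of 100 (e.g. reported in centiwatts).

def fixup_chip_power(raw_reading):
    """Scale a bogus chip-power reading down by 100 to get watts."""
    return raw_reading / 100

# The 22,016 reading quoted above comes out around a plausible 220 W:
print(fixup_chip_power(22016))
```

As noted, this only "seems close enough" for some cards; a reading that's wildly off won't be rescued by a constant scale factor.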


----------



## TrixX

Looks like Core power is correct but Chip power went full ******!


----------



## tarot

Quote:


> Originally Posted by *JasonMZW20*
> 
> Yeah, they're pretty screwy. I've just been dividing them by 100. Seems close enough on mine (which reports 22,016). Yours, probably not.


OK, well, I just did the Fall update. It took ages, and there are a few quirky things I need to work out.
I installed the AMD Fall update drivers, redid my clocks, and started Firestrike Extreme. Two minutes in: BOOM. I looked up and the CPU started going freaking nuts... I was sitting here watching the Taichi's CPU temp LEDs go up and up and up, and they never go up.







So: kill it, kill it, kill it.
Reinstalled 17.9.3 and sticking with it for now; back to normal.

First and last beta I try for a while.









now I have to do some work then install the xspc block on the threadripper


----------



## spyshagg

Alright, so I just found a Vega 64 for 550€ right in my country. I'm selling one 290X for 170€, so that brings me to just 380€.

I'm expecting a lot of noise from this cooler. Let's hope it's not that bad. I don't feel like shelling out another 104€ for a waterblock and voiding my warranty.

On the plus side, I get to keep my FreeSync monitor. I'll just have to dial down settings for VR compared to the 1080 Ti.


----------



## spyshagg

It seems they are falling in price all over Europe. Getting ready for third-party cards, maybe?


----------



## redshoulder

What fan is supplied with the Vega water edition? It looks like a Gentle Typhoon.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> Alright so I just found a vega 64 for 550€ right in my country. I'm selling one 290x for 170€ so that brings me to just 380€.
> 
> I'm expecting a lot of noise from this cooler. Lets hope its not that bad. I dont feel like shelling out another 104€ for a waterblock and void my warranty.
> 
> On the plus side, I get to keep my freesync monitor. I'll just have to dial down settings for VR compared to the 1080ti.


Waterblock shouldn't void the warranty unless damage is caused by it directly. The anti-tamper sticker is there to notify the RMA team that the cooler had been removed and needs to be checked in more detail (according to GN at least).

Though to get the best out of it I'd recommend a water block/AIO at least. I'm thermally limited with the stock cooler.


----------



## spyshagg

Quote:


> Originally Posted by *TrixX*
> 
> Waterblock shouldn't void the warranty unless damage is caused by it directly. The anti-tamper sticker is there to notify the RMA team that the cooler had been removed and needs to be checked in more detail (according to GN at least).
> 
> Though to get the best out of it I'd recommend a water block/AIO at least. I'm thermally limited with the stock cooler.


Thanks for the info. I'll see how the cooler behaves and will consider it.

Whats an average firestrike graphics score for the air 64? (1080p)


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> Thanks for the info. I'll see how the cooler behaves and will consider it.
> 
> Whats an average firestrike graphics score for the air 64? (1080p)


I've got between 23K and just over 25K Graphics score with:
P7 1692MHz @ 1050-1150mv (low score at 1050mv, high at 1150mv)
HBM 1050MHz @ 1050mv
Those were done on my old 3930K system prior to LC BIOS flash so using the 8730 Air BIOS.


----------



## PontiacGTX

Quote:


> Originally Posted by *TrixX*
> 
> I've got between 23K and just over 25K Graphics score with:
> P7 1692MHz @ 1050-1150mv (low score at 1050mv, high at 1150mv)
> HBM 1050MHz @ 1050mv
> Those were done on my old 3930K system prior to LC BIOS flash so using the 8730 Air BIOS.


Isn't HBM voltage always just 1.35V? That doesn't change.


----------



## Trender07

Ugh, for some reason my HBM gets stuck at 800MHz :/ Already tried OverdriveNTool and even Afterburner, and it shows 800MHz all the time in Superposition, with the latest Fall CU update and the beta drivers.


----------



## TrixX

Quote:


> Originally Posted by *Trender07*
> 
> hugh for some reason my HBM get stucks on 800 MHz :/ already tried with overdriventool even afterburner and it shows 800 mhz all the time on superposition , with latest fall CU update and beta drivers


Apparently the beta drivers are a bit ****. I'd drop back to 10.1's or 9.3's and test to confirm HBM is not just balked by drivers.


----------



## rancor

Quote:


> Originally Posted by *PontiacGTX*
> 
> HBM voltage isnt just 1.35 always? that doesnt change


HBM is always 1.35V on the 64 bios but the memory voltage seems to set the minimum core voltage and can help stability for some people.


----------



## Trender07

Quote:


> Originally Posted by *TrixX*
> 
> Apparently the beta drivers are a bit ****. I'd drop back to 10.1's or 9.3's and test to confirm HBM is not just balked by drivers.


Yeah, I think I'm rolling back the drivers... but are those compatible with the Fall CU?


----------



## kundica

Quote:


> Originally Posted by *TrixX*
> 
> Apparently the beta drivers are a bit ****. I'd drop back to 10.1's or 9.3's and test to confirm HBM is not just balked by drivers.


Yeah. You can manually set it and it'll take but the driver defaults to 800. You can confirm and repeat it by doing a driver reset then enabling custom.


----------



## Trender07

Quote:


> Originally Posted by *kundica*
> 
> Yeah. You can manually set it and it'll take but the driver defaults to 800. You can confirm and repeat it by doing a driver reset then enabling custom.


Yeah it happens when you touch the voltage settings. Are old drivers compatible with Fall CU?


----------



## kundica

Quote:


> Originally Posted by *Trender07*
> 
> Yeah it happens when you touch the voltage settings. Are old drivers compatible with Fall CU?


You don't have to touch the voltage for this to happen; it happens at default. It's not the same issue as upping HBM voltage and forcing it to lock at 800. The driver thinks 800 is the default, so for those running Balanced or one of the other presets it can default to 800 on the 64.

I ran the Fall update as an insider preview for about a week on 17.10.1 and it was mostly fine. Occasionally I would get a black screen/driver crash when launching certain apps. 3D Mark is one of them. I've rolled back to 17.10.1 for now.


----------



## Reikoji

Fall drivers don't appear to be locking memory to 800 in my case, but it does default to it now. HBCC, however, got stuck at the lowest selectable amount of memory. I did a quick reinstall just to see if that would be fixed and my chip power reading would return to normal. Chip power wasn't fixed, but I can set HBCC again.


----------



## kilgrim2

Great, thanks.

What connections do you use? Can you mix HDMI with DisplayPort?


----------



## Roboyto

Anyone play around with Time Spy Extreme yet?

My best valid run so far on 17.9.3 WHQL:

https://www.3dmark.com/spy/2567743

Time Spy Score: 3768
Graphics Score: 3837
CPU Score: 3422

1617/1717 | 950mV/1150mV
1100 HBM | 1150mv
40% Power

R7 1700 @ 3725 1.200V | 8GB 3200MHz

Best invalid score on 17.10.1 with same settings:

https://www.3dmark.com/spy/2567549

Time Spy Score: 3779
Graphics Score: 3847
CPU Score: 3439

GPU settings yield a consistent clock speed of 1680-1690. No matter what I tinker with I can't get the clock speed to hold perfectly steady, especially in GS2. In GS1 there is always a ~4MHz variance through the test; for example, a low of 1679 and a peak of 1683, with no further fluctuation.

GS2 is a little different where it will have one low dip, ~25 MHz in the same spot, and then it will recover back to a small variance of ~15MHz.

I know it's a very new addition to the benchmark suite, but I am slightly ahead of the best R7 1700/1080 score. The 1700 clocked at 3890 with the 1080 at 2177/1379. https://www.3dmark.com/spy/2545579


----------



## cplifj

after a long run of heavy load:



when idle:


One amazing, silent BEAST, just running stock out of the box.
I got one benchmark somewhere that outshined even a 1080 Ti result.

The Corsair HX650 PSU is ample, since it served my 290X without breaking a sweat either.

I am adding the pricing here: bought at 819€ incl. VAT, which would make it a US retail price of $763 excl. VAT. Pricey, but for this kind of surprise, not bad at all.









nice job amd & gigabyte

I have to share the thought here that overclocking these days is apparently best left to the OC hardware that comes with everything now. The only thing you need to do is COOL it as best as you can; the OC hardware is way better than any manual overclocker these days, unless you are a Kingpin going for the extreme.


----------



## madmanmarz

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> It was a brand new Alphacool NexXxoS ST30 240mm radiator. From the moment I got it I already had an issue with two of the holes not fitting most of my new fittings. In the end only my bitspower hardline compression fittings would properly fit without cross-threading.
> 
> For the good news I'm testing out my new RX Vega 56 and it's stunning what the right BIOS can do for Vega. No thermal throttling with the stock cooler on Turbo Wattman settings when folding, getting around 450k PPD. I'll be putting it underwater shortly with a single 360mm radiator for the Ryzen 1700 and Vega 56. The 56 runs noticeably cooler at full load than the FE did.


Really impressive, is that a factory liquid cooled card?


----------



## Newbie2009

Quote:


> Originally Posted by *spyshagg*
> 
> Thanks for the info. I'll see how the cooler behaves and will consider it.
> 
> Whats an average firestrike graphics score for the air 64? (1080p)


https://www.3dmark.com/fs/13601825

Undervolted to 1150, 6.5% overclock. 25327


----------



## raysheri

well that 3dmark link doesn't make any sense
- Core clock
350 MHz

Memory bus clock
1,200 MHz

????


----------



## Newbie2009

Quote:


> Originally Posted by *raysheri*
> 
> well that 3dmark link doesn't make any sense
> - Core clock
> 350 MHz
> 
> Memory bus clock
> 1,200 MHz
> 
> ????


obviously not reading clocks correctly


----------



## 113802

https://www.3dmark.com/fs/13805929

Undervolted to 1150 and overclocked to 1770Mhz







26216


----------



## rancor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> https://www.3dmark.com/fs/13805929
> 
> Undervolted to 1150 and overclocked to 1770Mhz
> 
> 
> 
> 
> 
> 
> 
> 26216


What clocks are you actually getting in the benchmark? 1680?


----------



## 113802

Quote:


> Originally Posted by *rancor*
> 
> What clocks are you actually getting in the benchmark? 1680?


1700-1729Mhz

When running light games it spikes to 1800Mhz and crashes my PC


----------



## Roboyto

Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1700-1729Mhz
> 
> When running light games it spikes to 1800Mhz and crashes my PC


You will have to play around with it a little bit, but there can be one or more definitive thresholds for various settings that cause the overboost/crash: the specified P7 clock/voltage, the 'HBM'/floor voltage, and/or how much additional power you are giving the card on the power slider.

For example in Time Spy Extreme I got an overboost/crash with P7 1727/1125mV and 15% power; Below 15% power and the bench runs fine with P7 at 1727.

Turn the P7 clock down to 1717 and P7 voltage can go up to 1150mV with a max of 40% power; beyond 40% power and it overboosts and crashes.

Turn the P7 clock down to 1697 with 1125mV and I can push power limit all the way to 50% without issue.

Overclocking these cards is a delicate juggling act. On top of that I've found, more so than on previous-gen cards, that you have to fine-tune further for different benches. Settings for Superposition 4K will almost instantly cause a crash in Time Spy.

For the overboost problem it seems I can dial a setting, or two, down slightly and get the overboost to stop...but the point at which it overboosts/crashes has been different with different benches.
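
A tiny bookkeeping sketch for the juggling act described above: record which (P7 clock, P7 mV, power %) combos pass or overboost/crash in each bench, then ask which settings have passed everywhere so far. The dictionary entries below are illustrative, loosely mirroring the Time Spy Extreme results quoted in the post, not real data:

```python
# Track per-bench pass/fail for each settings combo, since a combo
# stable in one bench (e.g. Superposition 4K) can crash in another
# (e.g. Time Spy), as noted above.

results = {
    # (P7 clock MHz, P7 mV, power %): {bench name: passed?}
    (1727, 1125, 15): {"TimeSpyExtreme": False},   # overboost/crash
    (1717, 1150, 40): {"TimeSpyExtreme": True},
    (1697, 1125, 50): {"TimeSpyExtreme": True, "Superposition4K": True},
}

def stable_everywhere(results):
    """Return the settings that passed every bench they were tried in."""
    return [combo for combo, runs in results.items() if all(runs.values())]

for clock, mv, power in sorted(stable_everywhere(results)):
    print(f"{clock} MHz @ {mv} mV, +{power}% power: stable so far")
```

The more benches a combo survives, the more confidence it earns; anything with a single failure drops out of the candidate list.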


----------



## rancor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> 1700-1729Mhz
> 
> When running light games it spikes to 1800Mhz and crashes my PC


I apparently have a crap card. I need 1.25V for 1700 (1660-1680) stable with a 50% power limit.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Roboyto*
> 
> You will have to play around with it a little bit, but there can be a, or multiple, definitive threshold(s) for various settings that cause the overboost/crash. Those thresholds being the specified P7 clock/voltage, 'HBM'/Floor voltage, and/or how much additional power you are giving the card on the power slider.
> 
> For example in Time Spy Extreme I got an overboost/crash with P7 1727/1125mV and 15% power; Below 15% power and the bench runs fine with P7 at 1727.
> 
> Turn the P7 clock down to 1717 and P7 voltage can go up to 1150mV with a max of 40% power; beyond 40% power and it overboosts and crashes.
> 
> Turn the P7 clock down to 1697 with 1125mV and I can push power limit all the way to 50% without issue.
> 
> Overclocking these cards is a delicate juggling act. On top of that I've found, more so than on previous gen cards, that you will have to fine tune further for different benches. Settings for Super Position 4K will almost instantly cause a crash in Time Spy.
> 
> For the overboost problem it seems I can dial a setting, or two, down slightly and get the overboost to stop...but the point at which it overboosts/crashes has been different with different benches.






It seems to come down to load: more load, more boost. Light games will also overboost, especially with the LC BIOSes.
Mine is 1542/1075 and 1685/1150, with the HBM @ 1100/1000 (although, as we know, that voltage seems to be an overall lock, not just RAM) and 50 percent power.
It will sit at close to 1680 all day in any bench or game.

One thing that can trigger overboost and issues is Enhanced Sync and FreeSync, so if you are getting crashes, test with those off and see how you go.

Now, also, I am still running the stock XSPC air BIOS.


----------



## diabetes

Small update on Hotspot temp:

I have a Vega 56 with an EKWB Nickel block. I was running Unigine Superposition today with +50% power limit, and my hotspot temps have become much worse compared to when I first mounted the block. GPU and HBM reach 40°C, which is totally fine, but the hotspot now reaches 70°C (it was 55-58°C after installation). TIM used was EK Ectotherm. Seems like there was some kind of pump-out effect going on there. Keep an eye on your temps, folks, because it seems like we'll soon see reports of burnt-out cards from people with custom coolers if the same happens to them and goes unnoticed (I know that 70°C is not harmful, but imagine the problem getting worse over time).


----------



## spyshagg

Quote:


> Originally Posted by *Newbie2009*
> 
> https://www.3dmark.com/fs/13601825
> 
> Undervolted to 1150, 6.5% overclock. 25327


Here's your score compared to my current 290X score

https://www.3dmark.com/compare/fs/13601825/fs/10263465

So it seems, overclock to overclock, my yet-to-arrive Vega 64 will only give me 75% more performance, for 550€, 4 years later. :\


----------



## Newbie2009

Quote:


> Originally Posted by *spyshagg*
> 
> Heres your score compared to my current 290x score
> 
> https://www.3dmark.com/compare/fs/13601825/fs/10263465
> 
> So it seems overclock to overclock, my yet to arrive vega 64 will only give me 75% more performance, for 550€ 4 years later. :\


I was running 290x crossfire. One vega is superior.


----------



## Xender

Honestly, on an average Vega 64 reference unit, with typical undervolting (without lowering clocks), in a typical air-cooled case (Fortress FT05, Noctua D15, 6700K @ 1.28V), how low will temperatures be during QHD gaming / full stress? What about noise? I am hyped and want to buy Vega as soon as possible because my GTX 1060 is not enough for my QHD FreeSync monitor, but I am worried about buying a reference card...


----------



## Newbie2009

Quote:


> Originally Posted by *Xender*
> 
> Honestly, on average Vega 64 reference unit, with typical undervolting (without lowering clocks) in typical air cooled case (Fortress FT05, Noctua D15, 6700K @1.28V) how low temperatures during QHD gaming/full stress will be achievable? What about noise? I am hyped and I want to buy Vega as soon as possible because my GTX 1060 is not enough for my QHD Freesync monitor, however I am worried about buying reference card...


If you are sticking with AIR I can't recommend a reference card. Wait for partner cards.


----------



## TrixX

Quote:


> Originally Posted by *Xender*
> 
> Honestly, on average Vega 64 reference unit, with typical undervolting (without lowering clocks) in typical air cooled case (Fortress FT05, Noctua D15, 6700K @1.28V) how low temperatures during QHD gaming/full stress will be achievable? What about noise? I am hyped and I want to buy Vega as soon as possible because my GTX 1060 is not enough for my QHD Freesync monitor, however I am worried about buying reference card...


If you are worried about noise, then there's two solutions, first is wait for AIB and their reviews.

Second and IMO the preferable one is to get a reference and stick the Morpheus 2 on it with dual good 140mm fans. Very good at keeping the temp numbers down.


----------



## Naeem

got myself a liquid version



time spy

https://www.3dmark.com/3dm/22781531

firestrike

https://www.3dmark.com/3dm/22767895

firestrike extreme

https://www.3dmark.com/3dm/22758120

time spy extreme

https://www.3dmark.com/3dm/22750743

50% power target
1100 HBM2

clock stays around 1700 to 1730


----------



## gupsterg

Well I bit the bullet on a RX VEGA 64 Limited Edition yesterday. Now waiting eagerly for delivery







.

Seemed to me the best deal I may see this side of Christmas. £515 delivered, ~£11 cashback, got Prey and Wolfenstein II with it (worth ~£40 in my estimates, so card is net ~£465 IMO).

The decision was cemented after having experienced the MSI GTX 1080 EK X. I greatly missed variable refresh rate, and wasn't inclined to get into the hassle of swapping my MG279Q for a similar G-Sync monitor. I also missed some features of the AMD driver panel which have no equivalents in the nVidia drivers.

I was also lucky in that the Fury X sold at no loss. The MSI GTX 1080 EK X had a missing screw on WB so got a discount, I was able to sell it for more than I got it so it bolstered my "pot of £" to make the jump to VEGA viable.

As OCuk had a weekly special on EK full cover block I've also ordered that.


----------



## Sgt Bilko

Quote:


> Originally Posted by *gupsterg*
> 
> Well I bit the bullet on a RX VEGA 64 Limited Edition yesterday. Now waiting eagerly for delivery
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Seemed to me the best deal I may see this side of Christmas. £515 delivered, ~£11 cashback, got Prey and Wolfenstein II with it (worth ~£40 in my estimates, so card is net ~£465 IMO).
> 
> The decision was concreted after having experienced the MSI GTX 1080 EK X. I missed the lack of variable refresh rate greatly, wasn't inclined to get into hassle of swapping my MG279Q for similar G-Sync monitor. Also missed some of the features of AMD driver panel which didn't have equivalents in nVidia drivers.
> 
> I was also lucky in that the Fury X sold at no loss. The MSI GTX 1080 EK X had a missing screw on WB so got a discount, I was able to sell it for more than I got it so it bolstered my "pot of £" to make the jump to VEGA viable.
> 
> As OCuk had a weekly special on EK full cover block I've also ordered that.


Grats mate and welcome aboard!


----------



## gupsterg

Cheers mate







.

It's gonna be such a shame to pull the Limited Edition blower cooler from the card. All the images I have seen make it look so nice; even the IO plate, with its stylized venting and black color, looks so snazzy IMO. I'm envisaging I may have to put up a shelf near my PC to display the cooler







, be such a shame to hide it in a box .


----------



## ManofGod1000

Well, I am a Vega 56 owner and am quite happy with it. (Came from 2x Fury non-X.) I have the power limit set to +50, max temp at 80°C and the max fan speed set to 4000 RPM. This has given me the best performance so far and I do not notice any fan noise at all. (I am different from most, though; I tend to notice things like clicking over the start-up fan hush.) 50% better performance than a single R9 Fury Nitro.


----------



## madmanmarz

Quote:


> Originally Posted by *rancor*
> 
> I apparently have a crap card. I need 1.25V for 1700 (1660-1680) stable with a 50% power limit.


Don't feel too bad, my 56 will barely hit 1650MHz at 1.2V (1250 in Wattman). Heck, the LC 64 BIOS default settings aren't even stable for me (can't wait for BIOS editing!).

I keep it set at 1700MHz/1100 HBM, 1000mV, which gives me an actual 1550MHz/1100 HBM at 1000mV. The frequency doesn't fluctuate and it's nice and cool. Wayyyyy better than the old 290X.


----------



## Xender

I bought an MSI Vega 64. I hope I will be happy with my 32" QHD FreeSync monitor playing Elex ;D.

I will try a little undervolting and underclocking, and if that is not enough I will buy a Morpheus 2.


----------



## punchmonster

Mining performance jumped significantly (8%) going to the Fall Creators Update beta. Made sure to check it's maintaining clocks and such. Nice.


----------



## Trender07

Quote:


> Originally Posted by *Roboyto*
> 
> You will have to play around with it a little bit, but there can be a, or multiple, definitive threshold(s) for various settings that cause the overboost/crash. Those thresholds being the specified P7 clock/voltage, 'HBM'/Floor voltage, and/or how much additional power you are giving the card on the power slider.
> 
> For example in Time Spy Extreme I got an overboost/crash with P7 1727/1125mV and 15% power; Below 15% power and the bench runs fine with P7 at 1727.
> 
> Turn the P7 clock down to 1717 and P7 voltage can go up to 1150mV with a max of 40% power; beyond 40% power and it overboosts and crashes.
> 
> Turn the P7 clock down to 1697 with 1125mV and I can push power limit all the way to 50% without issue.
> 
> Overclocking these cards is a delicate juggling act. On top of that I've found, more so than on previous gen cards, that you will have to fine tune further for different benches. Settings for Super Position 4K will almost instantly cause a crash in Time Spy.
> 
> For the overboost problem it seems I can dial a setting, or two, down slightly and get the overboost to stop...but the point at which it overboosts/crashes has been different with different benches.


"Settings for Super Position 4K will almost instantly cause a crash in Time Spy."
This is very right. I for myself have to test with SuperPosition, TimeSpy and FireStrike, because even if it doesn't crash in 2, it can crash in FireStrike or TimeSpy


----------



## Roboyto

Quote:


> Originally Posted by *Trender07*
> 
> "Settings for Super Position 4K will almost instantly cause a crash in Time Spy."
> This is very right. I for myself have to test with SuperPosition, TimeSpy and FireStrike, because even if it doesn't crash in 2, it can crash in FireStrike or TimeSpy


Just going to have to find the lowest overall settings that are stable in all benches I run...hopefully that yields stability for gaming. *fingers crossed*


----------



## The EX1

Quote:


> Originally Posted by *punchmonster*
> 
> mining performance jumped significantly (8%) going to fall creators update beta. Made sure to check it's maintaining clocks and stuff. Nice.


I was wondering if it would impact mining performance like it has games....


----------



## TrixX

Quote:


> Originally Posted by *Trender07*
> 
> "Settings for Super Position 4K will almost instantly cause a crash in Time Spy."
> This is very right. I for myself have to test with SuperPosition, TimeSpy and FireStrike, because even if it doesn't crash in 2, it can crash in FireStrike or TimeSpy


TBH I'm not entirely sure the crashes in TS/FS are AMD's fault, the software is buggy as fark...


----------



## Snowknight26

Anyone have this very specific issue on their RX Vega 56?

When I'm connected to my Windows 7 machine via RDP (and only then), there's about a 1 in 100 chance that when I open a new tab in Chrome, my machine BSODs. Bug Check is usually 0x50.

I thought it was because I was OCing/undervolting, but it's even happened with Wattman set to default.


----------



## Rootax

Aaaaannnnnd the beta driver for the Fall upgrade for Vega FE / Radeon Pro doesn't have the "driver options" thing. Oh well... they're pretty similar in performance anyway, and OverdriveNTool works even in Pro mode...


----------



## poisson21

Weird, I can't enable CrossFire anymore for Superposition 4K, but I beat my score with only one card.



P6 1667/1150

P7 1722/1250

Hbm 1105/1100


----------



## kundica

Quote:


> Originally Posted by *poisson21*
> 
> Weird , i can't enable anymore crossfire for superposition 4K but i beat my score with only one.
> 
> 
> P6 1667/1150
> P7 1722/1250
> Hbm 1105/1100


I think that's because crossfire might not be fully enabled but it's doing something. Look at your GPU utilization. Certainly you didn't reach that score on a single card running at 90% max.


----------



## poisson21

Yeah, I think there is something weird going on. I just completely disabled CrossFire and I get a score of 6884; normally with CrossFire the score should be ~11000, like this



So I'll try again and again until I find out what is happening.


----------



## Newbie2009

Quote:


> Originally Posted by *Newbie2009*
> 
> My best undervolted score , 6.5% OC @ 1150mv core


New score with the update


----------



## kundica

Quote:


> Originally Posted by *Newbie2009*
> 
> New score with the update


This is about the best I can do right now. I don't have the overclocker some people here have.



Edit: What's up with your Windows build? That's not the Fall Creators Update.


----------



## poisson21

Really weird behavior on my cards. I didn't try to beat my score; I just followed the behavior of my cards on my phone through AIDA64/ARX Control during Superposition 4K.

With CrossFire completely disabled, with P7 1722/1250 and HBM 1105/1100, my first card stays at ~1685MHz with a usage of ~99%, resulting in a score of ~6880.

With CrossFire enabled, it seems it doesn't work properly, but with the same settings my first card stays at ~1665MHz with usage between 15% and 91%, and the second card stays at ~850MHz with a usage of 0%, resulting in a score of ~7400.

It's strange, because in the past CrossFire worked properly and the two cards had a usage of ~99%, resulting in a score of more than 11000.


----------



## Newbie2009

Quote:


> Originally Posted by *kundica*
> 
> This is about the best I can do right now. I don't have the overclocker some people here have.
> 
> 
> 
> Edit: What's up with your Windows build? That's not the Fall Creators Update.


Hmmm, I ran the update. OK, will check it out. Odd. Thanks.

EDIT: Apparently I don't have it. Drivers working OK though. Will update now.


----------



## majestynl

Quote:


> Originally Posted by *gupsterg*
> 
> Well I bit the bullet on a RX VEGA 64 Limited Edition yesterday. Now waiting eagerly for delivery
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Seemed to me the best deal I may see this side of Christmas. £515 delivered, ~£11 cashback, got Prey and Wolfenstein II with it (worth ~£40 in my estimates, so card is net ~£465 IMO).
> 
> The decision was concreted after having experienced the MSI GTX 1080 EK X. I missed the lack of variable refresh rate greatly, wasn't inclined to get into hassle of swapping my MG279Q for similar G-Sync monitor. Also missed some of the features of AMD driver panel which didn't have equivalents in nVidia drivers.
> 
> I was also lucky in that the Fury X sold at no loss. The MSI GTX 1080 EK X had a missing screw on WB so got a discount, I was able to sell it for more than I got it so it bolstered my "pot of £" to make the jump to VEGA viable.
> 
> As OCuk had a weekly special on EK full cover block I've also ordered that.


Welcome mate! Now playing can start








I also needed to pull the nice alu cover off my LE for the waterblock


----------



## Sufferage

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers mate
> 
> 
> 
> 
> 
> 
> 
> .
> 
> It's gonna be such a shame to pull the Limited Edition blower cooler from card. All images I have seen make it look so nice, even the IO plate with it's stylized venting and black color looks so snazzy IMO. I'm envisaging I may have to put up a shelf near my PC to display the cooler
> 
> 
> 
> 
> 
> 
> 
> , be such a shame to hide it in a box .


So true. Just installed my GPX Pro today and had a really hard time parting with the original standard edition cooler; it just looks too nice.

On the other hand, the temps and noise levels I'm seeing now are looking damn good too.


----------



## buildzoid

I hard modded the Vmem on my V56. The HBM2 doesn't scale past 1.35V on ambient cooling. I went all the way to 1.42V before accepting that 1110MHz was not going to run. There's still the 1.8V VPP as well as the 0.9V display drive and memory controller voltages, but I really doubt they will do anything, and they are a pain to mod.


----------



## diggiddi

Quote:


> Originally Posted by *ManofGod1000*
> 
> Well, I am a Vega 56 owner and am quite happy with it. (Came from 2 x Fury Non X.) I have the power limit set to 50+, max temp at 80C and the max fan speed set to 4000 rpm. This has given me the best performance so far and I do not notice any fan noise at all. (I am different from most though, I tend to notice things like clicking and such over start up fan hush.) 50% better performance than a single R9 Fury Nitro.


Sowutchusayin, a Vega 56 is 2x Fury??? In which titles? Cos none of the benches I've seen say that


----------



## Reikoji

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers mate
> 
> 
> 
> 
> 
> 
> 
> .
> 
> It's gonna be such a shame to pull the Limited Edition blower cooler from card. All images I have seen make it look so nice, even the IO plate with it's stylized venting and black color looks so snazzy IMO. I'm envisaging I may have to put up a shelf near my PC to display the cooler
> 
> 
> 
> 
> 
> 
> 
> , be such a shame to hide it in a box .


Thats why I wish I could mod my LC card to have a bigger radiator. I don't want to see the silver shroud go.


----------



## Tgrove

Quote:


> Originally Posted by *diggiddi*
> 
> Sowutchusayin, a Vega 56 is 2x Fury??? In which titles? Cos none of the benches I've seen say that


I came from Fury X crossfire to a single Vega Liquid and I would not go back. The gaming experience is leagues better; framerates are not the whole picture. At this point I would say yes, one Vega is better than Fury crossfire.


----------



## dagget3450

Quote:


> Originally Posted by *poisson21*
> 
> Weird , i can't enable anymore crossfire for superposition 4K but i beat my score with only one.
> 
> 
> P6 1667/1150
> P7 1722/1250
> Hbm 1105/1100


Quote:


> Originally Posted by *poisson21*
> 
> Yeah , i think there is a weird thing going, i just completly disable crossfire and i have a score of 6884, normally with crossfire the score should be ~11000 like this
> 
> 
> 
> So i'll try again and again until i found what is happening.


Quote:


> Originally Posted by *poisson21*
> 
> Really weird behavior on my part. I didn't try to beat my score, so I just followed the behavior of my cards on my phone through AIDA64/ARX Control during Superposition 4K.
> With crossfire completely disabled, with P7 1722/1250 and HBM 1105/1100, my first card stays at ~1685MHz with ~99% usage, resulting in a score of ~6880.
> With crossfire enabled it doesn't seem to work properly: with the same settings, my first card stays at ~1665MHz with usage between 15% and 91%, and the second card stays at ~850MHz with 0% usage, resulting in a score of ~7400.
> It's strange because in the past crossfire worked properly and the two cards ran at ~99% usage, resulting in a score of more than 11000.


For crossfire on superposition, try setting crossfire mode to 1x1 under superposition profile. Also, not sure if they ever fixed it but crossfire in superposition was a bit wonky from the start.


----------



## ManofGod1000

Quote:


> Originally Posted by *diggiddi*
> 
> Sowutchusayin, a Vega 56 is 2x Fury??? In which titles? Cos none of the benches I've seen say that


Strongly recommend you take a look at my post again.














I said I came from 2 x Fury Non X. (As in, what my setup once was before I became a Vega convert.) Besides, most games do not support Crossfire or SLI anymore, so I sold one card in August and the other last week.









I got a 3DMark 11 graphics score of just over 20000 with the Fury Nitro I had, and I am now getting close to 31000 with the Vega 56 at +50 power limit, a max temp of 80C and a max fan speed of 4000 RPM. (Oh, and the VRAM overclocked to 950.) I do not really hear the fan noise at all, but I tend to notice clicking and other noises more than the regular rushing sound of a fan.

Edit: This is on a Ryzen 7 1700X at 3.8 Ghz, I just have not updated my rigs below yet.


----------



## dagget3450

I am still stuck on 17.6 launch drivers for Vega FE so i can use crossfire. So i redid Superposition 4k and this was my result

1600 max boost/ 1100hbm 2x Vega FE CF


----------



## Soggysilicon

Quote:


> Originally Posted by *Snowknight26*
> 
> Anyone have this very specific issue on their RX Vega 56?
> 
> When I'm connected to my Windows 7 machine via RDP (and only then), there's about a 1 in 100 chance that when I open a new tab in Chrome, my machine BSODs. Bug Check is usually 0x50.
> 
> I thought it was because I was OCing/undervolting, but it's even happened with Wattman set to default.


I have had Chrome BSOD me on a couple of occasions with Vega... and this is with HW acceleration "OFF"; Chrome has never impressed me with that "feature".


----------



## Soggysilicon

Quick little update after going to Winblows 10 FC... gained 0.3-0.5% in benchmark scores across the board, which translates to about a frame better performance in games. The BIGGEST difference, however, is that my DP issues when switching between resolutions have miraculously vanished with this patch... so thank you very much Micro$lop for fixing your FxxCing software.


----------



## tarot

I just tried the MSI LC BIOS on my XFX card and I'm not a fan. On the first run it overboosted to 1790 and boom; even with tweaking it wasn't good, and the coil noise went through the roof at anything near 1700.

So I'm sticking with the original BIOS: 1682/1542/1100


----------



## cplifj

Does anyone have any idea why Cinebench R15 only gets me a 92.38 score on a Vega 64 Liquid???

That score should be a lot higher, almost double.

All other benchmarks I have run so far are within known margins.

This happens on every driver I've tried, from 17.7 to 17.10 and now on the 17.40 beta for the Fall Creators Update, which I have running now. Same result with the previous Win10 and different drivers.

I'm kind of puzzled here...


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> does anyone got any idea why Cinebench R15 only gets me 92.38 score on a vega64 liquid ???
> 
> that score should be alot higher as in almost double that.
> 
> all other benchmarks i have run so far are within known margins.
> 
> This happens on every driver i tried, from 17.7 to 17.10 and now on the 17.40 beta for fall creators update which i have got running now. Same result with the previous win10 and different drivers.
> 
> i'm kind of puzzled here...


Card sucks at OpenGL i think.


----------



## cplifj

Normally a 64 does about 150/160 fps, and this is even a liquid cooled one, so no. I'm going to try a fresh system install soon; if it does not improve, I'm returning this beast to its maker...


----------



## Kyozon

Hello Guys, it is a pleasure to be able to talk with you.

Unfortunately I came across some issues with my Vega Frontier Edition (air cooled) after updating to the Windows Fall Creators Update.

There are two new drivers for the FE right now: the Fall Update beta 17.40 and the WHQL Pro Crimson 17.10.

The behavior happens with both drivers: under any GPU load scenario, my card crashes immediately. The GPU is at stock settings; I haven't made any changes to it besides increasing the fan RPM to prevent overheating.

I have been trying to run SPECviewperf ever since this afternoon, but no luck so far. I hope you are able to assist me. Thanks!

System:

Ryzen ThreadRipper 1950X - Stock
ASUS Zenith Extreme
VEGA Frontier Edition - Air Cooled
Thermaltake Toughpower 1050W Full Modular 80+ Gold.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *cplifj*
> 
> does anyone got any idea why Cinebench R15 only gets me 92.38 score on a vega64 liquid ???
> 
> that score should be alot higher as in almost double that.
> 
> all other benchmarks i have run so far are within known margins.
> 
> This happens on every driver i tried, from 17.7 to 17.10 and now on the 17.40 beta for fall creators update which i have got running now. Same result with the previous win10 and different drivers.
> 
> i'm kind of puzzled here...






I get 128 fps, but if you look at the card, notice the GPU tach never moves... mine also does the whole test at 613MHz. Yeah, 613. So it isn't the card, it's the dopey benchmark: for some reason it doesn't even load the card and instead, for some stupid reason, loads the CPU...


----------



## gupsterg

Quote:


> Originally Posted by *majestynl*
> 
> Welcome mate! Now playing can start
> 
> 
> 
> 
> 
> 
> 
> 
> I needed also pull out the nice Alu cover from my LE for the waterblock


Cheers chap!

When you do the block it'd be interesting to know if it's a molded die or not.
Quote:


> Originally Posted by *Sufferage*
> 
> So true. Just installed my GPX Pro today and had a really hard time getting rid of the original standard edition cooler, it just looks too nice
> 
> 
> 
> 
> 
> 
> 
> 
> On the other hand, temps and noise levels i'm seeing now are looking damn well too


Simple yet truly elegant design IMO.

Were you fortunate enough to get a molded die? What temps are you seeing on your setup?
Quote:


> Originally Posted by *Reikoji*
> 
> Thats why I wish I could mod my LC card to have a bigger radiator. I don't want to see the silver shroud go.


The internals of the Liquid Edition card seem so sweet; it does not look like AMD skimped on cost there. I guess the rad size was chosen to make it compatible with as many cases as possible. For the enthusiast, a bigger rad really would have been best.

A Fury X owner modded the cooler to connect to his loop; in initial testing he had some leaks, and once those were resolved his mobo somehow died.


----------



## cplifj

Quote:


> Originally Posted by *tarot*
> 
> 
> I get 128 fps but if you look at the card notice the gpu tach never moves...mine also does the whole test at 613 mhz...yeah 613 so it isn't the card its the dopey benchmark for some reason it does not even load the card, seems for some stupid reason to load the cpu...


Thx, I wasn't watching those at the time. Indeed, the tachometer says idle while GPU-Z reports a clock speed of 280 to 380MHz; averaging ~330MHz would indeed explain only about 93 fps.

I was kind of surprised, seeing as my card seems to be a golden sample in all other performance tests.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *cplifj*
> 
> normally a 64 does about 150/160fps and this is even a liquid cooled one so no. i'm gonna try a fresh system install soon , if it does not improve , i'm returning this beast to its maker..


Could be a CPU limit; the OpenGL bench is single-threaded...


----------



## tarot

Quote:


> Originally Posted by *cplifj*
> 
> Thx, i wasn't watching those at the time, indeed, tachometer says idle while gpu-z reports a clockspeed from 280 to 380 MHz, averaging 330MHz will indeed show only about 93 fps.
> 
> i was kind of surprised there seeing my card seems to be a golden pick in all other performance.


I would be interested to see anyone with an Nvidia card test this out. Does it ramp up your cards? Because every AMD card I have owned, from Fury X to Nano to RX 480, does the same thing: no activity on the video card.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *tarot*
> 
> I would be interested to see anyone with an NVidia card to test this out does it ramp up your cards? because every amd card I have owned from furyx nano rx480 does the same thing no activity on the video card


I switched from a gtx 670 .... the same.... cpu limit


----------



## tarot

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> I switched from a gtx 670 .... the same.... cpu limit


Guess that makes sense, except for the zero load on the GPU.

Well, not exactly zero, but I fart harder than this.

What I want to know is: does this dopey bench load up any card?


----------



## poisson21

@dagget3450

Yeah, superposition is totally wonky now with crossfire. I ran it with the first driver that allowed crossfire with no problem. With 17.10.1 I tried all the different crossfire settings and it was impossible to make it work correctly; nothing works. Something changed and I don't know what.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *tarot*
> 
> guess that makes sense except the zero load on the gpu.
> 
> well not exactly zero but I fart harder than this
> 
> 
> 
> 
> 
> 
> 
> 
> what I want to know is does this dopey bench load up any card...




If you look, you'll see at place 1 the 670 with my FX, and at place 6 with my old C2Q


----------



## poisson21

So I retried to get crossfire working. I succeeded, but there are some strange things happening.

To succeed I completely erased my superposition profile and created a new one.

It works, but the Wattman settings of that profile are not taken into account when you run the benchmark; it uses the global settings.

The best setting I found is "respect AFR" (I'm French, so I don't know if my setting really translates to that in the English version), with P7 1727MHz/1250mV and HBM 1105MHz/1100mV for both cards, resulting in:



Spoiler: Warning: Spoiler!















I am pretty happy, but it sucks that it's this difficult to set up correctly.

On another note, with the newest drivers (17.40 beta in my case) the frequencies no longer overshoot your setting: during the benchmark it stays at ~1705MHz, while before it would go up to ~1780MHz with the same settings.

Edit: and another strange thing, my default HBM frequency is now 800MHz and not 945MHz anymore.


----------



## Trender07

Quote:


> Originally Posted by *poisson21*
> 
> So i retried to make crossfire working , i succeed but there's some strange things happening.
> To succeed i completly erase my superposition profile and create a new one.
> it work but the wattman setting of this profile is not taken in account when you run the benchmark, it take the general setting.
> The best setting i found is "respectable of afr" (i'm french so i don't know if my setting really translate in that in the english version), with p7 1727Mhz/1250mV and hbm 1105Mhz/1100mV for both card resulting in:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I am pretty happy but it sucks to be this difficult to set correctly.
> 
> And another note, it seems with the newest drivers (17.40 beta in my case), the frequencies didn't overshoot anymore over your setting, during benchmark it stay ~1705Mhz while before it happen to go up to ~1780Mhz with same setting.
> 
> Edit: and another strange thing, my default hbm frequency is now 800Mhz and not 945Mhz anymore.


Bud, sadly the latest beta drivers for the Fall CU are bugged: when you touch the voltage in custom settings, the HBM clock gets locked to 800MHz. Either roll back the drivers or do like me and just leave stock voltages with 1055MHz HBM and some OC on the core


----------



## CaptBhlavious

Have any other Vega FE owners installed the Creator's Update on Win 10? I reinstalled the latest beta drivers 17.10 after Windows was updated to the Creator's update and I lost the option to switch to gaming drivers. Has anyone else experienced this?


----------



## CaptBhlavious

I have noticed similar instability after the fall creator's update to WIndows 10. I reinstalled 17.10 and now I no longer have the option to change to gaming drivers either.


----------



## spyshagg

So the P6 and P7 mV cannot go lower than the HBM p3 mv, correct? You can set p7 lower but it will be ignored?

How much mV do you guys need for 1000mhz on hbm?


----------



## Nuke33

Quote:


> Originally Posted by *spyshagg*
> 
> So the P6 and P7 mV cannot go lower than the HBM p3 mv, correct? You can set p7 lower but it will be ignored?
> 
> How much mV do you guys need for 1000mhz on hbm?


Yes, correct. You can clock the HBM as high as it will handle at any HBM voltage; temperature is the key. Keeping the HBM below 55-60°C will allow high HBM clocks most of the time. Of course, there is still a bit of luck involved.

The HBM (minimum) voltage is bound to P2.
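Worth remembering what those HBM clocks are actually worth: Vega's HBM2 sits on a 2048-bit bus at double data rate, so peak bandwidth is easy to compute. A quick back-of-the-envelope sketch (my own arithmetic, not vendor data):

```python
# Peak HBM2 bandwidth on Vega: 2048-bit bus, 2 transfers per clock.
# GB/s = MHz * 2 transfers * 2048 bits / 8 bits-per-byte / 1000
def hbm2_bandwidth_gbs(clock_mhz: float) -> float:
    return clock_mhz * 2 * 2048 / 8 / 1000

for mhz in (800, 945, 1000, 1100):
    print(f"{mhz} MHz -> {hbm2_bandwidth_gbs(mhz):.1f} GB/s")
# 945 MHz works out to 483.8 GB/s, the stock Vega 64 figure.
```

So the 800MHz bug people are hitting costs roughly 75 GB/s versus stock, which is why scores tank.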


----------



## Rootax

Quote:


> Originally Posted by *CaptBhlavious*
> 
> Have any other Vega FE owners installed the Creator's Update on Win 10? I reinstalled the latest beta drivers 17.10 after Windows was updated to the Creator's update and I lost the option to switch to gaming drivers. Has anyone else experienced this?


The latest beta doesn't have the driver option indeed. I reverted back to the non-beta, and it's still working well even with the fall update, with the driver option still available. I got an error when switching to the 17.9.1 gaming drivers, saying it was not installed. I went to Device Manager and indeed, after the "switch", Vega was showing an exclamation mark. So I installed the driver manually (the .inf should be here: C:\AMD\Packages\Drivers\Radeon-Crimson-ReLive-17.9.1-c0318725-64bit-171006_drvWHQL) and voilà.

Plus, the 17.10 beta has a bug with undervolting for me (it's not working, staying at 1.2V). With the non-beta or the 17.9.1 gaming drivers, my undervolting takes effect.


----------



## majestynl

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers chap!
> 
> 
> 
> 
> 
> 
> 
> .
> 
> When you do block be interesting to know if it's a molded die or not


Didn't check the first time. But I'm hopefully installing a 2nd rad (Magicool) tonight. I'll probably remove the block and check that too...


----------



## spyshagg

Can someone explain to me what is happening here?

The GPU MHz hits a wall at 1590MHz even in the beginning, when the temperature and fan speed limits I set have not yet been met. This wall doesn't change when raising the power limit; at 10% or 50% it hits the same 1590MHz wall.

I do have the voltage set to 1050mV. Does the GPU "detect" that it needs more voltage to go higher? If so, smart.


----------



## Chaoz

Quote:


> Originally Posted by *spyshagg*
> 
> So the P6 and P7 mV cannot go lower than the HBM p3 mv, correct? You can set p7 lower but it will be ignored?
> 
> How much mV do you guys need for 1000mhz on hbm?


Got mine running on this and it's stable af.


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> Budd sadly the latest drivers beta for fall CU are bugged, when you touch the voltage in custom setttings , hbm mhz gets locked to 800 mhz. You either roll back drivers or do like me and just leave stock voltages with 1055 hbm memory and some oc on core


I cannot confirm this. I have seen the 800MHz HBM lockout, but that was at greater than a volt on the HBM. Not sure where the setting trips over to 800MHz, but I operate at 1V and it's fine at 1105MHz.


----------



## laczarus

Even after applying Conductonaut to my V56 with the Morpheus II I get a hotspot temp of 80-90C.
This might be due to the height difference between the HBM and the GPU die.


----------



## spyshagg

What's the max allowed HBM temperature? Mine is constantly at 90°C


----------



## pmc25

Quote:


> Originally Posted by *spyshagg*
> 
> whats the max allowed HBM temperature? mine is constantly at 90ºc


On air you should aim to keep it below 65C. Preferably 60C. Timings will be dire at 90C, and highly unstable too if you try to go much above stock clocks. But if you don't care about timings (which will impact performance quite a lot) and aren't OC'ing it, 90C is fine.

Aim for sub 40C on water. Ideally sub 35C full load, as high GPU core clocks seem more stable with HBM2 sub 35C.


----------



## Sufferage

Quote:


> Originally Posted by *gupsterg*
> 
> Were you fortunate enough to gain molded die? what temps you seeing on your setup
> 
> 
> 
> 
> 
> 
> 
> .


Yep, I've been lucky and got a molded die. I bought the card mid-August from a French retailer for just 507€, well-invested money if you ask me. I've already had lots of fun trying to figure out this often strangely behaving piece of hardware, so I think you'll love it too; there's just so much to mess around with.

As for the temps: at idle it now sits at exactly ambient temp, and after a few consecutive Superposition 4K runs at 1742/1192mV/1100 HBM, the core reaches 47°, HBM 49°.

...and the stock cooler has already found its place on my desk and awaits some power to make the LED glow again.


----------



## spyshagg

Quote:


> Originally Posted by *pmc25*
> 
> On air you should aim to keep it below 65C. Preferably 60C. Timings will be dire at 90C, and highly unstable too if you try to go much above stock clocks. But if you don't care about timings (which will impact performance quite a lot) and aren't OC'ing it, 90C is fine.
> 
> Aim for sub 40C on water. Ideally sub 35C full load, as high GPU core clocks seem more stable with HBM2 sub 35C.


65C? That's impossible with the default cooler at stock settings. Cannot be done :\ Maybe with the fan at 100% and lots of undervolting, but that's not how the card was meant to run with the stock cooler.

I also did not know HBM adjusted its timings according to temperature. Good to know!

Also, this card behaves differently than any other I've had until now. Even if all conditions are met (power allowance, temperature allowance, fan speed allowance), the card will rarely hit its default boost clock if I undervolt it to, say, 1050mV; it will hover around 1590MHz. If I give it 1100mV it will go to 1600MHz. The only other way to make it boost to 1630 with 1050mV is to overclock P7 to 1660 or so. So why won't it go to 1630MHz when we already tested that it can do it? Weird programming.


----------



## cplifj

some proofing was still in order:


----------



## Naeem

here is my vega 64 liquid


----------



## PontiacGTX

Quote:


> Originally Posted by *spyshagg*
> 
> can someone explain me what is happening here?
> 
> 
> 
> GPU MHZ hits a wall at 1590mhz even in the beginning, when the temperature and fan speed limits i set have not yet been met. This wall doesn't change when rising power limit. 10% or 50% it hits the same 1590mhz wall.
> 
> I do have the voltage set to 1050mv. Does the gpu "detect" it needs more voltage to go higher? if so, smart.


What version of MSI AB are you using?


----------



## Naeem

I disabled 4 cores on my Ryzen 1800X at 4.0GHz and got better graphics and combined scores.

4 cores / 8 threads @ 4.0GHz (1 CCX enabled):

https://www.3dmark.com/3dm/22825659

8 cores / 16 threads @ 4.0GHz (2 CCX):

https://www.3dmark.com/3dm/22767895


----------



## Nuke33

Quote:


> Originally Posted by *Naeem*
> 
> i disabled 4 cores on my ryzen 1800x 4.0 mode and got better graphics score and combined score
> 
> 4 cores 8 threds @ 4.0 ghz ( 1 ccx enabled )
> 
> https://www.3dmark.com/3dm/22825659
> 
> 
> 
> 8 core 16 threads @ 4.0 ghz ( 2 ccx)
> 
> https://www.3dmark.com/3dm/22767895


Could be because of the higher latency between the CCX packages. Threadripper has a gaming mode which puts the packages in NUMA mode, which in turn reduces latency by keeping memory operations local instead of crossing between packages. Maybe it is a similar thing for the "little" Ryzens.
Or maybe it's just really bad programming in 3DMark


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Sufferage*
> 
> Yep, been lucky and got a molded die, bought the card mid august from a french retailer for just 507€, well invested money if you ask me, had lotsa fun already trying to figure out this often pretty strange behaving piece of hardware, so i think you'll love it too, just so much to mess around with there
> 
> 
> 
> 
> 
> 
> 
> 
> As for the temps, at idle it sits at exactly ambient temp now, after a few consecutive superposition 4k runs at 1742/1192mv/1100HBM core reaches 47°, HBM 49°
> 
> 
> 
> 
> 
> 
> 
> 
> ...and the stock cooler already has found it's place on my desk and awaits some power to make the LED glow again






I might have missed it, but what BIOS and settings are you using?

I too have the molded die and temps are OK, but clocking this thing is a Rubik's cube


----------



## geriatricpollywog

Just picked up a Vega 64. Now to get a waterblock.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> 65c? thats impossible with default cooler @ stock settings. cannot be done :\ Maybe with fan @100% and lots of undervolting. But thats not how the card was meant to run with stock cooler.
> 
> I also did not know hbm adjusted its timings according to temperature. Good to know!


I maintain 65C with the stock air cooler...
LC BIOS too...
I was running at stock voltage for the last 6 hours in iRacing at ~1748MHz with a 1500RPM fan.

Quote:


> Originally Posted by *spyshagg*
> 
> Also, this card behaves differently that any other I had until now. Even if all conditions are met (power allowance, temperature allowance, Fan speed allowance), the card will rarely hit its default boost clock If I undervolt it to say, 1050mv. It will hover around 1590mhz. If I give it 1100mv it will go to 1600mhz. The only other way to make it boost to 1630 with 1050mv, is to overclock P7 to 1660 or so. So why wont it go 1630mhz when we already tested it can do it? weird programming.


It's like the P-state is the max operating limit, with actual clocks governed by power draw, thermals, and that intended MHz ceiling together. As any one ceiling is raised, the MHz rises until the power or thermal ceiling is hit. So basically: keep filling each pot until you hit a limit, then increase the size of that pot.
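That "pots" description can be sketched as a toy model (purely illustrative, nothing like AMD's real DPM logic): the achieved clock is just the lowest of the three ceilings, so raising P7 only pays off once the power and thermal pots are big enough.

```python
# Toy model of the "fill each pot" idea: the card runs at the P7 ceiling
# unless the power or thermal budget clamps the clock first.
def achieved_clock(p7_mhz: int, power_ceiling_mhz: int, thermal_ceiling_mhz: int) -> int:
    """Effective clock is the lowest of the three ceilings."""
    return min(p7_mhz, power_ceiling_mhz, thermal_ceiling_mhz)

# Raising only P7 does nothing while power is the binding limit...
print(achieved_clock(1660, 1590, 1700))  # -> 1590 (power-bound)
# ...but enlarge the power pot and the P7 ceiling matters again.
print(achieved_clock(1660, 1750, 1700))  # -> 1660 (P-state-bound)
```

Which matches spyshagg's observation: at the same undervolt, the card only moved once a different pot was enlarged.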


----------



## geriatricpollywog

Why are my results so bad?


----------



## TrixX

HBCC is inactive on yours. Also the 5664 is with watercooled unit.

Crap, just realised one is the 1080p Extreme setting and the other 4K Optimised, which paints a very different picture.

Needed to correct my comment as it was completely wrong. Apologies for that.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> HBCC is inactive on yours. Also the 5664 is with watercooled unit.



Results with HBCC enabled. My unit is air cooled.


----------



## TrixX

Using the settings below combined with the 142% Power Play registry edit and the MSI LC BIOS, these are the sort of points I am getting on air in the 4K optimised. Oh and HBCC turned on too.
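For anyone curious, the "Power Play registry edit" mentioned here is normally done by importing a modified soft PowerPlay table into the display adapter's registry instance. A skeleton of what the .reg file looks like (the `0000` instance number varies per system, the binary table itself is card-specific and elided here, and the key/value names are from memory, so treat this as a template only and verify against a current guide):

```
Windows Registry Editor Version 5.00

; The adapter instance (0000, 0001, ...) varies per system; find your
; Vega's instance via Device Manager -> adapter -> Details -> Driver key.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
; Card-specific modified PowerPlay table goes here (binary blob elided).
"PP_PhmSoftPowerPlayTable"=hex:...
```

A reboot (or driver reload) is needed after importing before the new limits show up in Wattman.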


----------



## gupsterg

Quote:


> Originally Posted by *majestynl*
> 
> Didn't Check first time. But I'm installing 2nd rad (magicool
> 
> 
> 
> 
> 
> 
> 
> ) tonight hopefully. Probably I will remove the block and check that also...


Cheers!
Quote:


> Originally Posted by *Sufferage*
> 
> Yep, been lucky and got a molded die, bought the card mid august from a french retailer for just 507€, well invested money if you ask me, had lotsa fun already trying to figure out this often pretty strange behaving piece of hardware, so i think you'll love it too, just so much to mess around with there
> 
> 
> 
> 
> 
> 
> 
> 
> As for the temps, at idle it sits at exactly ambient temp now, after a few consecutive superposition 4k runs at 1742/1192mv/1100HBM core reaches 47°, HBM 49°
> 
> 
> 
> 
> 
> 
> 
> 
> ...and the stock cooler already has found it's place on my desk and awaits some power to make the LED glow again


Sweet! What's the hotspot temp like? Which TIM did you use?


----------



## spyshagg

Quote:


> Originally Posted by *PontiacGTX*
> 
> what version of MSI AB are you using


4.4.0 beta


----------



## madmanmarz

Give us some details! What card? What bios? Are you using GPU-z to monitor core/hotspot/hbm temps? What are your average clocks and voltage on gpu-z?

Also, any meaning behind the name 0451?
Quote:


> Originally Posted by *0451*
> 
> Why are my results so bad?


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> Using the settings below combined with the 142% Power Play registry edit and the MSI LC BIOS, these are the sort of points I am getting on air in the 4K optimised. Oh and HBCC turned on too.


Is that the same LC bios linked in the original post? And is it safe to flash an XFX card with an MSI bios?
Quote:


> Originally Posted by *madmanmarz*
> 
> Give us some details! What card? What bios? Are you using GPU-z to monitor core/hotspot/hbm temps? What are your average clocks and voltage on gpu-z?
> 
> Also any meaning behind the name 0451??


XFX card with Samsung HBM2, BIOS version 016.001.001.000. Core speed ~1360 at 78-80C, HBM 945MHz at 85C, hot spot 102-108, fan speed 2384 RPM. Voltage is all over the place, but the max I see is 1.0438. 0451 is a deep-cut Deus Ex reference.


----------



## madmanmarz

Quote:


> Originally Posted by *0451*
> 
> Is that the same LC bios linked in the original post? And is it safe to flash an XFX card with an MSI bios?
> XFX card with Samsung HBM2, BIOS version 016.001.001.000. Core speed ~1360 and 78-80 C, HBM 945mhz and 85 C, hot spot 102-108, fan speed 2384. Voltage all over the place, but the max I see is 1.0438. 0 451 is a deep-cut Deus Ex reference.


I'm not sure what the general recommendation is for a stock air-cooled card, but your temps are already pretty high, so I'm not sure it will help. The 64 BIOS will increase HBM voltage as well as give you a higher power limit, but by default it also increases core voltages quite a bit.

Try going into Wattman (or whatever OC app) and setting the power limit to +50%, then dropping your core voltage in P6/P7 to 1000mV; after that, see how much you can OC your HBM. Once that's figured out, play with your core frequency and see what you can get out of it. You should be able to get at least 1500MHz out of 1000mV (many get much more). Remember that what you enter into Wattman/the OC app may not be what you actually get: sometimes when you change your core voltage, your frequencies will jump around, so test each setting and confirm the changes in GPU-Z. If your temps are high, your frequencies will suffer, so first things first: get your temps in check. Reducing voltages as much as possible will probably help more than anything.
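There's simple physics behind the undervolt-first advice: dynamic power scales roughly with frequency times voltage squared, so a modest voltage drop buys a disproportionate power and heat saving at the same clock. Rough numbers (idealized CMOS scaling, not measured Vega data):

```python
# Rough dynamic-power model: P is proportional to f * V^2
# (idealized CMOS scaling; ignores leakage, so real gains will differ).
def relative_power(f_mhz: float, v: float, f_ref: float = 1600.0, v_ref: float = 1.2) -> float:
    """Power relative to a 1600MHz / 1.2V reference point."""
    return (f_mhz / f_ref) * (v / v_ref) ** 2

# Dropping P7 from 1.20V to 1.00V at the same 1600MHz clock:
print(f"{relative_power(1600, 1.0):.0%} of reference power")  # ~69%
```

A ~30% cut in dynamic power at the same clock is exactly why undervolted cards stop throttling and often end up faster.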


----------



## geriatricpollywog

Quote:


> Originally Posted by *madmanmarz*
> 
> I'm not sure what the general recommendation is for a stock air cooled card, but your temps are already pretty high so I'm not sure it will help. 64 bios will increase HBM voltage as well as give you higher power limit, but by default it also increases core voltages quite a bit.
> 
> Try going into wattman or whatever o/c app and setting the power limit to +50% and dropping your core voltage in p6/p7 to 1000mv, then you can try to see how much you can o/c your HBM. After that's figured out, play with your core frequency and see what you can get out of that. You should be able to get at least 1500mhz out of 1000mv (many get much more). Remember what you enter into wattman/oc app will vary. Sometimes when you change your core voltage, your frequencies will jump around, so you have to test each setting and make sure you confirm the changes in GPU-Z. If your temps are high, your frequencies will suffer, so I think first things first is trying to get your temps in check and reducing voltages as much as possible will probably help more than anything.




I have a Vega 64, not a 56. I was wondering if flashing to a watercooled BIOS would help. I already ordered a waterblock. I tried setting P6/P7 to 1000mv and HBM to 1000mhz. The core clock was 1530mhz for the first 10 seconds of the benchmark, but then tanked to 1150 for the remainder. HBM2 fell from 1000 to 800 mhz. My final score in 4K Optimized was 5200. I reset Global Wattman to default, ran the benchmark again, and my score was 5700.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *0451*
> 
> I have a Vega 64, not a 56. I was wondering if flashing to a watercooled BIOS would help. I already ordered a waterblock. I tried setting P6/P7 to 1000mv and HBM to 1000mhz. The core clock was 1530mhz for the first 10 seconds of the benchmark, but then tanked to 1150 for the remainder. HBM2 fell from 1000 to 800 mhz. My final score in 4K Optimized was 5200. I reset Global Wattman to default, ran the benchmark again, and my score was 5700.


That's way low... I'm reaching 6200 on a 56 with the 64 air BIOS.



Edit: pt+50 p7 [email protected] [email protected]


----------



## madmanmarz

Quote:


> Originally Posted by *0451*
> 
> I have a Vega 64, not a 56. I was wondering if flashing to a watercooled BIOS would help. I already ordered a waterblock. I tried setting P6/P7 to 1000mv and HBM to 1000mhz. The core clock was 1530mhz for the first 10 seconds of the benchmark, but then tanked to 1150 for the remainder. HBM2 fell from 1000 to 800 mhz. My final score in 4K Optimized was 5200. I reset Global Wattman to default, ran the benchmark again, and my score was 5700.


Won't really help at the moment. Just wait till you get that waterblock on there, currently your temps won't allow you to go any higher. Best thing you can do at the moment is figure out a way to get the temps down.


----------



## GroupB

I can't crack more than 5700 in 4K either, and mine is a 64 with a waterblock, even at 1100 HBM and plenty of core clock, so don't worry. There's something funky about the Superposition bench (probably affected by driver differences more than other benchmarks, or maybe the CPU, 4 vs 8 cores), because the same settings give me a 26,112 graphics score in Firestrike, and I think that's pretty good for a Firestrike graphics score.


----------



## BeetleatWar1977

I get only ~23k on the same settings, but I'm running into the CPU limit... my FX is too slow, even if I go all the way up to 4.9Ghz.


----------



## geriatricpollywog

Quote:


> Originally Posted by *GroupB*
> 
> I can't crack more than 5700 in 4K either, and mine is a 64 with a waterblock, even at 1100 HBM and plenty of core clock, so don't worry. There's something funky about the Superposition bench (probably affected by driver differences more than other benchmarks, or maybe the CPU, 4 vs 8 cores), because the same settings give me a 26,112 graphics score in Firestrike, and I think that's pretty good for a Firestrike graphics score.


In Fire Strike Extreme I am getting 9376 (graphics score 10381) and 17859 in Fire Strike (21937 graphics score) with stock card and balanced power plan.


----------



## majestynl

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers
> 
> 
> 
> 
> 
> 
> 
> .


I added the extra 240 MagiCool rad and bent some extra tubing, thanks.

GPU temps dropped by approx 10°C. Also checked my die, and it was unmolded!







So I gave it an extra TIM massage.









Today i played around with some new tests and below some scores and settings if somebody is interested:

*#Settings*

*P6:* 1672Mhz / 1152mv
*P7:* 1742Mhz / 1192mv
*HBM:* 1100Mhz / 900mv
*Bios:* LC
*Registry:* Powertables 150% PL / 400

*#Superposition*

*Max score:* 7013
*Max Speed:* 1690Mhz
*Max Temp:* 42c

*#Firestrike*

*Max. Graphics score:* 25,786
*Max. Speed:* 1717Mhz
*Max. Temp:* 41c

*Max. HBM Temp:* 46c
*Max.GPU Hotspot:* 61c

*#Screenies:*


----------



## diggiddi

Quote:


> Originally Posted by *ManofGod1000*
> 
> Strongly recommend you take a look at my post again.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I said I can from a 2 x Fury Non X. (As in, what my setup once was before I became a Vega convert.
> 
> 
> 
> 
> 
> 
> 
> ) Besides, most games do not support Crossfire or SLI anymore so I sold one card in August and the other last week.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I got a 3D Mark 11 graphics score of just over 20000 with the Fury Nitro I had and I am now getting close to 31000 with the Vega 56 at 50+ power limit, max temp of 80C and fan speed of a max of 4000 rpm's. (Oh, and the VRam overclocked to 950.) I do not hear the noise of the fan really at all but, I tend to notice clicking and other noises anyways than a regular rushing sound of a fan.
> 
> Edit: This is on a Ryzen 7 1700X at 3.8 Ghz, I just have not updated my rigs below yet.


Ok it was just bad math on my part, my bad

Quote:


> Originally Posted by *Tgrove*
> 
> I came from Fury X crossfire to a single Vega Liquid. I would not go back. The gaming experience is leagues better; framerates are not the whole picture. At this point I would say yes, 1 Vega is better than Fury crossfire.


Cool that's good to know


----------



## Soggysilicon

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> thats way low.... i´m reaching 6200 on a 56 with 64 Airbios.....
> 
> 
> 
> Edit: pt+50 p7 [email protected] [email protected]


That is a very good result; those 56s are strong cards. I think I am between 7-7.1k on my 64 air with the LC BIOS and my daily driver settings... I have never quite cracked 7.2k. There are some Frontier Editions which have broken into the 7.3s, but that stands to reason, as SP4k _*REALLY*_ likes fast, furious, and free memory.

Additionally, the Fall Creators Update for winblow$ 10 can yield another 0.5% in that bench.


----------



## PontiacGTX

Quote:


> Originally Posted by *spyshagg*
> 
> 4.4.0 beta


Which of the beta versions?


----------



## Sufferage

Quote:


> Originally Posted by *tarot*
> 
> 
> I might have missed it but what bios and settings are you using
> 
> 
> 
> 
> 
> 
> 
> I too have the molded die and temps are ok but clocking this thing is a rubiks cube


Running the AC 8730 BIOS from TPU at the moment, power limit +90. Yeah, that thing sure keeps me scratching my head; it's a tricky bastard to figure out.








Lower the vcore at a given frequency, without ever reaching the power limit, and scores will actually be higher... it just doesn't quite make sense. So, lots to experiment with; love it.


----------



## Sufferage

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers
> 
> 
> 
> 
> 
> 
> 
> .
> Sweet
> 
> 
> 
> 
> 
> 
> 
> , what's hotspot temp like? which TIM did you use?


The hotspot is still running a bit high, up to 75°. I ran out of Kryonaut the other day, so I had to use the TIM that came with the AIO, which seems to be GC-Extreme; that sure is OK too... Still, once I'm restocked on Kryonaut, I'll see if I can get the hotspot temp a tad lower.


----------



## tarot

Quote:


> Originally Posted by *Sufferage*
> 
> hotspot is running a bit high still, up to 75°, ran out of kryonaut the other day, so i had to use the TIM that came with the AIO, seems to be GC-extreme, which sure is ok too...yet still, once i'm restocked on kryonaut, i'll see if i can get the hotspot temp a tad lower


what are you using to stress the hotspot?


----------



## cplifj

I think it might be time to start modding these babies in between the graphics card and its PCI-E connectors:

One for each connector, of course.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Is that the same LC bios linked in the original post? And is it safe to flash an XFX card with an MSI bios?
> XFX card with Samsung HBM2, BIOS version 016.001.001.000. Core speed ~1360 and 78-80 C, HBM 945mhz and 85 C, hot spot 102-108, fan speed 2384. Voltage all over the place, but the max I see is 1.0438. 0 451 is a deep-cut Deus Ex reference.


I've got an HIS card and run the MSI BIOS; all reference cards seem fine flashing with that MSI 8774 one. The Powercolor 8774 BIOS has caused issues on some cards. Not always needed, and it runs cooler due to a 70C max temp target.

Though it probably won't make a huge difference yet. First up, ignore the Balanced/Turbo settings of the card; they are useless. Set it to custom and use Wattman or OverdriveNTool and work out your low limit for the HBM voltage (that's the minimum voltage supplied to the GPU core, not just the HBM). Then make the P6 state the same as the HBM voltage and the P7 state about 20-25mv higher until you find a stable minimum voltage. Leave clocks stock while doing this.
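The tuning rule above can be sketched as a tiny helper. This just models the behaviour as described in the post (the HBM voltage acting as a floor for the core rail), not any documented AMD interface, and the 950mv figure is only an example:

```python
# Sketch of the procedure above: the "HBM voltage" field behaves as a
# floor for the core rail, so a P-state set below it is wasted, P6 should
# sit at the floor, and P7 a small step (20-25 mV) above it.
# Models the forum post's description, not a documented AMD API.

def effective_core_mv(pstate_mv, hbm_floor_mv):
    """Core voltage actually delivered when a P-state requests pstate_mv."""
    return max(pstate_mv, hbm_floor_mv)

def suggest_pstates(hbm_floor_mv, p7_step_mv=25):
    """P6 at the floor, P7 a small step above, per the procedure above."""
    return {"P6": hbm_floor_mv, "P7": hbm_floor_mv + p7_step_mv}

print(suggest_pstates(950))         # {'P6': 950, 'P7': 975}
print(effective_core_mv(900, 950))  # 950: requesting below the floor is ignored
```

In other words, setting P6 lower than the HBM voltage buys you nothing, which is why the search for the minimum stable voltage starts from the HBM field.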


----------



## madmanmarz

What is everyone setting their HBCC size to? Has there been any benchmarking or testing with different sizes?


----------



## asder00

Quote:


> Originally Posted by *TrixX*
> 
> I've got an HIS card and run the MSI BIOS, all reference cards seem fine flashing with that MSI 8774 one. The Powercolor 8774 BIOS has caused issues on some cards. Not always needed and runs cooler due to 70C max temp.


MSI 8774 and Powercolor 8774 are 100% the same.


----------



## Chaoz

Quote:


> Originally Posted by *madmanmarz*
> 
> What is everyone setting their HBCC size to? Has there been any benchmarking or testing with different sizes?


Got mine on ±11GB. Had ±16GB before but didn't seem to make that much difference.


----------



## madmanmarz

Finally got over 23k in Firestrike! Posting some scores for reference.

Firestrike:
17430 score, 23459 graphics @ 1640/1100mhz, 1200mv

Been having trouble with superposition because the voltage fluctuates so much!

Superposition:

1080p extreme
4639 @ 1600/1100, 1150mv
4751 @ 1630/1100, 1200mv

4k
5994 @ 1560/1100, 1050mv (my 24/7 clock, goes up to 1585 in other apps)


----------



## TrixX

Quote:


> Originally Posted by *asder00*
> 
> MSI 8774 and Powercolor 8774 are 100% the same.


Good to know, just saw some having issues with the Powercolor on the OCUK forums.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *Soggysilicon*
> 
> That is a very good result; those 56s are strong cards. I think I am between 7-7.1k on my 64 air with the LC BIOS and my daily driver settings... I have never quite cracked 7.2k. There are some Frontier Editions which have broken into the 7.3s, but that stands to reason, as SP4k _*REALLY*_ likes fast, furious, and free memory.
> 
> Additionally, the Fall Creators Update for winblow$ 10 can yield another 0.5% in that bench.


I can get the clocks about 100MHz higher @ 1.2v, but no chance to cool it under air. About 350W ASIC power...


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> I've got an HIS card and run the MSI BIOS; all reference cards seem fine flashing with that MSI 8774 one. The Powercolor 8774 BIOS has caused issues on some cards. Not always needed, and it runs cooler due to a 70C max temp target.
> 
> Though it probably won't make a huge difference yet. First up, ignore the Balanced/Turbo settings of the card; they are useless. Set it to custom and use Wattman or OverdriveNTool and work out your low limit for the HBM voltage (that's the minimum voltage supplied to the GPU core, not just the HBM). Then make the P6 state the same as the HBM voltage and the P7 state about 20-25mv higher until you find a stable minimum voltage. Leave clocks stock while doing this.




You, Sir, are The Man. +Rep.


----------



## reptilee

Can anyone help me with this:

I have a Vega 64 (1600mhz/1050mhz, 1100mV/1050mV, PL +50) and it's running smoothly in benchmarks (24k @ FS), but in some games the core & mem drop down to ~1200/800mhz.

Temps are OK, ~70°C.

Also, Wattman only keeps the voltages after a reboot.

1600 @ 3.9ghz and 16gb @ 2800mhz.


----------



## Sufferage

Quote:


> Originally Posted by *tarot*
> 
> what are you using to stress the hotspot?


I just ran the Time Spy Extreme stress test plus a few loops of Superposition 8K.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Sufferage*
> 
> I just ran timespy extreme stresstest plus a few loops superposition 8k.






Well, that is just evil. I thought the Firestrike stress test was bad enough.

I'll give that a shot and see how I go.

OK, I am really starting to hate this thing.

I change one thing from the settings that are stable, and boom.
1682/1150, 1542/1075, 1100/1075, 50 percent power limit:
I can do that all day long with no issues, but if I try going up in speed or down in voltage I get all kinds of crap.

Anywho, HWiNFO now includes hotspot, so no need to have 2 apps running.

Now, I know one run isn't enough to fully load it, but I'm pretty happy with the results, especially seeing 312 watts being used and the temps it is throwing out.


----------



## spyshagg

Quote:


> Originally Posted by *0451*
> 
> 
> 
> You, Sir, are The Man. +Rep.


Well, congratulations, but how can these settings be stable? Do Vega cards vary this much in the silicon lottery? I can't get my V64 to be game-stable above 1670mhz/1050mv* with HBM at 1050mhz/1050mv. And you guys (I've seen others in this forum) are doing away with anything over 900mv for both GPU and HBM while clocking higher.

* under full load, this drops to ~1600mhz


----------



## spyshagg

Alphacool NexXxos GPX

or

EK-FC

or

XSPC Razor

?


----------



## madmanmarz

Quote:


> Originally Posted by *reptilee*
> 
> Can anyone help me with this:
> 
> I have a Vega 64 (1600mhz/1050mhz, 1100mV/1050mV, PL +50) and it's running smoothly in benchmarks (24k @ FS), but in some games the core & mem drop down to ~1200/800mhz.
> 
> Temps are OK, ~70°C.
> 
> Also, Wattman only keeps the voltages after a reboot.
> 
> 1600 @ 3.9ghz and 16gb @ 2800mhz.


You might be throttling from temps, but I also had this issue in one game and was told to use ClockBlocker to prevent it. Many people have reported the clocks/voltages not sticking in Wattman - make sure you're using the latest drivers, otherwise try OverdriveNTool.

Quote:


> Originally Posted by *spyshagg*
> 
> Well, congratulations, but how can these settings be stable? Do Vega cards vary this much in silicon lottery? I can't get my V64 to be game stable above 1670mhz/1050mv* HBM 1050mhz/1050mv. And you guys (seen others in this forum) are doing away with >900mv for both gpu and hbm while clocking higher.
> 
> * under full load, this drops to ~1600mhz


Quote:


> Originally Posted by *spyshagg*
> 
> Alphacool NexXxos GPX
> 
> or
> 
> EK-FC
> 
> or
> 
> XSPC Razor
> 
> ?


Some people are approaching 1800mhz and others like me can barely hit 1650, so yes, there does appear to be quite a bit of a lottery on these things. Makes me wonder if this is why AMD set the voltage so high: because of the variance in how they clock.

I have the Alphacool, and I got it because I want to be able to upgrade my card down the line, but honestly I'm not impressed with the hotspot temp, nor the fact that it blocks the BIOS switch, the LED switches and the 4-pin fan header. Frankly, if you have the money and are planning to keep the card for a while, just get a regular full-cover block.


----------



## geriatricpollywog

Quote:


> Originally Posted by *spyshagg*
> 
> Well, congratulations, but how can these settings be stable? Do Vega cards vary this much in silicon lottery? I can't get my V64 to be game stable above 1670mhz/1050mv* HBM 1050mhz/1050mv. And you guys (seen others in this forum) are doing away with >900mv for both gpu and hbm while clocking higher.
> 
> * under full load, this drops to ~1600mhz


My previous result was 6075 with the same settings, but +1% core speed rather than +4%. Kind of unbelievable how an extra couple core mhz can net over 400 points in score. There was slight artifacting in this run. Buildzoid mentioned that at high clock speeds (1800+ mhz) Vega will stop drawing polygons that it's supposed to draw, increasing 3DMark scores. I may be seeing that phenomenon at 1500+ core. I just ordered an EK-FC and my cooling system is bomb-diggity, so we'll see if core temps of 40C can push this thing harder.

As for you, have you tried setting the voltage to 800mv at stock clocks? Also, do you have Samsung or Hynix memory? Hynix may not make full contact with the heatsink.


----------



## gupsterg

Quote:


> Originally Posted by *majestynl*
> 
> I ad the extra 240 MagiCool rad and bend some extra tubing, thanks
> 
> 
> 
> 
> 
> 
> 
> GPU temps lowered approx with 10c. Also Checked my DIE and it was unmolded!
> 
> 
> 
> 
> 
> 
> 
> So i gave it an extra TIM Massage
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Today i played around with some new tests and below some scores and settings if somebody is interested:
> 
> *#Settings*
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> *P6:* 1672Mhz / 1152mv
> *P7:* 1742Mhz / 1192mv
> *HBM:* 1100Mhz / 900mv
> *Bios:* LC
> *Registry:* Powertables 150% PL / 400
> 
> *#Superposition*
> 
> *Max score:* 7013
> *Max Speed:* 1690Mhz
> *Max Temp:* 42c
> 
> *#Firestrike*
> 
> *Max. Graphic score:* 25.786
> *Max. Speed:* 1717Mhz
> *Max. Temp:* 41c
> 
> *Max. HBM Temp:* 46c
> *Max.GPU Hotspot:* 61c
> 
> 
> *#Screenies:*
> 
> 
> 
> Spoiler: Warning: Spoiler!


Sweet







, you have some nice temps







, besides clocks. Even the hotspot temp is lower than some I have seen. Considering it's an unmolded die (members' shares seem to suggest that a molded die has lower temps/a tighter delta), it seems you have a good mount/TIM application/setup







.

Yesterday I posted a SOCCLK powerplay mod; 2 members have tried it and gained HBM clocks past 1100MHz. It seems SOCCLK determines the ceiling for HBM clock gains, so perhaps have a play, chap







.
Quote:


> Originally Posted by *Sufferage*
> 
> hotspot is running a bit high still, up to 75°, ran out of kryonaut the other day, so i had to use the TIM that came with the AIO, seems to be GC-extreme, which sure is ok too...yet still, once i'm restocked on kryonaut, i'll see if i can get the hotspot temp a tad lower


Thank you for share of info







. Surprised at the hotspot temp you state, as you have a molded die and your GPU/HBM temps were so good. Look forward to reading if you see an improvement with a reapplication of TIM.


----------



## cplifj

I'd like to know how much other Vega owners get reported as hardware-reserved memory for their Vega in the Fall Creators Update task manager.

Here is mine: it just went up to 132 MB after being at 68.3 MB for a week. Here the **** goes again...


----------



## Trender07

Quote:


> Originally Posted by *cplifj*
> 
> i'd like to know how much other vega owners get reported as hardware reserved memory for their vega in fall creators update taskmanager.
> 
> here is mine, it just upped to 132 MB after being at 68.3 MB for a week. Here the **** goes again...


Mine is 68.3 rn


----------



## pmc25

Quote:


> Originally Posted by *asder00*
> 
> MSI 8774 and Powercolor 8774 are 100% the same.


I have 8706 on mine (what it came with).

What's the difference?


----------



## asder00

Quote:


> Originally Posted by *pmc25*
> 
> I have 8706 on mine (what it came with).
> 
> What's the difference?


Only AMD knows; probably small fixes/optimizations.


----------



## Sufferage

Quote:


> Originally Posted by *tarot*


Wow, now that's some really nice temps








You're running the EK block, I suppose? Guess the Alphacool won't be able to match that, no matter how often I'm gonna reseat the thing; there's just too little direct water contact to cool the core, HBM and VRMs that effectively.








Here are some settings that work pretty well with my card: 1637/1100 1722/1160, and 1637/1100 1732/1184, with 1105/1050 HBM for both. These for now seem to run everything I throw at 'em without getting unstable...

Quote:


> Originally Posted by *gupsterg*
> 
> Thank you for share of info
> 
> 
> 
> 
> 
> 
> 
> . Surprised at the hotspot temp you state, as you have a molded die and your GPU/HBM temps were so good. Look forward to reading if you see an improvement with a reapplication of TIM.


Yeah, I was a little stunned too... thought I did a pretty good job with the TIM, so the high delta really did surprise me a bit. Will keep you updated once that Kryonaut is here and I can motivate myself to rip the thing out of the case one more time.








I'm really not all too concerned about that hotspot temp anyway, as long as it stays reasonably far away from the 105° limit.


----------



## majestynl

Quote:


> Originally Posted by *gupsterg*
> 
> Sweet
> 
> 
> 
> 
> 
> 
> 
> , you have some nice temps
> 
> 
> 
> 
> 
> 
> 
> , besides clocks. Even hotspot temp is lower than some I have seen. Considering it's unmolded die, members shares seem to suggest that molded die has lower temps/ tighter delta. Seems you have good mount/TIM application/setup
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Yesterday I posted a SOCCLK powerplay mod, 2 members have tried it and gained HBM clock past 1100MHz. It seems SOCCLK determines ceiling for HBM clock gain, perhaps have a play chap


Thanks mate! Yeah, was surprised too..
The very first time, I didn't spend the time and just quickly installed the block with the X-method. Normally I always carefully spread the TIM out.

And yes, just saw your posts on the other thread. Amazing work!! Will definitely try it out tomorrow. Keep you updated...


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Sufferage*
> 
> Wow, now that's some really nice temps
> 
> 
> 
> 
> 
> 
> 
> 
> You're running the EK block i suppose ? Guess the alphacool won't be able to match that, no matter how often i'm gonna reseat that thing, just too little direct water contact to cool core, HBM and VRMs that effectively
> 
> 
> 
> 
> 
> 
> 
> 
> Here's some settings that work pretty well with my card: 1637/1100 1722/1160 and 1637/1100 1732/1184 with 1105/1050HBM for both. These for now seem to run everything i throw at 'em without getting unstable...
> Yeah, i was a little stunned too...thought i did a pretty good job with the TIM, so the high delta really did surprise me a bit. Will keep you updated once that kryonaut is here and i can motivate myself to rip that thing out of the case one more time
> 
> 
> 
> 
> 
> 
> 
> 
> I'm really not all too concerned about that hotspot temp anyway, as long as it stays reasonable far away from the 105° limit






Thanks, and yes, the EK block, but on its own loop with a cheap EK pump and a 280 rad with 2x 1600rpm Vardar fans running at about 1300 (I have no idea how to get them to ramp with GPU heat, so I threw them on the CPU... so if the CPU heats up, the VGA fans ramp up... yeah, dumb I know, but hey).

Now, those settings: are you using one of the LC BIOSes? Last time I tried that, it would overclock the thing to stupid speeds and explode like a *The Rock* movie.

And also, what power limit setting do you use: 0, 25, 50...?


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> Hello Guys, it is a pleasure to be able to talk with you.
> 
> Unfortunately I came across some issues with my VEGA Frontier Edition (Air Cooled) after updating to the Windows Fall Creators Update.
> 
> There are two new drivers for the FE right now: the Fall Creators Update beta 17.40, and the WHQL Pro Crimson 17.10.
> 
> The behavior happens with both drivers: under any GPU load scenario, my card crashes immediately. The GPU is at stock settings; I haven't made any changes to it, besides increasing the fan RPM to prevent overheating.
> 
> I have been trying to run SpecViewPerf ever since this afternoon, but no luck so far. I hope you are able to assist me, Thanks!
> 
> System:
> 
> Ryzen ThreadRipper 1950X - Stock
> ASUS Zenith Extreme
> VEGA Frontier Edition - Air Cooled
> Thermaltake Toughpower 1050W Full Modular 80+ Gold.


Did you solve your issue?

Quote:


> Originally Posted by *Naeem*
> 
> here is my vega 64 liquid


Added!

Owners list updated! If you're not on the list for some reason, PM me or post here and I'll try to catch up!


----------



## Reikoji

Does anyone know if the cold plate on the LC Vega 64 is aluminum or copper?


----------



## tarot

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.10.2-Release-Notes.aspx

An update for the Fall Creators Update users; hopefully this should help. Time to do another clean driver install.









seems to also have some gnarly features










Spoiler: Warning: Spoiler!



Quote:


> Radeon™ Software Crimson ReLive Edition 17.10.2 Release Notes
> 
> Article Number: RN-WIN-RADEONCRIMSONRELIVE-17.10.2
> 
> Radeon™ Software Crimson ReLive Edition is AMD's advanced graphics software for enabling high-performance gaming and engaging VR experiences. Create, capture, and share your remarkable moments. Effortlessly boost performance and efficiency. Experience Radeon Software with industry-leading user satisfaction, rigorously-tested stability, comprehensive certification, and more.
> 
> Radeon Software Crimson ReLive Edition 17.10.2 Highlights
> 
> Support For
> ◾Windows®10 Fall Creators Update
> ◾This release provides initial support for the Windows®10 Fall Creators Update. For more information please visit here.
> ◾Wolfenstein™ II: The New Colossus
> ◾Up to 8% faster performance on Radeon™ RX Vega56 (8GB) graphics than with Radeon Software Crimson ReLive Edition 17.10.1 at 2560x1440. (RS-188)
> ◾Up to 4% faster performance on Radeon RX 580 (8GB) graphics card than with Radeon Software Crimson ReLive Edition 17.10.1 at 2560x1440. (RS-189)
> ◾Destiny 2™
> ◾Up to 43% faster performance on Radeon™ RX Vega56 (8GB) graphics than with Radeon Software Crimson ReLive Edition 17.10.1 at 2560x1440.(RS-184)
> ◾Up to 50% faster performance on Radeon RX 580 (8GB) graphics than with Radeon Software Crimson ReLive Edition 17.10.1 at 2560x1440.(RS-185)
> ◾Assassin's Creed™: Origins
> ◾Up to 16% faster performance on Radeon™ RX Vega56 (8GB) graphics than with Radeon Software Crimson ReLive Edition 17.10.1 at 2560x1440.(RS-186)
> ◾Up to 13% faster performance on Radeon RX 580(8GB) graphics than with Radeon Software Crimson ReLive Edition 17.10.1 at 1920x1080. (RS-187)
> ◾GPU Workload
> ◾A new toggle in Radeon Settings that can be found under the "Gaming", "Global Settings" options. This toggle will allow you to switch optimization between graphics or compute workloads on select Radeon RX 500, Radeon RX 400, Radeon R9 390, Radeon R9 380, Radeon R9 290 and Radeon R9 285 series graphics products.
> ◾Compute Support
> ◾Radeon Software now supports compute workloads for up to 12 installed Radeon RX 400, Radeon RX 500 or Radeon RX Vega series graphics products on Windows®10 system configurations.
> 
> Fixed Issues
> ◾Radeon Software may not appear in the uninstall options under "Apps and Features" on Windows® operating systems after a Radeon Software upgrade.
> ◾Minor corruption may appear in PLAYERUNKNOWN'S BATTLEGROUNDS™ in some game locations when using Ultra graphics settings in game.
> ◾Radeon Wattman may fail to apply user adjusted voltage values on certain configurations.
> ◾AMD XConnect™ Technology enabled system configurations may not be detected when plugged in or connected to a system after being previously unplugged during system sleep or hibernation.
> ◾Hearts of Iron™ IV may experience a crash or system hang during some scenario gameplay.
> ◾Radeon Settings gaming tab may not automatically populate games detected on the users system.
> 
> Known Issues
> ◾A random system hang may be experienced after extended periods of use on system configurations using 12 GPU's for compute workloads.
> ◾Assassin's Creed™: Origins may experience an intermittent application or system hang when playing on Windows®7 system configurations.
> ◾The GPU Workload feature may cause a system hang when switching to Compute while AMD CrossFire is enabled. A workaround is to disable AMD CrossFire before switching the toggle to Compute workloads.
> ◾Resizing the Radeon Settings window may cause the user interface to stutter or exhibit corruption temporarily.
> ◾Corruption may be experienced in Forza Motorsport™ 7 on some HDR displays with HDR enabled in game.
> ◾Radeon WattMan reset and restore factory default options may not reset graphics or memory clocks and unstable Radeon WattMan profiles may not be restored to default after a system hang.
> ◾OverWatch™ may experience a random or intermittent hang on some system configurations. Disabling Radeon ReLive as a temporary workaround may resolve the issue.
> ◾When recording with Radeon ReLive on Radeon RX Vega Series graphics products GPU usage and clocks may remain in high states. A workaround is to disable and then re-enable Radeon ReLive.
> 
> Footnotes
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX Vega56, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Destiny 2 on the highest preset. PC manufacturers may vary configurations, yielding different results. At 2560x1440, the Radeon RX Vega56 scored 52.0 FPS with Radeon Software 17.10.1 whereas the Radeon RX Vega56 scored 74.6 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-184
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX 580, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Destiny 2 on the highest preset. PC manufacturers may vary configurations, yielding different results. At 2560x1440, the Radeon RX 580 scored 34.7 FPS with Radeon Software 17.10.1 whereas the Radeon RX 580 scored 52.2 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-185
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX Vega56, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Assassins Creed: Origins on the highest preset. PC manufacturers may vary configurations, yielding different results. At 2560x1440, the Radeon RX Vega56 scored 44 FPS with Radeon Software 17.10.1 whereas the Radeon RX Vega56 scored 51 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-186
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX 580, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Assassins Creed: Origins on the highest preset. PC manufacturers may vary configurations, yielding different results. At 1920x1080, the Radeon RX 580 scored 45 FPS with Radeon Software 17.10.1 whereas the Radeon RX 580 scored 51 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-187
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX Vega56, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Wolfenstein II on the ultra nightmare preset. PC manufacturers may vary configurations, yielding different results. At 2560x1440, the Radeon RX Vega56 scored 102.8 FPS with Radeon Software 17.10.1 whereas the Radeon RX Vega56 scored 110.7 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-188
> ◾Testing conducted by AMD Performance Labs as of October 20th, 2017 on the 8GB Radeon RX 580, on a test system comprising of Intel i7 7700X CPU (4.2 GHz), 16GB DDR4-3000 Mhz system memory, and Windows 10 x64 using the game Wolfenstein II on the ultra nightmare preset. PC manufacturers may vary configurations, yielding different results. At 2560x1440, the Radeon RX 580 scored 74 FPS with Radeon Software 17.10.1 whereas the Radeon RX 580 scored 77.3 FPS with Radeon Software 17.10.2. Performance may vary based on use of latest drivers. RS-189


----------



## pmc25

Just upgraded to 17.10.2 and also changed my vBIOS from 8706 to 8730.

Something changed ... I can now run HBM2 stable at 1130MHz. Haven't pushed any higher yet.

I suspect AMD may have allowed the SoC (IF?) clock speed to go higher than the previous limit of 1107MHz (which made it crash whenever you went above 1105MHz).

If others can replicate this, it looks like the recently discovered PowerPlay table mod is now redundant.

Edit: Crashed BF1 and Radeon Settings and caused a driver reset at 1160MHz. Not sure if 1155MHz is stable; so far 1150MHz is stable. Definitely looks like they 'modded' the PowerPlay table themselves. Before this driver (and BIOS flash), the PC instantly crashed as soon as I applied 1110MHz.


----------



## Reikoji

No improvements for Vega 64?!

What?!?!


----------



## pmc25

Quote:


> Originally Posted by *Reikoji*
> 
> No improvements for Vega 64?!
> 
> What?!?!


See my post above ... that's a pretty big change.


----------



## Reikoji

Quote:


> Originally Posted by *pmc25*
> 
> See my post above ... that's a pretty big change.


I'm joking :3. I see they only tested with Vega 56, so they can only attest to performance gains on Vega 56.







The 64 probably has even bigger gains. Or smaller ones.


----------



## tarot

http://www.3dmark.com/fs/13942451

Works for me at base clocks.









Exact same settings and a nice increase. I'm going to run a few more and see what happens.


----------



## madmanmarz

Quote:


> Originally Posted by *Sufferage*
> 
> Wow, now that's some really nice temps
> 
> 
> 
> 
> 
> 
> 
> 
> ...Guess the alphacool won't be able to match that, no matter how often i'm gonna reseat that thing, just too little direct water contact to cool core, HBM and VRMs that effectively
> 
> 
> 
> 
> 
> 
> 
> ...
> I'm really not all too concerned about that hotspot temp anyway, as long as it stays reasonable far away from the 105° limit


I wouldn't go THAT far. On my Nexxxos GPX block I'm seeing about 30s for the core and 40s for HBM no matter what voltages or settings I have under full load, but my hotspot will be around high 50s-70 during gaming, or 70-100 during benchmarks, depending on clocks, voltages, and of course which bench. I initially used a large dot in the middle with small spots around it, and that didn't work so great, so I used the spread method and that worked out better.



This was just now while playing Wildlands; my D5 pump is set to 1/5 and my fans are all barely spinning.
Quote:


> Originally Posted by *pmc25*
> 
> Something changed ... I can now do 1130Mhz stable HBM2. Not pushed any higher yet.


Yes it appears so! Just did 1125 and 1150 with no issues!

This is starting to get crazy


1200 seemed almost stable but crashed at the end of Superposition 4K. Ran it @ 1560/1150 and got 6336, compared to my old score of 5994 @ 1560/1100!!


----------



## Kyozon

Is 17.10.2 supported by Vega Frontier? Those are some interesting results on the HBM clocks!


----------



## nicodemus

Hey all.

Anyone here playing Guild Wars 2 with Vega?

I'm getting some very unusual performance with it and I could really benefit from comparing performance with other Vega users. If anyone here could chime in and let me know your performance experience with the game, it'd be most appreciated.

My issues are recounted in this Reddit thread, but feel free to just reply here if you want:

https://www.reddit.com/r/78aaye/any_rx_vega_users_how_is_it_performing/

I just really need a comparison with other Vega users.

Thanks!


----------



## cplifj

After a thorough CMOS reset, the normal value of hardware-reserved memory comes down to the following:



This seems more like normal: 4.3 MB. Now I wonder how long it will take before it starts rising again.

And more importantly, the REASON why it happens... I'm feeling hacked and rootkitted by several a-holes out there.

The problem has been going on for years now, with different cards. It stays low for an undefined period, after which it suddenly rises again (even doubles) on a boot or reboot.

Very safe, this UEFI crapbox. More like a tool to make sure it can be hacked, like the Intel Management Engine... same crap.


----------



## madmanmarz

So while Superposition got a nice bump at lower clocks, my Firestrike only went up from 23459 to 23683 going from 1100MHz HBM to 1175. I think I'll keep it at 1150 for now just to play it safe. I swear I smelled some burnt electronics at 1200 HBM after Superposition, but everything seems to be fine.

EDIT - Wasn't stable in gaming, maybe 1125 HBM, who knows, but it seems the real boost was probably that 300-point boost from HBCC. Previously I was turning it down to 11600, which I think disabled it in a way - now I can't set it lower than 11787 and suddenly the score went up 300 points.


----------



## nicodemus

Quote:


> Originally Posted by *madmanmarz*
> 
> EDIT - Wasn't stable in gaming, maybe 1125HBM, who knows


I was able to stabilize at 1145, just FYI.

Thanks for the info!


----------



## astrixx

Quote:


> Originally Posted by *astrixx*
> 
> Picked up myself a MSI RX Vega 64 Wave!


I ended up RMAing my first RX Vega as it would crash in Turbo or Custom straight away.

My replacement is running great and can run turbo and custom no problem.

http://www.3dmark.com/fs/13894802


----------



## Sufferage

Quote:


> Originally Posted by *tarot*
> 
> 
> thanks and yes the ek block but on its own loop with a cheap ek pump and a 280 rad with 2 1600 varder fans running at about 1300(I have no idea how to get them to ramp to the gpu heat so I through them on the cpu...so if the cpu heats up the vga fans ramp up...yeah dumb I know but hey
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> now those settings are you using one of the LC bioses? last time I tried that it would over clock the thing to stupid speeds and explode like a *the rock* movie
> 
> 
> 
> 
> 
> 
> 
> 
> and also what power limit setting do you use 0 50 25...


Same problem here with the fans; I tied 'em to the MB sensor, ramping between 800-1100 RPM, and got two case fans blowing from the other side, so a push-pull config.
No, still using the 8730 AC BIOS with these settings, power limit 95; the GPU pulls up to 370W








Quote:


> Originally Posted by *pmc25*
> 
> Just upgraded to 17.10.2 and also changed my vBIOS from 8706 to 8730.
> 
> Something changed ... I can now do 1130Mhz stable HBM2. Not pushed any higher yet.
> 
> I suspect AMD may have allowed the SoC (IF?) clock speed to go higher than the previous limit of 1107Mhz (and thus crash whenever you went above 1105Mhz).
> 
> If others can replicate this, it looks like the PowerPlay table mod just discovered is now redundant.


This was already possible in the previous FCU driver; I ran 1115 during some benches with these...
Quote:


> Originally Posted by *madmanmarz*
> 
> I wouldn't go THAT far. On my Nexxxos GPX block I'm seeing about 30's for core and 40's for HBM no matter what voltages or settings I have under full load, but my hotspot will be around high 50's-70 during gaming or 70-100 during benchmarks depending on clocks and voltages and of course which bench. I initially used a large dot in the middle and small spots around it and that didn't work so great so I used the spread method and that worked out better.


Now that's one mighty delta








So I guess I can spare myself the hassle of reseating the cooler. During gaming my hotspot usually has a delta of around 15° to the HBM temps, which mostly remain in the low to mid 40s; I've never seen the hotspot rise even near 100° at 1737/1200mv








I'm getting the impression that hotspot temp might just vary between chips, and nothing much can be done about that, really...
I too did a nice, even spread with the TIM, just as I always do; in my experience no other method can match that if done right.


----------



## astrixx

Anyone able to use HWiNFO64 Beta v5.59-3270 and RTSS 7.0.0 Beta30 for an overlay and not get stutters every few seconds in 3D rendering?

MSI Afterburner and RTSS work fine, but HWiNFO64 stutters Firestrike and whatever else is running.

Anyone else got their Vega 64 to work with HWiNFO64 and RTSS?

Windows 10 64bit Creators Update 1703.

Running 17.10.1.

It only updates every few seconds, at the moment it freezes for a second.


----------



## Naeem

Calling all Vega 64 Liquid owners: who is the maker of your HBM2 chip?


----------



## elox

As far as I know there are no 64s with Hynix memory at the moment.


----------



## madmanmarz

Quote:


> Originally Posted by *Sufferage*
> 
> Getting the impression that hotspot temp might just vary for different chips where nothing much can be done about that really...
> I too did a nice, even spread with the TIM, just as i always do, no other method can match that if done right from my experience.


I agree; I mean, my temps barely hit 40 otherwise, and I'm seeing some air-cooled cards that perform well too. I even have the molded die and Samsung HBM. Plus I've been water-cooling my graphics cards for 10 years, so this is nothing new to me.

Thank God the burning smell was my audio receiver. I'll play with HBM speed again later.


----------



## Reikoji

Quote:


> Originally Posted by *pmc25*
> 
> Just upgraded to 17.10.2 and also changed my vBIOS from 8706 to 8730.
> 
> Something changed ... I can now do 1130Mhz stable HBM2. Not pushed any higher yet.
> 
> I suspect AMD may have allowed the SoC (IF?) clock speed to go higher than the previous limit of 1107Mhz (and thus crash whenever you went above 1105Mhz).
> 
> If others can replicate this, it looks like the PowerPlay table mod just discovered is now redundant.
> 
> Edit: Crashed BF1 and Radeon Settings and caused driver reset at 1160Mhz. Not sure if 1155Mhz is stable. So far 1150Mhz is stable. Definitely looks like they 'modded' the PowerPlay table themselves. Before this driver (and BIOS flash), the PC instantly crashed as soon as I applied 1110Mhz.


The HWiNFO beta update is showing the SOC speed, and it is indeed going past 1107 if it is reading correctly - showing 1199MHz.

They certainly did something to the PowerPlay table for more power, because my GPU hotspot is burning now: 91c

Survived a Superposition run at 1160, but saw many glitches


----------



## kundica

Quote:


> Originally Posted by *Reikoji*
> 
> HWinfo beta update is showing the soc speed, and it is indeed going past 1107 if it is reading correctly.
> 
> They certainly did something to the powerplay table for more power, because my GPU hotspot is burning now.


SOC is basically doubling from idle on the new driver; mine maxes at 1200.

I also want to comment on the hotspot since there's some discussion here about it. I have a molded die with an EK block, and my hotspot hits low to mid 60s under load, with my card capping out at 40 degrees. I've replaced the TIM several times (which is a pain in the ass on a full loop) but I always get the same result. The only time it changed was the first reapplication, when I switched from the included EK TIM to Kryonaut: it went from mid 70s to mid 60s under heavy load. My max core temp went down from 43-44 to 40 with Kryonaut as well. I've tried various methods of application but it doesn't seem to matter. At this point I don't believe I will be able to get it lower.


----------



## astrixx

Quote:


> Originally Posted by *astrixx*
> 
> Anyone able to use HWINFO64 Beta v5.59-3270 and RTSS 7.0.0 Beta30 for overlay and not get stutters every few seconds in 3D rendering.
> 
> MSI afterburner and RTSS works fine but HWINFO64 stutters firestrike and whatever is running.
> 
> Anyone else got their Vega 64 to work with HWINFO64 and RTSS.
> 
> Windows 10 64bit Creators Update 1703.
> 
> Running 17.10.1
> 
> It only updates every few seconds at the time it freezes for a second.


Turns out it was the RX Vega GPU VRM voltage sensor. GPU [#0]: AMD Radeon RX Vega 64 Liquid Cooling: uPI uP6266. After disabling its monitoring, it no longer lagged when using the HWiNFO64/RTSS overlay in D3D.


----------



## The EX1

Quote:


> Originally Posted by *spyshagg*
> 
> Alphacool NexXxos GPX
> 
> or
> 
> EK-FC
> 
> or
> 
> XSPC Razor
> 
> ?


Heatkiller or Aquacomputer get my vote. Heatkiller has the option of red,blue,white, or RGB LEDs as well.


----------



## Trender07

On 17.10.2 I still have that bug where HBM is locked to 800 MHz, huh. Did you guys use DDU?

EDIT:

Used DDU, and still, the moment I touch the voltages it locks to 800 MHz memory :/


----------



## Star2k

Have you OC'd?
What is your power limit at?


----------



## poisson21

I have 2 cards with the same BIOS/PowerPlay; when reset, one of them defaults its HBM to 800MHz while the other is at 945MHz.

But it didn't interfere with the OC.


----------



## Trender07

Quote:


> Originally Posted by *Star2k*
> 
> Have you oc ?
> Your power limit is at ?


Quote:


> Originally Posted by *poisson21*
> 
> I have 2 cards , with same bios/powerplay, when reset, one of them have a default hbm at 800Mhz while the other is at 945Mhz.
> But it didn't interfere with the oc.


Just fixed it, lul. DDU or whatever doesn't even matter:

- HBM 950 mV, Core P6 950 mV -> locked to 800 MHz
- HBM 960 mV, Core P6 955 mV -> works fine

Since that makes no sense, I just set BOTH the HBM and P6 voltages to 951 mV and it works; for some reason at 950 mV it's just locked at 800 MHz


----------



## Star2k

Probably because the memory controller is at 950mV for 800MHz on P2.
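To make that reported rule concrete, here's a toy model of the behavior people are describing above (purely an illustration of the symptom, an assumption on my part, not AMD's actual logic): if the requested HBM/P6 voltage isn't strictly above the P2 memory-controller voltage of 950 mV, the memory stays locked at the 800 MHz P2 clock.

```python
# Toy model of the reported 800 MHz lock (illustration only, not AMD's logic).
P2_MV = 950      # memory-controller voltage of the 800 MHz P2 state
P2_CLOCK = 800   # MHz

def effective_hbm_clock(requested_clock_mhz, requested_mv):
    # Requested voltage must exceed the P2 floor, or the clock falls back to P2.
    return requested_clock_mhz if requested_mv > P2_MV else P2_CLOCK

print(effective_hbm_clock(945, 950))  # locked to 800
print(effective_hbm_clock(945, 951))  # runs at 945
```

That would match Trender07's observation that 950 mV locks to 800 MHz while 951 mV works.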


----------



## madmanmarz

Crazy thing was, it passed a Firestrike run at 1175 lol.
Quote:


> Originally Posted by *nicodemus*
> 
> i was able to stabilize at 1145, just fyi.
> 
> thanks for the info!


----------



## spyshagg

I presume that with the new unlocked SOC, HBM will start responding to increased voltage as well? So, how many mV for the new 1199MHz max under water? People believed ~1107MHz was the limit no matter the voltage/cooling.
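For reference, here's what these HBM clocks mean in raw bandwidth. This is just the standard back-of-the-envelope calculation, assuming Vega 64's 2048-bit HBM2 bus and double data rate:

```python
# Rough HBM2 bandwidth estimate for Vega 64 (2048-bit bus, double data rate).
def hbm2_bandwidth_gbps(clock_mhz, bus_width_bits=2048):
    # transfers/s = clock * 2 (DDR); bits -> bytes via /8; /1e9 for GB/s
    return clock_mhz * 1e6 * 2 * bus_width_bits / 8 / 1e9

for clk in (945, 1100, 1190):
    print(f"{clk} MHz -> {hbm2_bandwidth_gbps(clk):.1f} GB/s")
```

So stock 945 MHz is ~484 GB/s, and the 1190 MHz runs people are posting are ~609 GB/s, a roughly 26% bandwidth bump.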


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> I presume that with the new unlocked SOC, HBM will start responding to increased voltage as well? So, how much mv for the new max 1199mhz under water? People believed ~1107mhz was the limit no matter the voltage/cooling.


Well, if we could change the HBM voltage, maybe. Currently it's the GPU floor voltage that carries the name "HBM voltage".


----------



## fursko

Hello everybody. I have a problem with my RX Vega 64 LC. My temps are not good and my fan makes a terrible noise above 700 RPM. If I set the fan to the lowest RPM in Wattman, which is 400 RPM, it's OK. RX Vega 64 LC users, can you share your experience with me?


----------



## Naeem

Quote:


> Originally Posted by *fursko*
> 
> Hello everybody. I have problem with my rx vega 64 lc. My temps not good and my fan makes terrible noise above 700 rpm. If i set the fan lowest rpm from the wattman which is 400rpm is ok. Rx vega 64 lc users can you share your experience with me ?


I have the same card. Mine is not making that much noise even at max fan speed; it's quieter than the max fan speed of any custom air card on the market. I have the fan speed on auto and my GPU hits around 60-65c max with an ambient of 28c, after hours inside a game.


----------



## Reikoji

Quote:


> Originally Posted by *TrixX*
> 
> Well if we could change the HBM voltage maybe, currently it's GPU floor voltage that has the name HBM voltage.


You can also set that higher than 1050 now, and the memory clock you chose will be maintained instead of locking at 800MHz.


----------



## majestynl

HBM running at 1190mhz with new 17.10.2 Driver. Again thanks to @gupsterg and @Nuke33 , they found out yesterday that increasing soc pushes the HBM higher... Today AMD released the new bios with higher SOC









For people who are interested:

*Card:* VEGA 64 LE
*DIE:* un-molded (but spread the TIM out really well)
*Cooling:* EK Waterblock / 360 + 240 rad
*Bios:* LC
*Mod:* PP for 150% Power / 400
*HBM:* 1190Mhz
*P6:* 1672Mhz / 1152mv
*P7:* 1192Mhz / 1192mv

*Max clocks Superposition:* 1684Mhz
*Max clocks Firestrike:* 1717Mhz
*Firestrike Graphics score:* 26K +









HBM and Hotspot went *~3-4C* higher going from 1100MHz to 1190MHz!

*Bench/Test Screenies:*


----------



## fursko

Quote:


> i have same card mine is not making that much of sounds even at max fan speed its better than max fan speed of any custem aircard in market i have fan speed on auto and my gpu hits around 60-65c max with ambinat of 28c with hrs inside a game


Thanks for the info. My temps are almost the same. I find that a little high; brand-new air-cooled Nvidia cards are getting almost the same temps. Obviously my fan is faulty, so I will RMA it.


----------



## tarot

Well, just got my first ever uber crash.
Went back to 17.9.3 for a few 3DMark runs (not valid without them), then decided to turn off HBCC memory and play a little D3... well, first the driver crashed, then again, then a super hard squealing lockup of the whole system. Back to 17.10.2 for now; that seems to have fixed it, but yeah, scary noise








And yes, the temps are going up in 17.10.2, but my hotspot is still not more than 20 degrees over the others, and still under 70.

I also see the 800 on the RAM, but clocking it up to 1100 makes it stick.

I am sorry AMD, but these last few batches of drivers are trumpesque crap and need redoing, thanks









Pretty much the same as I am getting; my temps are a bit higher though, but I am also running 1682 (around 1670 avg) and 1100 on the RAM. What is yours?

Also, what seemed to have helped, if not a lot, is putting thermal pads on the back of the card between it and the backplate; I can now touch the backplate under full load without burning my fingerprints off










Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *kundica*
> 
> SOC is basically doubling from idle on the new driver, mine maxes at 1200.
> 
> I also want to comment on the hotspot since there's some discussion here about it. I have a molded die with an EK block and my hotspot hits low to mid 60s under load with my card capping out at 40 degrees. I've replaced the TIM several times(which is a pain in the ass on a full loop) but I always get the same result. The only time it changed was when I first reapplied and I switched from the included EK TIM to Kryonaut it went from mid 70s to mod 60s under heavy load. My max core temp went down from 43-44 to 40 with Kryonaut as well. I've tried various methods of application but it doesn't seem to matter. At this point I don't believe I will be able to get it lower.


----------



## kundica

Quote:


> Originally Posted by *majestynl*
> 
> HBM running at 1190mhz with new 17.10.2 Bios. Again thanks to @gupsterg and @Nuke33 , they found out yesterday that increasing soc pushes the HBM higher... Today AMD released the new bios with higher SOC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> For people who are interested:
> 
> *Card:* VEGA 64 LE
> *DIE:* un-molded (but spread the tim really good out)
> *Cooling:* EK Waterblock / 360 + 240 rad
> *Bios:* LC
> *Mod:* PP for 150% Power / 400
> *HBM:* 1190Mhz
> *P6:* 1672Mhz / 1152mv
> *P7:* 1192Mhz / 1192mv
> 
> *Max clocks Superposition:* 1684Mhz
> *Max clocks Firestrike:* 1717Mhz
> *Firestrike Graphics score:* 26K +
> 
> 
> 
> 
> 
> 
> 
> 
> 
> HBM and Hotspot went *~3/4C* higher from 1100mhz to 1190mhz!
> 
> *Bench/Test Screenies:*


Here's mine at 1717/1200 for p7 and 1150 HBM. First time I broke 7200. I need to push it some more but I was distracted listening to AMD's earnings call.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *majestynl*
> 
> HBM running at 1190mhz with new 17.10.2 Bios. Again thanks to @gupsterg and @Nuke33 , they found out yesterday that increasing soc pushes the HBM higher... Today AMD released the new bios with higher SOC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> For people who are interested:
> 
> *Card:* VEGA 64 LE
> *DIE:* un-molded (but spread the tim really good out)
> *Cooling:* EK Waterblock / 360 + 240 rad
> *Bios:* LC
> *Mod:* PP for 150% Power / 400
> *HBM:* 1190Mhz
> *P6:* 1672Mhz / 1152mv
> *P7:* 1192Mhz / 1192mv
> 
> *Max clocks Superposition:* 1684Mhz
> *Max clocks Firestrike:* 1717Mhz
> *Firestrike Graphics score:* 26K +
> 
> 
> 
> 
> 
> 
> 
> 
> 
> HBM and Hotspot went *~3/4C* higher from 1100mhz to 1190mhz!
> 
> *Bench/Test Screenies:*






where did you nab that new bios?


----------



## kundica

Quote:


> Originally Posted by *tarot*
> 
> 
> where did you nab that new bios?


I think he meant driver.


----------



## Tgrove

I'll be on soon, will OC the HBM and report back


----------



## majestynl

Quote:


> Originally Posted by *kundica*
> 
> Here's mine at 1717/1200 for p7 and 1150 HBM. First time I broke 7200. I need to push it some more but I was distracted listening to AMD's earnings call.


Great result! I can push the same or even more on my 4GHz CPU profile. But I'm 1000% stable at 3975MHz with my TT memory profile... that's why I'm using this one as default!


----------



## Reikoji

Quote:


> Originally Posted by *kundica*
> 
> I think he meant driver.


And I was googling for a new BIOS too


----------



## tarot

Quote:


> Originally Posted by *Reikoji*
> 
> and I was googling for new bios too


No, I remember him mentioning a BIOS in an earlier post... I'm sure... or I may be going nuts


----------



## kundica

Quote:


> Originally Posted by *majestynl*
> 
> Great result! I can push the same or even more on my 4Ghz CPU Profile. But im 1000% stable with 3975mhz and my TT memory profile.. thats why im using this one as default!


How much voltage do you need for 4 vs 3.975? My processor will do 3.9 at 1.375 (maybe lower, but I settled there when I was trying to get my RAM to run 3333 with the Stilt's fast timings some months back), and I'm currently at 1.39375 for 4GHz with LLC1. Without LLC1 it occasionally errors out in Y-cruncher. It seems fine at 1.4V as well, but my brain didn't like seeing 1.4.


----------



## fursko

Anybody know if radiator placement affects temps? Tubes up or down?


----------



## pmc25

Quote:


> Originally Posted by *tarot*
> 
> 
> where did you nab that new bios?


He didn't.

He meant driver.


----------



## majestynl

Quote:


> Originally Posted by *tarot*
> 
> 
> where did you nab that new bios?


Typo







it's the driver
Quote:


> Originally Posted by *kundica*
> 
> How much voltage do you need for 4 vs 3.975? My processor will do 3.9 at 1.375 (maybe lower but I settled there when I was trying to get my RAM to run 3333 with stilts fast timings some months back) and I'm currently at 1.39375 for 4ghz with LLC1. Without LLC1 it occasionally errors out with Ycruncher. Seems fine at 1.4v as well but my brain didn't like seeing 1.4.


Can't remember exactly, not behind the PC anymore, but I believe around 1.41.
Haven't played with the CPU for a while on the 1800 Ryzen. Will flash the new BIOS soon and then I can play again


----------



## tarot

OK OK, I'm nuts then








I found an XFX LC BIOS I might have a go with, but it still worries me how, even on Balanced, it overdoes the clock and crashes.
Is it simply the power table and the 50 percent power limit that is the culprit, or is it more silicon lottery stuff?


----------



## majestynl

Quote:


> Originally Posted by *tarot*
> 
> ok ok I,m nuts then
> 
> 
> 
> 
> 
> 
> 
> 
> I found a xfx lc bios I might have a go with but it still worries me even on balanced how it over does the clock and crashes.
> is it simply the power table sand the 50 percent power limit that is culprit or is it more silicon lottery stuff


Can you send me screenshots of your Wattman settings? In the beginning I also had similar issues, and found some slider/toggle bugs in combination with certain patterns I followed while playing with settings. Not saying you're doing the same, but I can at least check.


----------



## Reikoji

17.10.2 has my GPU running cooler overall, compared to the 17.40 beta drivers at least. Those must have been a real dud.

Power readings in HWinfo are back to normal as well, but that's probably more due to downloading the latest HWinfo beta.

FFXIV doesn't seem to like overclocked Vegas. I get lots of intermittent screen blackouts whenever I'm running memory at 1090 or higher, more frequently the higher the memory. None of the other games I play have such blackouts, yet. Also in that game, with 17.40 on Balanced, the GPU without a temp limit would go well over 60c. Now it barely gets past 52c with +25% while delivering more FPS.


----------



## Soggysilicon

Quote:


> Originally Posted by *madmanmarz*
> 
> What is everyone setting their HBCC size to? Has there been any benchmarking or testing with different sizes?


Lowest it will allow...









In the earliest days, performance was lost in some applications (in my testing, sample size of 1); I suspect it was due to hogging resources. I noticed some, "some", very very minor improvement in some titles and in SP 4K, which is memory dependent.
Quote:


> Originally Posted by *spyshagg*
> 
> Well, congratulations, but how can these settings be stable? Do Vega cards vary this much in silicon lottery? I can't get my V64 to be game stable above 1670mhz/1050mv* HBM 1050mhz/1050mv. And you guys (seen others in this forum) are doing away with >900mv for both gpu and hbm while clocking higher.
> 
> * under full load, this drops to ~1600mhz


I started out going low on HBM voltages, and it's stable in the benches; not so much in games, where the loads can vary dynamically/randomly (from the card's perspective). With new drivers coming online, I "feel" it will be one of the last things I hammer down, as it's just so time-consuming to generate useful data to tell a narrative of good, better, best.


----------



## tarot

Quote:


> Originally Posted by *majestynl*
> 
> Can you sent me screen shots of your Wattmann settings. In begin I also had similar issues, and found out some slider/toggle bugs in combination with certain patterns I followed while playing with settings. Don't say you do the same but I can at least check.


Sure, these are my current settings that work; so far nothing seems to crash these.
For now I am running with HBCC memory off to test CPU-intensive benches, and it does seem to have a bit of an impact... nothing to write home about, but it's there.


----------



## cplifj

Quote:


> Originally Posted by *fursko*
> 
> Anybody know radiator placement affects temps ? Tubes up or down ?


It seems logical to always blow hot air out of the case, and, considering air bubbles, to make sure they can rise to the highest point without returning into the loop.

So tubes always on the bottom, to prevent pulling any possible air back into the loop.


----------



## Newbie2009

Anyone notice that in Windows 10, Wattman resets to default anytime the computer is restarted?


----------



## Chaoz

Quote:


> Originally Posted by *Newbie2009*
> 
> Anyone notice in windows 10, wattman resets to default anytime the computer is restarted.


Had that as well. Since I started to use OverdriveNTool it hasn't done that anymore.


----------



## cg4200

So I am still testing my XFX 56 cards flashed to the 64 BIOS, but I noticed one is 10 degrees hotter on HBM than the other card.
I have not taken them apart yet; I ordered some more Thermal Grizzly, a weekend project.
Anyone know if XFX is molded or unmolded, and what's the verdict, is one better than the other?
Is AMD's TIM any good? Hoping to drop temps; I always replace Nvidia cards' bird sh..

Also, I broke down and grabbed a Gigabyte LC edition for 703.00. Still 70.00 too much, but I am impatient.
Same question: anyone with an LC who took it apart to replace the TIM, any difference?

Also, what clocks are people seeing on average for the LC? Thanks


----------



## kundica

Quote:


> Originally Posted by *Newbie2009*
> 
> Anyone notice in windows 10, wattman resets to default anytime the computer is restarted.


Haven't had that issue. You might try disabling Windows fast boot if you have it enabled.


----------



## Newbie2009

Quote:


> Originally Posted by *kundica*
> 
> Haven't had that issue. You might try disabling Windows fast boot if you have it enabled.


New to 10, so it might be on by default; will check, thanks.


----------



## kundica

Quote:


> Originally Posted by *Newbie2009*
> 
> New to 10 so might be if on as default, will check, thanks.


I think it's called Fast Startup in Win 10. I have it enabled, but I've read that some people have Wattman settings reset with it enabled and that it stopped once they disabled it. The setting is in the power options.


----------



## Xender

I bought the Vega 64 from MSI (air-cooled), however I have some problems with it. Basically I have FPS drops in PUBG and the game stutters every 6-7 seconds.
I think it may have something to do with drops in GPU utilization, as you can see on the chart:

These are UV settings:


and here are utilization and clock during PUBG game session:


What's more, I had several artifacts in PUBG:


Yesterday I also had two game crashes during 10 minutes of playing PUBG









My tech specs:
i7 6700k stock clocks, UV 1.28.
16GB DDR4 2666 MHz
PSU: EVGA SuperNova G2 850W
MOBO: Asus Z170 Hero
Windows 10 64bit PRO - Fall Creators Update. Fresh installation.
VGA Drivers 17.10.2.

I planned to replace the reference cooler with a Raijintek Morpheus 2, however this card is so troublesome that I am not sure it isn't broken ;/ In that case, maybe I should return it and wait for some kind of custom-cooled version like an Asus Strix or MSI Twin Frozr?


----------



## Star2k

Try 975mV for 1000MHz on the HBM; that will probably solve the problem.


----------



## gupsterg

Question for members using EK water block, are you using the stock tension plate around GPU or EK supplied screws?

If using the stock tension plate did this provide improved temps due to tension?


----------



## elox

Well, I run a 4690k @ 4.7 GHz and I'm CPU limited in PUBG nearly all the time. When there are only a few players left, I can see how my GPU clocks higher and more stably, since there is less load on the CPU. I also get the constant drops to 500MHz on HBM, but that's due to my CPU limit. In all other games and apps where my CPU is not constantly at 100% load, my GPU keeps ~1665MHz core and 1100 HBM.


----------



## Xender

Quote:


> Originally Posted by *elox*
> 
> Well, I run a 4690k @ 4,7 Ghz and I´m CPU limited in PUBG nearly all the time. When there are only a few players left I can see how my GPU clocks higher and more stable since there is less load on the CPU. I also got the constant drops to 500mhz on HBM but thats due to my CPU limit. All other games and apps where my CPU is not constantly at 100% load my GPU keeps ~1665mhz core and 1100 HBM.


Honestly, PUBG worked better on the GTX 1060 6GB. Of course average FPS was lower due to the QHD resolution, however there weren't such big FPS drops; the game was much more stable and fluid. On Vega it drops to 20 FPS from time to time, and to 40 frequently.


----------



## elox

Quote:


> Originally Posted by *Xender*
> 
> Honestly, PUBG worked better on GTX 1060 6GB, of course average FPS was lower due to QHD resolution however there wasn't so big FPS drops. Game was much more stable and fluid. On Vega it drops to 20 FPS from time to time and to 40 frequently.


Can't confirm this. My Vega 64 performs a lot smoother in PUBG than my GTX 970 did. Never see drops below 50fps.


----------



## Xender

Quote:


> Originally Posted by *elox*
> 
> Cant confirm this. My vega64 performs a lot smoother in PUBG than my gtx 970 did. Never see drops below 50fps.


Are you using reference air cooler?


----------



## TrixX

OK, for everyone playing PUBG (such as Xender and elox): use Wattman to lock the card to min P6 and max P7 (right-click on the P-state bar at the top), or use ClockBlocker to lock it to P7 for PUBG (and any other game that doesn't keep the GPU fed properly). If using Wattman, you need to do the same for the HBM or it'll downclock to 800 MHz or 500 MHz.


----------



## Xender

Quote:


> Originally Posted by *TrixX*
> 
> Ok for everyone playing PUBG such as Xender and Elox you need to use Wattman to lock it to min P6 and Max P7 (Right click on the P state bar at the top) or use ClockBlocker to lock it to P7 mode for PUBG (and any other game that doesn't keep the GPU fed properly). If using Wattman you need to do the same for the HBM or it'll downclock to 800MHz or 500MHz.


Is it possible to lock the P7 state only for certain games, not globally? Will locking P7 override the UV settings in OverdriveNTool?

One more question: which position of the BIOS switch is Eco and which is normal Power mode?


----------



## TrixX

I use ClockBlocker with OverdriveNTool, for that very reason.

You can have different profiles in OverdriveNTool and disable P0-5 by Right Clicking on the P State number.

On the BIOS switch, 220 W is the graphics-port side and 200 W is the inner side. Only the high-power BIOS is flashable.
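As an aside: OverdriveNTool can also apply a saved profile from the command line, which makes the profile switching described above scriptable. A minimal batch sketch, assuming a profile named "UV_daily" has already been saved in the tool's GUI (the exact `-p` syntax is from the readme as I recall it, so double-check it against your version):

```shell
:: Apply the OverdriveNTool profile "UV_daily" to GPU index 0.
:: "UV_daily" is an example name -- use whatever profile you saved in the GUI.
OverdriveNTool.exe -p0"UV_daily"
```

Handy in a desktop shortcut or scheduled task so the undervolt gets reapplied after a driver reset.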


----------



## elox

Quote:


> Originally Posted by *Xender*
> 
> Is it possible to lock P7 state only for certain games, not globally? Will locking P7 state override UV settings in OverdriveNTool?
> 
> I have one more question, which position of BIOS switch is Eco and which is normal Power mode?


With the new driver it should be possible to create separate Wattman profiles for every installed game. I'll try locking P7, but I don't think it will give me better FPS since the CPU is the limit. It looks like in areas where no other players are around, the load shifts from CPU to GPU and I reach my usual clocks. Isn't this behaviour a kind of power-saving feature?
The switch position in the picture is Power mode; the other direction is Power Save.


----------



## TrixX

Quote:


> Originally Posted by *elox*
> 
> With the new driver it should be possible to do wattman profiles for every installed game separate. Will try to lock P7 but i dont think it will give me better FPS since the CPU is the limit. For me it looks like in areas where no other players are around and load goes from cpu to gpu then i reach my usual clocks. Isnt this behavior a kind of power save thing?
> The switchposition on the picture is power mode. Other direction would be powersafe.


Not quite, when I ran my normal settings (not stock) and no clockblocker or P7 locking, then PUBG would be in low power state mode most of the time. When someone starts shooting at you it would switch to high power state and there'd be a moment of 10-20FPS slideshow where you'd inevitably be unable to react and just die as the card clocked up to P7. Seeing as reaction time is so important, keeping it in a single power state becomes more important and has helped with many games other than PUBG too.

I can't confirm that the Wattman/Radeon Settings profiles hold currently, as there was previously a bug with them applying correctly for a given program/game. As a result I don't use them; I just use a default setting, change profiles for certain situations with OverdriveNTool, and have ClockBlocker configured to downclock for most desktop applications and block for most games.


----------



## Xender

Quote:


> Originally Posted by *TrixX*
> 
> Not quite, when I ran my normal settings (not stock) and no clockblocker or P7 locking, then PUBG would be in low power state mode most of the time. When someone starts shooting at you it would switch to high power state and there'd be a moment of 10-20FPS slideshow where you'd inevitably be unable to react and just die as the card clocked up to P7. Seeing as reaction time is so important, keeping it in a single power state becomes more important and has helped with many games other than PUBG too.
> 
> I can't confirm the profiles for Wattman/Radeon Settings hold currently as previously there was a bug with them applying correctly every time for a program/game. I don't use it as a result and just Use a default setting and change profiles for certain situations with OverdriveNTool and have ClockBlocker configured to downclock for most desktop applications and block for most games.


Interesting. Is it typical for all graphics cards (not only AMD/Vega) that switching between power states causes FPS drops? I never encountered such FPS drops on Nvidia cards, for example. Does that mean GeForce cards are better at power-state switching, or are they simply always in the highest state while gaming?


----------



## elox

Quote:


> Originally Posted by *TrixX*
> 
> Not quite, when I ran my normal settings (not stock) and no clockblocker or P7 locking, then PUBG would be in low power state mode most of the time. When someone starts shooting at you it would switch to high power state and there'd be a moment of 10-20FPS slideshow where you'd inevitably be unable to react and just die as the card clocked up to P7. Seeing as reaction time is so important, keeping it in a single power state becomes more important and has helped with many games other than PUBG too.
> 
> I can't confirm the profiles for Wattman/Radeon Settings hold currently as previously there was a bug with them applying correctly every time for a program/game. I don't use it as a result and just Use a default setting and change profiles for certain situations with OverdriveNTool and have ClockBlocker configured to downclock for most desktop applications and block for most games.


I've definitely never experienced the 10-20 FPS slideshow you mention in PUBG so far. I have a much better experience in PUBG compared to my old 970, but I'll try locking P6/P7 today after work and check whether there are any FPS gains. Thanks for the note.


----------



## TrixX

I'll see if I can get an Afterburner screenshot of the game with CB off and on. The difference in GPU load and P-state is pretty dramatic.









@Xender - No, it's not common. Nvidia takes a different approach, with a lower stock speed and Power/Thermal limits used to "overboost" the card above those stock settings. AMD sets a performance ceiling in the settings, which you reach only if the Power/Thermal limits stay within spec; basically the reverse of the Nvidia system. It is, however, a much more controllable system than Nvidia's, though it's a bit of a balancing act to get everything in sync.


----------



## pmc25

In PUBG at 1920x1080 and 2560x1440, Vega 64 is faster than a 1080 Ti. Only above QHD does the 1080 Ti overtake it.

Also, on the latest drivers, the AA method (MSAA, which usually ruins Vega, Fiji & Polaris) is virtually free in FPS cost between very low (off) and ultra (16x).

I run it at 2560x1440, Ultra textures, Ultra AA, very low/off everything else.

Outside of cities and areas with tons of players, I continually hit the frame-rate cap (144).

Of course, it's PUBG, so there are frequent drops, mostly related to CPU usage spikes, the server tick rate falling through the floor, and the client freezing up.

Anyway, unless you're running it at 3200x1800, 4K, or 5K, Vega 64 is the fastest card for PUBG.

As others have stated, you need to lock the GPU and HBM at their highest P-state; the low GPU utilisation in PUBG and Vega's tendency to aggressively downclock will cause serious stutter otherwise.

Edit: at 1150 MHz HBM2, it's possible Vega may be able to stay level with the 1080 Ti even at 3200x1800.


----------



## jbravo14

I was tuning my Vega 64 using Wattman with Unigine Heaven running in the background to get real-time feedback.

It was very stable in Unigine Heaven, but crashed immediately when I tried to benchmark Tomb Raider in DX12.

Is there a tool similar to Unigine that is DX12 and can run in a loop?


----------



## spyshagg

Quote:


> Originally Posted by *jbravo14*
> 
> I was tuning my vega 64 using wattman with unigine heaven running on the background to get realtime response.
> 
> It was very stable at unigine heaven, but immidiately crashed when i tried to benachmark tomb raider DX12.
> 
> Is there a similar tool to unigine that is DX12 and can run on a loop?


Open the DX12 game you want and leave it running in-game. You don't need to constantly switch "scenes" to put a load on the GPU; just leave the game running. I do it all the time.


----------



## jbravo14

Thanks. Is it always this frustrating to overclock Vega?

Any opinions on keeping a Vega 56 vs a 64? I'm torn, seeing that the highest OC I can get from the Vega 64 is only ~1650 MHz, which doesn't seem like a big jump over the max clocks a Vega 56 can handle.


----------



## jbravo14

Also, is there any truth to the limited edition (silver) Vega 64 having a slightly nicer/quieter-sounding fan than the black version?


----------



## pmc25

Quote:


> Originally Posted by *jbravo14*
> 
> Thanks. Is it really frustrating to overclock vega?
> 
> Any opinions on keeping a vega 56 vs 64? I'm torn to see that the highest OC i can get from vega 64 was only ~1650
> 
> Which do not seem like a big jump over the max clocks a vega 56 can handle.


HBM clocks are more important, and you will get much higher memory clocks on the 64, particularly after the last driver update unlocked the SoC frequency.
Quote:


> Originally Posted by *jbravo14*
> 
> also is there any trught that the limited edition (silver) vega 64 has a tad nicer/quieter sounding fan than the black version?


No, they're identical.

Also, if you're serious about OC'ing on Vega, you want a block / AIO / Morpheus. Getting temperatures as low as possible significantly improves performance and stability.


----------



## jbravo14

Quote:


> Originally Posted by *pmc25*
> 
> HBM clocks are more important, and you will get much higher memory clocks on 64, particularly after the last driver update unlocked SoC frequency.
> No, they're identical.
> 
> Also, if you're serious about OC'ing on Vega, you want a block / AIO / Morpheus. Getting temperatures as low as possible significantly improves performance and stability.


Thank you for the input. Maybe a newb question: what is SoC?

Also, I may have to stick with the blower cooler or a third-party 2-slot cooler, as I'm using the Vega 64 in a Sentry ITX case.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> Question for members using EK water block, are you using the stock tension plate around GPU or EK supplied screws?
> 
> If using the stock tension plate did this provide improved temps due to tension?


I didn't use the tension plate at all; I used the EK screws for that area. With the stock backplate I used the stock screws. When I later added the EK backplate, I used the EK screws for it.


----------



## fursko

Are the GPU-Z VDDC readings correct? I get a 1.0 V VDDC minimum, and Wattman's setting shows 0.050 V higher; if I set my voltage to 1100 mV, GPU-Z shows 1050 mV.


----------



## fursko

Quote:


> Originally Posted by *jbravo14*
> 
> also is there any trught that the limited edition (silver) vega 64 has a tad nicer/quieter sounding fan than the black version?


+1

My Vega 64 LC works well, especially when I undervolt it. But my fan is faulty and makes a lot of noise, so I'm considering getting a Vega 64 reference design. It's much cheaper, but I need information about fan noise, temps, and average clocks. My clocks are around 1800/1150; without undervolting this works fine at around 480 W power consumption (total system). If I undervolt, my clocks are around 1650/1150 and power consumption is 300-330 W. I will add my bench scores soon.

I found a good deal on a Vega 64 reference Limited Edition. I wonder how far I can push it with the silent fan profile.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> +1
> 
> My vega 64 lc works good. Specially when i undervolt it. But my fan faulty and making a lot of noise. Im considering getting vega 64 ref design. Its much cheaper but i need information about fan noise, temps, and average clocks. My clocks around 1800/1150. Without undervolt this works fine. Power consumption around 480 watt. If i undervolt my gpu my clocks around 1650/1150 and power consumption 300-330 watt. (total system power consumption) I will add my bench scores soon.
> 
> I find good deal vega 64 ref limited edition. I wonder how much can i push with silent fan profile.


Sounds like you have a good card but a bad fan. I would contact the vendor and see if they can send out a replacement shroud or fan for your unit. Take a video of the noise with your phone as evidence of the issue; most vendors are pretty good about this.


----------



## Xender

Well, locking states to P7 only really helped with PUBG. Most of the time I'm around 50-60 FPS in QHD.
Here are my new settings and temps (hot spot is 86°C and it worries me).

There is also one issue: screen flickering on my FreeSync monitor, an AOC Agon AG322QCX. It looks like changes in brightness and it is really annoying. Is this normal for Vega cards / FreeSync monitors?


----------



## TrixX

Quote:


> Originally Posted by *Xender*
> 
> Well, locking states to P7 only really helped with PUBG. Most of the time I'm around 50-60 FPS in QHD.
> Here are my new settings and temps (hot spot is 86°C and it worries me).
> 
> There is also one issue: screen flickering on my FreeSync monitor, an AOC Agon AG322QCX. It looks like changes in brightness and it is really annoying. Is this normal for Vega cards / FreeSync monitors?


Well, you can probably run a higher core clock with those settings. I get ~1580 MHz under 100% load in Superposition at 950 mV, which equates to ~1640-1660 MHz in PUBG with my P7 clock set to 1752 MHz. Controlling the core clock via the voltage is far more accurate than via the frequency value, as the latter is more a clock ceiling than a hard clock speed.

Setting it to 950 mV with a ~1700 MHz P7 clock should net you a reduction in heat and an increase in performance too.

As for the flickering with FreeSync, I can't comment, as I don't have my FreeSync monitors yet.

A hot spot of 86°C isn't so bad on air, though we still don't know exactly what the hot-spot sensor references.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Sounds like you have a good card but a bad fan. Would contact the vendor of the product and see if they can send out a replacement shroud or fan for your unit. Take a video with your phone of the noise it's making as evidence of the issue. Most vendor's are pretty good with this.


Yeah, I bought it from Amazon and they're good with this kind of issue. But I didn't get the game codes.


----------



## fursko

I've started testing and tweaking. Here are my results for now. These are all Unigine Superposition 4K runs without HBCC.

*stock clocks and volts + 50% power limit = 1720/945 = 6848 score = 534 W total*
*stock clocks and volts + 50% power limit = 1700/1150 = 7253 score = 549 W total*

As you can see, if I raise the HBM clock (not the voltage; all voltages are stock), my core clock loses 20 MHz on average because of the power limit.
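As a side note, those two runs boil down to a rough points-per-watt figure. A quick sketch (the wattage is wall power for the whole system, so treat it as a relative comparison only):

```python
# Superposition 4K score vs. total system power draw, from the two runs above.
runs = {
    "945 MHz HBM":  (6848, 534),  # (score, total system watts)
    "1150 MHz HBM": (7253, 549),
}

for name, (score, watts) in runs.items():
    print(f"{name}: {score / watts:.2f} points/W")
```

By this measure the HBM overclock slightly improves efficiency (~12.8 vs ~13.2 points/W) even though the core clock drops a little.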


----------



## rancor

According to Buildzoid:
Quote:


> Originally Posted by *Xender*
> 
> Is it possible to lock P7 state only for certain games, not globally? Will locking P7 state override UV settings in OverdriveNTool?
> 
> I have one more question, which position of BIOS switch is Eco and which is normal Power mode?


ClockBlocker will allow you to force P7 per application. It should then run only at your OverdriveNTool settings.

Check your stability! OCCT has an error-checking mode and it works for Vega. The problems you're having look stability-related, and you're borderline on crashing the drivers. Try upping your memory voltage to 1000 mV, and if that doesn't work, bump both core and memory voltage up to 1025 mV. If you're still having issues, start lowering your clocks.


----------



## pmc25

Quote:


> Originally Posted by *jbravo14*
> 
> also is there any trught that the limited edition (silver) vega 64 has a tad nicer/quieter sounding fan than the black version?


Quote:


> Originally Posted by *Xender*
> 
> Well, locking states to P7 only really helped with PUBG. Most of the time I'm around 50-60 FPS in QHD.
> Here are my new settings and temps (hot spot is 86°C and it worries me).
> 
> There is also one issue: screen flickering on my FreeSync monitor, an AOC Agon AG322QCX. It looks like changes in brightness and it is really annoying. Is this normal for Vega cards / FreeSync monitors?




HBM temperature is likely the source of your flickering. An HBM overclock won't be stable at more than 80°C.

This is the reason the HBM2 clock is so conservative at stock (945 MHz): it's very thermally sensitive and they have to guarantee stability.

Mine will do about 1170 MHz stable if I keep it below 33°C. In winter with the windows open, and the temperature below 30°C, I suspect I could get to 1200 MHz or even above.

At the moment it's ~36°C under sustained load, so I've dialed it back to 1145 MHz to guarantee stability.

1015 MHz was about the max I could run stable at 80°C on older drivers... and even then it wasn't really stable; there were artefacts and flashes like you describe.
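For illustration only: interpolating linearly between those two observations (~1170 MHz stable below 33°C, ~1015 MHz at 80°C) gives a rough feel for how much headroom is lost per degree. Every card's HBM bins differently, so this is a ballpark sketch of my card's behaviour, not a rule:

```python
def est_stable_hbm_mhz(temp_c, p1=(33, 1170), p2=(80, 1015)):
    """Linear interpolation between two observed (temp C, stable MHz) points."""
    t1, c1 = p1
    t2, c2 = p2
    slope = (c2 - c1) / (t2 - t1)  # roughly -3.3 MHz lost per extra degree C
    return c1 + slope * (temp_c - t1)

print(round(est_stable_hbm_mhz(36)))  # ~1160, close to the 1145 MHz I actually run
```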


----------



## Xender

I am worried about one thing: HWiNFO reports that the HBM module voltage is 1.356 V all the time. Is that a bug?

I've found that many users on the internet have FreeSync flickering problems; are you sure it's caused by HBM temperature? In that case I'm forced to change the cooler, or at least send back my GPU and buy a Strix or something else with a better cooler.


----------



## pmc25

No, it's the same voltage for every Vega64 card. Can't be changed.

I don't know why you expect it to function nominally at those temperatures when you overclock the memory.


----------



## gupsterg

Quote:


> Originally Posted by *kundica*
> 
> I didn't use the tension plate at all I used the EK screws for that area. With the stock backplate I used the stock screws for the backplate. I later added the EK backplate and used the EK screws for the backplate.


Thank you.

I had been looking at the reference cooler teardowns and noted the heatsink is separate from the cooler's main plate assembly. As the EK waterblock is not built that way, I'm now starting to scrap my idea of using the stock tension plate; perhaps it would create excessive tension (IDK).

Also, having read your past posts, your temps are good and in line with others on water cooling.

I've seen in the manual that the EK backplate comes with thermal pads, etc., which may help cooling a little; what are your thoughts on this?

At the moment I'm in the throes of indecision.

I ordered a Gigabyte RX Vega 64 Limited Edition with 2 games from Ebuyer on the 19th for £514.97 and took the "super saver" delivery, so it's not with me yet. I can refuse delivery, it goes back FOC, and I get a refund. I'm tempted to go with OCUK's latest deal on a PowerColor RX Vega 64 with 2 games for £469.99 (delivery will be FOC as I'm a forum member there).

Pro for Gigabyte: they have a UK RMA centre and a 3-year warranty, and a Gigabyte rep confirmed via email that the warranty will stand as long as I refit the stock cooler for the RMA process. Some on OCUK highlight that Gigabyte reps don't all sing from the same "hymn sheet", so I could maybe run into problems. I quite like the idea of the LE backplate.

Pro for PowerColor: price, and OCUK apparently has special terms that allow a cooler swap. As I understand it, PowerColor has no UK centre, but OCUK deals with it.


----------



## TrixX

If you feel the £50 is worth the hassle and extra wait then OK, but personally I'd save the free delivery for the next item from OCUK and get the Gigabyte LE version.


----------



## gupsterg

Seems there's no limit on free delivery as a forum member.

In the past I've done multiple orders over the course of a month with no issue.

The £50 saving is worth it in the context of time; it's a one-day deal.

The warranty terms are basically the deciding factor, in the context of removing the cooler. 3 years Gigabyte vs 2 years PowerColor is a non-issue for me.


----------



## TrixX

Quote:


> Originally Posted by *gupsterg*
> 
> Seems no limit on free delivery as forum member
> 
> 
> 
> 
> 
> 
> 
> .
> 
> In past done multiple orders over course of a month and non issue.
> 
> £50 saving worth it in context of time, it's one day only.
> 
> Warranty terms is basically the factor.


I wonder if that extends to free delivery to Australia...

Well, if it comes down to warranty terms, then the OCUK one may be better. Easier if they deal with it rather than Gigabyte.


----------



## gupsterg

I've contacted Gibbo via "Trust" (the OCUK forum PM system) and bumped a thread relevant to the RMA terms; hopefully I'll get them confirmed. If it's a green light then I'm scrapping the LE.

AFAIK FOC shipping is UK only.


----------



## Xender

Does the Limited (silver) edition have any benefits, for example a better cooler or something?


----------



## Ipak

In case you play Destiny 2 and wonder why your undervolt settings are not stable in that game: https://community.amd.com/thread/221546

It's actually something totally different.


----------



## Chaoz

Quote:


> Originally Posted by *gupsterg*
> 
> Question for members using EK water block, are you using the stock tension plate around GPU or EK supplied screws?
> 
> If using the stock tension plate did this provide improved temps due to tension?


I used the EKWB screws with stock backplate. Didn't add the tension plate, tho. Didn't think it was necessary as the tension plate wasn't on the image that EK posted with the block. Temps are still amazing, tho. My 64 never goes over 35°C.


----------



## gupsterg

Quote:


> Originally Posted by *Xender*
> 
> Does Limited (silver) edition have any benefits? For example better cooler or something?


None, purely cosmetic difference.
Quote:


> Originally Posted by *Chaoz*
> 
> I used the EKWB screws with stock backplate. Didn't add the tension plate, tho. Didn't think it was necessary as the tension plate wasn't on the image that EK posted with the block. Temps are still amazing, tho. My 64 never goes over 35°C.


+rep, thanks.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *gupsterg*
> 
> Thank you
> 
> 
> 
> 
> 
> 
> 
> .
> 
> I had been looking at the reference cooler "teardowns" and noted the HS is separate from the main plate assembly for cooler. As the EK waterblock is not as such I'm now starting to scrap my idea of using the stock tension plate, perhaps it will create excessive tension (IDK
> 
> 
> 
> 
> 
> 
> 
> ).
> 
> Then also having read your past shares temps are good and inline with others on WC.
> 
> I've seen in the manual for the EK backplate it comes with thermal pads, etc, may help with cooling, a little, what are your thoughts on this?
> 
> At the moment in the throws of indecision.
> 
> I order a Gigabyte RX VEGA 64 Limited Edition with 2 games from Ebuyer on 19th, for £514.97, took the "super saver" delivery so not with me yet. I can refuse delivery and it goes back FOC and gain refund. Tempted to go with OCUK latest deal on PowerColor RX VEGA 64 with 2 games for £469.99 (will be next delivery FOC as forum member there).
> 
> Pro for Gigabyte is they have UK RMA centre and 3yr warranty. Gigabyte rep via email confirmed as long as I replace stock cooler for RMA process warranty will stand. Some highlight on OCUK that Gigabyte reps don't all sing from same "hymn sheet" so could maybe run into problems. Quite like idea of LE backplate.
> 
> Pro for PowerColor is price, OCUK apparently have special terms that allow cooler swap, PowerColor as I understand have no UK centre but OCUK deal with it.






No way I would spend 35 bucks (here in ass-end-of-the-world land) on a piece of metal when there's a perfectly good one here.

I used thermal pads on the back and it helped cool the backplate down. As for the tension thing, I'm not sure what you mean, but mine is fine; it runs pretty cool. Not super sexy like some of the others with 30 radiators, but fine for me.


----------



## Chaoz

Quote:


> Originally Posted by *tarot*
> 
> 
> no way I would spend 35 bucks (here in ass end of world land) for a piece of metal when there is a perfectly good one here
> 
> 
> 
> 
> 
> 
> 
> 
> I used thermal pads on the back and it helped cool the backplate down as for the tension thing not sure what you mean but mine is fine runs pretty cool not super sexy like some of the others with 30 radiators but fine for me.


Yeah, I didn't use the EKWB backplate either. Too pricey for the same thing you get for free with your GPU. Plus, with the EK backplate you can't even reach the DIP switches on the back.

Didn't use any thermal pads, tho. Temps are great, tbh.

The tension plate is that metal cross thingy that goes over the core at the back.


----------



## Xender

Quote:


> HBM temperature is likely to be the source of your flickering. Any overclock won't be stable at more than 80C.
> 
> This is the reason HBM2 clock is so conservative at stock - 945Mhz. It's very thermally sensitive and they have to guarantee stability.
> 
> Mine will do about 1170Mhz stable if I keep it below 33C. In winter with the windows open, and temperature below 30C, I suspect I could get to 1200Mhz or even above.
> 
> At the moment it's ~36C under sustained load, so I've got it dialed back to 1145Mhz, to guarantee stability.
> 
> 1015Mhz was about the max I could run stable at 80C on older drivers ... and then it wasn't really stable, and there were artefacts and flashes as you describe.


Honestly... the screen flickers no matter what UV setting I select. It is common for AMD; just look:

https://www.reddit.com/r/6ms6ky/freesync_flickers_still/


----------



## gupsterg

@tarot

I hear you man!

@Chaoz

Yeah, I'm sliding towards ordering the PowerColor and using the stock backplate. I think I read it's metal?

I think the black stock plate won't stand out as much as the silver LE. Even though the LE is nice, I reckon my rig is more of a dark theme than "bling". It's going in the TR build from this thread.


----------



## rancor

Quote:


> Originally Posted by *Xender*
> 
> Honestly... screen flickers no matter what setting for UV I select. It is common for AMD, just look:
> 
> __
> https://www.reddit.com/r/6ms6ky/freesync_flickers_still/


So for you, it still flickers even back at stock? I haven't had any issues recently with FreeSync flickering; it seems to depend on the setup.


----------



## lowdog

Quote:


> Originally Posted by *gupsterg*
> 
> Question for members using EK water block, are you using the stock tension plate around GPU or EK supplied screws?
> 
> If using the stock tension plate did this provide improved temps due to tension?


No cross-tension stock plate for me; I just used the screws supplied with the EK block and kept the stock backplate... I may have to look into using some thermal pads with the backplate though, as I didn't consider that initially.


----------



## majestynl

Quote:


> Originally Posted by *tarot*
> 
> sure these are my current settings that work so far nothing seems to crash these.
> for now I am running hbcc memory off to test cpu intensive benches and it does seem to have a bit of an impact...nothing to write home abaout but its there.


Yep, looking OK. And if it doesn't crash anymore then it's OK for now, I hope.

@gupsterg That's a shame it's not delivered yet! I expected some results from you today.

btw: no cross tension plate here either!


----------



## Reikoji

Quote:


> Originally Posted by *Xender*
> 
> Honestly... screen flickers no matter what setting for UV I select. It is common for AMD, just look:
> 
> __
> https://www.reddit.com/r/6ms6ky/freesync_flickers_still/


Usually backing down the memory speed will eliminate that flickering. I personally have no flickering at stock memory speed, FreeSync off or on.


----------



## Reikoji

Quote:


> Originally Posted by *Ipak*
> 
> In case you play destiny 2, and wonder why is undervolting settings not stable in that game. https://community.amd.com/thread/221546
> 
> Well its actually something totally different


This occurs in Wolfenstein: The New Order as well. There are a few crash areas that I've gotten past, but I haven't managed to avoid crashing in the prison.


----------



## ontariotl

I tried to install the new 17.10.2 drivers and ended up with errors loading the Crimson properties tab. Of course, I had to update my Windows 10 to the Fall Creators Update to fix the issue. I'd been avoiding the spring Creators Update as it was a pile of crap for me; looks like Fall is buggy as well, and I may have to just do a clean install.

However, I can deal with that, as PlayerUnknown's Battlegrounds and Hellblade: Senua's Sacrifice finally work with FreeSync in fullscreen on my Samsung 34CF791 ultrawide. Before, the screen would blank out any time I enabled FreeSync in these specific games, and if I really wanted FreeSync I had to use windowed mode. Oh, and the higher HBM2 overclock is cool too! But FreeSync mattered more to me.

Secondly, I don't know if this was mentioned before, but for those who flashed their air-cooled Vega 64 with LC BIOS version 016.001.001.000.008734: try version 016.001.001.000.008774. I ask because I finally have a more stable card with the latter version. I thought it wouldn't matter, but I decided to flash my card tonight. Before, if I used default LC settings such as Balanced or Turbo, or custom with P7 at 1752 MHz, my card would freeze as soon as I started the Valley benchmark; on every cold boot I had to set my clocks down to 1702 MHz for stability, and if I forgot I would eventually crash in a game. With the newer flash, I can get through Valley even at LC default, Turbo, or custom P7 at 1752 MHz.

I still need to do more testing, but I thought I'd share my observations.


----------



## 113802

Quote:


> Originally Posted by *ontariotl*
> 
> I tried to install the new 17.10.2 drivers and ended up with errors to load the crimson properties tab. Of course I had to update my Windows 10 to Fall Creators to fix the issue. I've been avoiding the Spring Creators as it was a pile of crap for me. Looks like Fall is buggy as well and may have to just do a clean install.
> 
> However, I can deal with that as finally Public Unknown's Battlegrounds and Hellblade Senuas Sacrifice works with Freesync in fullscreen on my Samsung 34CF791 ultrawide. Before the screen would blank out anytime I would enable freesync for these specific games and if I really wanted freesync, I would have to use windowed mode. Oh, and the HBM2 higher overclock is cool too! But freesync mattered more to me.
> 
> Secondly, I don't know if this was mentioned before but those who flashed their AC Vega64 with the LC Bios version 016.001.001.000.008734 try version 016.001.001.000.008774? The reason I ask as I finally have a more stable card with the latter version. I thought it wouldn't matter but I decided I would flash my card tonight. Before if I try to use default LC settings such as balance or Turbo or even custom with P7 at 1752, my card would freeze as soon as I started Valley Benchmark. Everytime I cold boot, I would have to set my clocks down to 1702 if I wanted stability and if I forgot, I would eventually get a crash in a game. Now with the newer flash, I can get through Valley with settings even at LC default, Turbo or custom P7 at 1752.
> 
> I still need to do testing, but I thought I'd share my observations.


Same, I was running Windows 10 1607 LTSB. 1709 is buggy and laggy so I installed 1703 and the drivers work like they should.


----------



## Tgrove

HBM OC has gone from 1100MHz to 1145MHz with 17.10.2

P7
1682 core
1145 hbm
1100mv
+50 pt
55c temp target


----------



## pmc25

Quote:


> Originally Posted by *Xender*
> 
> Honestly... screen flickers no matter what setting for UV I select. It is common for AMD, just look:
> 
> __
> https://www.reddit.com/r/6ms6ky/freesync_flickers_still/


I have that screen. Your HBM is way too hot to be above stock clocks. Tbh, same for your CPU clocks and temps.


----------



## Reikoji

Streaming Shadow of War with Radeon ReLive seems to lead to a GPU crash. Tried reducing the clocks and using rock-bottom stream quality, no dice. I'm thinking I'd just have to reduce the in-game settings to get it stream/record capable. Anyone else tried GPU recording or streaming Shadow of War yet?


----------



## TrixX

Has anyone run into an issue where power just runs away, causing way too much heat for the stock cooler to cope with? Had it last night after a driver crash (restarted the drivers), and it took 3 restarts for the system to stop acting up. I'd have my normal daily undervolt running (P7 950mV and HBM 920mV), run Superposition to test for stability, and it would just charge off up to 380W for the core...

After a clean driver re-install and a few more restarts and settings changes it seems to be back to normal. No idea why it went power mad though.


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> The latest beta doesn't have the drivers option indeed. I reverted back to the non-beta, and it's still working well even with the Fall update, with the driver option still available. I had an error during the switch to the 17.9.1 gaming drivers, saying it was not installed. I went to Device Manager, and indeed after the "switch", Vega was showing with an exclamation mark. So I installed the driver manually (the .inf should be here: C:\AMD\Packages\Drivers\Radeon-Crimson-ReLive-17.9.1-c0318725-64bit-171006_drvWHQL ) and voilà.
> 
> Plus, the 17.10 beta has a bug with undervolting for me (it's not working, staying at 1.2v) With the non beta or 17.9.1 gaming drivers, my undervolting is in effect.


Can you give me some insight on this? I tried doing this with 17.9.1 and Vega Frontier Edition is not in the list; however, if I do it with the Frontier-specific Pro drivers, it's there in the .inf.

I have to go to C:\AMD\Non-WHQL-Win10-64Bit-Radeon-Software-Crimson-ReLive-17.9.1-Sep7\Packages\Drivers\Display\WT6A_INF for the driver .inf, and Vega Frontier isn't there.

If I go to C:\AMD\WHQL-Radeon-Pro-Software-17.10-Oct2\Packages\Drivers\Display\WT6A_INF it's good.

So I am trying to understand how people are mixing drivers. I don't expect much success due to dual GPUs, so maybe that's why, but it still doesn't explain Vega Frontier not being listed for me while it works for others.

EDIT: I think I see now, are you using the minimal install packages? I think I found it.


----------



## Rootax

Quote:


> Originally Posted by *dagget3450*
> 
> Can you give me some insight on this? I tried doing this with 17.9.1 and Vega frontier edition is not in the list, however if i do that with frontier specific pro drivers its there in the inf.
> 
> i have to go to C:\AMD\Non-WHQL-Win10-64Bit-Radeon-Software-Crimson-ReLive-17.9.1-Sep7\Packages\Drivers\Display\WT6A_INF for driver inf, and vega frontier isn't there.
> 
> if i go to C:\AMD\WHQL-Radeon-Pro-Software-17.10-Oct2\Packages\Drivers\Display\WT6A_INF its good.
> 
> So i am trying to understand how people are mixing drivers, however i don't expect much success due to dual gpu so maybe thats why, still doesn't explain vega frontier not being listed for me but works for others
> 
> *EDIT: i think i see now, are you using minimal install packages? I think i found it now*.


Sorry for the delay, but yes, it's from the minimal install package, which is in fact the 17.9.1 "gaming" driver downloaded by the Radeon Settings interface when you switch drivers. It's basically 17.9.1 with Vega FE support.


----------



## AngryLobster

I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.

Crashes, alt-tab stutter/hang, alt-tab Willy Wonka colors, a crash when resizing Wattman, plus half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.

I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD; this is an all-time low. Good luck beta testing.


----------



## Xender

Quote:


> Originally Posted by *AngryLobster*
> 
> I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.
> 
> Crashes, alt tab stutter/hang, alt tab Willy Wonka colors, resizing watman crash, + half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.
> 
> I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD but this is a all time low. Good luck beta testing.


I have the same problem. I bought a reference Vega 64 and an AOC AG322QCX, and it works so badly that I am really considering returning it ;/
The monitor is great, but the Freesync flicker makes me angry... On the other hand, my air-cooled Vega 64 is unusable with the reference cooler: extremely loud, extremely hot...

Regarding all these Vega issues: will AMD fix them via drivers some day, i.e. is it all caused by early drivers, or is this typical for Radeon cards?

My last Radeon card was a 6950 2GB and it was a really good model, the drivers worked well, but now it is insane... That's why I am asking about the possible future: will they fix and polish the drivers, or will it stay broken forever?


----------



## cplifj

Well, after running a 290X for four years, I am now catching myself smiling my ass off in front of my 4K iiyama GB2888UHSU Freesync monitor.

This new watercooled Gigabyte Vega 64 is just amazing.

At 4K ultra settings with high-res texture packs, this card just pegs it at 60fps synced in games like Fallout 4, Witcher 3, and BF1. Even GTA V mostly runs at 60fps, with a few occasions dropping to 46fps, still in the Freesync range of 35-62Hz. Sure, I had to reset all those games' configs first and reapply 4K ultra, or the results were not as good (shader cache still holding 290X stuff, maybe?).

I am just stunned by the buttery-smooth gaming experience; this is how it should have been all those years.

All on Balanced mode with the latest Fall Creators Update beta driver, 17.10.2, which has WDDM 2.3.

With fan speed up to around 1500 RPM while doing all that, I can't even hear a difference between idle and full gaming load.

Not a cheap card, but it somewhat makes up for that with its performance.


----------



## pmc25

It should be doing WAY better than 60FPS at 4K in BF1 under water. I get ~110FPS.


----------



## Skinnered

Hey guys, is Crossfire working with Destiny 2 already?


----------



## Digitalwolf

Quote:


> Originally Posted by *pmc25*
> 
> It should be doing WAY better than 60FPS at 4K in BF1, under water. I get ~110FPS


They probably have a frame cap slightly under their Freesync max range of 62...


----------



## Chaoz

Quote:


> Originally Posted by *gupsterg*
> 
> @Chaoz
> 
> Yeah I'm sliding towards ordering the PowerColor and using stock backplate. I think I read it is metal?
> 
> I think the black stock plate will not standout as much as the silver LE. Even though it is nice, I reckon my rig is more of a dark theme than "bling". It's going in TR build from this thread.


Yeah, the stock backplate is indeed metal and decent-looking enough to use as well.
Quote:


> Originally Posted by *AngryLobster*
> 
> I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.
> 
> Crashes, alt tab stutter/hang, alt tab Willy Wonka colors, resizing watman crash, + half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.
> 
> I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD but this is a all time low. Good luck beta testing.


I don't have any issues with my FreeSync monitor and my 64 so far; it works great at 75Hz and is smooth af as well.


----------



## kundica

Quote:


> Originally Posted by *AngryLobster*
> 
> I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.
> 
> Crashes, alt tab stutter/hang, alt tab Willy Wonka colors, resizing watman crash, + half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.
> 
> I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD but this is a all time low. Good luck beta testing.


Quote:


> Originally Posted by *Xender*
> 
> I have the same problem, I bought reference Vega 64 and AOC AG322QCX and it works so bad that I am really considering returning it ;/
> Monitor is great, however Freesync flicker makes me angry... From the another hand, my air cooled Vega64 is unusable with reference cooler, extremely loud, extremely hot...
> 
> Regarding all these Vega issues, will AMD fix it via drivers some day and it is all caused by early version of drivers or it is typical for Radeon cards?
> 
> My last Radeon card was 6950 2GB and it was really good model, drivers worked well, but now it is insane... Because of that I am asking about possibile future: will they fix and polish drivers or it will be broken forever?


It's less an issue with Freesync and more an issue with Windows, combined with screen handling, low-quality monitors, and display-chain issues. Try setting your game to launch with fullscreen optimizations disabled.

I have the Nixeus EDG 27 and don't experience flicker at all. Some games need to be run in fullscreen-exclusive mode because they use a janky fullscreen-windowed implementation.


----------



## poisson21

I don't experience flickering with my Freesync monitor either, a Samsung U32E850. And I have 2 cards running in Crossfire; the only problem is the lack of games that support it.


----------



## fursko

Quote:


> Originally Posted by *AngryLobster*
> 
> I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.
> 
> Crashes, alt tab stutter/hang, alt tab Willy Wonka colors, resizing watman crash, + half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.
> 
> I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD but this is a all time low. Good luck beta testing.


I have the same Freesync problem, nothing else. The Freesync problem is because of the monitors, not the GPU. AOC generally uses Samsung VA panels, and these panels are bad for Freesync. My panel flickers in loading menus, but gameplay is fine. I tested Nvidia Fast Sync and AMD Enhanced Sync too; both were useless, causing a lot of frame skipping. My guess is both work fine with 60Hz monitors but are useless on 144Hz monitors.

My Vega 64 LC works like a beast, but I'm getting coil whine above 200fps, and my fan is faulty, making terrible noise even at low RPM. I'll have to replace it. For Freesync, you should look at IPS panels for now.


----------



## Xender

Quote:


> Originally Posted by *kundica*
> 
> It's less an issue with Freesync and more an issue with Windows combined with screen handling, low quality monitors and chain issues. Try setting your game to launch with full screen optimizations disabled.
> 
> I have the Nixeus EDG 27 and don't experience flicker at all. Some games need to be run in full screen exclusive mode because they're using janky fullscreen windowed implementation.


I am using Vega64 with AOC AG322QCX, so I am not sure if this should work properly or not...


----------



## cplifj

What I CAN get and what I NEED for smooth gameplay with no drops below 60fps are two different things. I was giving a "normal" use opinion. A constant 60fps will do amply for now, especially at 2160p ultra. (Beating a 1080 Ti blindfolded and with one hand, how is this possible...)

I wasn't expecting such good results after all the "reviews".

Quote:


> Originally Posted by *pmc25*
> 
> It should be doing WAY better than 60FPS at 4K in BF1, under water. I get ~110FPS


----------



## jbravo14

For some reason, overclocking/undervolting did not result in an FPS increase (Tomb Raider). Only when I set it to Power Saver mode do I get 5 more FPS.

Weird. I am almost ready to give up on undervolting/overclocking this card; it's not as straightforward as my R9 390.


----------



## TrixX

Quote:


> Originally Posted by *AngryLobster*
> 
> I just got a Vega 64 + Freesync monitor and holy crap is this card/drivers/Freesync a total mess.
> 
> Crashes, alt tab stutter/hang, alt tab Willy Wonka colors, resizing watman crash, + half a dozen other issues I experienced over the span of 6 hours. Freesync is also super picky. The deal breaker was the card stopping progress in Destiny 2.
> 
> I put it and the monitor back in their boxes to return and went back to my 1080. Freesync is just not worth the hassle. I've never had such a poor experience with AMD but this is a all time low. Good luck beta testing.


The crash in Destiny 2 is a known Vega issue, currently under investigation by AMD. AMDMatt on the OCUK forums reported it directly to get it fast-tracked.

The resizing-Wattman crash is annoying, though using OverdriveNTool means there's no need to use Wattman at all, so that's an easy fix. I'm guessing you didn't get a stable run in Superposition or some other benchmark before you went gaming? It's quite simple to nail down a good undervolt and stabilise those performance gremlins.

Though I guess it's not for everyone. Mine's running fine now after fixing that weird overpower issue I was having yesterday. I'm almost certain it was somehow caused by GPU-Z, though I have no idea why...


----------



## geriatricpollywog

I experienced flickering on my Samsung CF791 with Fury X but not with Vega. AMD quietly fixed whatever was going on.


----------



## SPLWF

Just purchased a Sapphire Vega 56 for $154. Why so cheap? Well, Microcenter had it listed for $479, and I used my sold-PC-parts money, which was $325: $479 - $325 = $154. I added a 2-year warranty for $70, so the total was $224.








I have undervolted to 1025mV and am running the memory at 950MHz; no crashes at all so far. Funny thing is, GPU-Z reports the core clock as 1590MHz, but I've never hit 1590 in benchmarks; the most I hit is 1530. Is 1590 my boost clock? If so, why am I not hitting it? Got my fan set to 2800RPM (Wattman), aka 50% fan speed. Temps are hovering around 75-77C.

Another question: are there any good OSDs other than RivaTuner that don't require Afterburner? Thanks


----------



## spyshagg

1590 is your preset GPU ceiling. In my experience, the default algorithm will rarely allow the card to reach the ceiling under load. Yet you can trick the card into going higher by increasing the P7 MHz above 1590. The card will now easily go past 1590, but only until one of two limits is reached (either the combination of your max fan speed + max temp limit, or the power limit), after which it will clock down.

But then you will ask: "if the card can reach 1590MHz with no problems whatsoever with an overclocked P7, why doesn't it go that high with the default ceiling?"

Good question.
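To make the behaviour above concrete, here's a deliberately crude model of the arbitration: the P-state frequency is only a ceiling, and the sustained clock is whatever the power budget can feed, so raising either P7 or the power target moves the result. The watts-per-MHz figure is invented purely for illustration, not measured from any Vega card:

```python
def sustained_clock(ceiling_mhz, power_budget_w, watts_per_mhz=0.15):
    """Toy model: the card boosts toward the P-state ceiling but
    settles at whatever clock the power budget can sustain."""
    budget_limited_mhz = power_budget_w / watts_per_mhz
    return min(ceiling_mhz, budget_limited_mhz)

# Stock ceiling, modest budget: the budget binds, not the ceiling.
print(sustained_clock(1590, 220))   # ~1466 MHz, never reaches 1590

# Overclocked P7 with a raised power target: the clock now passes 1590.
print(sustained_clock(1700, 260))   # budget allows ~1733, so 1700 MHz
```

In this toy model the "trick" of raising P7 works simply because it lifts the ceiling while the raised power target lifts the budget; neither alone gets you past 1590.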


----------



## SPLWF

I think I will leave it as it is. I don't want any more noise, and I don't really need the extra horsepower since I'm running a 34" LG Freesync ultrawide at 2560x1080. Good to know though.


----------



## spyshagg

Quote:


> Originally Posted by *SPLWF*
> 
> I think I will leave it at what it is. Don't want anymore noise or really don't need the extra HP due to me running a ultrawide 34 LG freesync 2560x1080. Good to know though.


Overclocking is up to you, but please DO undervolt. Try 1050mv P6 and P7.


----------



## SPLWF

I'm actually at 1025 on both.


----------



## geoxile

Has anyone had a case of installing Vega making their RAM unstable in a Ryzen system? I put a Vega 56 in a few days ago, and now my RAM is failing stress tests.


----------



## cmogle4

Has anyone with a Frontier Edition installed the new 17Q4 Driver released today?


----------



## AngryLobster

Quote:


> Originally Posted by *TrixX*
> 
> The crash in Destiny 2 is a Vega issue, known and currently under investigation by AMD. AMDMatt on OCUK Forums reported it directly to get it fast tracked.
> 
> The resizing wattman crash is annoying, though using OverdriveNTool means no need to use wattman. So easy fix. I'm guessing you didn't get a stable run in Superposition or some other form of benchmark before you went gaming? It's quite simple to nail down a good undervolt and stabilise those performance gremlins.
> 
> Though I guess it's not for everyone. Mine's running fine now after fixing that weird overpower issue I was having yesterday, though I'm almost certain it was somehow being caused by GPU-z, though I have no idea why it would cause it...


I thought it was my undervolt causing crashes, but in some games like Deus Ex it would randomly throw DX error crashes even at stock settings. These latest drivers are absolute trash. I just can't believe that in 2017 a GPU is in the state Vega is in.

Freesync is just a headache to deal with. With my 144Hz monitor paired with another monitor, I'm forced to drop the 144Hz to 120 or Freesync just stops working. I also found the transition to LFC really jarring. My XB271HU and 1080 have been plug-and-play since day one, so I switched (got the V64 dirt cheap) to save some cash for what I assumed would be the same experience, but now I understand the G-Sync tax - AKA paying extra to have something that works more often.

I should have tried that OverdriveNTool for sure. Maybe I should unbox them and give it another go without Wattman.
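For anyone puzzled by the jarring LFC transition mentioned above: Low Framerate Compensation repeats frames once the game drops below the panel's VRR floor, so the effective refresh jumps to a multiple of the frame rate, and that step is what you can sometimes feel. A simplified sketch, using a hypothetical 48-144Hz range (LFC generally needs roughly max >= 2x min to engage):

```python
import math

def panel_refresh(fps, vrr_min=48, vrr_max=144):
    """Simplified LFC model: inside the VRR window the panel tracks
    the frame rate; below the floor, frames are repeated so the
    refresh lands back inside the window."""
    if fps >= vrr_min:
        return min(fps, vrr_max)
    multiplier = math.ceil(vrr_min / fps)
    return fps * multiplier

print(panel_refresh(90))   # 90  -> panel refreshes at 90 Hz
print(panel_refresh(30))   # 30  -> frames doubled, panel at 60 Hz
print(panel_refresh(20))   # 20  -> frames tripled, panel at 60 Hz
```

The discontinuity right at the floor (tracking one-to-one above it, jumping to a doubled rate below it) is the likely source of the jarring transition.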


----------



## fursko

Quote:


> Originally Posted by *AngryLobster*
> 
> I thought it was my undervolt causing crashes but in some games like Deus Ex, it would randomly throw DX error crashes even at stock settings. These latest drivers are absolute trash. I just can't believe that in 2017 a GPU is in the state that Vega is in.
> 
> Freesync is just a headache to deal with. 144hz monitor paired with another monitor and I'm forced to drop my 144hz to 120 otherwise Freesync just stops working. I also found the transition to LFC really jarring. My XB271HU and 1080 has been plug and play since day one so I switched (got the V64 dirt cheap) to save some cash for what I assume would be the same experience but now I understand Gsync tax - AKA: Paying extra to have something that works more often.
> 
> I should have tried that OverdriveNTool for sure. Maybe I should unbox them and give it another go without Wattman.


Can you compare your 1080 and Vega 64 air? Forget Freesync or G-Sync, I need the user experience.


----------



## TrixX

Quote:


> Originally Posted by *AngryLobster*
> 
> I thought it was my undervolt causing crashes but in some games like Deus Ex, it would randomly throw DX error crashes even at stock settings. These latest drivers are absolute trash. I just can't believe that in 2017 a GPU is in the state that Vega is in.
> 
> Freesync is just a headache to deal with. 144hz monitor paired with another monitor and I'm forced to drop my 144hz to 120 otherwise Freesync just stops working. I also found the transition to LFC really jarring. My XB271HU and 1080 has been plug and play since day one so I switched (got the V64 dirt cheap) to save some cash for what I assume would be the same experience but now I understand Gsync tax - AKA: Paying extra to have something that works more often.
> 
> I should have tried that OverdriveNTool for sure. Maybe I should unbox them and give it another go without Wattman.


TBH Nvidia is very plug-and-play at the moment and Vega is the exact opposite; it's a tweaker's paradise. The downside for Nvidia is that there's very little tweaking room and the cards all end up at roughly the same speeds, whereas Vega is a pita out of the box and requires some TLC to get off the ground.

First up, AMD's clock fluctuation is a feature of ACG (Advanced Clock Generator), which is enabled for DPM states 5/6/7 (per AMDMatt on the OCUK forums). This works almost the opposite way to Nvidia's boost: the clock frequency set by the user is the ceiling, not the floor. Vega will aim to hit that frequency if the power and thermal situation is favourable. Hence you can control the core clock cleanly using the voltage instead of the core frequency. Unlocking the power target variable (via the various registry-based PowerPlay tables available here) removes one of the 3 variables from the equation, giving you direct core clock control via voltage. For me, 1050mV = ~1580MHz at full GPU load. As that load lessens the frequency can go higher, hence my core frequency is set to 1750MHz (stock for LC versions of the V64), though I see a maximum of ~1680MHz at 1050mV.

As I raise that voltage I start nearing my thermal limits; for me, the stock blower at max speed is the limit for air cooling, which is ~295W through the core (as read by HWiNFO and MSI Afterburner). Above that the core temp goes above 70C and my LC BIOS triggers a crash or full shutdown. So figuring out your card's limits is the first step to getting on board with Vega.
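A rough aside on why undervolting buys so much thermal headroom: CMOS dynamic power scales roughly with frequency times voltage squared, so a modest voltage drop cuts switching power disproportionately. A quick back-of-envelope (ignoring static/leakage power, so real savings will differ):

```python
def dynamic_power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    # CMOS dynamic (switching) power scales roughly with f * V^2,
    # so power_new / power_old = (f_new / f_old) * (v_new / v_old)^2.
    return (f_new / f_old) * (v_new / v_old) ** 2

# Dropping from 1.20 V stock to a 1.05 V undervolt at the same clock:
print(dynamic_power_ratio(1.05, 1.20))  # ~0.77, i.e. ~23% less switching power
```

That saved power is exactly what the boost algorithm spends on holding higher clocks, which is why an undervolted card often runs faster than stock.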

In your case the fan load being 4900RPM will likely do your head in and during gaming I found that even 1050mv could hit 3K plus RPM. My Daily settings for quiet operation are:

P6 - 1668MHz - 920mv
P7 - 1750MHz - 950mv
HBM - 1050MHz - 920mv
Power Target +100% (using PowerPlay regedit)
Fan 500 - 3300 RPM
Temp Targets 65 - 70 Max

This keeps things cool and fast with no thermal or power throttling and good performance. If you want to go faster than that on the stock cooler you can, but it gets loud. Quite a few people are putting the Raijintek Morpheus II on the card with dual 120mm or 140mm fans, and honestly it's easily the best air solution going at the moment prior to the AIB releases; even then, I don't think many will compete with the Morpheus II. It gives performance equal to or better than the stock liquid-cooled units, allowing actual speeds up to 1700MHz+ (depending on card) with low noise output.

Basically, the potential of these cards is amazing, especially if you get them cheap, and they'll likely get better over time as more DX12/Vulkan games are released with full feature sets.

As for the Freesync issue, it sounds like the monitor has an issue similar to the Samsungs that were available earlier this year. Hopefully one of the other members has a solution for you.


----------



## Rexer

Hey, Vega clubbers. I've been waiting around 9 months for Vega aftermarket cards. Nobody seems to be releasing any models other than the standard reference ones; I've seen just one, an Asus triple-fan job. Who else is making them? Or better put, why isn't anyone like MSI or Sapphire making custom cards? It's kind of weird not seeing the usual card makers piling them onto the market.


----------



## TrixX

There have been delays due to low GPU volumes, and there was a manufacturing issue with HBM height on some of the early dies. Asus is currently trying to work out the kinks in the BIOS for the Strix card. I'd assume that once that's sorted, we'll likely see the MSI, Gigabyte, and Sapphire AIB cards hit the market shortly after.


----------



## Xender

Quote:


> Originally Posted by *TrixX*
> 
> As for the Freesync issue it sounds like the monitor has a bit of an issue similar to the Samsung's that were available earlier this year. Hopefully one of the other members has a solution for that for you


Could you explain this solution? I really can't find a fix for the flickering ;(


----------



## tarot

OK, was doing a few more stock benches for my little review and found something interesting. If I use the preset slider and pick Power Save or Balanced, then go out and come back in, it doesn't stick; it reverts to the last one. Turbo works, but that's it. Now, if I do it manually and just change the power slider (0 for Balanced, -25 for Power Save, +15 for Turbo), it sticks AND I get better results with the same power usage... freaking weird. 17.10.2 on Fall Creators, by the way.

Also noticed that the CPU scores drop with the lower preset settings, BUT if you set them manually they don't, so yeah.


----------



## TrixX

Quote:


> Originally Posted by *Xender*
> 
> Could you explain this solution because I really cant find solution for flickering ;(




https://www.reddit.com/r/5m2bgh/just_received_samsung_cf791_monitor_major_problem/

This thread may have some fixes, but it's a bit of a weird issue and only occurs on some of the lower-grade Freesync monitors (the Samsung CF series in particular being costly and not working properly).


----------



## Rexer

Quote:


> Originally Posted by *TrixX*
> 
> Been delays due to low volumes of the GPU and there was a manufacturing issue with HBM height on some of the early dies. Asus is currently trying to work out the kinks in BIOS for the Strix card. I'd assume that after that works we'll likely see the MSI, Gigabyte and Sapphire AIB cards hit the market shortly after.


Good to know, thanks. For the past couple of years I'd been ramping up a 390, and it finally kacked last April. I thought, since Vega wasn't out yet: no problem, I'll just pick up a 580 OC card and wait until the Vega AIB cards come out. The RX 580s had just come out, and I waited a week to make a purchase.
I found out the card I wanted (the Sapphire RX 580 Nitro+ Limited Edition) was out of stock, and so were several other 580 OC cards. I wondered, "Why such low stock?" It turns out the mining junkies were buying them by the 10-pack. I made a panic buy on a lesser 580 8GB as soon as I could. TWO days later those were sold out, and less than a week after that, almost every AIB 580 8GB card was gone. That's within a month of the 580's release! I feel I was lucky to get one, lol.
Now I watch the market for new Vega cards every day. I don't want to get beaten by the miners again.


----------



## TrixX

GPU mining is dropping off again, as Eth has grown past most of the VRAM sizes. Miners are moving into other currencies, but the boom is not there at the moment. Can't guarantee against another one though.









However, the stock cards already have great components on board; combine one with a Raijintek Morpheus II or one of the AIO solutions and have some fun. Still cheaper than a GTX 1080 in Australia.


----------



## abso

I read on some forum that turning on HBCC on Vega cards increases input lag in games. Are there any tests that can actually confirm this?


----------



## fursko

Anybody know an app or game that's sensitive to overclocks/undervolts? My tweaks work fine in all the benchmark utilities but fail in some games. I need a sensitive game or something to dial in the absolute optimum tweak. I used The Witcher 3 for my old Nvidia card; it was really sensitive to overclocks, but I don't know what to use for Vega.


----------



## fursko

Quote:


> Originally Posted by *abso*
> 
> I read on some forum that turning on HBCC on Vega cards increases Input Lag in games. Are there any tests that actually can confirm this?


I can't test it, but I did some testing with Enhanced Sync. It causes a lot of frame skipping, like Fast Sync, so both features are useless for me. They might work well with 60Hz monitors, dunno yet.
The HBCC fps increase isn't that great for now, and it can interfere with a stable overclock or cause some bugs. HBCC off is the safe bet for now.


----------



## Tgrove

Quote:


> Originally Posted by *fursko*
> 
> Anybody know overclock undervolt sensitive app or game ? My tweaks works fine with all benchmark utilities but fails in some games. I need sensitive game or something for absolute optimum tweak. I was using witcher 3 for my old nvidia card. It was really sensitive for overclock but dont know what can i do for vega.


Dying Light @ 4K is guaranteed to crash any unstable UV/OC, expose faulty power supplies, max out temps, and expose any coil whine. It's GPU-dependent and super taxing on the system. My go-to game for testing.


----------



## Tgrove

For what it's worth, I have had a near-flawless experience with 4K Freesync (33-60Hz) and a Vega 64 on Windows 7. It's like an absolute dream coming from Fury X Crossfire. The only bugs I have are Wattman settings resetting after an unstable OC/UV crash (normal?), and needing a reboot after a crash to change/reapply Wattman settings.

This experience is not possible on Nvidia. The biggest 4K G-Sync monitor is 32" and came out like two years ago, with nothing bigger on the horizon since. Pretty sad if you ask me; I've been using this 49" 4K Freesync monitor for over two years. G-Sync is sorely lacking in variety. RIP when all TVs and monitors come with HDMI 2.1.


----------



## dagget3450

Quote:


> Originally Posted by *Tgrove*
> 
> For what its worth, i have had a near flawless experience with 4k freesync (33-60hz) and vega 64 on windows 7. Its like am absolute dream coming from fury x crossfire. Only bug i have is wattman settings reset on unstable oc/uv crash (normal?), and after a crash requiring a reboot to change/reapply wattman settings
> 
> This experience is not possible on nvidia. The biggest 4k gsync is 32" and came out like 2 years ago with nothing bigger on the horizon since then. Pretty sad if you ask me, ive been using this 49" 4k freesync monitor for over 2 years. Gsync is sorely lacking in variety, rip when all tvs and monitors come with hdmi 2.1


I wonder if this would be true for Windows 7 and Vulkan games as well, like the new Wolfenstein... I am seriously considering going back to Win7 on my Vega box.


----------



## CaptainTom

Thought I would throw my latest overclocking results in here. I am beating or roughly matching the stock 1080 Ti in most games. Yeah, at $570 + 2 games this is a complete steal vs the 1080 Ti. God knows it will eventually match the Titan Xp, given how AMD builds their cards:

https://static5.gamespot.com/uploads/original/1568/15683559/3204913-image+%288%29.png

VegabetterthanPascal.PNG 557k .PNG file


----------



## geriatricpollywog

Quote:


> Originally Posted by *CaptainTom*
> 
> Thought I would throw in here my latest overclocking results. I am beating or roughly matching the stock 1080 Ti in most games. Yeah at $570 + 2 games this is a complete steal vs the 1080 Ti. God knows it will eventually match the Titan Xp due to how AMD builds their cards:
> 
> https://static5.gamespot.com/uploads/original/1568/15683559/3204913-image+%288%29.png
> 
> VegabetterthanPascal.PNG 557k .PNG file


Nice. Do you have any ultrawide 1440/4K results for the people who can afford a Titan Xp?


----------



## AngryLobster

Quote:


> Originally Posted by *fursko*
> 
> Can you compare your 1080 and vega 64 air ? Forget freesync or gsync i need user experience.


Thanks for the info TrixX, I'm giving the card another go right now with your settings. One of the DP ports on the card just died, though. Tried multiple cables, but they all work on the other 2, so it's going back for sure; I will mess with it to see if repurchasing is worth it.

As for the difference between my 1080 and Vega 64: well, one has worked flawlessly since day 1 while the other is a mess. I crash in Deus Ex, I crash in Overwatch, I can't progress in Destiny 2. Sometimes I alt-tab and my screen is funky colors, some games chug when first starting, etc. It's just not a card for people who don't like tinkering.

The biggest 2 differences for me, though, are that my 1080 feels "smoother" and more consistent. Even when I remove throttling from the equation, the Vega 64 just feels like the FPS is all over the place in comparison.

Also, IMO there is no comparison between the two cards' coolers (FE vs Vega reference). The FE has much better noise quality in terms of its fan's sound profile. At like-for-like RPM, Vega is louder and more coarse. I also consider its noise unacceptable, because I can hear it upstairs while I'm downstairs.


----------



## geriatricpollywog

Quote:


> Originally Posted by *AngryLobster*
> 
> Thanks for the info TrixX, I'm giving the card another go right now with your settings. One of the DP ports on the card just died though. Tried multiple cables but they all work on the other 2 so it's going back for sure but I will mess with it to see if repurchasing is worth it.
> 
> As for the difference between my 1080 and Vega 64, well one works flawless and has since day 1 while the other is a mess. I crash in Deus Ex, I crash in Overwatch, I can't progress in Destiny 2. Sometimes I alt tab and my screen is funky colors, some games chug when first starting.
> 
> The biggest 2 differences for me though is I feel my 1080 is "smoother" and more consistent. Even when I remove throttling from the equation, the Vega 64 just feels like FPS is all over the place in comparison.
> 
> Also IMO, there is no comparison in terms of both cards coolers (FE vs Vega reference). The FE has much better noise quality in terms of it's fans sound profile. At like for like RPM even the Vega is louder and more coarse. I also believe it is unacceptable in terms of noise because I can hear it upstairs while I'm downstairs.


My Fury X also has a dead DP. That's how you know you got a good OCing AMD card.


----------



## jbravo14

Quote:


> Originally Posted by *CaptainTom*
> 
> Thought I would throw in here my latest overclocking results. I am beating or roughly matching the stock 1080 Ti in most games. Yeah at $570 + 2 games this is a complete steal vs the 1080 Ti. God knows it will eventually match the Titan Xp due to how AMD builds their cards:
> 
> https://static5.gamespot.com/uploads/original/1568/15683559/3204913-image+%288%29.png
> 
> VegabetterthanPascal.PNG 557k .PNG file


I really tried to make the Vega 64 work for me. I've been an AMD fanboy for the longest time; I still have 2x 280X and a 390.

I got in on a deal for $599 for a 1080 Ti FTW, so I am thinking of returning the Vega and keeping the 1080 Ti.

I got really frustrated with the overclock testing and the stock blower cooler.


----------



## SuperZan

If you're stuck with the reference cooler and not looking to waterblock the card, you're better off just going with an AIB 1080 Ti. I swapped a 1080 for a V64 because I have a Freesync display and don't need more than a 1080-level card. If you need or want more, the 1080 Ti is the only game in town.


----------



## The EX1

Quote:


> Originally Posted by *TrixX*
> 
> GPU mining is dropping again as the market for Eth has gone past most of the VRAM sizes. They are moving into other currencies, but the boom is not there at the moment. Can't guarantee against another one though
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However the stock cards have great components on board already and combine it with a Raijintek Morpheus 2 or one of the AIO solutions and have some fun. Still cheaper than a GTX 1080 in Australia


What? The DAG size for Ethereum still fits well within the memory on current cards; it is just over 2.2GB right now.


----------



## Soggysilicon

Quote:


> Originally Posted by *fursko*
> 
> Anybody know an overclock/undervolt-sensitive app or game? My tweaks work fine in all the benchmark utilities but fail in some games. I need a sensitive game or something to find the absolute optimum tweak. I was using The Witcher 3 for my old Nvidia card; it was really sensitive to overclocks, but I don't know what to use for Vega.


Prey is sensitive when you are running around in the main lobby hub of the game, specifically to core clock frequency settings.

Bomber Crew (yes, Bomber Crew) is HBM sensitive, and will hang the app on a poor OC or a bad combination of core frequency and HBM.

Time Spy is HBM sensitive, while Firestrike is frequency sensitive. Heaven is good for both, but you may need to watch HWiNFO or similar to track performance.

In general, any game that also renders an overlay that is not part of the scene, such as a menu calling tables, is frequency sensitive and will hang the app; bonus points if there are embedded video files somewhere in the scene.

I have found as a general practice that whatever clock you can bench is about 5-10 MHz higher than what you can reasonably game on without hangs; reduce accordingly for reliability.
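That margin rule is trivial to encode; a minimal sketch (the function name and default margin are illustrative, and the margin should be widened if a title still hangs):

```python
def game_stable_clock(bench_stable_mhz: int, margin_mhz: int = 10) -> int:
    """Derive a conservative 24/7 gaming clock from the highest clock
    that survives benchmarks, per the 5-10 MHz rule of thumb above.
    The default margin is an assumption; tune it per card and per title."""
    return bench_stable_mhz - margin_mhz

# e.g. a card that benches clean at 1600 MHz
print(game_stable_clock(1600))  # 1590
```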
Quote:


> Originally Posted by *TrixX*
> 
> 
> https://www.reddit.com/r/5m2bgh/just_received_samsung_cf791_monitor_major_problem/
> 
> I really tried to make the Vega 64 work for me. Been an AMD fanboy for the longest time, still have 2x280x and 390.
> 
> I got on a deal for $599 for the 1080ti FTW. So I am thinking of returning the Vega and keep the 1080ti.
> 
> I got really frustrated with the overclock testing and the stock blower cooler.


OC'ing Vega is frustrating; as others have said, it can pay off if you take the time and can psychologically handle the many, many crashes, hangs, and reboots. When you finally get the stars to align, and with the right monitor, the experience IMHO is extremely good and in some use cases exceeds a 1080 and/or 1080 Ti (title dependent), although top-tier monitors in the ultrawide range (i.e. ~$2.5k) are the basis of my comparison.


----------



## Ipak

Not bad in Wolfenstein 2, from gamegpu


Spoiler: gamegpu benchmark charts (Full HD, 2K, 4K)


----------



## geriatricpollywog

Quote:


> Originally Posted by *Soggysilicon*
> 
> Prey is sensitive when you are running around in the main lobby hub of the game, specifically with clock frequency / settings.
> 
> Bomber Crew, (yes Bomber Crew) is HBM sensitive, and will hang the app on a poor OC or combination of freq and HBM.
> 
> Time Spy is HBM sensitive, while Firestrike is frequency sensitive. Heaven is good for both, but you may need to observe with HWinfos or similar to track performance.
> 
> In general any game which also utilizes an overlay that is not being rendered in the scene, such as a menu calling tables is frequency sensitive and app hangs, bonus points if there are embedded video files somewhere in the scene.
> 
> I have found as a general practice that whatever you can bench is about 5-10 mhz more than you can reasonably game without hangs; reduce accordingly for reliability.
> I own the CF791 monitor, although mine was purchased months before Vega's release, during the price dip. I have not experienced the flicker issues that others have reported on my reference 64 CU (EK waterblock, liquid BIOS, tweaked). I have had other issues, all of which were troubleshot down to garbage Windows 10 handling of the scenes; MOST of these issues were corrected in the Fall Creators release.
> 
> I utilize Ultimate Engine FreeSync, 48-100Hz, DP 1.2, Eco Saving OFF, Eye Saver OFF, Game Mode OFF, Response Time Fastest. The end user must ensure (you cannot count on Windows doing it) that the monitor drivers that came with the monitor are installed WITH EVERY install or re-install of the Vega drivers; if it's not clear, DDU.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I cannot rule out, with my sample size of 1, that there isn't some sort of quality-control issue with the monitor; I can only say that the one I purchased is "phenomenal". I would suggest it to anyone who doesn't need to reliably exceed 100 FPS and whose use case isn't impacted by slight input lag (4-8 ms). It performs above its advertised level in some use cases at 3440 100Hz with all the bells and whistles turned on; for every game on Vega, you're not going to break 100 FPS in much of anything anyway. If it's CS:GO or something along those lines, I would maybe look at something else, or forget a frame-buffer-synchronized monitor entirely...
> OC'n Vega is frustrating, as others have said it can pay out if you take the time and can psychologically handle the many many crashes, hangs, and reboots. When you finally get the stars to align and with the right monitor the experience imho is extremely good and in some use cases exceeds 1080 and or 1080 Ti (title dependent), although we are talking top tier monitors in the ultra WS range as the basis of my comparison, ie. 2.5k.


I pre-ordered my CF791 a year ago and it shipped in January. With the Fury X, it flickered in Ultimate mode. In Standard mode it did not flicker, but it only worked from 80-100Hz, which the Fury X, AMD's most powerful card at the time, could not push without setting quality to medium in most games. I have a Vega 64 now (EK-FC arriving tomorrow) and the flicker issue is gone, one full year after I purchased the monitor. I should have bought a GTX 1080 and a G-Sync monitor.


----------



## TrixX

Quote:


> Originally Posted by *The EX1*
> 
> What? The DAG size for ethereum still fits well within the memory on current cards. It is just over 2.2GB right now.


Oh, fair enough. A mate of mine who was actively mining said it had gone over; maybe he meant the 2GB cards. My bad.
Quote:


> Originally Posted by *AngryLobster*
> 
> Thanks for the info TrixX, I'm giving the card another go right now with your settings. One of the DP ports on the card just died though. Tried multiple cables but they all work on the other 2 so it's going back for sure but I will mess with it to see if repurchasing is worth it.
> 
> As for the difference between my 1080 and Vega 64, well one works flawless and has since day 1 while the other is a mess. I crash in Deus Ex, I crash in Overwatch, I can't progress in Destiny 2. Sometimes I alt tab and my screen is funky colors, some games chug when first starting, etc. It's just not a card for people who don't like tinkering.
> 
> The biggest 2 differences for me though is I feel my 1080 is "smoother" and more consistent. Even when I remove throttling from the equation, the Vega 64 just feels like FPS is all over the place in comparison.
> 
> Also IMO, there is no comparison in terms of both cards coolers (FE vs Vega reference). The FE has much better noise quality in terms of it's fans sound profile. At like for like RPM, Vega is louder and more coarse. I also believe it is unacceptable in terms of noise because I can hear it upstairs while I'm downstairs.


Sounds like your card is fubar. I don't have any of the issues you mentioned, nor have I seen any other members post anything similar. I'd get an RMA on it and see if the card itself was just knackered on arrival.


----------



## AngryLobster

You might be right, but the only option is a Vega 64 LC. I think I'm going to give the LC a try in hopes of killing two birds with one stone: fixing the noise/temp problem, and the performance bugs I'm having with what may be a busted card to begin with.


----------



## Star2k

Quote:


> Originally Posted by *tarot*
> 
> ok, was doing a few more stock benches for my little review and found something interesting.
> If I use the slider and pick powersave or balanced, then go out and come back in, it doesn't stick; it reverts to the last one. Turbo works, but that's it.
> Now, if I do it manually and just change the power slider (0 for balanced, -25 for powersave, +15 for turbo) it works AND I get better results with the same power usage... freaking weird.
> 17.10.2, Fall Creators, by the way.
> 
> Also noticed that the CPU scores drop with the lower settings, BUT if you do it manually they don't, so yeah.


Same observation here: with the 200W BIOS, consumption goes down to 150W according to HWiNFO, with 270W at the wall for my entire system, and the same Superposition and Firestrike scores as balanced mode on the previous drivers (which drew 365W at the wall for my entire system).

Another weird thing: if you set 1000mV on P6 and the same on P7, and you set -25% on the power limit, the GPU clock locks to 1200MHz; set the voltage to "auto" and the GPU goes to 1400MHz...

I don't know what magic trick they pulled, but it's really weird...
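For reference, the power limiter is juggling roughly f·V² of dynamic power, which is why undervolting buys clock headroom at a fixed power budget. A first-order sketch (the baseline clock/voltage and the function name are illustrative assumptions, not driver values):

```python
def relative_dynamic_power(freq_mhz: float, volt_mv: float,
                           base_freq: float = 1600.0,
                           base_volt: float = 1200.0) -> float:
    """First-order CMOS model: dynamic power scales ~ f * V^2,
    normalised to an illustrative baseline operating point."""
    return (freq_mhz / base_freq) * (volt_mv / base_volt) ** 2

# Undervolting 1200 mV -> 1000 mV at the same clock leaves ~30% power headroom:
print(round(relative_dynamic_power(1600, 1000), 3))  # 0.694
```

This is only a toy model; it doesn't capture leakage or whatever the "auto" voltage logic is doing, but it shows why a -25% power limit bites much harder at a fixed high voltage.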


----------



## Xender

How thick is the radiator on the Liquid Edition? I own a Fortress FT05 case and a Noctua D15 CPU cooler, and I am worried whether the radiator will fit between the top of the case and the CPU cooler, and how well it will perform at exhausting heat...

Is it possible to replace the stock fan without disassembling the card? How loud is it? Is it possible to refill the loop after some time to keep it in good condition?

What are the Hot Spot and HBM temps on the Liquid Edition?


----------



## Chaoz

Quote:


> Originally Posted by *Xender*
> 
> How thick is the radiator of Liquid Edition? I am owner of Fortress Ft05 case and Noctua D15 CPU Coooler and I am worried if it will be possible to fit this radiator between top of case and CPU cooler and how good will it perform with exhausting heat...
> 
> Is it possible to replace stock fan without disassembling card? How loud it is? It is possible to refill loop after some time to maintain it in good condition?
> 
> What are the temps of Hot Spot and HBM in Liquid Edition?


The rad seems to be slim, so 25-ish mm.

No, you have to disassemble the cooler to change the fan; the header the fan plugs into is below the shroud.

The fan is a Gentle Typhoon, so it should be pretty good.

It cannot be refilled, like most AIOs.
(Correct me if I'm wrong; I don't have the liquid-cooled version, but from what I can tell from reviews, you can't refill it.)

I had my Hydro cooler with the radiator in an exhaust setup and temps were pretty good, so I doubt the temps would be bad here.

As for exact temps I have no idea; as mentioned, I don't have an LC version. You could check out a few reviews.


----------



## fursko

Quote:


> Originally Posted by *CaptainTom*
> 
> Thought I would throw in here my latest overclocking results. I am beating or roughly matching the stock 1080 Ti in most games. Yeah at $570 + 2 games this is a complete steal vs the 1080 Ti. God knows it will eventually match the Titan Xp due to how AMD builds their cards:
> 
> https://static5.gamespot.com/uploads/original/1568/15683559/3204913-image+%288%29.png
> 
> VegabetterthanPascal.PNG 557k .PNG file


What is your wattman settings ?


----------



## Xender

Quote:


> Originally Posted by *Chaoz*
> 
> Rad seems to be a slim size, so 25-ish mm.
> 
> No, you have to disassemble the cooler to change the fan. The header to plug in the fan is below the shroud.
> 
> The fan is a Gentle Typhoon, so they should be pretty good.
> 
> It cannot be refilled, like most AIO's.
> (Correct me if I'm wrong, don't have the Liquid Cooled version, but from what I can tell from reviews, you can't refill it.)
> 
> I had my Hydro cooler with the radiator in exhaust setup temps were pretty good. So doubt the temps should be bad.
> 
> As for temps I have no idea, as I mentioned I don't have a LC version. You could check out a few reviews.


So what about this liquid cooler after two or three years, when some of the liquid has evaporated? Will it be rubbish?


----------



## Paul17041993

Quote:


> Originally Posted by *Xender*
> 
> So what's about this liquid cooler after two-three years? When some of liquid vaporize? Will it be rubbish?


Really depends on the conditions. In Australia you're lucky if an AIO lasts a year before it either explodes or half the coolant is gone... unless of course you have enough aircon to keep the room below 30C.


----------



## Naeem

Here is how the pump looks from the inside: it has a one-way, spring-loaded reservoir that should keep the pressure up for a few years (hopefully).


----------



## astrixx

Quote:


> Originally Posted by *Xender*
> 
> I bought a Vega 64 from MSI (air cooled), however I have some problems with it. Basically I have FPS drops in PUBG and the game stutters every 6-7 seconds.
> I think it may have something to do with drops in GPU utilization, as you can see on the chart:
> 
> These are UV settings:
> 
> 
> and here are utilization and clock during PUBG game session:
> 
> 
> Whats more, I had several artifacts in PUBG:
> 
> 
> Yesterday I also had two game crashes during 10 minutes of playing PUBG
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My tech specs:
> i7 6700k stock clocks, UV 1.28.
> 16GB DDR4 2666 MHz
> PSU: EVGA SuperNova G2 850W
> MOBO: Asus Z170 Hero
> Windows 10 64bit PRO - Fall Creators Update. Fresh installation.
> VGA Drivers 17.10.2.
> 
> I planned to replace the reference cooler with a Raijintek Morpheus 2, however this card is so troublesome that I am not sure it isn't broken ;/ In that case, maybe I should return it and wait for some kind of custom-cooled version like the Asus Strix or MSI Twin Frozr?


In HWiNFO64, disable monitoring of the GPU VRM voltage; this can help with the stutters every few seconds. Hope this helps.

It's found just below the main GPU section - RX Vega GPU VRM Voltage. GPU [#0]: AMD Radeon RX Vega 64 Liquid Cooling: uPI uP6266

I was having the same lagging when using HWiNFO64 and RTSS to overlay the info; the developer told me to stop monitoring the VRM even though it wasn't selected to be overlaid.


----------



## astrixx

I was having the same lagging when using HWiNFO64 and RTSS to overlay the info; the developer told me to stop monitoring the VRM even though it wasn't selected to be overlaid. This fixed my issue!


----------



## fursko

Quote:


> Originally Posted by *AngryLobster*
> 
> You might be right but the only option is a Vega 64 LC. I think I'm going to give the LC a try in hopes of it killing 2 birds with one stone. Fixing the noise/temp problem and the performance/bugs I'm having with what may be a busted card to begin with.


Yeah, the LC version is very good, but the tubes are very long and create a mess in my case lol ^^ And you should check the price; it's very close to a 1080 Ti. I'm considering returning my Vega 64 LC and getting a 1080 Ti. I'm waiting on my Samsung CHG70; if FreeSync 2 works well I will keep my Vega, but if it doesn't I will probably go for a 1080 Ti. It's like 80 FPS with FreeSync vs 100 FPS without sync, and the prices are almost the same.

The Vega 64 LC has good temps. My fan is faulty and makes a lot of noise; swapping the default fan would be better for noise. So: good temps, good noise, good power BIOS. No problem with Overwatch, no problem with alt-tab, but I can't open Deus Ex (that game is buggy, not a Vega problem). But you have to spend a lot of time finding the optimum tweak, and you should enjoy doing it. I like the Radeon driver interface too; the Nvidia drivers look bad.

But Nvidia feels more reliable. I feel like Vega can spring bad surprises, like the Destiny 2 mission 6 problem. That's not good. I used my old Nvidia card for four years with zero problems.


----------



## The EX1

*PSA:* For anyone who bought Vega and was waiting on the Wolfenstein codes still, they just went live on the AMD rewards site.


----------



## fursko

Quote:


> Originally Posted by *Xender*
> 
> How thick is the radiator of Liquid Edition? I am owner of Fortress Ft05 case and Noctua D15 CPU Coooler and I am worried if it will be possible to fit this radiator between top of case and CPU cooler and how good will it perform with exhausting heat...
> 
> Is it possible to replace stock fan without disassembling card? How loud it is? It is possible to refill loop after some time to maintain it in good condition?
> 
> What are the temps of Hot Spot and HBM in Liquid Edition?


The radiator must be 35mm, plus a 25mm fan. My fan is very, very loud, but I'm guessing mine is a faulty unit. You can replace it with a better fan, but you have to disassemble the card. I don't think you can refill it. Temps are not great but OK; the default temp target is 65, so your temps max out at 65-66C in the worst case. The temp target affects your clocks, so your temps will be static but your clocks variable.

So: change the fan, set a constant silent RPM for it (motherboard connection), and tweak your settings around this cooling capacity.


----------



## fursko

Quote:


> Originally Posted by *The EX1*
> 
> *PSA:* For anyone who bought Vega and was waiting on the Wolfenstein codes still, they just went live on the AMD rewards site.


Amazon didn't give me any code. Can I get Wolfenstein from AMD?


----------



## The EX1

Quote:


> Originally Posted by *fursko*
> 
> Amazon didnt gave me any code. Can i get wolfenstein from amd ?


You can only get the codes if they were included in the sale. Not all Vega SKUs had an attached game bundle.


----------



## geriatricpollywog

Is anybody selling their code? PM me.


----------



## fursko

Quote:


> Originally Posted by *The EX1*
> 
> You can only get the codes if they were included in the sale. Not all Vega SKUs had an attached game bundle.


Actually, I paid more than the game-pack price, but it wasn't a game-pack Vega. It's really annoying.


----------



## SPLWF

Quote:


> Originally Posted by *The EX1*
> 
> *PSA:* For anyone who bought Vega and was waiting on the Wolfenstein codes still, they just went live on the AMD rewards site.


Damn, I just bought a Vega 56, no code


----------



## Naeem

Quote:


> Originally Posted by *fursko*
> 
> Amazon didnt gave me any code. Can i get wolfenstein from amd ?


I bought the Limited Edition LC from Amazon, and no codes either.


----------



## jbravo14

Quote:


> Originally Posted by *SPLWF*
> 
> Damn, I just bought a Vega 56, no code


I will be returning the Vega 64 I got for $599; I bought a Vega 56 for $407.

Based on the stock performance I saw, the difference between the V56 and V64 is not worth $150-$200 more.
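A quick price-per-performance check makes the value argument concrete; a sketch, where the ~15% stock-performance gap between the V64 and V56 is an assumed round figure for illustration, not a measurement from this thread:

```python
def dollars_per_unit_perf(price_usd: float, relative_perf: float) -> float:
    """Price divided by performance relative to a baseline card."""
    return price_usd / relative_perf

v56 = dollars_per_unit_perf(407, 1.00)   # baseline: V56 at $407
v64 = dollars_per_unit_perf(599, 1.15)   # assumed ~15% faster at stock
print(round(v56), round(v64))  # 407 521
```

Even granting the V64 a generous stock lead, the V56 comes out well ahead on dollars per frame.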


----------



## alphasaur

I've been having tons of HBM downclocking issues in some games (notably BF1) that caused huge dips in FPS for me.

Thanks to this thread I'm running OverdriveNTool, and I'm able to lock clocks and run BF1 all ultra at 3200x1800.









Right now I'm running 1575 MHz at 1050mV and 1040 MHz HBM at 975mV, with the power slider at +50%.

I've had issues with my Vega freezing at anything above 1610 MHz; anyone have any tips? I'm on an EKWB with max hotspot temps of around 55-60C. I tried a couple of different BIOSes with no luck.

Anyone able to retrieve their key for Wolfenstein yet?


----------



## rancor

Quote:


> Originally Posted by *alphasaur*
> 
> Ive been having tons of hbm downclocking issues in some games (notably bf1) that caused huge dips in fps for me.
> 
> Thanks to this thread I'm running overdriventool and I'm able to lock clocks and run bf1 all ultra at 3200x1800
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Right now I'm running 1575 at 1050mv and 1040hbm at 975mv and power slider at 50%.
> 
> I've had issues with my vega freezing at anything above 1610MHz, anyone have any tips? On an EKWB with max hotspot temps of around 55-60c. I tried a couple of different bios with no luck.
> 
> Anyone able to retrieve their key for Wolfenstein yet?


More voltage; use the LC BIOS for more power and core voltage headroom if you want. Try raising core voltage, memory voltage (1050mV max), or both. With enough radiators you shouldn't have any temp issues, even up to 400W of GPU core power.


----------



## kundica

Quote:


> Originally Posted by *alphasaur*
> 
> I've had issues with my vega freezing at anything above 1610MHz, anyone have any tips? On an EKWB with max hotspot temps of around 55-60c. I tried a couple of different bios with no luck.


What voltage were you at when you tried above 1610?


----------



## The EX1

Quote:


> Originally Posted by *alphasaur*
> 
> Anyone able to retrieve their key for Wolfenstein yet?


Yes. The codes went live today.


----------



## Rootax

Is it me, or do the 17.10.2 drivers not respect undervolting? With previous drivers, the max voltage was whatever I set with OverdriveNTool or Wattman or whatever. But with 17.10.2 (or the other WDDM 2.3 drivers), the voltage stays at default (1.2V at full load)... Not a biiiig deal, but still...


----------



## Xender

Quote:


> Originally Posted by *The EX1*
> 
> Yes. The codes went live today.


What is the web page address for the keys?


----------



## kundica

Quote:


> Originally Posted by *Xender*
> 
> What this the web page address for keys?


amdrewards.com


----------



## Chaoz

Quote:


> Originally Posted by *Xender*
> 
> So what's about this liquid cooler after two-three years? When some of liquid vaporize? Will it be rubbish?


It can't evaporate, as it's a closed liquid cooler.

I had an H70 a while back and it lasted me 4.5 years without any refills or anything; the pump just died at the end because of an air bubble.


----------



## Xender

One question regarding the Vega Liquid Cooler: will it work mounted vertically, like in a Fortress FT05 case?


----------



## The EX1

Quote:


> Originally Posted by *Xender*
> 
> One question regarding Vega Liquid Cooler - will it work vertically? Like in Fortress 05 case?


Yes, that won't be a problem. It is a closed and pressurized cooling loop.


----------



## Paxi

Has anyone experienced FreeSync not working properly anymore with the Fall Creators Update and the newest drivers?

It was already mentioned by a guy on the official AMD forum, and by some in the driver-download comments on Guru3D.

I was able to enable it again after setting the monitor refresh rate to 75Hz (the maximum in my case) in the display adapter settings in Windows, but it seems to act weird in CS:GO; the game kind of stutters and does not seem as smooth as before.


----------



## astrixx

Just in case no one has posted it yet: 17.10.3

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.10.3-Release-Notes.aspx

Fixed Issues:
- Wolfenstein™ II: The New Colossus may experience a crash or application hang on game launch with Radeon RX Vega series graphics products.
- Destiny 2™ may experience a game crash or application hang during single player mission six on Radeon RX Vega series graphics products.


----------



## madmanmarz

I am having a crashfest in Destiny 2; tried using ClockBlocker, doesn't seem to help!!!! This is the second game to do this; it seems to be a multiplayer-games/downclocking issue.

I haven't tried locking the clocks with OverdriveNTool, but I am going to revert back to the regular Vega 56 BIOS until it can be fixed. I have a feeling all the different clocks and voltages just aren't stable, because I cannot run the 64 BIOS at stock settings at all, even in power save!


----------



## astrixx

Quote:


> Originally Posted by *madmanmarz*
> 
> I am having a crashfest in destiny 2, tried using clockblocker doesn't seem to help!!!! This is the second game to do this, seems to be in multiplayer games/downclocking issue.
> 
> I haven't tried locking the clocks with overclockntool but I am going to revert back to regularass vega 56 bios until it can be fixed. I have a feeling all the different clocks and voltages just arent stable because i cannot run the 64 bios at stock settings at all, even in power save!


Which driver and which version of Windows 10 are you using? The more stable one would be the Windows Creators Update, not the Fall one.


----------



## CaptainTom

Quote:


> Originally Posted by *0451*
> 
> Nice. Do you have any ultrawide 1440/4k results for the people who can afford a Titan xP?


I actually do have some. I don't have exact numbers off the top of my head since I am out of town, but I can confirm that in 4K the 1080 Ti wins in Metro, wins in BioShock, and ties in Deus Ex. It's a wash for the most part at 1440p. I will re-bench once I am done tweaking clocks and have more modern games to add.


----------



## madmanmarz

Quote:


> Originally Posted by *astrixx*
> 
> Which driver and version of Windows 10 are you using? The more stable would be windows CU not the fall one.


Switched to this latest one that's supposed to fix crashes, but I'm on the Fall update of Windows. I was having similar issues with Xenoverse 2, and others reported similar downclocking etc. in some games. Bahhh.

Welp, it still crashes on the original BIOS, so that's not it. Definitely the downclocking. Guess I'll try to lock in P7.


----------



## The EX1

Quote:


> Originally Posted by *madmanmarz*
> 
> I am having a crashfest in destiny 2, tried using clockblocker doesn't seem to help!!!! This is the second game to do this, seems to be in multiplayer games/downclocking issue.
> 
> I haven't tried locking the clocks with overclockntool but I am going to revert back to regularass vega 56 bios until it can be fixed. I have a feeling all the different clocks and voltages just arent stable because i cannot run the 64 bios at stock settings at all, even in power save!


Destiny turns on v-sync by default and sets it to your monitor's refresh rate. Your issue might be that the game is frame-capping your card, causing the p-state and clock fluctuations, especially if you are playing at 1080p or 60Hz.
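A toy model of why a frame cap can trigger that (the 90% utilisation threshold is an illustrative assumption; AMD's actual DPM heuristics are not public):

```python
def gpu_utilisation(capped_fps: float, uncapped_fps: float) -> float:
    """Approximate GPU load when v-sync caps output below what the
    card could render uncapped (toy model, ignores CPU limits)."""
    return min(1.0, capped_fps / uncapped_fps)

def may_downclock(util: float, threshold: float = 0.9) -> bool:
    """Illustrative: below some load threshold the driver drops to a
    lower p-state, which reads as the clock fluctuation described above."""
    return util < threshold

u = gpu_utilisation(60, 110)   # 60 Hz cap, card capable of ~110 FPS
print(round(u, 2), may_downclock(u))  # 0.55 True
```

Which is also why locking p-states (or raising the cap/resolution so the card stays loaded) works around it.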


----------



## AngryLobster

Quote:


> Originally Posted by *fursko*
> 
> Yeah lc version very good. But tubes very long. Creates mess in my case lol ^^ and you should check for price. Its so closer to 1080 ti. Im considering returnin my vega 64 lc and getting 1080 ti. Im waiting my samsung chg70. If freesync 2 works good i will keep my vega but if it not works probably i will go for 1080 ti. Its like 80 fps with freesync vs 100 fps without sync. Prices almost same.
> 
> Vega 64 LC has good temps. My fan faulty and makes a lot of noise. Changing default fan would be better for noise. So good temps, good noise, good power bios. No problem with overwatch, no problem with alttab but i cant open deus ex. This game is buggy not a vega problem. But you should waste a lot of time for optimum tweak and you should enjoy while doing it. I like radeon driver interface too. Nvidia drivers looks bad.
> 
> But nvidia feels more reliable. I feel like vega can make bad surprises. Like Destiny 2 mission 6 problem. Its not good. I use my old nvidia card four years and 0 problem.


My brother actually has a CHG70 + 1080 Ti, and that's what I borrowed to test Freesync. It works fine, and so does LFC.

We are seeing basically the same thing. On average, Vega at 1440p is 20 FPS behind, with the worst-case scenario being 30 FPS behind his 1080 Ti. I think the "smoothness" of Freesync is worth sacrificing 25 FPS for; it makes Vega's horrendous frame-time variation manageable.

I'm just concerned with how loud the pump is. The fan I don't care about, because it's an easy swap using a mini 4-pin GPU adapter, or even just the motherboard header. I ordered an LC (the MSI Wave, with their better caps) and it's on the way. Yes, a 1080 Ti is faster, but Freesync makes up for the performance gap; the question is how long it will stay working.


----------



## cplifj

What is this, AMD???? Every ***** FRIKIN time there is a new driver and I install it, MY BLOODY HARDWARE RESERVED MEMORY FOR THE CARD GROWS and EVEN DOUBLES.

What kind of rootkit spy**** does this crap contain, really??

It runs just as fine with 4.3MB of hardware reserved memory...

Does this mean I'm gonna have to reset CMOS on every driver update now??? Great hardware, ****ty software...
same as it ever was...


----------



## The EX1

Quote:


> Originally Posted by *cplifj*
> 
> What is this AMD ????, every ***** FRIKIN time there is a new driver and i install it , MY BLOODY HARDWARE RESERVED MEMORY FOR THE CARD GROWS and EVEN DOUBLES.
> 
> what kind of rootkit spy**** does this crap contain really ??
> 
> when it runs just as fine with 4.3MB hardware reserved memory.....
> 
> does this mean i'm gonna have to resert cmos every driver update now ??? Great hardware , ****ty software...
> same as it ever was...


Do you have HBCC or ReLive enabled? Try turning them off and see. How much memory does it show being used?


----------



## cplifj

those things don't change anything. It's purely with a new driver install, almost every time. Something is wrong with the way their software works on UEFI.

Still not according to spec, apparently.

edit: after a CMOS reset, I have found the following to be the case with the new driver:

the hardware reserved memory amount now actually changes when switching the HBCC memory segment on or off.

when HBCC is on I have 4.3 MB hardware reserved memory, and when HBCC is off it switches to 68.2 MB,

I have not witnessed this change in hardware reserved memory before, so it probably now works as it should when switching HBCC on or off?

Does this mean I'm gonna have to reapply all settings on each new driver install?


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> I am having a crashfest in Destiny 2; tried using ClockBlocker, doesn't seem to help!!!! This is the second game to do this, seems to be a multiplayer-game/downclocking issue.
> 
> I haven't tried locking the clocks with OverdriveNTool but I am going to revert back to the regular Vega 56 BIOS until it can be fixed. I have a feeling all the different clocks and voltages just aren't stable, because I cannot run the 64 BIOS at stock settings at all, even in power save!


Quote:


> Originally Posted by *AngryLobster*
> 
> My brother actually has a CHG70 + 1080 Ti and that's what I borrowed to test Freesync. It works fine and so does LFC.
> 
> We are seeing basically the same thing. On average Vega at 1440p is 20 FPS behind, with the worst-case scenario being 30 FPS against his 1080 Ti. I think the "smoothness" of Freesync is worth sacrificing 25 FPS for. It makes Vega's horrendous frame variation manageable.
> 
> I'm just concerned with how loud the pump is. The fan I don't care about because it's an easy swap using a mini 4-pin GPU adapter or even just the motherboard. I ordered an LC (MSI Wave with their better caps) and it's on the way. Yes, a 1080 Ti is faster, but Freesync makes up for the performance gap. The question is how often it will stay working.


Again I think there's a bit of a misunderstanding here. The issues were not with the CHG70 series, which is a Freesync 2 product. The previous CF series had some issues with Freesync flicker (any monitor based on that Samsung VA panel, actually) and it took Samsung ages to work out a fix and help AMD actually solve the issue. Freesync won't suddenly stop working either; it's not Crossfire.

The Vega LC is not meant to contend with the 1080 Ti, though as more DX12 games release you'll find it contends closer to it more often.

The "frame variation" is also purely that the card is not being fed enough info, so it drops clocks. There have been many ways mentioned so far on how to lock clocks to P7 using Wattman, OverdriveNTool and ClockBlocker. However, as there's a couple of people having trouble with clocks again, I'll post the ways again.

Just use one of the methods below:

1) In Wattman you can right-click on the P-state number and select the minimum and maximum P-states. So right-click on P6 and set it to Min, and leave P7 on Max. Same with HBM: right-click on the P4 state and set it to both Min and Max. No more downclocking. Though with 10.1 and 10.2 I can't use Wattman at all due to the resize crash, so I recommend using the OverdriveNTool or CB methods.

2) With OverdriveNTool you can left-click on the P-state number to disable it. So disable all P-states lower than 7 for core and lower than 4 for HBM. Voila, no more downclocking.



3) Use ClockBlocker in combination with Wattman or OverdriveNTool. Set the OC/UV settings required, then run ClockBlocker. In ClockBlocker you need to set rules for applications; using the default BLOCK or DOWNCLOCK settings can work too, but you'll need to set those manually each time. Using rules you can set and forget for different programs. If you don't set rules you may find some erratic behaviour, as it may not block for some games if you run them Windowed Fullscreen etc.

Here's an example of rules:
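Side note for anyone who'd rather script method 2 than click through it after every reboot: OverdriveNTool also has a command-line profile switch (`-p`), so a saved profile can be re-applied from a Task Scheduler logon task. A minimal sketch, assuming that switch behaves as its readme describes; the paths and the `Vega_P7_lock` profile name are hypothetical:

```python
import subprocess

def build_ont_command(exe, gpu_id, profile):
    """Build the OverdriveNTool command line that applies a saved
    profile to one GPU (the -p<id><name> switch; profile names live
    in OverdriveNTool.ini next to the exe)."""
    return [str(exe), f"-p{gpu_id}{profile}"]

def apply_profile(exe, gpu_id=0, profile="Vega_P7_lock"):
    """Re-apply the profile, e.g. at logon, so the P-state/undervolt
    settings survive reboots and driver resets."""
    subprocess.run(build_ont_command(exe, gpu_id, profile), check=True)

# Example (hypothetical install path):
# apply_profile(r"C:\Tools\OverdriveNTool.exe")
```

Pointing a scheduled task at a script like this gets you the "set and forget" behaviour without relying on Wattman re-applying anything.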


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> those things don't change anything. It's purely with a new driver install, almost every time. Something is wrong with the way their software works on UEFI.
> 
> Still not according to spec, apparently.
> 
> edit: after a CMOS reset, I have found the following to be the case with the new driver:
> 
> the hardware reserved memory amount now actually changes when switching the HBCC memory segment on or off.
> 
> when HBCC is on I have 4.3 MB hardware reserved memory, and when HBCC is off it switches to 68.2 MB,
> 
> I have not witnessed this change in hardware reserved memory before, so it probably now works as it should when switching HBCC on or off?
> 
> Does this mean I'm gonna have to reapply all settings on each new driver install?


Likely. I always reset settings for my GPU after a driver update, as you never know what the new driver is capable of until tested. TBH I also think worrying about hardware reserved memory based on something Windows is measuring is pointless. We all know Windows can't count properly








For instance, it's constantly telling me my CPU is running at anything but the frequency it actually runs at!


----------



## astrixx

I had to revert back; I had issues running DCS 2.1 on the Fall Creators Update and 17.10.2.


----------



## pmc25

All you people having problems on W10, why do you not just revert to W7? W10 is an absolute steaming pile.


----------



## Ipak

I've owned Vega for almost a month; the only major issue was the Destiny 2 crash, which is now fixed. I'm on W10.


----------



## TrixX

Interesting info on the Morpheus 2 on Vega


----------



## astrixx

Quote:


> Originally Posted by *pmc25*
> 
> All you people having problems on W10, why do you not just revert to W7? W10 is an absolute steaming pile.


Windows 10 is just fine; it's better than anything else. Most people who want stability just defer for a while and let drivers get stable on new feature updates. Some like living on the edge, trying out new features and testing performance. Others defer for years, like those on Windows 7 lol.


----------



## fursko

Quote:


> Originally Posted by *AngryLobster*
> 
> My brother actually has a CHG70 + 1080 Ti and that's what I borrowed to test Freesync. It works fine and so does LFC.
> 
> We are seeing basically the same thing. On average Vega at 1440p is 20FPS behind with worse case scenario being 30FPS against his 1080 Ti. I think the "smoothness" of Freesync is worth sacrificing 25FPS for. It makes Vega's horrendous frame variation manageable.
> 
> I'm just concerned with how loud the pump is. The fan I don't care about because it's a easy swap using a mini 4 pin GPU adapter or even just the motherboard. I ordered a LC (MSI Wave with their better caps) and it's on the way, yes a 1080 Ti is faster but Freesync makes up for the performance gap. The question is how often will it stay working.


Thanks for the information. This is good news. Samsung is known for bad Freesync issues, but it looks like the CHG70 works well. Actually, in some games the Vega 64 LC can catch 1080 Ti performance, and I believe Vega will get better with time.

I don't hear pump noise, but my monitor is 1080p and I heard a lot of coil whine in menus and some games. Lowering the power target to -50% works perfectly. The GPU pulls 60-150 W (depending on the game) while maintaining high performance. My 5 GHz 7700K pulls more watts than this GPU lol ^^ When I push the limits my GPU becomes a monster. I can see 600 W total system power consumption and performance close to a 1080 Ti.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Thanks for the information. This is good news. Samsung is known for bad Freesync issues, but it looks like the CHG70 works well. Actually, in some games the Vega 64 LC can catch 1080 Ti performance, and I believe Vega will get better with time.
> 
> I don't hear pump noise, but my monitor is 1080p and I heard a lot of coil whine in menus and some games. Lowering the power target to -50% works perfectly. The GPU pulls 60-150 W (depending on the game) while maintaining high performance. My 5 GHz 7700K pulls more watts than this GPU lol ^^ When I push the limits my GPU becomes a monster. I can see 600 W total system power consumption and performance close to a 1080 Ti.


Use the Radeon Settings FPS limit (300 max) to stop the coil whine in menus. The only time I've heard it is when I hit 4000 FPS in the pCARS 2 menus. Since then, driver-limiting to 300 FPS has stopped any further recurrence of coil whine. It's also more reliable than in-game limits, which only apply once you're past certain loading points, whereas the driver's limit applies all the time.


----------



## bill1971

Is LFC enabled by default, or do I have to enable it in Radeon Settings?


----------



## madmanmarz

Quote:


> Originally Posted by *The EX1*
> 
> Destiny does turn on v-sync and sets it to the refresh rate of your monitor by default. Your issue might be because the game is frame capping your card, causing the p-state and clock fluctuations. Especially if you are playing at 1080p or 60hz.


Not frame capping. Using Enhanced Sync, with v-sync off in game. 2560x1080 @ 75 Hz; frames hit 100 at times.
Quote:


> Originally Posted by *TrixX*
> 
> 
> 
> 
> 
> Interesting info on the Morpheus 2 on Vega


I'm gonna try not using the daisy-chained connectors, although I have an 850 W Seasonic Gold so I didn't think I'd have a problem. Plus I'm undervolting.
Quote:


> Originally Posted by *TrixX*
> 
> Again I think there's a bit of a misunderstanding here. The issues were not with the CHG70 series, which is a Freesync 2 product. The previous CF series had some issues with Freesync flicker (any monitor based on that Samsung VA panel, actually) and it took Samsung ages to work out a fix and help AMD actually solve the issue. Freesync won't suddenly stop working either; it's not Crossfire.
> 
> The Vega LC is not meant to contend with the 1080 Ti, though as more DX12 games release you'll find it contends closer to it more often.
> 
> The "frame variation" is also purely that the card is not being fed enough info, so it drops clocks. There have been many ways mentioned so far on how to lock clocks to P7 using Wattman, OverdriveNTool and ClockBlocker. However, as there's a couple of people having trouble with clocks again, I'll post the ways again.
> 
> Just use one of the methods below:
> 
> 1) In Wattman you can right-click on the P-state number and select the minimum and maximum P-states. So right-click on P6 and set it to Min, and leave P7 on Max. Same with HBM: right-click on the P4 state and set it to both Min and Max. No more downclocking. Though with 10.1 and 10.2 I can't use Wattman at all due to the resize crash, so I recommend using the OverdriveNTool or CB methods.
> 
> 2) With OverdriveNTool you can left-click on the P-state number to disable it. So disable all P-states lower than 7 for core and lower than 4 for HBM. Voila, no more downclocking.
> 
> 
> 
> 3) Use ClockBlocker in combination with Wattman or OverdriveNTool. Set the OC/UV settings required, then run ClockBlocker. In ClockBlocker you need to set rules for applications; using the default BLOCK or DOWNCLOCK settings can work too, but you'll need to set those manually each time. Using rules you can set and forget for different programs. If you don't set rules you may find some erratic behaviour, as it may not block for some games if you run them Windowed Fullscreen etc.
> 
> Here's an example of rules:


I am going to try locking the states, but I shouldn't have to! There are plenty of people playing the game with no problem, and I'm having issues even at stock settings with the stock BIOS! I haven't had issues in any other game except this one and Xenoverse 2. Many, many other games and benchmarks play along just fine. I'm really trying to avoid having to use 3 or 4 different pieces of software just so this card can behave the way it's supposed to. As I also mentioned, it's also a problem using the 64 AIR/LC BIOS, because the default clocks are not stable on my card, so if the clocks/voltages revert to stock I get crashes!

Also, isn't it strange that I didn't have any problems until I got to the multiplayer part of the game? Anyway, I had already tried ClockBlocker and had set up a rule for Destiny 2, but it still didn't work.


----------



## fursko

In Overwatch, for example, my FPS is limited to 200, and if I look at the sky I hear coil whine; if I look at a wall, no coil whine. This is weird. The coil whine depends on my mouse movement, and my FPS is always 200. What do you think about this?


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> _I am going to try locking the states, but I shouldn't have to! There are plenty of people playing the game with no problem, and I'm having issues even at stock settings with the stock BIOS! I haven't had issues in any other game except this one and Xenoverse 2. Many, many other games and benchmarks play along just fine. I'm really trying to avoid having to use 3 or 4 different pieces of software just so this card can behave the way it's supposed to. As I also mentioned, it's also a problem using the AIR/LC BIOS, because the default clocks are not stable on my card, so if the clocks/voltages revert to stock I get crashes!
> 
> It's also strange that I didn't have any problems until I got to the multiplayer part of the game._


Personally I don't look at it from an 'I shouldn't have to do this' perspective. TBH every graphics card I've ever had has had its own idiosyncrasies which required me to fault-find and fix. In Vega's case I noticed immediately, on stock BIOS and settings, that the card was shockingly bad at holding smooth frame rates, causing massive stutters (especially in PlayerUnknown's Battlegrounds). So the first thing I did was wonder why my clocks were dropping to 800 MHz or less; then, when boosting up to full speed for a firefight or something, it would freeze for a couple of seconds, which would often result in my death.

The solution was to lock the P7 state, though I didn't know about the feature in Wattman at the time. Honestly, Wattman is the worst solution, as it has no reliable way to apply saved custom settings repeatedly.

In comes OverdriveNTool, which has much stronger support for restarting with the same settings. Initially, disabling the other P-states was non-functional for Vega, though it has that functionality now.

ClockBlocker solves the issue permanently for me. No issues with it, and it locks P7 perfectly all the time. In some ways I find OverdriveNTool works better for benchmarking, as there's no need to apply a load to the GPU like CB does; that load is minimal, though, and doesn't affect performance.

Quote:


> Originally Posted by *fursko*
> 
> In Overwatch, for example, my FPS is limited to 200, and if I look at the sky I hear coil whine; if I look at a wall, no coil whine. This is weird. The coil whine depends on my mouse movement, and my FPS is always 200. What do you think about this?


Is it locked to 200 FPS in Overwatch or in Radeon Settings? If it's locked in Overwatch, it may still hit 4000 FPS at the driver level; the game may just cull the extra and use 200, rather than it being capped at a max via the driver.


----------



## cplifj

did AMD even test the new 17.10.3 driver on Forza 6 or 7, for instance ????

what i'm experiencing now is a major lag and stutterfest; this one is a REAL JOKE

17.10.3 is a Forza KILLER.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Is it locked to 200 FPS in Overwatch or in Radeon Settings? If it's locked in Overwatch, it may still hit 4000 FPS at the driver level; the game may just cull the extra and use 200, rather than it being capped at a max via the driver.


Yeah, it's locked in the game settings. I don't prefer Radeon Settings; it adds a lot of input lag. Is your GPU the LC version too? What is your sweet spot? It looks like my card is unstable when undervolted: it automatically lowers clocks and still crashes. I'm getting different clocks in every game. I don't lose MHz until I reach 1150 mV on the P7 state; HBM works fine at 1150 MHz. HBM stock is 950 mV, is that changeable? My clocks are around 1660/1730 MHz core and 1150 MHz HBM2 if I set +50% power, but I'm not sure whether that's stable.


----------



## pmc25

Quote:


> Originally Posted by *astrixx*
> 
> Windows 10 is just fine; it's better than anything else. Most people who want stability just defer for a while and let drivers get stable on new feature updates. Some like living on the edge, trying out new features and testing performance. Others defer for years, like those on Windows 7 lol.


It's slower, less stable, full of bugs and full of spyware? What is there to possibly gain?

Marginally better font and icon scaling at higher resolutions, and HBCC on Vega?

I don't think it's a question of deferring. It's a clearly inferior product, and one that you can't adequately control. For me, 7 is the end of the line. If software and hardware vendors stop supporting it, then it's back to a Linux only box.


----------



## jbravo14

Quote:


> Originally Posted by *AngryLobster*
> 
> My brother actually has a CHG70 + 1080 Ti and that's what I borrowed to test Freesync. It works fine and so does LFC.
> 
> We are seeing basically the same thing. On average Vega at 1440p is 20 FPS behind, with the worst-case scenario being 30 FPS against his 1080 Ti. I think the "smoothness" of Freesync is worth sacrificing 25 FPS for. It makes Vega's horrendous frame variation manageable.
> 
> I'm just concerned with how loud the pump is. The fan I don't care about because it's an easy swap using a mini 4-pin GPU adapter or even just the motherboard. I ordered an LC (MSI Wave with their better caps) and it's on the way. Yes, a 1080 Ti is faster, but Freesync makes up for the performance gap. The question is how often it will stay working.


If I understand this correctly, does this mean the Freesync monitor's LFC will work with the 1080 Ti?


----------



## madmanmarz

Quote:


> Originally Posted by *pmc25*
> 
> It's slower, less stable, full of bugs and full of spyware? What is there to possibly gain?
> 
> Marginally better font and icon scaling at higher resolutions, and HBCC on Vega?
> 
> I don't think it's a question of deferring. It's a clearly inferior product, and one that you can't adequately control. For me, 7 is the end of the line. If software and hardware vendors stop supporting it, then it's back to a Linux only box.












Quote:


> Originally Posted by *TrixX*
> 
> Personally I don't look at it from an 'I shouldn't have to do this' perspective. TBH every graphics card I've ever had has had its own idiosyncrasies which required me to fault-find and fix. In Vega's case I noticed immediately, on stock BIOS and settings, that the card was shockingly bad at holding smooth frame rates, causing massive stutters (especially in PlayerUnknown's Battlegrounds). So the first thing I did was wonder why my clocks were dropping to 800 MHz or less; then, when boosting up to full speed for a firefight or something, it would freeze for a couple of seconds, which would often result in my death.
> 
> The solution was to lock the P7 state, though I didn't know about the feature in Wattman at the time. Honestly, Wattman is the worst solution, as it has no reliable way to apply saved custom settings repeatedly.
> 
> In comes OverdriveNTool, which has much stronger support for restarting with the same settings. Initially, disabling the other P-states was non-functional for Vega, though it has that functionality now.
> 
> ClockBlocker solves the issue permanently for me. No issues with it, and it locks P7 perfectly all the time. In some ways I find OverdriveNTool works better for benchmarking, as there's no need to apply a load to the GPU like CB does; that load is minimal, though, and doesn't affect performance.
> Is it locked to 200 FPS in Overwatch or in Radeon Settings? If it's locked in Overwatch, it may still hit 4000 FPS at the driver level; the game may just cull the extra and use 200, rather than it being capped at a max via the driver.


I feel you, but the card should work at stock settings, especially in a game with a driver made specifically for it, when other people aren't having this issue. Maybe there is something else at play here.

I have not had any issues with Wattman keeping my settings after restarts/wakes, and I will continue to use it. I have tried OverdriveNTool many times and it hasn't made a difference for me. I just tried locking the P7 state in Wattman and it worked fine, except the game still crashed after about 30 min. I am running very safe overclocks, and the same thing was happening at stock clocks, so that's not it. I just ran two separate PCIe cables, so that wasn't it either. I'm losing my mind!


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Yeah, it's locked in the game settings. I don't prefer Radeon Settings; it adds a lot of input lag. Is your GPU the LC version too? What is your sweet spot? It looks like my card is unstable when undervolted: it automatically lowers clocks and still crashes. I'm getting different clocks in every game. I don't lose MHz until I reach 1150 mV on the P7 state; HBM works fine at 1150 MHz. HBM stock is 950 mV, is that changeable? My clocks are around 1660/1730 MHz core and 1150 MHz HBM2 if I set +50% power, but I'm not sure whether that's stable.


I'll be testing in-game FPS caps vs Radeon Settings FPS caps, though so far in PUBG, The Division, iRacing and Empyrion I haven't noticed any input lag as yet. iRacing in particular is very, very sensitive to input lag, so if it did have a problem I would hope I'd have noticed it









I'm running an air-cooled V64 at the moment. I have a water block here for it, but I haven't installed it yet as I was waiting on my CPU block. However, I've ordered more tubing and I'll be putting my water block on the GPU this week, as there's no ETA on the CPU block (expected it to be here by now...).

Settings wise here's my ONT settings for Gaming with my GPU air cooled:



I have switched to the LC 8774 BIOS for my card, and I also use Hellm's PowerPlay tables so I can use up to +200% power target rather than the stock power targets.

I also use mV to regulate core clocks instead of the actual core frequency settings. With the settings above I can expect a max of ~1495 MHz sustained, maxing out at ~1588 MHz under low GPU loads. Core wattage draw is ~200 W at these settings, and I'm mainly thermally limited in this case. Some games I know are much less GPU-intensive, so for example in iRacing I can use 1100-1200 mV and unlock the true GPU potential, giving me ~1680 MHz actual through to ~1750 MHz under low GPU loads (all other settings the same, only the P7 mV setting changed).

For benching I find my limit is a 300 W core draw (measured with Afterburner/HWiNFO) with the stock cooler, which severely hinders high-MHz operation of the card as the heatsink saturates incredibly fast.

I'm hoping that with the water block I can push the card towards much higher clocks, and mV settings as a result, as the heat dissipation will have a much higher ceiling.
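As a back-of-the-envelope check on numbers like these: dynamic power scales roughly with frequency times voltage squared, so you can estimate what a new frequency/voltage pair will draw from one measured operating point. A rough first-order sketch, not AMD's actual power management; the 1050/1150 mV points below are hypothetical, only the ~200 W at ~1495 MHz figure comes from the post above:

```python
def scaled_power(p0_watts, f0_mhz, v0_mv, f_mhz, v_mv):
    """First-order dynamic-power estimate: P scales with f * V^2.
    Starting from a measured point (p0 at f0/v0), guess the draw
    at a new frequency/voltage pair."""
    return p0_watts * (f_mhz / f0_mhz) * (v_mv / v0_mv) ** 2

# Hypothetical illustration: 200 W measured at 1495 MHz / 1050 mV,
# pushed to 1680 MHz / 1150 mV -> roughly 270 W estimated.
estimate = scaled_power(200, 1495, 1050, 1680, 1150)
```

It's crude (it ignores leakage and HBM draw), but it makes clear why a modest undervolt buys so much headroom: the voltage term is squared, the frequency term isn't.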


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I feel you, but the card should work at stock settings, especially in a game with a driver made specifically for it, when other people aren't having this issue. Maybe there is something else at play here.


I agree it should work with the driver and BIOS settings out of the box. However, with new hardware there are often concessions to the new tech on offer. It'll sound bad, but if you remember the GTX 4xx launch by Nvidia, they tried very hard to hide the fact it was slow and generally cooked anything that came into close proximity with it. They couldn't fix that, as it was an underlying hardware fault, and the GTX 5xx was released in a much shorter time frame than expected as a result.

By comparison, Vega is getting a very rough launch; there's very high variance in ASIC quality, and as such a one-size-fits-all settings setup is just not working very well despite the cards technically being the same. With the FE release we saw the same thing, many cards acting very differently to each other under the same SKU, so I did expect this scenario with Vega when I bought it.

TBH I think the driver team worked out that 1200 mV would work with all SKUs and that using the ACG to aggressively downclock would prevent the cards from grenading themselves initially. The problem is that some cards can handle voltages of 800 mV whereas others chuck a hissy fit below 1100 mV. So they settled on a known-good number, which currently causes way too many watts to be drawn for the cooler to cope with. If I go to stock clocks I see the GPU throttling under load right down to 1200 MHz at times. Which, while it works, is definitely not optimal.
Quote:


> Originally Posted by *madmanmarz*
> 
> I have not had any issues with wattman keeping my settings after restart/wakes and I will continue to use it. I have tried overclockntool many times and it hasn't made a difference for me. I just tried locking the p7 state in wattman and it worked fine except the game still crashed after about 30 min. I am running very safe overclocks and at stock clocks the same thing was happening, so that's not it. I just ran two seperate pci-e cables, so that wasn't it. I'm losing my mind!


Personally I've had nothing but trouble with Wattman. It crashes constantly now, though prior to that behaviour it would fail to apply certain settings reliably, it would say it had applied them yet run default settings, and it would fail to reset properly after 4-6 changes were applied; to my knowledge it still has all of these issues.

OverdriveNTool doesn't do any of these things, though if there is a driver crash it can fail to apply changes until a driver restart is done, either in Windows or by restarting the machine. Do you run something like Afterburner with an overlay of actual frequency while gaming or benchmarking? It can highlight whether you still suffer throttling from power/heat while testing, as it sounds like there is still something amiss in your OC settings, or potentially you have a multi-rail PSU and are running both cables off the same rail (an issue that was present in the Mindblank Morpheus II follow-up video I linked earlier in the thread).


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> I'll be testing in-game FPS caps vs Radeon Settings FPS caps, though so far in PUBG, The Division, iRacing and Empyrion I haven't noticed any input lag as yet. iRacing in particular is very, very sensitive to input lag, so if it did have a problem I would hope I'd have noticed it
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm running an Air Cooled V64 at the moment. I have a Water block here for it but I haven't installed it as yet as I was waiting on my CPU block. However I've ordered more tubing and I'll be putting my water block in this week for the GPU as there's no ETA on the CPU block (expected it to be here by now...).
> 
> Settings wise here's my ONT settings for Gaming with my GPU air cooled:
> 
> 
> 
> I have switched to the LC 8774 BIOS for my card and I also use Hellm's PowerPlay Tables so I can use up to +200% Power Target rather than the stock power targets.
> 
> I also use mV to regulate core clocks instead of the actual core frequency settings. With the settings above I can expect a max of ~1495 MHz sustained, maxing out at ~1588 MHz under low GPU loads. Core wattage draw is ~200 W at these settings, and I'm mainly thermally limited in this case. Some games I know are much less GPU-intensive, so for example in iRacing I can use 1100-1200 mV and unlock the true GPU potential, giving me ~1680 MHz actual through to ~1750 MHz under low GPU loads (all other settings the same, only the P7 mV setting changed).
> 
> For benching I find my limit is a 300W core draw (measured with Afterburner/HWiNFO) with the stock cooler which severely hinders the high MHz operation of the card as the heatsink is saturated incredibly fast.
> 
> I'm hoping with the Water Block I can push the card towards much much higher clocks and mv settings as a result as the heat dissipation will have a much much higher potential.


This looks nice. A custom water loop is best. My Vega's tubes ruin my case lol ^^

I'm watching my VDDC with GPU-Z and I see weird behavior. It only reads right outside of games. For example, if I set 1250 mV on P7 and lock P7, GPU-Z shows 1250 on the desktop, but in-game it's all over the place and generally 50 mV lower. So setting 1150 mV shows 1100 VDDC, or setting 1050 mV shows 1000 VDDC (max). If I set anything lower than 1050 mV it doesn't change; it's always 1000 VDDC.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> This looks nice. A custom water loop is best. My Vega's tubes ruin my case lol ^^
> 
> I'm watching my VDDC with GPU-Z and I see weird behavior. It only reads right outside of games. For example, if I set 1250 mV on P7 and lock P7, GPU-Z shows 1250 on the desktop, but in-game it's all over the place and generally 50 mV lower. So setting 1150 mV shows 1100 VDDC, or setting 1050 mV shows 1000 VDDC (max). If I set anything lower than 1050 mV it doesn't change; it's always 1000 VDDC.


If you are seeing VDDC dropping, then from what I've seen it's hitting the power limit, hence the use of the PowerPlay tables to relax it. With the stock power limit of +50% on the LC BIOS (max 260 W @ 0% power limit) you should max out around 390 W, which I believe is the total card limit, though maybe core only. With the 142% SoftPPT in the Vega BIOS thread you can now hit a theoretical 630 W with the card. I definitely don't recommend trying to find out, but it removes power as a limitation when trying to work out max clocks and card capabilities. For the stock Air BIOS, the 220 W version will allow 330 W with the normal +50% power and 533 W with the 142% SoftPPT.
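The power-limit figures quoted above are just the base power target times (1 + slider), which is easy to sanity-check; a quick sketch using the BIOS numbers from the post:

```python
def max_board_power(base_watts, power_limit_pct):
    """Max sustained draw for a given base power target and
    Power Limit slider setting (e.g. +50 -> 1.5x the base)."""
    return base_watts * (1 + power_limit_pct / 100)

# LC BIOS, 260 W base:
assert max_board_power(260, 50) == 390          # stock +50% slider
assert round(max_board_power(260, 142)) == 629  # ~630 W with the 142% SoftPPT
# Air BIOS, 220 W base:
assert max_board_power(220, 50) == 330
assert round(max_board_power(220, 142)) == 532  # ~533 W
```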

Use the core frequency as a max ceiling with P7: if on water you can set it to 1800 MHz so it doesn't go above that, then regulate the actual frequency with the mV and find the region you are happy with from a power-draw and frequency perspective. Test with multiple heavy loads like TimeSpy/Firestrike/Superposition and some normal gaming to make sure the settings run smoothly. Being able to see the actual frequency while running the benchmarks will show immediately if there is throttling.

Also make sure you find the minimum voltage for HBM, as it is the minimum voltage for the GPU in high power states. Set the min in HBM, match it in P6, and then increase P7 to what you want to test. Once you find the max you are happy with, you can see if you get a perf benefit from raising the HBM voltage, stopping once there's no longer a benefit.


----------



## madmanmarz

Quote:


> Originally Posted by *TrixX*
> 
> I agree it should work with the driver and BIOS settings out of the box. However, with new hardware there are often concessions to the new tech on offer. It'll sound bad, but if you remember the GTX 4xx launch by Nvidia, they tried very hard to hide the fact it was slow and generally cooked anything that came into close proximity with it. They couldn't fix that, as it was an underlying hardware fault, and the GTX 5xx was released in a much shorter time frame than expected as a result.
> 
> By comparison Vega is getting a very rough launch, there's a very high variance in ASIC quality and as such a one settings setup for the cards is just not working very well despite them technically being the same cards. With the FE release we saw the same thing, many cards acting very differently to each other under the same SKU so I did expect this scenario with Vega when I bought it.
> 
> TBH I think the driver team worked out that 1200mv would work with all SKU's and that using the ACG to aggressively downclock would prevent the cards from grenading themselves initially. Problem being that some cards can handle voltages of 800mv whereas others chuck a hissy fit below 1100mv. So they settled on a known good number that currently causes way too many watts to be drawn for the cooler to cope with. If I go to stock clocks I see the throttling of the GPU under load right down to 1200MHz at times. Which while it works is definitely not optimal.
> Personally I've had nothing but trouble with Wattman. It crashes constantly now, though prior to that behaviour it would fail to apply certain settings reliably, it would say it's applied them yet run default settings, it would fail to reset properly after 4-6 changes being applied and to my knowledge it still has all of these issues.
> 
> OverdriveNTool doesn't do any of these things, though if there is a driver crash it can fail to apply changes until a driver restart is done either in Windows or by restarting the machine. Do you use something like Afterburner with an overlay of actual frequency while gaming or benchmarking? It can highlight whether you still suffer throttling from power/heat while testing, as it sounds like there is still something amiss in either your OC settings, or potentially you even have a multi-rail PSU and are running both cables off the same rail (an issue that was present in the Mindblank Morpheus II follow-up video I linked earlier in the thread).


I think I got it, I forgot to lock my HBM state. Using wattman and locking p7 and 3rd HBM state seems to be working fine. Probably only really need to lock the HBM state. And yes I run GPU-Z on my second monitor at all times when gaming/benching.


----------



## spyshagg

Quote:


> Originally Posted by *TrixX*
> 
> I'll be testing with ingame FPS caps vs Radeon Settings FPS caps, though so far in PUBG, The Division, iRacing and Empyrion I haven't noticed any input lag as yet. iRacing in particular is very very sensitive to input lag, so if it does have a problem I would hope I'd have noticed it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm running an air-cooled V64 at the moment. I have a water block here for it but haven't installed it yet, as I was waiting on my CPU block. However, I've ordered more tubing and I'll be putting the GPU water block in this week, as there's no ETA on the CPU block (expected it to be here by now...).
> 
> Settings-wise, here are my ONT settings for gaming with my GPU air cooled:
> 
> 
> 
> I have switched to the LC 8774 BIOS for my card and I also use Hellm's PowerPlay Tables so I can use up to +200% Power Target rather than the stock power targets.
> 
> I also use mv to regulate core clocks instead of the actual core frequency settings. With the settings above I can expect a max of ~1495 MHz sustained, maxing out at ~1588 MHz with low GPU loads. Core wattage draw is ~200W at these settings and I'm mainly thermally limited in this case. Some games I know are much less GPU intensive, so for example in iRacing I can use 1100-1200mv and unlock the true GPU potential, giving me ~1680 MHz actual through to ~1750 MHz under low GPU loads (all other settings the same, only the P7 mv setting changed).
> 
> For benching I find my limit is a 300W core draw (measured with Afterburner/HWiNFO) with the stock cooler which severely hinders the high MHz operation of the card as the heatsink is saturated incredibly fast.
> 
> I'm hoping with the Water Block I can push the card towards much much higher clocks and mv settings as a result as the heat dissipation will have a much much higher potential.


1752mhz with 970mv

f*ck me.

I think I had to use 1160mv when testing 1732mhz last weekend (4900rpm)


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> I think I got it, I forgot to lock my HBM state. Using wattman and locking p7 and 3rd HBM state seems to be working fine. Probably only really need to lock the HBM state. And yes I run GPU-Z on my second monitor at all times when gaming/benching.


Yeah locking HBM is very important, sometimes it bugs out and drops to 800MHz or 500MHz (I was getting a lot of this before I started locking P-states). Personally I've had issues with GPU-Z when testing, not sure why, but it was causing some issues with maintaining OC/UV settings. Hence I use HWiNFO (with GPU VRM temps disabled) and Afterburner for the overlay of GPU frequency, wattage draw, voltages etc...
With all monitoring programs there's always a slight hit to performance but I prefer the monitoring aspect most of the time. When doing a high bench run I'll disable them.
Quote:


> Originally Posted by *spyshagg*
> 
> 1752mhz with 970mv
> 
> f*ck me.
> 
> I think I had to use 1160mv when testing 1732mhz last weekend (4900rpm)


I don't get 1752MHz at that voltage; it's the ceiling. I get 1752MHz at low GPU load with 1200mv, though with ~45% load at that voltage I get higher than 1680MHz all the time in iRacing, which really doesn't use the GPU much. With 970mv I get 1500MHz to 1580MHz from low to 100% GPU load, which highlights how I manage my clocks and power draw via the mv rather than the clock settings.


----------



## spyshagg

Quote:


> Originally Posted by *TrixX*
> 
> Yeah locking HBM is very important, sometimes it bugs out and drops to 800MHz or 500MHz (I was getting a lot of this before I started locking P-states). Personally I've had issues with GPU-Z when testing, not sure why, but it was causing some issues with maintaining OC/UV settings. Hence I use HWiNFO (with GPU VRM temps disabled) and Afterburner for the overlay of GPU frequency, wattage draw, voltages etc...
> With all monitoring programs there's always a slight hit to performance but I prefer the monitoring aspect most of the time. When doing a high bench run I'll disable them.
> I don't get 1752MHz at that voltage; it's the ceiling. I get 1752MHz at low GPU load with 1200mv, though with ~45% load at that voltage I get higher than 1680MHz all the time in iRacing, which really doesn't use the GPU much. With 970mv I get 1500MHz to 1580MHz from low to 100% GPU load, which highlights how I manage my clocks and power draw via the mv rather than the clock settings.


Yeah, you are absolutely correct. With very low mv the clocks drop; a bit higher and the clocks rise.

Why does it do this? The only real way to test max clocks for any given voltage is to increase the P7 clock, like you did.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> yeah you are absolutely correct. With very low mv the clocks drop. A bit higher and the clocks rise.
> 
> Why does it do this? The only real way to test max clocks for any given voltage is to increase P7 clocks, like you did.


AMDMatt at OCUK described it as the ACG or Advanced Clock Generator. It's a bit odd, but basically it's the reverse of Nvidia's boost clocks. With NV you set the low clock and it boosts as high as its power and thermal limits will allow in any given situation, making it very hard to work out its max clocks or even get a properly repeatable overclock.

AMD's version is to set the clock ceiling (in my case 1752MHz) and it will clock up to that max limit within power and thermal limits. So under full load it may downclock by 100MHz compared to low load. This is easily visible when you unlock the power target as a limit, set an almost ridiculous ceiling, and then scale the clocks up by adjusting the mv instead of trying to do it with the main clock speeds, which is so three gens ago now.
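The ceiling behaviour above can be illustrated with a toy model. This is purely illustrative, not AMD's actual algorithm: the quadratic power law and the constant `K` are assumptions chosen to give Vega-like numbers, but they show why dropping the voltage raises the sustainable clock at a fixed power budget.

```python
# Toy model of the clock-ceiling behaviour: dynamic power scales roughly
# with V^2 * f, so at a fixed power limit a lower voltage sustains a higher
# clock, capped by the P7 ceiling. K is a made-up fitting constant.
K = 0.136  # watts per (MHz * V^2), illustrative only

def sustained_clock(ceiling_mhz, millivolts, power_limit_w):
    """Return the clock the power budget allows, never above the ceiling."""
    volts = millivolts / 1000.0
    power_capped_mhz = power_limit_w / (K * volts * volts)
    return round(min(ceiling_mhz, power_capped_mhz))

# Same 1752 MHz ceiling, same 300 W budget, different voltages:
print(sustained_clock(1752, 1200, 300))  # power-limited below the ceiling
print(sustained_clock(1752, 1000, 300))  # budget allows the full 1752 MHz ceiling
```

Real cards add thermal throttling and load-dependent draw on top of this, which is why the observed clock also varies between light and heavy scenes.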


----------



## Nuke33

Quote:


> Originally Posted by *pmc25*
> 
> All you people having problems on W10, why do you not just revert to W7? W10 is an absolute steaming pile.


Quote:


> Originally Posted by *pmc25*
> 
> It's slower, less stable, full of bugs and full of spyware? What is there to possibly gain?
> 
> Marginally better font and icon scaling at higher resolutions, and HBCC on Vega?
> 
> I don't think it's a question of deferring. It's a clearly inferior product, and one that you can't adequately control. For me, 7 is the end of the line. If software and hardware vendors stop supporting it, then it's back to a Linux only box.


Yes, worst OS ever. Besides the constant spying, you are also beta testing for MS...









I have the Win10 Enterprise N edition on my laptop with everything regarding telemetry disabled, but checking with Wireshark shows it still sends data to Microsoft telemetry servers.


----------



## AngryLobster

TrixX's tutorial post on how to make my $500 card function has made me cancel my order for the LC.

We can talk about every GPU having idiosyncrasies (I've never experienced any with a 980Ti, 1070 or 1080), but this is far beyond that. These things were released in an unacceptable state.


----------



## geriatricpollywog

My 'Slav block has arrived! How do I install the LC bios?


----------



## Chaoz

Quote:


> Originally Posted by *0451*
> 
> My 'Slav block has arrived! How do I install the LC bios?
> 
> 
> Spoiler: Warning: Spoiler!


Download the LC BIOS, then flash it with ATIFlash: select the LC BIOS, click Program, then reboot. Done.
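For reference, the command-line flow looks roughly like this. This is a sketch only: the adapter index `0` and the file names are assumptions, and flags can differ between ATIFlash versions, so always back up the existing BIOS before programming anything.

```shell
atiflash -i                        # list adapters; confirm which index is the Vega
atiflash -s 0 original_backup.rom  # save the current BIOS from adapter 0 first
atiflash -p 0 vega64_lc.rom        # program the LC BIOS to adapter 0, then reboot
```

Keeping the backup ROM means you can flash back to stock the same way if the LC BIOS misbehaves.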

Got the same block. They look nice.


----------



## astrixx

I would avoid the Samsung HG70: FreeSync only works up to, I think, 100Hz, as the higher refresh rates use backlight strobing, where FreeSync doesn't work. The FreeSync range is also bad; it has two separate ranges, which is stupid. The reason for the backlight strobing is that it's a VA panel, and those have really bad pixel response times. I recommend going for an IPS panel with a FreeSync range like 30-144Hz or 40-144Hz.

http://www.tftcentral.co.uk/reviews/samsung_c32hg70.htm

This Acer XF270HU monitor below was awesome, but it is now discontinued; unfortunately TFTCentral only reviewed the G-Sync version. I have no idea why that Samsung even has FreeSync 2 associated with it; it should be FreeSync 0.5 if you ask me.

https://www.pccasegear.com/products/34666/acer-xf270hu-27in-freesync-ips-144hz-gaming-monitor


----------



## Reikoji

Quote:


> Originally Posted by *abso*
> 
> I read on some forum that turning on HBCC on Vega cards increases Input Lag in games. Are there any tests that actually can confirm this?


Haven't noticed it myself, really.
Quote:


> Originally Posted by *cplifj*
> 
> did AMD even test the new 17.10.3 driver on Forza 6 or 7 for instance ????
> 
> what i'm experiencing now is a major lag and stutterfest; this one is a REAL JOKE
> 
> 17.10.3 is a Forza KILLER.


It does indeed stutter every now and then in Forza 7 now, but I wouldn't call it a stutterfest or a Forza killer.


----------



## astrixx

This is a good resource as it shows Freesync ranges!

https://www.144hzmonitors.com/list-of-freesync-monitors/


----------



## Reikoji

Quote:


> Originally Posted by *astrixx*
> 
> I would avoid the Samsung HG70: FreeSync only works up to, I think, 100Hz, as the higher refresh rates use backlight strobing, where FreeSync doesn't work. The FreeSync range is also bad; it has two separate ranges, which is stupid. The reason for the backlight strobing is that it's a VA panel, and those have really bad pixel response times. I recommend going for an IPS panel with a FreeSync range like 30-144Hz or 40-144Hz.
> 
> http://www.tftcentral.co.uk/reviews/samsung_c32hg70.htm
> 
> This monitor below was awesome but it is now discontinued, unfortunately TFTCentral only reviewed the GSync version. I have no Idea why that Samsung even has Freesync 2 associated with it, it should be Freesync 0.5 if you ask me.
> 
> https://www.pccasegear.com/products/34666/acer-xf270hu-27in-freesync-ips-144hz-gaming-monitor


There are 2 FreeSync engines on the Samsung C32HG70: the Normal engine is a 90-120Hz range and the Ultimate engine is an 80-120Hz range. They have low framerate compensation, so it won't start going crazy until around 40fps or so.

Also, I have this monitor.


----------



## 99belle99

Quote:


> Originally Posted by *astrixx*
> 
> http://www.tftcentral.co.uk/reviews/samsung_c32hg70.htm
> I have no Idea why that Samsung even has Freesync 2 associated with it, it should be *Freesync 0.5* if you ask me.


I never knew that. Thanks for the link. I thought with Freesync 2 they were supposed to be a lot better than previous Freesync monitors.
Quote:


> Originally Posted by *Reikoji*
> 
> There are 2 freesync engines on the Samsung C32HG70. Normal engine is 90-120hz range and Ultimate engine is 80-120hz range. They have low frequency compensation so it wont start going crazy until like 40fps or so.
> 
> Also, I have this monitor.


Why isn't the FreeSync range included in the low framerate compensation? 72-144Hz seems pretty bad to me, especially for a FreeSync 2 branded monitor.


----------



## Reikoji

Quote:


> Originally Posted by *99belle99*
> 
> I never knew that. Thanks for the link. I thought with Freesync 2 they were supposed to be a lot better than previous Freesync monitors.
> Why isn't the Freesync range included in the low framerate compensation. 72-144Hz seems pretty bad to me especially for a Freesync 2 branded monitor.


I can only guess someone being lazy. The newest Samsung monitors are still not listed on AMD's FreeSync page either. I don't know the exact LFC range on it, but I know the FreeSync doesn't start acting up even at 60fps.

Changing to either FreeSync mode changes the monitor refresh rate to 120Hz, so I dunno where they get the 144Hz from for FreeSync mode.

In Wolfenstein II with Chill enabled, the game will drop down to 40fps. I have the FreeSync Ultimate engine enabled and it's not going bonkers.


----------



## fursko

Quote:


> Originally Posted by *astrixx*
> 
> I would avoid the Samsung HG70: FreeSync only works up to, I think, 100Hz, as the higher refresh rates use backlight strobing, where FreeSync doesn't work. The FreeSync range is also bad; it has two separate ranges, which is stupid. The reason for the backlight strobing is that it's a VA panel, and those have really bad pixel response times. I recommend going for an IPS panel with a FreeSync range like 30-144Hz or 40-144Hz.
> 
> http://www.tftcentral.co.uk/reviews/samsung_c32hg70.htm
> 
> This Acer XF270HU monitor below was awesome but it is now discontinued, unfortunately TFTCentral only reviewed the GSync version. I have no Idea why that Samsung even has Freesync 2 associated with it, it should be Freesync 0.5 if you ask me.
> 
> https://www.pccasegear.com/products/34666/acer-xf270hu-27in-freesync-ips-144hz-gaming-monitor


72-144 + LFC is not that bad actually. I don't like IPS glow and grey blacks; that's why I want a VA panel. The response times are bad, but that's OK. A 50-100 range over an HDMI cable is good too, if it works without flicker or any bugs.


----------



## fursko

Quote:


> Originally Posted by *AngryLobster*
> 
> TrixX's tutorial post on how to make my $500 card function has made me cancel my order for the LC.
> 
> We can talk about every GPU having idiosyncrasies (I've never experienced such with either a 980Ti, 1070 or 1080), but this is far beyond that. These things were released in a unacceptable state.


Air Vega cards need more tweaks, not the LC edition.

My V64 LC works perfectly and easily beats a GTX 1080 out of the box, just plug and play. Those tweaks aren't necessary, but if you want a little more performance and a little less power consumption you can tweak your GPU. Plus you get low noise, low temps, FreeSync, and a better driver interface with this card.

Clarification on power consumption: my old GTX 970 pulls 330W (total system power draw) and the new Vega 64 LC at stock pulls 430W (total) in Overwatch.

If you find a reasonable price you can buy the V64 LC. An average GTX 1080 is around $550; add +$100 for a water cooler = $650, so I think the V64 LC is easily worth $700 if you want a liquid cooler in your system. An air-cooled 1080 Ti at $750-800 makes sense too. It's a trade-off: you pay $50-100 more and get ~20% more performance, but you lose all the liquid-cooler benefits and FreeSync.


----------



## geriatricpollywog

This is the best I could do with my potato chip on water. I have the LC bios and the hellm LC powerplay table which is set to 400 amps. What would I change in the registry to set to 500 amps?


----------



## hellm

4th row from the bottom:
08,01,08,01,08,01,*90*,01
Change the 90 to F4.
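That edit works because the current limit is stored as a little-endian 16-bit value: `90,01` is 0x0190 = 400 (amps) and `F4,01` is 0x01F4 = 500. A quick check (illustrative; the surrounding table layout is as described above):

```python
import struct

# Decode the two bytes exactly as they appear in the table (low byte first).
assert struct.unpack("<H", bytes([0x90, 0x01]))[0] == 400  # stock 400 A limit
assert struct.unpack("<H", bytes([0xF4, 0x01]))[0] == 500  # modified 500 A limit

# Encode any desired limit back into the byte pair to enter in the table.
def amps_to_bytes(amps):
    lo, hi = struct.pack("<H", amps)
    return f"{lo:02X},{hi:02X}"

print(amps_to_bytes(500))  # F4,01
```

The same decode/encode applies to the other 16-bit limits in the PowerPlay table, as long as you know which byte pair you are looking at.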


----------



## Arghuin

Mind if I join the club?


----------



## spyshagg

Anyone else lost Firestrike/Timespy performance with 17.10.3 compared to 17.10.1?

I'm getting almost 1000 graphics points less on FS.

Also lost about 10MHz on HBM.


----------



## cplifj

Quote:


> Originally Posted by *Reikoji*
> 
> Havent noticed it myself really.
> It does indeed stutter every now and then in Forza 7 now, but I wouldn't call it a stutterfest or a Forza killer.


Something odd with Forza 7; apparently they had an update as well.

I had to max out all settings or the Vega 64 LC wouldn't even ramp up its clock speeds, and this at 2160p...


----------



## feedthenoob

With the new drivers I seemingly have a problem with my undervolts sticking...

When I set them right after a restart I can run the Heaven bench with the undervolt I set, but if I run a game (e.g. Rise of the Tomb Raider) it does not use the undervolt settings, but something that seems to be the "balanced" profile.

Anybody else seen this problem?


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> something odd with forza 7 , apparently they had an update also.
> 
> i had to max out all settings or the vega64 lc wouuldn't even ramp up it's clockspeeds. this at 2160p.......


Are you locking to P7 in Wattman (least effective), or using OverdriveNTool, or ClockBlocker?
Quote:


> Originally Posted by *feedthenoob*
> 
> With the new drivers I seemingly have a problem with my undervolts sticking...
> 
> When I set them right after a restart I can run the Heaven bench with the undervolt I set, but if I run a game (e.g. Rise of the Tomb Raider) it does not use the undervolt settings, but something that seems to be the "balanced" profile.
> 
> Anybody else seen this problem?!


One of the many issues of using just Wattman. Try OverdriveNTool and see if the behaviour is still the same.


----------



## gupsterg

@dagget3450

In da club mate.

  

My experience of the ref cooler lasted ~30min. I'd done some testing of the card as it came out of the box, and when I saw the monitoring data for a SuperPosition 4K Optimized preset run it defo made me go...



Next with WC.



I was happy to see I'd got a molded die when fitting the WB. Really liked how nice and soft the EK-included thermal pads were; they formed really nicely between the components and the block. I was going to use the usual Arctic Silver 5, but opted to try Thermal Grizzly Hydronaut. Damn was it a PITA to spread, even with the tube/card heated with a hairdryer.

Not done any PP mods or the LC VBIOS flash yet. Just getting to know her.

Did some F@H yesterday; was getting PPD of ~600K-900K at times in the lower left corner. Engaging all CPU threads last night seemed to pull the PPD down. Running it again today and may trim down the threads used for the CPU slots. The Cooler Master V850 (Seasonic OEM) is holding up well with the 1950X at stock and Vega at stock; I've seen a max of 500W on the wall plug meter for RB/F@H, and this is inc. screen, etc.



Bit more juicy than the GTX 1080 for power. Seeing higher water temps as well.

What made me smile was finally being back to having FreeSync in games.

I fired up SWBF and LOTF last night and it was so much better than the GTX 1080 experience, regardless of it boosting to ~1975MHz.
Quote:


> Originally Posted by *Chaoz*
> 
> I used the EKWB screws with stock backplate. Didn't add the tension plate, tho. Didn't think it was necessary as the tension plate wasn't on the image that EK posted with the block. Temps are still amazing, tho. My 64 never goes over 35°C.


I noticed in the 4th image in this post that you used the tension plate; what made you change your mind?

I may at some point use it, just to fill the X more in on backplate.
Quote:


> Originally Posted by *Chaoz*
> 
> Yeah, the stock backplate is indeed metal and decent looking enough to use it aswell.


Cheers, I decided to use that. May at some point add thermal pads between it and the rear of the PCB at the VRM areas.
Quote:


> Originally Posted by *bill1971*
> 
> lfc is enabled as default,or I have to enabled in radeon settings?


Yes, it's on by default (AFAIK there's no toggle for it). The monitor must have a max refresh rate greater than or equal to 2.5 times its minimum refresh rate; see this AMD PDF.
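That ratio rule is easy to sanity-check against a monitor's advertised range. A small illustration (the 2.5x figure is taken from the post above; check the AMD PDF for your own panel):

```python
def supports_lfc(min_hz, max_hz, ratio=2.5):
    """True if the FreeSync range is wide enough for Low Framerate Compensation."""
    return max_hz >= ratio * min_hz

# A 30-144Hz IPS range qualifies; the 72-144Hz range discussed earlier does not.
print(supports_lfc(30, 144))  # True
print(supports_lfc(72, 144))  # False
```

This is why the width of the range matters more than the top number: 72-144Hz and 30-144Hz share a maximum but behave very differently below the floor.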


----------



## Naeem

Quote:


> Originally Posted by *spyshagg*
> 
> anyone else lost firestrike/timespy performance with 17.10.3 compared to 17.10.1 ?
> 
> Im having almost -1000 graphics points less on FS.
> 
> Also lost about 10mhz on HBM.


Yes, I was hitting 26000+; now it stays around 25000 to 25500.


----------



## spyshagg

Quote:


> Originally Posted by *Naeem*
> 
> Yes, I was hitting 26000+; now it stays around 25000 to 25500.


good to know! thought it was my machine


----------



## Cannon19932006

My undervolts won't stick at all in Wattman or OverdriveNTool on 17.10.3. MSI Afterburner beta 20 works to undervolt, but it undervolts all power states, leading to instability in the lower P-states.


----------



## fursko

Wattman works well for me, but I can't create a profile for my tweak; it resets after shutdown. Game profiles work well.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Wattman works well for me, but I can't create a profile for my tweak; it resets after shutdown. Game profiles work well.


Sorry mate they don't. There's a bug where it'll apply the previous profile to the next game so you'll end up with odd profiles all over the place unless you set them to be the same for each one.

Wattman resets after restart all the time for some and sometimes for others. Hence I use OverdriveNTool as it doesn't get reset after restart and allows multiple saved profiles. It doesn't autolink game profiles, but seeing as that's broken in Radeon Settings anyway it's not really a loss.
Quote:


> Originally Posted by *Cannon19932006*
> 
> My undervolts won't stick at all in Wattman or Overdriventool on 17.10.3, MSI afterburner beta 20 works to undervolt, but undervolts all power states leading to instability in the lower p-states.


Just going to install 17.10.3 now; I'll run a few tests to see what is going down.


----------



## Cannon19932006

Quote:


> Originally Posted by *TrixX*
> 
> Just going to install 17.10.3 now will run a few tests to see what is going down


Cool, let me know how it goes. 17.10.2 seems to be the same: I can't get Wattman or OverdriveNTool to set voltages at all. It works for clocks, power limit, HBM frequency, basically everything except voltages. I've been playing a lot of Destiny 2 and don't really want to go back to 17.10.1 or earlier, so I may just have to deal with stock voltages and higher fan speeds until the next driver.


----------



## 113802

Quote:


> Originally Posted by *Naeem*
> 
> Yes, I was hitting 26000+; now it stays around 25000 to 25500.


HBCC increases 3DMark score. 17.10.3 may have disabled HBCC with the upgrade.


----------



## TrixX

Quote:


> Originally Posted by *Cannon19932006*
> 
> Cool, let me know how it goes. 17.10.2 seems to be the same, I can't get Wattman or Overdriventool to set voltages at all, works for clocks, power limit, hbm freq basically everything except voltages. Have been playing a lot of Destiny 2 and don't really want to go back to 17.10.1 or further, may just have to deal with stock voltages and higher fan speeds until next driver.


Well just done a pair of Superposition runs, first is stock LC BIOS settings with fan profile set to 4900 Max but leaving min at 38! Second is my Gaming undervolt settings which are lower than a month ago due to Australia hitting summer time and ambient temps getting above 30C during the day.







So for me undervolting is working as intended, maybe a bad driver installation, so IMO do a clean driver install and see if the problem persists.

Should note that power draw from the Stock settings was ~265W for Core, and with the undervolt it was ~180W...


----------



## cg4200

So I got my Gigabyte LC 64 on Friday; there looked to be fingerprints on the box and it wasn't wrapped great.
It had two little bends in the radiator fins. I installed it anyway, and boy is it cool and quiet.
Haven't got to test much yet; coming from a 56 with the 64 BIOS, the boost is not much different.
I seem to get a max of 1740 but it stays around 1710ish in game with 1075 HBM.
What average clocks are people getting with the LC edition?
I thought they were binned better?
I see the HBM voltage is low at 950.
I have a 56 with the 64 BIOS that I can run at 1075 HBM and 1680 gaming with the fan at 3600 RPM, a little loud but it games well.
I might get a waterblock for it; I think it would beat my LC.
Anyone use the XSPC block? It says it has fins at the HBM. Good, bad? I usually go EK.
I might call the Egg and do an RMA; it might have been a returned or really bad card.


----------



## punchmonster

Just chiming in to say that everything has been very stable for me on the latest drivers. No crashes, no Wattman weirdness. I am fully enjoying my card.


----------



## TrixX

Quote:


> Originally Posted by *cg4200*
> 
> So I got my gigabyte lc 64 Friday and it looked like fingerprints on box and not wrapped great..
> It had two little bends in radiator fins I installed anyway and boy it is cool and quiet .
> Have not got to test to much yet coming from a 56 with 64 bios boost is not much different..
> I seem to get max 1740 but stays around 1710ish in game with 1075 hbm.
> What are average clocks people getting with lc edition??
> I thought they were binned better??
> I see hbm voltage is low 950
> I have a 56 with 64 bios that i can run 1075 hbm 1680 gaming with fan at 3600 rpm a little load but games well ..
> I might get a waterblock for it think it would beat my lc..
> Anyone use xspc block says they have fins at hbm?? Good bad?? i usually go ek.
> I might call the egg and do rma might have been returned or really bad card


1710MHz sustained is pretty good for an LC. The HBM voltage is actually the GPU floor voltage for the high P-states, so keep it at 950mv and test your P6/P7 voltages first. Once you find limits with P6/P7 you can start raising the HBM voltage until you fail to get any further performance increases.

Also I'd advise using the modified PowerPlay Tables by Hellm in the Vega BIOS thread, as they remove power as a limiting factor when testing OCs.


----------



## TrixX

Well just done some extensive benching for the last couple of hours. First things first, Firestrike can get ****ed. Useless ******* unstable piece of ****. Ah that feels so much better









Anyway lots of images coming...

First up since I did the above Superposition testing I decided to give Firestrike a second chance. I hate Firestrike...

Stock settings, only the fan adjusted to 4900 RPM max, and boy did it need it. Throttling occurred a lot: power, thermal, all forms.









Spoiler: Warning: Spoiler!







Stock settings with ClockBlocker active, so the P7 state locked, and again power and thermal throttling ensued.


Spoiler: Warning: Spoiler!







My gaming settings from the previous Superposition testing here, and no throttling, power or thermal. Also around 80W less power draw...


Spoiler: Warning: Spoiler!







Now here should be my bench test settings. It's not here because Firestrike decided to go on strike and now doesn't work again. Damn unions...

So back to Superposition for some bench test grade OC's on air still with below settings:




Spoiler: Warning: Spoiler!







Now this result is just shy of 5K in 1080p Extreme, which before 17.10.3 was the realm of 1100mv for my card; now it's into the 1050mv realm. Really happy with this driver: no Wattman resize driver crash, though setting saving from Wattman is still hit and miss.
Also noting my max undervolt is still working fine for daily use, but stressful GPU apps can cause it to crash, so 900mv is still no good for my setup, but 920mv is still OK.


----------



## pmc25

Quote:


> Originally Posted by *pmc25*
> 
> Ok, well, no benchmark. [Why though???]
> 
> Snip.
> 
> Aside from that, I'm not certain RPM is enabled. Also seems highly suspicious that asynchronous compute barely makes a difference on AMD cards, either. I suspect it's not working properly. Gains were very good in DOOM.


As I posted in another thread the other day, it seemed like asynchronous compute was either not enabled or not working for AMD in Wolfenstein 2.

Lo and behold, they released a beta patch that "re-enabled" asynchronous compute (it was never enabled on public builds) on AMD cards. Apparently it was suspended and remains suspended on the 1080Ti, too.

They haven't commented publicly yet about whether RPM is working.


----------



## spyshagg

Quote:


> Originally Posted by *TrixX*
> 
> Well just done some extensive benching for the last couple of hours. First things first, Firestrike can get ****ed. Useless ******* unstable piece of ****. Ah that feels so much better
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Anyway lots of images coming...
> 
> First up since I did the above Superposition testing I decided to give Firestrike a second chance. I hate Firestrike...
> 
> Stock Settings only fan adjusted to 4900 RPM Max and boy did it need it. Throttling occurred a lot, power, thermal all forms
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Stock Settings with ClockBlocker active so P7 state locked and again power and thermal throttling ensued.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> My gaming settings from the previous Superposition testing here, and no throttling, power or thermal. Also around 80W less power draw...
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Now here should be my bench test settings. It's not here because Firestrike decided to go on strike and now doesn't work again. Damn unions...
> 
> So back to Superposition for some bench test grade OC's on air still with below settings:
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Now this result is just shy of 5K in 1080p Extreme, which before 17.10.3 was the realm of 1100mv for my card, now into the 1050mv realm. Really happy with this driver, no Wattman resize driver crash, though still having hit/miss setting saving from Wattman.
> Also noting my max undervolt is still working fine for daily use, but stressful GPU apps can cause it to crash so 900mv still not good for my setup, but 920mv is still ok.


Hi

Your technique of controlling the GPU MHz using the mv doesn't work here.
1752mhz with 1180mv = STABLE (average clock 1700mhz)
1752mhz with 1050mv = UNSTABLE after seconds, and the clock still goes up to 1700mhz.

I don't know how your card is dropping to ~1550mhz under load from a 1752mhz P7 just by having 970mv in there.

Also my HBM doesn't clock above 1040mhz stable with 1050mv; yours is running 1120 with 950mv. There are huge differences, especially with both cards on air.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Sorry mate they don't. There's a bug where it'll apply the previous profile to the next game so you'll end up with odd profiles all over the place unless you set them to be the same for each one.
> 
> Wattman resets after restart all the time for some and sometimes for others. Hence I use OverdriveNTool as it doesn't get reset after restart and allows multiple saved profiles. It doesn't autolink game profiles, but seeing as that's broken in Radeon Settings anyway it's not really a loss.
> Just going to install 17.10.3 now will run a few tests to see what is going down


AMD feels cheap because of this kind of thing; Nvidia feels quality. I still haven't decided whether to return my Vega and get a 1080 Ti, or just replace the fan and keep it.


----------



## geriatricpollywog

Day 2 on water.


Quote:


> Originally Posted by *spyshagg*
> 
> Hi
> 
> Your technique of controlling the GPU MHz using the mv doesn't work here.
> 1752mhz with 1180mv = STABLE (average clock 1700mhz)
> 1752mhz with 1050mv = UNSTABLE after seconds, and the clock still goes up to 1700mhz.
> 
> I don't know how your card is dropping to ~1550mhz under load from a 1752mhz P7 just by having 970mv in there.
> 
> Also my HBM doesn't clock above 1040mhz stable with 1050mv; yours is running 1120 with 950mv. There are huge differences, especially with both cards on air.


I can't get 1700+ MHz with his voltages either, on water. He hit the silicon lottery.


----------



## Chaoz

Quote:


> Originally Posted by *gupsterg*
> 
> I noticed in the 4th image in this post you used the tension plate; what made you change your mind?
> 
> Cheers, I decided to use that. May at some point add thermal pads between it and the rear of the PCB at the VRM areas.


I decided not to use it after I saw all the reviewers' images, and the images EKWB posted with the stock backplate and no tension plate. Just to be sure.

I might add thermal pads as well, if I ever decide to change or flush my loop.


----------



## geriatricpollywog

Quote:


> Originally Posted by *hellm*
> 
> 4th row from below:
> 08,01,08,01,08,01,*90*,01
> change 90 to F4


Thanks!
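For anyone curious what that edit actually changes: the modified byte and the `01` that follows it read as a single 16-bit little-endian value, which looks like a power limit in watts. That reading is my interpretation, not confirmed against the PowerPlay table layout, but the numbers line up suspiciously well:

```python
import struct

# The bytes from the suggested edit, decoded as little-endian 16-bit words.
old = struct.unpack("<H", bytes([0x90, 0x01]))[0]  # 0x0190
new = struct.unpack("<H", bytes([0xF4, 0x01]))[0]  # 0x01F4

print(old, new)  # 400 500 -- i.e. raising a 400 W limit to 500 W
```

If that reading is right, the edit simply lifts one of the card's power ceilings from 400 W to 500 W.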


----------



## spyshagg

anyone found this bug yet? HBM or SOC overclocking in 17.10.3 must not be implemented correctly.

1 - Restart Computer

2 - Open OverdriveNTool

3 - Set HBM to 1120mhz @ 1050mv

4 - Load Firestrike = instant crash

5 - Wait for driver to recover

6 - Set HBM to 1120mhz @ 1050mv again

7 - Load Firestrike = 100% stable.

This is 100% repeatable after many windows reboots.

It's not a ghost clock either. The Firestrike graphics score increases 3.5% compared to 945MHz HBM.
https://www.3dmark.com/compare/fs/13992227/fs/13992166
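If the double-apply really is needed after every reboot, it can be scripted. A minimal sketch, assuming OverdriveNTool's `-p<gpu><profile>` command-line switch (check your version's readme) and a hypothetical install path:

```python
import subprocess
import time

# Hypothetical install path; adjust to wherever OverdriveNTool.exe lives.
ODNT = r"C:\Tools\OverdriveNTool.exe"

def odnt_args(profile, gpu=0):
    # OverdriveNTool applies a saved profile via "-p<gpu><name>".
    return [ODNT, "-p{}{}".format(gpu, profile)]

def apply_profile_twice(profile, recover_secs=15):
    # The first apply can crash the driver on 17.10.3; wait for it to
    # recover, then apply the same profile again, which then sticks.
    subprocess.run(odnt_args(profile), check=False)
    time.sleep(recover_secs)
    subprocess.run(odnt_args(profile), check=False)
```

Dropping a call to `apply_profile_twice("HBM1120")` into a startup script would at least make the workaround automatic until the driver bug is fixed.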


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Amd feels cheap because of this kind of things. Nvidia feels quality. Still i didnt decide return my vega and get 1080 ti or just replace the fan and keep it.


Well that's easily debunked. Nvidia don't include updates for things like DX12/Vulkan (or any previous DX support requirements) until they are very commonly used, quite literally hamstringing their cards and forcing people to upgrade to get the latest spec. The stock blowers on their cards are awful, though admittedly it's only recently that AMD started to get good blowers on their cards.

Nvidia have damaged more cards with broken drivers than AMD, even accounting for market share. The only recent serious issue with AMD's drivers was the first Crimson release, where fan profiles got stuck at 20% instead of ramping up, whereas there have been a few Nvidia drivers that literally bricked cards or burnt them out, going back to the 8xxx series.

Sorry, but Nvidia nickel-and-dimes their fanbase to an insane extent (Titan used to be the Ti range prior to the 7xx series, for instance), to the point that the 1080, which is a mid-range card, was initially marketed as top of the line.

I guess I just dislike Nvidia's business practices, but hey, if either of them feels cheap, to me it's Nvidia.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> Hi
> 
> Your technique from controlling the gpu mhz using the mv doesn't work here.
> 1752mhz with 1180mv = STABLE (average clock 1700mhz)
> 1752mhz with 1050mv = UNSTABLE after seconds, and the clock still goes up to 1700mhz.
> 
> I dont know how your card is dropping to ~1550mhz on load from 1752mhz P7 just by having 970mv in there.
> 
> Also my HBM doesn't clock above 1040mhz stable with 1050mv. Your is running 1120 with 950mv. There are huge huge differences, specially with both cards on AIR.


Very interesting that yours doesn't undervolt that far. I don't say mine is the only way, but I still think it's the most reliable way: find the floor voltage your card is happy with, then control the core with mV instead of frequency. What are your temps like with 1180mV, and what's the power draw like during a Superposition run? Afterburner's overlay is very useful for keeping track of those live during the runs. I think Afterburner only displays core wattage, though, not total card wattage. Still, at least it's a consistent metric, even if not 100% accurate for the card.


----------



## Greg.m

Hi guys!
I had a blackout for some minutes, and now when I start my system the screen just stays black; my Vega 64 gives no signal and its lights are off. When I start the system using the iGPU it works fine. I changed PCIe cables and the PCIe slot, but nothing. Dead!








Any ideas?


----------



## TrixX

Quote:


> Originally Posted by *Greg.m*
> 
> Hi guys!
> I had a blackout for some minutes and now when i start my system the screen just stays black, my Vega 64 does not give any signal and its lights are off. When i start the system using the igpu it works fine. I change pcie cables and pcie slot but nothing, dead!
> 
> 
> 
> 
> 
> 
> 
> 
> Any ideas?


Try a different slot; failing that, try it in a different PC to confirm whether the card works or not. Pure speculation here: it could be a dead rail in the PSU if it's multi-rail, or you could just be unfortunate enough to have had a power spike kill the card.


----------



## diggiddi

Quote:


> Originally Posted by *gupsterg*
> 
> @dagget3450
> 
> In da club mate.


What tubes are those? Love the matt black look.


----------



## Greg.m

Quote:


> Originally Posted by *TrixX*
> 
> Try a different slot, failing that try in a different PC to confirm if card works or not. Pure speculation here, could be a rail dead in the PSU if it's multi-rail or could just be unfortunate enough to have a power spike kill the card.


Unfortunately my PSU has only one rail... I will check the other 3 PCIe slots and see. Otherwise, tomorrow I will take the card to a friend's system and try it.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Well that's easily debunked. Nvidia don't include the updates for things like DX12/Vulkan (or any previous DX support requirements) until after they are being very commonly used, quite literally hamstringing their cards forcing people to upgrade to get the latest spec. The stock blowers on the cards are awful, though admittedly it's only recently that AMD started to get good blowers on their cards.
> 
> Nvidia have damaged more cards with broken drivers than AMD even going on market share percentages. The only recent serious issue with AMD's drivers was the first Crimson release where fan profiles got stuck at 20% instead of ramping up. Whereas there have been a few Nvidia drivers which literally bricked cards or burnt them out going back to the 8xxx series.
> 
> Sorry but Nvidia nickels and dimes their fanbase to an insane extent (Titan used to be the Ti range prior to the 7xx series for instance) to the point that the 1080 which is a mid range card was marketed as the top of the line initially.
> 
> I guess i just dislike Nvidia's business practices, but hey if either of them feel cheap, to me it's Nvidia.


I don't like Nvidia either, but they are offering more complete products. I can't blame AMD; Nvidia is a big company with much more resources. AMD broke Intel's domination, but Nvidia still rules. Hope they will improve Radeon. The Vega release was a big mess: immature drivers, broken features, huge stock problems, prices, etc...


----------



## Soggysilicon

Quote:


> Originally Posted by *Paxi*
> 
> Has anyone experienced FreeSync to not work properly anymore with Fall Creators Update and newest drivers?
> 
> It was already mentioned by a guy on the offical AMD forum and some in the driver download page comments on guru3d.
> 
> I was able to enable it again after setting the monitor refresh rate to 75hz (maximum in my case) in display adapter settings in Windows, but it seems to act weird in CS GO. The game kinda stutters and does not seem to be as smooth as before..


I have mentioned this numerous times, but I will mention it again. YOU (the end user) must be certain that your monitor's drivers are installed correctly after installing Vega drivers or upgrading your video drivers; do not count on Windows doing it for you. The issue isn't so much Vega or the drivers, it's Windows. If you haven't done a DDU, I would HIGHLY recommend it.

Power cycling the monitor many times will enable the frame buffering, which is an indicator that Windows dropped the ball; it may have even dropped your refresh rate into the toilet as well.
Quote:


> Originally Posted by *pmc25*
> 
> As I posted in another thread the other day, it seemed like asynchronous compute was either not enabled or not working for AMD in Wolfenstein 2.
> 
> Lo and behold, they released a beta patch that "re-enabled" asynchronous compute (it was never enabled on public builds) on AMD cards. Apparently it was suspended and remains suspended on the 1080Ti, too.
> 
> They haven't commented publicly yet about whether RPM is working.


Does it even matter? Having played through the game, the stages are so tiny and cramped that very little seems loaded... very console-design-conscious.







Off topic but I found the game to be "Very Fancy Hot Garbage".
Quote:


> Originally Posted by *0451*
> 
> Thanks!
> 
> 
> Spoiler: Warning: Spoiler!


Nice Score!








Quote:


> Originally Posted by *spyshagg*
> 
> anyone found this bug yet? HBM or SOC overclocking in 17.10.3 must not be implemented correctly.
> 
> 1 - Restart Computer
> 
> 2 - Open OverdriveNTool
> 
> 3 - Set HBM to 1120mhz @ 1050mv
> 
> 4 - Load Firestrike = instant crash
> 
> 5 - Wait for driver to recover
> 
> 6- Set HBM to 1120mhz @ 1050mv again
> 
> 7 - Load Firestrike = 100% stable.
> 
> This is 100% repeatable after many windows reboots.
> 
> Its not a ghost clock either. Firestrike graphics score increases 3.5% compared to 945mhz hbm.
> https://www.3dmark.com/compare/fs/13992227/fs/13992166


I have seen this behavior some drivers ago, where I could run Heaven before running other applications and achieve very, very high overclocks... of course the actual frame rate was all over the place. I suspect the drivers have broken in some way and disabled some functionality, such as the draw-stream binning rasterizer, or are using a different flavor of shader. Have you tried restarting the Radeon application via Task Manager and then running Firestrike after your OC is set? I would suspect something is hung in volatile memory on the card.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> I dont like nvidia too. But they are offering better complete products. I cant blame amd. Nvidia big company and has much more resources. Amd break intel domination but nvidia still rules. Hope they will improve radeon. Vega release was a big mess. Immature drivers, broken features, huge stock problem and prices etc...


Can't argue with that. I am surprised the drivers are so immature too. It looks like they are just reacting on the fly instead of running with a plan in mind currently. However, while the Vega drivers are a mess, there are solutions that work very well.

Hopefully they'll get on top of those during Nov, as this 17.10.3 driver has proven to be very good for my card. Though I've seen a lot of negative reports too.


----------



## diggiddi

So has anyone been to 1800MHz? And how well does a 240 or 360mm rad cool these things?
I've read/watched that the LC rad is not enough to tame the thermals.


----------



## astrixx

Hey guys, I noticed that the hotspot is the limiting factor in how high our clocks go. Mine reached 71C, which is the max temp you can set in Wattman.

Is there any way to increase that? The air cards regularly go higher, so I don't see why it shouldn't be okay.

I don't want to flash my BIOS; is there other software that could do this?

These are my temps after a Firestrike test, after pushing my card almost as far as she goes. I've had HBM at 1170; this was at 1150MHz.


----------



## geriatricpollywog

Quote:


> Originally Posted by *diggiddi*
> 
> So has anyone been to 1800mhz? and how well does a 240 or 360mm rad cool these things?
> I read/watched the LC rad is not enough to tame the thermals


I have 2 x 280mm, 2 x 140mm, and a 360mm slim radiator. During gaming, my core/memory are steady at 1720/1120MHz. Temps are 38/39/58 (core/HBM/hot spot). The highest clock speeds I can run are 1745/1150, but only for benching. Buildzoid mentioned that under LN2, the V56 was not drawing some of the objects in the display cases in Time Spy, so the card may not be stable at all over 1800MHz.


----------



## fursko

Quote:


> Originally Posted by *diggiddi*
> 
> So has anyone been to 1800mhz? and how well does a 240 or 360mm rad cool these things?
> I read/watched the LC rad is not enough to tame the thermals


If I lock the P7 state I can see 1800+ MHz on the desktop. It varies from app to app: some games run lower clocks, some benchmarks higher. If you unlock the power limits and keep temps low, you can reach 1800+ with a 1250mV P7 state.


----------



## TrixX

Hopefully I'll be putting it under water today; will give it a good run if I can.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Hopefully I'll be putting it under water today, will give it a good run if i can


I'm waiting for results ^^ Can you beat my LC? ^^


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Im waiting for results ^^ can you beat my LC ^^


Dunno but I'll be using an Aquacomputer block with a 360mm EK XS radiator and EK XRES D5 pump


----------



## geriatricpollywog

Bring it!


----------



## dagget3450

Quote:


> Originally Posted by *gupsterg*
> 
> @dagget3450
> 
> In da club mate.
> 
> My experience of the ref cooler lasted ~30min. I'd done some testing of the card "out of the box", and when I saw the monitoring data for the SuperPosition 4K Optimized preset run it definitely made me go (emoji). Next, with WC.
> 
> I was happy to see I gained a molded die when fitting the WB. Really like how nice and soft the EK included thermal pads were; they formed nicely between components and block. I was going to use the usual Arctic Silver 5, but opted to try Thermal Grizzly Hydronaut. Damn was it a PITA to spread, even with the tube/card heated with a hairdryer.
> 
> Not done any PP mods or LC VBIOS flash yet. Just getting to know her.
> 
> Did some Folding@Home yesterday; was getting PPD ~600K/900K at times in the lower left corner. Engaging all CPU threads last night seemed to pull down PPD. Running it again today and may edit down the threads used for CPU slots. The Cooler Master V850 (Seasonic OE) is holding up well with the 1950X stock and Vega stock; seen max 500W on the wall plug meter for RB/Folding@Home. This is inc. screen, etc.
> 
> A bit more juicy than the GTX 1080 for power. Seeing higher water temps as well.
> 
> What made me smile was finally being back to having FreeSync in games. I fired up SWBF and LOTF last night and it was so much better than the GTX 1080 experience, regardless of it boosting to ~1975MHz.
> 
> I noticed in the 4th image in this post you used the tension plate; what made you change your mind? I may at some point use it, just to fill in the X more on the backplate.
> 
> Cheers, I decided to use that. May at some point add thermal pads between it and the rear of the PCB at the VRM areas.
> 
> Yes by default (AFAIK there's no toggle for it). The monitor must have a max refresh rate greater than or equal to 2.5 times the minimum refresh rate; see this AMD PDF.

Added!


----------



## gupsterg

@punchmonster

I'd concur; so far enjoying Vega. Originally v17.10.2, now v17.10.3. On .2 I was having driver panel crashes when resizing; not tried with .3 yet. Now on .3 I've started to tweak card clocks/volts, and so far WattMan keeps settings between boots, etc.

@spyshagg

I only changed P6/P7 to manual Freq/mV, kept Freq stock, and changed mV to 1000/1100. Changed HBM to 1100MHz at 1050mV. Did the PP mod to get a +100% PowerLimit slider and then set it to 60%. The card in 3D boosts to ~1600/1100MHz, compute ~1690/1100MHz. Idle is something silly like ~30/167MHz.
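Settings like these are easier to keep across reboots as a saved OverdriveNTool profile. A sketch of what the relevant entries might look like in OverdriveNTool.ini for the values above; the key names follow the tool's "frequency;voltage" convention, but the exact format varies by version, and the P6/P7 frequencies shown are assumed stock values, so treat this as illustrative only:

```ini
; Illustrative OverdriveNTool.ini profile (format may differ by version)
; Each P-state entry is <MHz>;<mV>
[Profile_Vega_Daily]
GPU_P6=1537;1000
GPU_P7=1632;1100
Mem_P3=1100;1050
Power_Target=60
```

A saved profile can then be re-applied from the tool (or its command line) without re-entering each P-state by hand.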

@Chaoz

Cheers for the info.

@diggiddi

EK ZMT, OD 16mm (5/8"), ID 10mm (3/8"). Besides the look, it's really nice to use, in my limited WC experience.

@dagget3450

Cheers.


----------



## AngryLobster

Quote:


> Originally Posted by *Reikoji*
> 
> I can only guess someone bein lazy. The newest Samsung Monitors are still not listed on AMD's freesync page either. I dont know the exact LFC on it, but i know it the Freesync doesnt start acting up even at 60fps.
> 
> Changing to either freesync mode changes monitor refresh rate to 120fps, so dunno where they get the 144hz from for freesync mode.
> 
> In wolfenstein II with chill enabled, the game will drop down to 40fps. Have Freesync Ultimate engine enabled and its not going bonkers.


Dude, update your monitor via USB. The original shipped firmware (6 versions old now) had a dumb 80-120Hz FreeSync range, which I'm assuming is what you're referring to when you say it drops your refresh rate to 120Hz when activating FreeSync via the monitor menu. It is now 72-144Hz and LFC works great.

A great way to verify FreeSync/LFC is the Nvidia pendulum demo.


----------



## spyshagg

Quote:


> Originally Posted by *spyshagg*
> 
> anyone found this bug yet? HBM or SOC overclocking in 17.10.3 must not be implemented correctly.
> 
> 1 - Restart Computer
> 
> 2 - Open OverdriveNTool
> 
> 3 - Set HBM to 1120mhz @ 1050mv
> 
> 4 - Load Firestrike = instant crash
> 
> 5 - Wait for driver to recover
> 
> 6- Set HBM to 1120mhz @ 1050mv again
> 
> 7 - Load Firestrike = 100% stable.
> 
> This is 100% repeatable after many windows reboots.
> 
> Its not a ghost clock either. Firestrike graphics score increases 3.5% compared to 945mhz hbm.
> https://www.3dmark.com/compare/fs/13992227/fs/13992166


Quote:


> Originally Posted by *Soggysilicon*
> 
> I have seen this behavior some drivers ago, where I could run Heaven before running other applications and achieve very very high overclocks... of course the actual frame rate was all over the place. I suspect the drivers have broken in some way and have disabled some functionality such as the draw binning raster or is using a different flavor of shader. Have you tried task manager and restarting the Radeon application and then running Firestrike after your OC is set? I would suspect something is hung in volatile memory on the card.


The bug above has shown me a way to *UNTETHER GPU and HBM floor voltage*, allowing me to run high HBM voltages and frequencies while keeping the GPU low and cool.

When the driver crashes and recovers in step 4, it automatically adds +50mV on top of the voltages you set in step 3, for both the GPU and HBM _(confirmed in GPU-Z for the GPU voltage; I can't read HBM voltages, but if I reboot Windows and manually set 1100mV on the HBM it also becomes stable, proving that the crash indeed adds 50mV to the HBM as well)._

After the crash, I can re-set the GPU P7 @ 1050mV (confirmed in GPU-Z) while the driver bug maintains those +50mV for the HBM, making it possible to run the HBM at 1120MHz with 1100mV and the GPU with only 1050mV. So, in conclusion, there is a way to untether the GPU floor from HBM P3.


----------



## Particle

Newegg has the Gigabyte Vega 64 card "on sale" until Tuesday for $550. That finally puts pricing only 10% over MSRP. The same thing happened at the $570 price point, right before they moved all their cards to that price for a couple of weeks.


----------



## AngryLobster

They also have the Powercolor version for $516 on eBay.


----------



## Particle

Quote:


> Originally Posted by *AngryLobster*
> 
> They also have the Powercolor version for $516 on eBay.


Great catch. I ordered one immediately.


----------



## cg4200

Quote:


> Originally Posted by *TrixX*
> 
> 1710MHz sustained is pretty good for an LC. HBM voltage is actually GPU floor voltage for the high P States so keep it at 950mv and test your P6/P7 voltages first. Once you find limits with P6/P7 you can start raising HBM voltage until you fail to get any further performance increases.
> 
> Also I'd advise using the modified PowerPlay Tables by Hellm in the Vega BIOS thread as they remove power as a limiting factor when testing OC's.


Yeah thanks, still testing; busy weekend. I've got GTA game-stable at 1714-1750, with 1150mV P6, 1175mV P7, and 1100MHz HBM @ 950mV.
Funny though: running Firestrike, Timespy, or Extreme it will pass 1180MHz HBM @ 1000mV, and the score seems to scale.
Will have to see if I can get that game-stable...
I have a 56 I was going to get a waterblock for; now I'm thinking I'll put the waterblock on my LC card and the LC kit on my 56. Should fit, right? Has anyone taken an LC apart; is it the same as the 64/56? Also, what are some max game-stable HBM speeds? Thanks.


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> *[gupsterg's watercooling write-up, quoted in full earlier in the thread; snipped]*


Nice, I like the minimalist EK design in combination with the limited back-plate.

You should add thermal pads to the doublers on the backside though; they get really hot. I use the Alphacool back-plate with thermal pads applied, and after long benchmark/gaming sessions I can't touch the back-plate in the doubler areas; it immediately burns my finger.

That is undervolted to 960mV. I don't want to know how hot they get with stock volts.


----------



## AngryLobster

Newegg has the Sapphire Vega 64 @ MSRP finally. $499.


----------



## dagget3450

Quote:


> Originally Posted by *AngryLobster*
> 
> Newegg has the Sapphire Vega 64 @ MSRP finally. $499.


So tempting


----------



## jbravo14

Quote:


> Originally Posted by *dagget3450*
> 
> So tempting


Is the Vega 64 still worth the $100 extra compared to a Vega 56 + BIOS mod, if you don't plan to liquid cool it?


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> is Vega 64 still worh the $100 extra compared to a Vega 56 + Bios MOD?


Nope. A V56 + BIOS mod and a good GPU wins 100% of the time value-wise. There is a slight binning aspect, where theoretically more V56's than V64's won't react well to OC/UV, but I don't think that's really much of an issue.


----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> Nope. V56 + BIOS Mod and a good GPU wins 100% of the time value wise. There is a slight binning aspect where theoretically more V56's won't react as well as V64's to OC/UV but I don't think that's really that much of an issue.


Awesome, thanks for the response.

I already have a Vega 56 ($405) on its way, and was tempted by the $499 Vega 64.


----------



## jstefanop

Where the hell are all the AIB cards that were supposed to be out by now??


----------



## Particle

I bought a Vega 64 this morning for $517. Funny that now it's at $500, exactly as it's supposed to be. Oh well, overpaying $17 isn't nearly as bad as last week's $70 would have been.


----------



## Rexer

Quote:


> Originally Posted by *TrixX*
> 
> GPU mining is dropping again as the market for Eth has gone past most of the VRAM sizes. They are moving into other currencies, but the boom is not there at the moment. Can't guarantee against another one though
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However the stock cards have great components on board already and combine it with a Raijintek Morpheus 2 or one of the AIO solutions and have some fun. Still cheaper than a GTX 1080 in Australia


Quote:


> Originally Posted by *AngryLobster*
> 
> Newegg has the Sapphire Vega 64 @ MSRP finally. $499.


I was just there... yeah, I agree, it's about time. AIB cards are coming out soon, so I'm thinking the AMD partners are trying to move the reference stock out the door. I'm tempted to buy, but I think I'm gonna wait, since I'm not hurting for a new card at the moment. AMD reference cards are heater boxes, and I want that small cooling advantage AIB cards have. If the price goes down another $50, I'm gonna scoop it up. Thanks for the info, bro.


----------



## Chaoz

Quote:


> Originally Posted by *Particle*
> 
> I bought a Vega 64 this morning for $517. Funny that now it's at $500--exactly as it is supposed to be. Oh well, $17 overpaid isn't nearly as bad as last week's $70 would have been.


I overpaid quite a bit, as I got mine the first week they came out. But I don't really care, as I was one of the first to have one waterblocked in my country. I paid €650 for mine, as EU MSRP is quite a bit higher than US MSRP.

Even 56's are priced a lot higher. A 56 here will cost me €580 for a black-cooler version.


----------



## tarot

Quote:


> Originally Posted by *Paul17041993*
> 
> Really depends on the conditions, in australia you're lucky if an AIO lasts a year before it either explodes or half the coolant is gone... unless of course you have enough aircon to keep the room below 30C.


Never had that happen with any AIO I have, especially the Fury X, which still powered on with the same temps after 2 years.
I have a 4-something-year-old H100 still cooling an FX-8350 as well, and temps in this sauna get up around 35-40 degrees in summer... in the friggen mountains... stupid tin roof.








Quote:


> Originally Posted by *pmc25*
> 
> All you people having problems on W10, why do you not just revert to W7? W10 is an absolute steaming pile.


No real issues here with 10, from the Fury X to this; I actually have more issues on 7 with another build. (Fall Creators Update, by the way.)


----------



## TrixX

Quote:


> Originally Posted by *tarot*
> 
> never had that happen with any aio I have especially the fury x that still powered on with the same temps after 2 years.
> I have a4 something year old h100 still cooling a fx 8350 as well and temps in this sauna get up around 35 40 degrees in summer...in the friggen mountains...stupid tin roof


Live near an ocean, have AIO's die in a year or two...

Don't have a steel case by an ocean...


----------



## Reikoji

Quote:


> Originally Posted by *AngryLobster*
> 
> Dude update your monitor via USB. The original shipped firmware (6 versions old now) had a dumb 80-120hz Freesync range which I'm assuming is what you're referring to when you say it drops your refresh rate to 120hz when activating Freesync via monitor menu. It is now 72-144hz and LFC works great.
> 
> Great way to verify Freesync/LFC is the Nvidia pendulum demo.


I've never updated a monitor firmware before. I had a TV that did it easily because it connected to the internet. I've also searched and have not been able to find those firmware updates.


----------



## dagget3450

Quote:


> Originally Posted by *TrixX*
> 
> Live near an ocean, have AIO's die in a year or two...
> 
> Don't have a steel case by an ocean...


I'm on the east coast of the U.S. and have seen many cases exposed to ocean wind. Nicely rusted and corroded indeed.


----------



## AngryLobster

Quote:


> Originally Posted by *Reikoji*
> 
> ive never updated a monitor firmware before. i had a tv that did it easily because it connected to the internet. ive also searched and have not been able to find those firmware updates.


Go to the product page for your size monitor on the Samsung site (32 or 27) and click "Support." It will scroll you down to the support links where you can click the "See All Support>" link and get access to "Software & Apps" where you'll find the Firmware update itself and a separate PDF with instructions.


----------



## geriatricpollywog

Quote:


> Originally Posted by *dagget3450*
> 
> On the east coast of U.S. i am, seen many of cases exposed to ocean wind. Nicely rusted and corroded indeed.


How close do you need to live for it to be a problem? I am a 3 minute walk to the surf.


----------



## cephelix

Quote:


> Originally Posted by *0451*
> 
> How close do you need to live for it to be a problem? I am a 3 minute walk to the surf.


I think that is close enough! lol

Edit: thanks for the heads up on the Newegg sale... Thinking of pulling the trigger on a 64 or 56, since it'll be a lot cheaper than getting one locally, assuming I could even get one. Anyone have experience with Newegg for overseas purchases/returns?


----------



## Reikoji

Quote:


> Originally Posted by *AngryLobster*
> 
> Go to the product page for your size monitor on the Samsung site (32 or 27) and click "Support." It will scroll you down to the support links where you can click the "See All Support>" link and get access to "Software & Apps" where you'll find the Firmware update itself and a separate PDF with instructions.


Well thank you very much, sir. Could they have hidden that any better? If I hadn't seen it here, I would never have known there was a firmware update or how to do it.

Got a 72-144Hz FreeSync 2 range reported now. Also confirmed FreeSync 2, as the monitor's menu displays "FreeSync 2". Someone said they dropped it on this monitor, but they haven't.

Standard engine is a horrible 120-144Hz though... Ultimate engine is now 72-144Hz.

Also, another way to see the FreeSync range for anyone using Radeon Software: go to the display tab in Radeon Settings and mouse over the AMD FreeSync toggle. It will display it there too.
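The AMD rule of thumb quoted earlier in the thread (max refresh at least 2.5x the min for LFC) is easy to sanity-check against any reported range. The exact multiplier has been quoted anywhere from 2x to 2.5x depending on the driver release, so it's left as a parameter in this sketch:

```python
def lfc_capable(min_hz, max_hz, ratio=2.5):
    # LFC (low framerate compensation) needs enough headroom between min
    # and max refresh to insert duplicated frames, hence the ratio test.
    return max_hz >= ratio * min_hz

# The old 80-120 Hz firmware range can never support LFC (ratio 1.5),
# while the new 72-144 Hz range (ratio 2.0) only qualifies under the
# looser 2x rule, not the 2.5x one from AMD's PDF.
```

That would explain why LFC behaves so differently before and after the monitor firmware update, even though both ranges end at 144Hz.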


----------



## fursko

Quote:


> Originally Posted by *Reikoji*
> 
> Well thank you very much sir. Could they have hidden that any better? If i didnt see it here I would have never know there was a firmware update or how to do it.
> 
> Got 72-144hz freesync 2 range reported now. Also confirmed freesync 2, as the menu for the monitor displays the Freesync 2. Someone said they dropped it on this monitor, but they havent.
> 
> Standard engine is a horrible 120-144hz tho.. Ultimate engine is now 72-144hz.
> 
> Also another way to see the Freesync range, for any of us using Radeon Software, go to the display tab in Radeon Settings and mouse over the AMD Freesync toggle. It will display it there too.


Does it work well? Any problems, flicker, etc.? How is the monitor, do you like it? HDR, FreeSync 2, etc...


----------



## hyp36rmax

So I finally ordered a Vega 64 from Newegg at $499 MSRP, and I'm just about to pick up an EK waterblock too. I have a feeling we're going to see generous allocation at retailers very shortly, so MSRP should stick around.


----------



## cephelix

OK, placed my order for the MSI 56. Was quite disappointed that the Sapphire 56 wasn't on sale; really wanted to try something from them. Yes, yes, I know: reference is reference, there's no difference.


----------



## dagget3450

Quote:


> Originally Posted by *0451*
> 
> How close do you need to live for it to be a problem? I am a 3 minute walk to the surf.


Hehe, pretty close. If you keep the PC in a well-controlled climate it probably won't be an issue. If you like to keep windows and sliding doors open, then who knows. I recall a PC some folks had right on the beachfront side of the house; they kept the sliding glass door open. I can't be certain whether it was too close to the moisture from rain/onshore winds, but it was crazy wild when I saw the PC. The whole back side facing the door was rusted and corroded. Things like the video ports and sound ports looked corroded. It had the same effect inside the PC too, but at a reduced rate.

If I had not seen it with my own eyes I wouldn't have believed it. The PC died, sadly. Not sure about AIOs, but I would imagine salt water/mist could be bad news for radiators and such.


----------



## geriatricpollywog

Quote:


> Originally Posted by *dagget3450*
> 
> Hehe,. Pretty close. If you keep the pc in a well controlled climate probably wont be an issue. If you like to keep windows and sliding doors open then who knows. I recall a pc that some folks had right on beachfront side of house, they kept the sliding glass door open. I cant be certain if they had it too close for moisture from rain/on shore winds but it was crazy wild when i saw the pc. The whole backside facing door was rusted and corroded. Things like video ports and sound ports were corroded looking. It also had same effect inside the pc but at a reduced rate.
> 
> If i had not seen it with my own eyes i wouldnt have believed it. The pc died sadly, not sure about AIO's but i would imagine salt water/mist could be bad news for radiators and stuff.


No corrosion after 3 months of living here, but I'll reply to this thread when I see something.


----------



## diggiddi

Quote:


> Originally Posted by *cg4200*
> 
> yeah thanks still testing busy weekend..I have got gaming gta stable 1714-1750. 1150p6 1175p7 1100 hbm @950..
> Funny though running firestrike timespy or extreme it will pass 1180hbm @ 1000
> And seems to score seems to scale..
> Will have to see if I can get that game stable...
> I have a 56 was going to get water block for now I am thinking I will take my lc card put water block on it and lc kit on my 56 should fit right anyone take apart lc same as 64/56?? also what are some Max hbm speeds game stable??thanks


So which of the 2 do you prefer/recommend esp under water?


----------



## The EX1

Quote:


> Originally Posted by *cephelix*
> 
> ok, placed my order for the msi 56. Was quite disappointed that the sapphire 56 wasn't on sale. Really wanted to try something from them. Yes yes, I know, reference is reference, there's no difference.


MSI offers a 3 year warranty as opposed to Sapphire's 2 year. Gigabyte and MSI offer the longest warranties out of all the AMD partners. Not sure on ASUS yet as they don't have any cards out.

Be happy you got the MSI card


----------



## cephelix

Quote:


> Originally Posted by *The EX1*
> 
> MSI offers a 3 year warranty as opposed to Sapphire's 2 year. Gigabyte and MSI offer the longest warranties out of all the AMD partners. Not sure on ASUS yet as they don't have any cards out.
> 
> Be happy you got the MSI card


Well, that's true. Thanks. It's just that my last two cards have been MSI and I was thinking of trying something new. Also ordered a full-cover waterblock from EK; can't seem to apply the ekhalloween coupon code though. That's a bummer.


----------



## Reikoji

Quote:


> Originally Posted by *fursko*
> 
> Is it work good ? Any problem, flicker etc ? How is monitor did you like it ? hdr freesync 2 etc...


No flicker noticed while playing lots of Destiny 2 and Final Fantasy XIV @ 2K. I tried HDR in Destiny 2 once, but it had a horrible error that made it look like my screen had thousands of burned pixels, but that was before I updated.


----------



## Paul17041993

Quote:


> Originally Posted by *tarot*
> 
> never had that happen with any aio I have especially the fury x that still powered on with the same temps after 2 years.
> I have a4 something year old h100 still cooling a fx 8350 as well and temps in this sauna get up around 35 40 degrees in summer...in the friggen mountains...stupid tin roof
> 
> 
> 
> 
> 
> 
> 
> 
> no real issues here with 10 from the fury x to this I have more issues on 7 with another build actually. (fall creator update by the way)


Where I live, the outside temperature can be as high as 48C, and indoor and case ambient temps can easily surpass 55C, which makes it super easy for an AIO to surpass the material limit of 60C (I set my loop to run full-tilt at 50C for this reason).

Adding to that, all the AIOs I've gotten have always been only half full of coolant; it doesn't take long for them to be only a quarter full...


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> So i finally ordered a Vega 64 from NewEgg at MSRP $499. Just about to pick up an EK Waterblock too. I have a feeling we're going to see generous allocation at retailers to see MSRP stick around very shortly.


Congrats, hope it serves you well! I'll add you in when you get it! Post back soon.


----------



## Particle

Back in the day when I bought my first Sapphire video card, their warranty was only one year. It made sense, too, as it seemed like their cards died a lot. My Radeon 9500 died after 13 months or something like that.


----------



## cg4200

Quote:


> Originally Posted by *diggiddi*
> 
> So which of the 2 do you prefer/recommend esp under water?


Sorry, I don't think I worded it right. I have the Gigabyte 64 LC edition.
I also have a 56 flashed to the 64 BIOS and am ordering a water block for that.
That gave me an idea: if my 64 LC will do 1750 at 49C on the AIO, I could take the AIO off, put it on my 56, put the EK block on my 64 LC to get temps down to 39 max, and maybe squeeze some more out of the core.
Off topic: I usually always get EK waterblocks, but I thought I read the LC cards are binned and I wanted the best.
Anyone else hear that?


----------



## cplifj

Quote:


> Originally Posted by *cplifj*
> 
> some proofing was still in order:


really dagget ???? Ha....


----------



## fursko

On the 17.10.2 driver I got a 5300 Superposition 1080p Extreme score. Now on the 17.10.3 driver it's a 5000 score. *** ?


----------



## madmanmarz

Quote:


> Originally Posted by *fursko*
> 
> 17.10.2 driver 5300 superposition 1080p extreme score. Now 17.10.3 driver 5000 score *** ?


probably hbcc related. Try changing the memory amount and hitting apply.


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> 17.10.2 driver 5300 superposition 1080p extreme score. Now 17.10.3 driver 5000 score *** ?


I hit 5392 in Superposition 1080p Extreme during a single run, but it was artificially high because my GPU was not drawing all the objects in the room. I forget the details, but the core was running at 1745MHz and I think the memory was set to 1170. I lowered the core and memory by 10MHz each and my score was 5100, which is about right. No objects disappeared on that run. I don't see how a legitimate score of 5300+ would even be possible.


----------



## gupsterg

Quote:


> Originally Posted by *Nuke33*
> 
> Nice, I like the minimalist EK design in combination with the limited back-plate
> 
> 
> 
> 
> 
> 
> 
> 
> 
> You should add thermal-pads to the doublers on the backside though, the get really very hot. I use the Alphacool back-plate with thermal-pads applied and after long benchmarks/gaming sessions I can´t touch the back-plate in the doubler areas, it immediately burns my finger.
> That is undervolted to 960mv, I don´t want to know how hot they get with stock volts.


Cheers, will add pads very soon.


----------



## fursko

Quote:


> Originally Posted by *madmanmarz*
> 
> probably hbcc related. Try changing the memory amount and hitting apply.


I didn't use HBCC.


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> I hit 5392 in superposition 1080p Extreme during a single run, but it was artificially high because my GPU was not drawing all the objects in the room. I forget the details, but the core was running at 1745mhz and I think the memory was set to 1170. I lowered the core and memory by 10mhz each and my score was 5100, which is about right. No objects dissappeared on that run. I don't see how a legitamite score of 5300+ would even be possible.


It was fully stable for Superposition. Everything was right. Wattman settings: +50% power, 1150MHz HBM, P6 1100, P7 1150mV, temp target 70C. During the test my clocks averaged 1720-1730/1150.

edit: I get a 7300 4K Optimized score and a 26415 Firestrike graphics score too. All tests without HBCC. These Wattman settings work without any problem in some games too. Power consumption reaches 590W sometimes. This was on the 17.10.2 driver and without the Windows FCU.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> It was fully stable for superposition. Everything was right. Wattman settings: +%50power 1150mhz hbm and p6 1100 p7 1150mv temp target 70C. During test my clocks average 1720-1730/1150.
> 
> edit: I get 7300 4k optimized score and 26415 firestrike graphics score too. All tests without hbcc. This wattman settings works without any problem in some games too. Power consumption reaching 590watt sometimes. It was 17.10.2 driver and without windows fc update.


HBCC gives you a boost of ~15% in Superposition, hence most people test with it active to give an apples-to-apples comparison.


----------



## diggiddi

Quote:


> Originally Posted by *cg4200*
> 
> Sorry I don't think I worded it right.. l have gigabte 64lc eddition..
> I also have a 56 flashed to 64 bios and am ordering water block for that..
> Gave me idea if my 64lc will do 1750 at aio temp 49c if I take aio off and put it on my 56 and put ek block on my 64 lc get temps to Max 39 maybe squeeze some more out off core..
> Off topic I usually always get ek wb but thought I read lc are binned and I wanted the best ..
> Any one else hear that??


I'm just asking which of the two you would recommend; does the 56 with the LC get anywhere near 64 LC performance?


----------



## cg4200

I don't have the block for my 56 yet.
But it does 1650-1680 gaming with 1175 HBM at 70C, so with a block I would guess real close to a 64 LC, as long as you don't get a crappy-clocking 56.
But I would say the 56 with a block is the better deal price-wise, yes.
My buddy's 56 card is a dud; the HBM does not OC well at all.
Good luck


----------



## dagget3450

Quote:


> Originally Posted by *TrixX*
> 
> HBCC gives you a boost of ~15% in Superposition hence most people test with it active to give an apples to apples comparison.


HBCC is not an option when I enable CrossFire. I wonder if RX Vega CF users have the same issue.


----------



## dagget3450

Quote:


> Originally Posted by *cplifj*
> 
> really dagget ???? Ha....


Sorry I missed this! Adding now!


----------



## diggiddi

Quote:


> Originally Posted by *cg4200*
> 
> I don,t have the block for my 56 yet ..
> But it does 1650-1680 gaming 1175 hbm at 70c so with block I would guess real close to 64 lc as long as you don't get a crappy clocking 56..
> But i would say 56 with block price wise better deal yes..
> My buddys 56 card is a dud with hbm does not oc well at all ...
> Good luck


Thx


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> HBCC gives you a boost of ~15% in Superposition hence most people test with it active to give an apples to apples comparison.


I tried HBCC on, but my scores go down with it. Dunno why; my RAM is DDR4 3200MHz 2x8GB. All my scores are without HBCC.


----------



## fursko

Quote:


> Originally Posted by *diggiddi*
> 
> I'm just asking which of the two would you recommend, does the 56 with the LC reach the 64 LC performance any?


I don't think it can reach 64 LC performance, even with the 64 LC BIOS flash. It's like 1700 = V56, 1700X = V64, 1800X = V64 LC. It's more or less silicon lottery.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> I try hbcc on but my scores going down with hbcc. Dunno why my ram ddr4 3200mhz 2x8. My all scores without hbcc.


Now that is very odd. I wonder why that's happening...


----------



## diggiddi

Quote:


> Originally Posted by *fursko*
> 
> I think it cannot reach 64lc performance. Even with 64 lc bios flash. Its like 1700=v56 1700x=v64 1800x=v64lc. Its more or less silicon lottery.


Ok repped up


----------



## Soggysilicon

Quote:


> Originally Posted by *diggiddi*
> 
> So has anyone been to 1800mhz? and how well does a 240 or 360mm rad cool these things?
> I read/watched the LC rad is not enough to tame the thermals


No. Ohhh, I mean, I have been there, sure, and it's as stable as balancing a ball on the top of a triangle.

I found my sweet spot in gaming to be a 10-minute average (HWiNFO) of 1700-1710. Stable, good performance.
Quote:


> Originally Posted by *spyshagg*
> 
> The bug above, has show me a way to *UNTETHER GPU and HBM floor voltage*. Allowing me to run high HBM voltages and frequencies while keeping the GPU low and cool.
> 
> When the driver crashes and recovers in step nº4, it automatically adds +50mv on top of the voltages you set in step 3 for both the GPU and HBM _(confirmed in gpu-z for gpu voltage. I cant read HBM voltages, but If I reboot windows and manually set 1100mv on HBM, it also becomes stable, proving that indeed the crash also adds 50mv to the HBM)._
> 
> After the crash, I can re-set the GPU P7 @ 1050mv (confirmed in GPU-Z) while the driver bug maintains those +50mv for HBM, making it possible to run 1120mhz on HBM with 1100mv and the GPU with only 1050mv. So, in conclusion, there is way to un-tether GPU floor from HBM P3.


10.2 and 10.3 just set the HBM to 1190... I can't say the benchmarks benefited from such a setting. If you could show a baseline and then a re-run with these other settings, that would be interesting.


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> sorry i missed this! adding now!


Hello Dagget. I would like to ask you: how are you running CrossFire with Vega FE? I believe the latest Q4 Enterprise driver doesn't allow me to CrossFire, even though it recognizes both GPUs.

It also doesn't allow me to change to 'Gaming Drivers' once running two FEs.

Thanks.


----------



## lowdog

Quote:


> Originally Posted by *cg4200*
> 
> Sorry I don't think I worded it right.. l have gigabte 64lc eddition..
> I also have a 56 flashed to 64 bios and am ordering water block for that..
> Gave me idea if my 64lc will do 1750 at aio temp 49c if I take aio off and put it on my 56 and put ek block on my 64 lc get temps to Max 39 maybe squeeze some more out off core..
> Off topic I usually always get ek wb but thought I read lc are binned and I wanted the best ..
> Any one else hear that??


Won't work; the 56 doesn't have a connector on the PCB to power the pump for the AIO. Already thought of that, as I have both a 56 and a 64 AIO. I had the 56 with an EK block on it and it would not be stable with the 64 AIO BIOS, but it was stable with the 64 air BIOS. I now have the 64 AIO stripped off and put the EK block on it. Temps dropped dramatically, especially the hotspot!!!

I'm just going to put the air cooler back on the 56 with its stock BIOS and stick it in the wife's comp. At stock, the cooler is more than fine for that card.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> Hello Dagget. I would like to ask you, how are you running Crossfire with VEGA FE? I believe the latest Q4 Enterprise Driver doesn't allow me to Crossfire even knowing it recognizes both GPUs.
> 
> It also doesn't allow me to change to 'Gaming Drivers' once running Two FEs.
> 
> Thanks.


I have only been able to get the 17.6 launch drivers to allow CrossFire and gaming mode when using both GPUs (Vega FE x2). The new drivers with a custom install still do not allow me to use driver options once both GPUs are enabled. This may be by design from AMD or a bug, I don't know. So maybe someone has found a way to enable CrossFire for 2x Vega FE, but I doubt it thus far.

I also want to know, from anyone running RX Vega CrossFire: is HBCC disabled when enabling CrossFire?


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> I have only been able to get 17.6 launch drivers to allow crosfire and gaming mode when using both gpus.(Vega FE x2) The new drivers with custom install still do not allow me to use driver options once both gpus are enabled. This may be by design with AMD or a bug i don't know. So maybe someone has found a way to enable crosfire for 2x vega FE but i doubt it thus far.
> 
> I also want to know anyone running RX vega crossfire, if HBCC is disabled when enabling crossfire?


Thank you! I am definitely going to try that out. How are the 17.6 drivers when it comes to stability? I remember I didn't have a good time with them back then. Also, do they work with the Windows FCU update?

Thanks.


----------



## cg4200

Big thanks. I was not seeing that far ahead, just figuring the hotspot and overall temp would drop from adding a waterblock. And I hate to waste the AIO; it's actually not a bad unit, it just creeps up to 50C too quickly.
I also forgot all about the pump connector, yikes. Now I have to get another waterblock, if it's worth it.
So there's not a lot of info out there on what the max core and HBM clocks were before adding a waterblock and after?
Also, was it a molded die? Thanks again.


----------



## diggiddi

Quote:


> Originally Posted by *Soggysilicon*
> 
> No. Ohhh I mean, I have been there sure, and its as stable as balancing a ball on the top of a triangle.
> 
> 
> 
> 
> 
> 
> 
> I found my sweet spot in gaming to be a 10 minute average (hwinfo) to be 1700-1710. Stable, good performance.
> 10.2 and 10.3 just set the HBM to 1190... I can't say the benchmarks benefited from such a setting. If you could show a baseline and then a re-run with these other settings that would be interesting.


Quote:


> Originally Posted by *lowdog*
> 
> Won't work the 56 doesn't have a connector on the pcb to power the pump for the AIO......already thought of that as I have both a 56 and a 64 AIO. I have had the 56 with a EK block on it and it would not be stable with the 64 AIO bios but was stable with the 64 AIR bios. I now have the 64 AIO striped off and put the EK block on it....temps dropped dramatically especially the hot spot!!!
> 
> I'm just going to put the air cooler back on the 56 with it's stock bios and stick it in the wife's comp....at stock the cooler is more than fine for that card.


Thanks for that info y'all, repped up


----------



## lowdog

Both were molded dies; the 56 was Sapphire and the 64 AIO was MSI. Just picked up the MSI second-hand from a guy who had only had it 4 weeks and decided the AIO didn't suit his case. They retail for $1050 here in AUS and I got it for $750, then had to pay an extra $50 to have it posted across AUS to me, so $800 all up. I'm not complaining at all.


----------



## Naeem

Is there any way to tell if the GPU die is molded without tearing it apart? I have the Vega 64 Liquid from Gigabyte.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Naeem*
> 
> Is there anyway to know if GPU die is molded without tearing it apart ? i have Vega 64 Liquid from Gigabyte


Run GPU-Z. Samsung memory is molded.


----------



## Naeem

Quote:


> Originally Posted by *0451*
> 
> Run GPU-Z. Samsung memory is molded.


Well, I guess I have a molded one.


----------



## JasonMZW20

Quote:


> Originally Posted by *0451*
> Samsung memory is molded.


I don't know if that's true any more. I thought all Vega64s were molded, but I removed my heatsink yesterday and found that my Vega64 is unmolded and Korean made; that's similar to the 56s. My reason for tearing it apart was because I was getting a massive 10C differential between GPU and HBM where HBM was the hotter running part. So, that 40um space really does need a good TIM. I've reduced it to 4-5C with a good cleaning and GC Extreme. Side note: cleaning the unmolded packages is a PITA. The GC Extreme plastic spreader fits in the spaces, so I just used that to get the old TIM out of the cracks and wiped it on a cotton ball with alcohol.

I think AMD started skipping molding to speed up assembly. Would've been nice to have the GPU and HBM level though.

Albums are here if you're interested:


http://imgur.com/U7NzM

 and


http://imgur.com/0SDKC


Anyway, I'm getting frustrated with Vega's performance in older games. I'm playing Wolfenstein: The Old Blood (finished The New Order), and FPS is 28-35fps. It feels like it's in a Fiji compatibility mode of sorts; OpenGL overall is pretty terrible on Vega, and I know AMD considers it deprecated with Vulkan in the mix now. I'm running it with 8x antialiasing at 1080p since it doesn't matter what settings I modify, the FPS stay pretty much within the 30fps range (VSR 4K runs pretty much in that range too). Other settings are all maxed except shadows. Left that at 4096.

Can't wait to get to Wolfenstein 2 with Vulkan. idtech5 was a pretty terrible engine. The texture pop-in particularly bothers me.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Run GPU-Z. Samsung memory is molded.


They are all Samsung...

Moulded dies are from the 2nd wave of cards onwards; unmoulded dies were among the three initial die types released with the first wave of cards. This is what I've pieced together so far, at least. Moulded ones should also be what's available for all the AIB cards, though one of the hold-ups is getting enough of them for the AIBs to have enough inventory.


----------



## raysheri

Quote:


> Originally Posted by *JasonMZW20*
> 
> I don't know if that's true any more. I thought all Vega64s were molded, but I removed my heatsink yesterday and found that my Vega64 is unmolded and Korean made; that's similar to the 56s. My reason for tearing it apart was because I was getting a massive 10C differential between GPU and HBM where HBM was the hotter running part. So, that 40um space really does need a good TIM. I've reduced it to 4-5C with a good cleaning and GC Extreme. Side note: cleaning the unmolded packages is a PITA. The GC Extreme plastic spreader fits in the spaces, so I just used that to get the old TIM out of the cracks and wiped it on a cotton ball with alcohol.
> 
> I think AMD started skipping molding to speed up assembly. Would've been nice to have the GPU and HBM level though.
> 
> Albums are here if you're interested:
> 
> 
> http://imgur.com/U7NzM
> 
> and
> 
> 
> http://imgur.com/0SDKC
> 
> 
> Anyway, I'm getting frustrated with Vega's performance in older games. I'm playing Wolfenstein: The Old Blood (finished The New Order), and FPS is 28-35fps. It feels like it's in a Fiji compatibility mode of sorts; OpenGL overall is pretty terrible on Vega, and I know AMD considers it deprecated with Vulkan in the mix now. I'm running it with 8x antialiasing at 1080p since it doesn't matter what settings I modify, the FPS stay pretty much within the 30fps range (VSR 4K runs pretty much in that range too). Other settings are all maxed except shadows. Left that at 4096.
> 
> Can't wait to get to Wolfenstein 2 with Vulkan. idtech5 was a pretty terrible engine. The texture pop-in particularly bothers me.


I'd be very careful with removing TIM from between the chips on the unmolded die packages, as you may damage the infinity fabric.


----------



## Naeem

Quote:


> Originally Posted by *TrixX*
> 
> They are all Samsung...
> 
> Moulded die's are 2nd wave of cards onwards, unmoulded were among the three initial die types to be released with the first wave of cards. This is what I've pieced together so far at least. Moulded's should be the ones available for all the AIB cards too, though one of the hold ups is getting enough of them for the AIB's to have enough inventory.


I have a temp difference of about 3-4C on my card at idle, and in gaming the HBM2 always has the higher temp.


----------



## gupsterg

Yesterday I went to HBM 1150MHz. What I noted was repeatable small gains in the SuperPosition 4K preset and 3DM FS, and large memory copy gains in AIDA64 GPGPU, but I lost FPS/points in 3DM TS.

Example of SP 4K left 1150MHz, right 1100MHz.



Example of 3DM FS left 1150MHz, right 1100MHz.

AIDA64 GPGPU



Example of 3DM TS 1150MHz vs 1100MHz.

I noted no artifacting, etc.

I had also run Folding@Home on the GPU for ~3hrs at HBM 1150MHz without errors.

I also tried the SOC clock mod of 1200MHz, even though it's not needed on the v17.10.3 drivers, and TimeSpy with HBM 1150MHz still tanks. I can only assume the higher-speed HBM is having errors and those errors create the performance loss in that test.

Has anyone also noted this issue?
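Since we're comparing 1100MHz vs 1150MHz HBM results, here's a quick back-of-the-envelope sketch (Python; it assumes Vega's 2048-bit HBM2 bus and ideal double-data-rate scaling, so these are peak numbers, not measurements) of how much raw bandwidth the bump actually buys:

```python
# Rough peak HBM2 bandwidth for Vega: clock * 2 (DDR) * bus width in bytes.
# Ideal numbers only -- real-world gains are smaller, and error/retry
# overhead at unstable clocks can erase them entirely.

BUS_WIDTH_BITS = 2048  # two HBM2 stacks at 1024 bits each

def peak_bandwidth_gbs(hbm_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s at a given HBM clock."""
    return hbm_clock_mhz * 1e6 * 2 * (BUS_WIDTH_BITS / 8) / 1e9

for clk in (945, 1100, 1150):
    print(f"{clk} MHz -> {peak_bandwidth_gbs(clk):.1f} GB/s")
# 1150 vs 1100 is only ~4.5% more peak bandwidth
```

So the 1150MHz run is chasing roughly a 4.5% bandwidth gain over 1100MHz, which fits the pattern above: small wins in bandwidth-bound tests, easily lost to errors elsewhere.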


----------



## TrixX

Quote:


> Originally Posted by *Naeem*
> 
> i have temp difference of about 3c-4c on my card at idle and in gaming hbm2 always has higher temp


Same, the HBM is usually 3-6C higher than the Core. Hotspot is about 10-20C hotter than both. The differences vary card to card.


----------



## gupsterg

As others (and myself) may be curious about data regarding molded/unmolded dies for Vega, I have started a thread for that purpose. A photo of serials is not necessary, only the info I have shared about my own card.


----------



## fursko

I can't eliminate coil whine on my Gigabyte Vega 64 LC. Any suggestions? If I add +50% power, the coil whine reaches my speakers. It's probably affecting my sound card, which is just below the GPU.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> I cant eliminate coil whine. Gigabyte vega 64 LC. Any suggestions ? If i add +%50 power my coil whine reach my speakers. Probably it affecting my soundcard. Which is just below of the gpu.


Most of those on the OCUK forums RMA'd for coil whine. It really shouldn't be there on a $400+ card.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Most of those on the OCUK forums RMA'd for coil whine. It really shouldn't be there on a $400+ card.


I can still return it. The fan is faulty too. It's actually a good GPU silicon-lottery-wise, but with both the faulty fan and the coil whine I will return it, then decide whether to buy another V64 LC or a 1080 Ti. I'm waiting on my 1440p 144Hz FreeSync monitor to see how Vega performs. Hope prices go down more for the V64 LC so I can justify the purchase. Looks like the 1080 Ti has more value (price/perf and hassle-free) right now. The GTX 1080 and V56/64 are out of the question; I don't like the reference design. Temps and noise are important to me.


----------



## jbravo14

Quote:


> Originally Posted by *fursko*
> 
> Still i can return it. Fan is faulty too. Actually its good gpu silicon lottery wise but faulty both fan and coil whine. I will return it and i will decide buy another v64 lc or 1080 ti. Im waiting my 1440p 144hz freesync monitor to see how vega performs. Hope prices go down more for v64 lc. So i can justify my purchase. Looks like 1080 ti has more value (price/perf and hassle free) right now. Gtx 1080 and v56/64 out of question. I dont like the ref design. Temps and noise important for me.


I was at a similar decision point as you, but I already have a 144Hz FreeSync monitor. I first got the 1080 Ti FTW3 for $599 when it was on sale. Then shortly after, a Vega 64 LE showed up for $499. I ended up getting both; the Vega will go in my Sentry ITX case, and the 1080 Ti FTW3 in my DAN A4 SFX case.


----------



## spyshagg

No coil whine here unless games go above 400fps plus.


----------



## TrixX

Quote:


> Originally Posted by *spyshagg*
> 
> no coil whine in here unless games go above to 400fps plus.


Same, though I had to hit 2000 FPS before I heard whine. Not sure at what point the whine started though; I hit 2K FPS in the pCARS2 main menu.


----------



## Kyozon

Hello!

I would like to ask if you guys know where I can find the Vega Frontier Edition inaugural 17.6 drivers for W10 x64, in order to test CrossFire.

It seems the driver has been replaced by the Q4 Enterprise driver on the website.

Thanks.


----------



## fursko

Quote:


> Originally Posted by *jbravo14*
> 
> I was on a similar decision point as you, but I already have a 144hz freesync monitor. I first got the 1080ti FTW3 for $599 when it was on sale. Then shortly a vega 64 LE showed up for $499. I ended up getting both, the vega will be on my sentry ITX case, and the 1080ti FTW3 will be on my DAN A4 SFX case.


Wow, very good prices. I have a 144Hz FreeSync monitor but it's 1080p lol ^^ So how is your experience with both cards? 1080 Ti FTW3 temps, noise, and average core clocks during gaming?


----------



## Nuke33

Quote:


> Originally Posted by *gupsterg*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Yesterday I went to HBM 1150MHz. What I noted was I got repeatable small gains in SuperPosition 4K preset, 3DM FS and large memory copy gains in AIDA64 GPGPU, but I lost FPS/points in 3DM TS
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Example of SP 4K left 1150MHz, right 1100MHz.
> 
> 
> 
> Example of 3DM FS left 1150MHz, right 1100MHz.
> 
> AIDA64 GPGPU
> 
> 
> 
> Example of 3DM TS 1150MHz vs 1100MHz.
> 
> I noted no artifacting, etc.
> 
> I had also ran [email protected] on GPU for ~3hrs on HBM 1150MHz without errors.
> 
> I also tried SOC Clock mod of 1200MHz even though not need on v17.10.3 drivers and still TimeSpy with HBM 1150MHz tanks. I can only assume the higher speed HBM is having errors and those errors create performance loss in that test.
> 
> Has anyone also noted this issue?


I am getting similar results; above 1148MHz HBM, performance gets worse in memory-intensive workloads.
You could check with OCCT, which has a built-in GPU error checker. It can detect errors before artifacts are visible to the human eye. It shows errors starting from 1147MHz HBM and higher on my V64.
I am using shader level 7, 2000MB memory occupation and a 1000fps cap to max out heat output.
But be aware that power consumption and heat on the hotspot will go through the roof!


----------



## Nuke33

Quote:


> Originally Posted by *spyshagg*
> 
> no coil whine in here unless games go above to 400fps plus.


I can hear whine in Witcher 3 even if fps are around 90. It is a lot less noticeable than 400fps+ in menus though.


----------



## feedthenoob

Those with a working undervolt in games: are you using the latest Fall update for Win 10?


----------



## SpecChum

New vega 64 owner signing in.

Powercolor air cooled.

Not done anything yet really, just undervolted to 1v and ran a few benches


----------



## Naeem

Quote:


> Originally Posted by *SpecChum*
> 
> New vega 64 owner signing in.
> 
> Powercolor air cooled.
> 
> Not done anything yet really, just undervolted to 1v and ran a few benches


do post your scores


----------



## TrixX

Quote:


> Originally Posted by *feedthenoob*
> 
> Those with working undervolt in games.. are you using the latest fall update for win 10??


P7 - 1752MHz - 900mv

HBM 1050MHz - 900mv (doesn't work at 950MHz oddly)

Power Target +100%

Fan max 4900, though it never passes around 2800RPM

However just applying water block, will do more testing after
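For anyone wondering why a 900mV undervolt like the one above helps so much: to first order, dynamic power in a chip scales with frequency times voltage squared. A tiny sketch (Python; the P proportional to f*V^2 relation is a textbook approximation that ignores leakage, so treat the numbers as rough, and the 1200mV "stock" figure is just the P7 ceiling mentioned in this thread):

```python
# First-order dynamic power scaling: P is proportional to f * V^2.
# Ignores leakage current, so this overstates the savings somewhat.

def relative_power(v_new_mv: float, v_old_mv: float,
                   f_ratio: float = 1.0) -> float:
    """New dynamic power as a fraction of old, at a frequency ratio f_ratio."""
    return f_ratio * (v_new_mv / v_old_mv) ** 2

# e.g. dropping P7 from ~1200mV stock to the 900mV above, same clock:
print(f"{relative_power(900, 1200):.0%}")  # ~56% of stock dynamic power
```

That squared term is why shaving voltage buys far more thermal headroom than dropping clocks, and why an undervolted card can hold higher sustained boost inside the same power limit.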


----------



## SpecChum

Quote:


> Originally Posted by *Naeem*
> 
> do post your scores


Sure, I've only run a couple, Superposition and some 3D mark ones, which ones are you after?


----------



## feedthenoob

Quote:


> Originally Posted by *TrixX*
> 
> P7 - 1752MHz - 900mv
> 
> HBM 1050MHz - 900mv (doesn't work at 950MHz oddly)
> 
> Power Target +100%
> 
> Fan Max 4900 though never passes around 2800RPM
> 
> However just applying water block, will do more testing after


Nice, what bios, driver and windows version?


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> Hello!
> 
> I would like to ask you guys if you know where i can find the VEGA Frontier Edition - Inaugural Drivers - 17.6 for W10 x64 in order to test Crossfire.
> 
> It seems that the Driver has been replaced by the Q4 Enterprise Driver on the Website.
> 
> Thanks.


I cannot find it on AMD's site; however, the link below appears valid. I did download it, but I didn't use it.

http://drivers.softpedia.com/dyn-postdownload.php/2895b37e0b33912c5e91294cb277c079/59fa36f2/8c9af/4/1

I thought it would be HERE but that's apparently only regular video GPU drivers, I guess.


----------



## dagget3450

Quote:


> Originally Posted by *SpecChum*
> 
> New vega 64 owner signing in.
> 
> Powercolor air cooled.
> 
> Not done anything yet really, just undervolted to 1v and ran a few benches


I ordered the same card last night, was it silver or black? Mine is silver. Adding you to OP.


----------



## SpecChum

Quote:


> Originally Posted by *dagget3450*
> 
> I ordered the same card last night, was it silver or black? Mine is silver. Adding you to OP.


Black.


----------



## TrixX

Quote:


> Originally Posted by *feedthenoob*
> 
> Nice, what bios, driver and windows version?


Win 10 Ent
BIOS 8774 (MSI Liquid Cooled BIOS)
Driver 17.10.3


----------



## cg4200

Quote:


> Originally Posted by *TrixX*
> 
> P7 - 1752MHz - 900mv
> 
> HBM 1050MHz - 900mv (doesn't work at 950MHz oddly)
> 
> Power Target +100%
> 
> Fan Max 4900 though never passes around 2800RPM
> 
> However just applying water block, will do more testing after


Man, that is a nice card you've got, doing that on air..
With a WB, I wonder if you will boost near 1800!


----------



## cg4200

Curious: I saw a couple of people mention 1250mV for liquid cooled cards.
I have a Gigabyte 64 LC and 1200mV is the max I see.
I entered 1250 in twatman but in game it only hits 1200mV??
Is there a better LC BIOS than mine?
thanks


----------



## asder00

Quote:


> Originally Posted by *cg4200*
> 
> Curious I saw a couple people mention 1250v for liquid cooled cards.
> I have gigabyte 64 lc and 1200v is max i see.
> entered 1250 in twatman but in game only hits 1200v??
> Is there a better lc bios that mine??
> thanks


There is 50mV of vdroop under full 3D load on Vega.


----------



## PontiacGTX

Have you seen this?


----------



## cg4200

Cool thanks for explaining


----------



## JasonMZW20

Quote:


> Originally Posted by *feedthenoob*
> 
> Those with working undervolt in games.. are you using the latest fall update for win 10??


I'm still on Insider Previews. Think I'm on build 17025.1000 (yep); a bit newer than Fall Creators (16299). Undervolt does work and runs at around 1.025v @ 1575-1595MHz in Wolfenstein: TOB, though I have it set to 1.050v P6 and 1.085v P7 for now (still finding limits after reapplying paste). Stock air clocks. 17.10.3.


----------



## kundica

Quote:


> Originally Posted by *PontiacGTX*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Have you seen this?


Yes. I'm actually experiencing it since I put in about 3 hours over the weekend.

Nvidia fans are freaking out though, looking for any reason they can to discredit the results. One thing they're currently fixated on is the 1620 clock on the 1080Ti. Pretty sure that review site isn't claiming that's the max boost the card reached but that doesn't keep people from claiming false results.

Here's a video comparison some guy did with his slightly OC'd 64 LC. Stock core, HBM +90, and +20% Power Limit. He took the Nvidia footage from another user who only compared with the Air 64.


----------



## PontiacGTX

Quote:


> Originally Posted by *kundica*
> 
> Yes. I'm actually experiencing it since I put in about 3 hours over the weekend.
> 
> Nvidia fans are freaking out though, looking for any reason they can to discredit the results. One thing they're currently fixated on is the 1620 clock on the 1080Ti. Pretty sure that review site isn't claiming that's the max boost the card reached but that doesn't keep people from claiming false results.
> 
> Here's a video comparison some guy did with his slightly OC'd 64 LC. Stock core, HBM +90, and +20% Power Limit. He took the Nvidia footage from another user who only compared with the Air 64.


It was at 1440p though. I would like to see a 1080p comparison, and also one including Vega 56 at 1600MHz.


----------



## kundica

Quote:


> Originally Posted by *PontiacGTX*
> 
> it was at 1440 though, I would like to see a 1080 comparison also oen including VEGA 56 at 1600MHz


Okay, you weren't specific. There are plenty of results on sites and several videos showing framerates at 1080p. You're also asking for something very specific with the 56 that I doubt you'll find.


----------



## madmanmarz

Anyone playing Destiny 2 noticed pretty high hotspot temps compared to other games?

I think I'm going to remount one more time, because 70C at 1.05v and 1500MHz seems a bit much for my setup. Spinning the fans up etc. doesn't seem to help, either.


----------



## PontiacGTX

Quote:


> Originally Posted by *kundica*
> 
> Okay. You weren't specific. There are plenty of results on sites and several videos showing framerates at 1080p. You're also asking for something very specific with the 56 I doubt you'll find.


Well, I was wondering how they both compared at the same clock speed.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> P7 - 1752MHz - 900mv
> 
> HBM 1050MHz - 900mv (doesn't work at 950MHz oddly)
> 
> Power Target +100%
> 
> Fan Max 4900 though never passes around 2800RPM
> 
> However just applying water block, will do more testing after


What are your in-game clocks? These clocks are impossible at that mV.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> What is your ingame clocks. This clocks impossible with that mv.


In-game clocks are game dependent, though with 900mV it's around ~1480MHz max clock. But that's just for max undervolt.

Just ran a few tests with the water block on.


Spoiler: Warning: Spoiler!







Seems to scale nicely, though for some reason it doesn't like having high core clocks set. Anything above 1770 P7 seems to crash it during a stress test.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> Ingame clocks are game dependant. Though with 900mv it's around ~1480MHz max clock. But that's just for max undervolt.
> 
> Just ran a few tests with the water block on.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Seems to scale nicely though for some reason it doesn't like having high core clocks set. Anything above 1770 P7 seems to crash it during a stress test for some reason.


Nice results. What are your settings?


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Nice results. What are your settings?


For that run it was:
P7 1750MHz @ 1150mv
HBM 1050MHz @ 950mv
+100% Power Target

Unfortunately I seem to have a bit of a runaway hotspot temp issue at the moment, so I'm looking for a way to reduce that. It may have to wait for the CPU block to arrive.


----------



## madmanmarz

Now I don't feel so bad lol. What are your max sustained clocks, at whatever voltage, in game? I've noticed the lower your voltage, the bigger the gap between the actual clocks and what you set.
Quote:


> Originally Posted by *TrixX*
> 
> Ingame clocks are game dependant. Though with 900mv it's around ~1480MHz max clock. But that's just for max undervolt.
> 
> Just ran a few tests with the water block on.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Seems to scale nicely though for some reason it doesn't like having high core clocks set. Anything above 1770 P7 seems to crash it during a stress test for some reason.


----------



## SpecChum

Just got 4499 in Superposition, which I'm happy with as these are my "24/7" settings; I've got PL at 0 and undervolted to 1000mV; fan is also 2400 max

Clocks were 1548 throughout; temps did reach 80C tho.

Will try a "loud and proud" run tomorrow, just testing the water at the moment...

Oh, that was hbcc off too


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> For that run it was:
> P7 1750MHz @ 1150mv
> HBM 1050MHz @ 950mv
> +100% Power Target
> 
> Unfotunately I seem to have a bit of a runaway hotspot temp issue at the moment so looking for a way to reduce that. It may have to wait for the CPU block to arrive.


I am getting similar results using similar settings on water, but I think my chip is not very high quality.


----------



## madmanmarz

Is there any LC bios or mod or anything that brings the temp up from 70 to 85 or whatever?


----------



## TrixX

Quote:


> Originally Posted by *madmanmarz*
> 
> Now I don't feel so bad lol. What are your.max sustained clocks at whatever voltage in game? I've noticed the lower ur voltage the bigger the gap between actual clocks and what you set.


Sustained clocks with that are ~1730 the entire time. Currently slightly lower for my main profile, running ~1711 continuous. The power draw difference was quite high though, down to 290W from 330W for 20MHz









Hoping it might improve with future patches, but if not sustained 1711MHz is still pretty good.
Quote:


> Originally Posted by *SpecChum*
> 
> Just got 4499 in Superposition, which I'm happy with as these are my "24/7" settings; I've get PL at 0 and undervolted to 1000mv; fan is also 2400 max
> 
> Clocks were 1548 throughout, temps did reach 80C tho.
> 
> Will try a "loud and proud" run tomorrow, just testing the water at the moment...
> 
> Oh, that was hbcc off too


That's pretty good with an undervolt to 1000mV. If you can, set a lower target temp; you lose performance pushing over 65C. Not sure what your P7 clocks are, but push that as high as you can; it'll find a point it's OK with and, boom, go no further.


----------



## fursko

This is my score. Power +50%, max temp target 70C, P7 1180mV, HBM 1150MHz. Vega 64 LC of course.

Edit: I can reach 1190MHz without artifacts in this Superposition benchmark, but I'm getting the same score. 1150MHz is my daily stable clock. Core clocks during the test were 1720MHz. HBCC on. Weird bug: my old 17.10.2 driver was always using HBCC, even if I disabled it, so I got the same score every time with my old driver. I was assuming HBCC was off but it was always on.

Edit2: If I set HBM to 1190MHz (I didn't touch HBM voltage) my core clocks lose around 10MHz because of power throttling. I could probably get more MHz if I raised the power target. I tried manually adding some MHz: a 2% overclock is not stable but 1% passes the test. But it's not worth it lol, same clocks during the test anyway.


----------



## diggiddi

Quote:


> Originally Posted by *PontiacGTX*
> 
> well I was wondering how both they compared at same clock speed


Like 



 Vid by Gamers Nexus


----------



## madmanmarz

Decided to try the air 64 BIOS on my 56, and I must say the clocks track what you set much more accurately. Also the 85C limit may help me since my hotspot runs hot. I'll mess around with it some more tomorrow. I just wanna run about 1600MHz with the lowest possible voltage, so I'm curious if this will be a little better. The air-cooled stock clocks (being a bit lower) will probably also be stable in case settings reset or whatever, since the LC ones were not for me.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> Sustained with that are ~1730 entire time. Currently slightly lower for my main running ~1711 continuous. Power draw difference was quite high though, down to 290W from 330W for 20Mhz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hoping it might improve with future patches, but if not sustained 1711MHz is still pretty good.
> That's pretty good with an undervolt to 1000mv. If you can set a lower target temp you lose performance pushing over 65C. Not sure what your P7 clocks are but push that as high as you can as it'll find a point it's ok with and boom no further.


I ran with your exact settings and my score was 5221 because the sustained clock speed was only around 1700MHz. I can achieve 1735MHz sustained, but I need to set the core target MHz/mV to 1775/1175. At 1735MHz, my score is 5310. It's so frustrating that I can't set the clock speed to exactly what I want it to be. I can't seem to get ClockBlocker working right either.


----------



## betaflame

I glanced through the thread, but didn't see anything relevant to this question.

I flashed my brand new MSI V56 (got it today) with the 64 bios, and it picked up the clocks... but the power limit (and HBM clocks) don't seem any different.

Power draw at +50% caps at 300W exactly in GPU-Z, and HBM artifacts past 985 at anything but a core undervolt+underclock.

I've tried wattman watttool and overdriventool.

It also refuses any voltage above 1200mv (but I think that's normal?).

Edit: DDU had no effect

Modded powerplay tables have no effect (they let the percent go higher, but it still hard caps at 300W exactly)


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Sustained with that are ~1730 entire time. Currently slightly lower for my main running ~1711 continuous. Power draw difference was quite high though, down to 290W from 330W for 20Mhz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hoping it might improve with future patches, but if not sustained 1711MHz is still pretty good.
> That's pretty good with an undervolt to 1000mv. If you can set a lower target temp you lose performance pushing over 65C. Not sure what your P7 clocks are but push that as high as you can as it'll find a point it's ok with and boom no further.


P7 is set to 1702. That actually gives me 1600mhz at 1050mv but I just thought I'd try it on 1v instead.

I'm only on air (probably shouldn't have used the phrase "testing the water" lol) so maintaining 65c would be a challenge at reasonable noise levels, although I don't mind full loudness for a bench run.


----------



## Tgrove

Quote:


> Originally Posted by *betaflame*
> 
> I glanced through the thread, but didn't see anything relevant to this question.
> 
> I flashed my brand new MSI V56 (got it today) with the 64 bios, and it picked up the clocks... but the power limit (and HBM clocks) don't seem any different.
> 
> Power draw at +50% caps at 300W exactly in GPU-Z, and HBM artifacts past 985 at anything but a core undervolt+underclock.
> 
> I've tried wattman watttool and overdriventool.
> 
> It also refuses any voltage above 1200mv (but I think that's normal?).
> 
> Edit: DDU had no effect
> 
> Modded powerplay tables have no effect (they let the percent go higher, but it still hard caps at 300W exactly)


You didn't ask a question lol


----------



## SpecChum

I'm determined to break 18000 Firestrike points tonight.

Currently at 17955 so hopefully won't be too hard


----------



## PontiacGTX

Quote:


> Originally Posted by *diggiddi*
> 
> Like
> 
> 
> 
> Vid by Gamers Nexxus


but in wolfenstein


----------



## SPLWF

FYI, all REF cards are NOW selling at MSRP. Go Get it!!


----------



## diabetes

I am still trying to find out what hotspot is, and conducted some testing of my own:

I have a watercooled Vega56 card and from my own testing, I found out that "Hotspot" might be either the interposer temp or the temp of the PCB under the package. When unlocking the card to 264W (like V64 liquid edition), "Hotspot" spikes to 80C on my card (that is with somewhat poor TIM application though, but core and HBM temps are fine). I can reduce these spikes to 75C by placing a fan on the EK backplate of my card. The fan blows on the backside of the GPU package area.

When leaving the power limit at stock, hotspot reaches 55-60°C. I think the hotspot temperature is caused by the traces in the card's PCB, which get hot due to the high current flow (264W/1.15V = 229.5A !!). This would also explain why some cards are much hotter than others: trace quality varies from card to card due to manufacturing. Minor (oxide) impurities within the traces can greatly increase the electrical resistance and therefore the heat output.

This would also explain why Vega has thermal pads on the coils around the GPU. They are meant to help draw out the heat from the PCB.

You might ask yourself now "But how could people improve their Hotspot temps when remounting their Morpheus aircooler, which only makes contact to the die?" Heat always travels from hot to cold. If these people could improve their cooler contact, making the GPU core cooler than hotspot, some of the heat might be drawn through the GPU-die into the cooler.

All of this is just a theory but to me it makes sense. I would like to see someone with a scientific background commenting on this, evaluating whether my claims could be valid or not. You could also help testing this. Measure hotspot with a fan blowing onto the back of the card and without, see if it makes a difference for you.

Some Math:
I(180W - Stock) = 180/1.15 = 156.5A
I(264W - like LC) = 264/1.15 = 229.5A

229.5/156.5 = 1.466 (Current ratio)
80C/55C = 1.454 (Temp ratio)

As you can see, the current ratio and the temp ratio are nearly identical. Is there a law of physics that says the heat output of a wire rises linearly with current flow? If yes, this might be the solution.
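For anyone who wants to replay that arithmetic, here is a minimal Python sketch of the ratios above. The 1.15V figure and the temperatures are taken straight from the post, nothing here is independently measured; note also that Joule heating in a plain conductor goes as I²R, so a near-linear temp-vs-current match is not guaranteed by that alone.

```python
# Rough sanity check of the current-vs-hotspot scaling described above.
# Assumes a constant 1.15V core voltage, as stated in the post.
def current(power_w, voltage_v=1.15):
    """I = P / V for a given board power at fixed voltage."""
    return power_w / voltage_v

i_stock = current(180)  # stock 180W limit -> ~156.5A
i_lc = current(264)     # 264W LC-style limit -> ~229.6A

current_ratio = i_lc / i_stock  # ~1.467
temp_ratio = 80 / 55            # ~1.455 (hotspot in C, unlocked vs stock)

print(f"current ratio: {current_ratio:.3f}, temp ratio: {temp_ratio:.3f}")
```

At fixed voltage the current ratio is just the power ratio (264/180), which is why the two numbers in the post line up so neatly.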


----------



## SPLWF

Quote:


> Originally Posted by *SPLWF*
> 
> Just purchased a Sapphire Vega 56 for $154, why so cheap? Well, Microcenter had it listed for $479. I used my sold PC parts money, which was $325: $479 - $325 = $154. I added a 2yr warranty for $70, so the total was $224
> 
> 
> 
> 
> 
> 
> 
> .
> 
> I have under volted to 1025 and running mem at 950, no crashes at all so far. Funny thing is my GPUz reports core clock at 1590mhz, I've never hit 1590 on benchmarks, most I hit it 1530. Is 1590 my boost clock? If so, why am I not hitting it? Got my fan set to 2800rpms (wattman) aka 50% fan speed. Temps are hovering around 75-77c.
> 
> Another question, are there any good OSD other than rivatuner that doesn't require afterburner? Thanks


Since all Ref cards are now MSRP, I returned my overpriced RX Vega 56 and picked up another Sapphire RX Vega 56 at $399 MSRP. So in total I spent $399 - $325 = $74 + $50 (2yr warranty) = $163.81 with tax.

Now I have to see if this thing can handle my undervolt at 1025 or less.


----------



## nicodemus

Hey all!

Do we have a feel yet for where the best perf/voltage is for a 56 flashed to 64? I haven't been able to find any kind of scale or chart for how much perf I gain by pumping voltage to 1200 and clocks to ~1730-1750, as opposed to aiming for, say, 1550/1025.

My mem is 1100 or 1145, depending on game stability.

Any help is appreciated. Thanks!


----------



## kundica

Quote:


> Originally Posted by *nicodemus*
> 
> Hey all!
> 
> Do we have a feel yet for where the best perf/voltage is for a 56 flashed to 64? I haven't been able to find any kind of scale or chart as to how much perf i gain for pumping voltage to 1200 and clocks to ~1730-1750. As opposed to aiming for, say, 1550/1025.
> 
> My mem is 1100 or 1145, depending on game stability.
> 
> Any help is appreciated. Thanks!


There's so much variation from card to card I think you really need to run multiple tests yourself to find the sweetspot.


----------



## betaflame

Quote:


> Originally Posted by *betaflame*
> 
> I glanced through the thread, but didn't see anything relevant to this question.
> 
> I flashed my brand new MSI V56 (got it today) with the 64 bios, and it picked up the clocks... but the power limit (and HBM clocks) don't seem any different.
> 
> Power draw at +50% caps at 300W exactly in GPU-Z, and HBM artifacts past 985 at anything but a core undervolt+underclock.
> 
> I've tried wattman watttool and overdriventool.
> 
> It also refuses any voltage above 1200mv (but I think that's normal?).
> 
> Edit: DDU had no effect
> 
> Modded powerplay tables have no effect (they let the percent go higher, but it still hard caps at 300W exactly)


Quote:


> Originally Posted by *Tgrove*
> 
> You didnt ask a question lol


I suppose that's technically true.

Did I miss a step? I was under the impression that flashing the V56 with the 64 bios was all I needed to do for the power limit to change from 300W to 360W at +50% PL


----------



## fursko

Quote:


> Originally Posted by *kundica*
> 
> There's so much variation from card to card I think you really need to run multiple tests yourself to find the sweetspot.


Also game to game. My best result for the Superposition benchmark is 1180mV P7, so I can keep core clocks around 1720MHz; 1190mV causes power throttling and 1170mV lowers my core clocks. For Far Cry Primal the best result is 1200mV at a 1745MHz core clock. Going down to 1160mV I get similar results, 1740MHz. Also 1250mV = 1725MHz.

Looks like I have room for overclocking, because 1160mV = 1740MHz and 1200mV = 1745MHz. Now I add a 1% OC from WattMan and my in-game clocks are 1755MHz with 1200mV. The power limit is holding me back.


----------



## spyshagg

Quote:


> Originally Posted by *SpecChum*
> 
> I'm determined to break 18000 Firestrike points tonight.
> 
> Currently at 17955 so hopefully won't be too hard


Here's an 18813 for you to compare

https://www.3dmark.com/fs/13992401

Combined is tricky. Between computer reboots it either scores 5200 or 6300. I haven't found what triggers the lower score.


----------



## SpecChum

Quote:


> Originally Posted by *spyshagg*
> 
> Heres a 18813 for you to compare
> 
> https://www.3dmark.com/fs/13992401
> 
> Combined is tricky. Between computer reboots it either scores 5200 or 6300. I haven't found what triggers the lower score.


Cool, thanks. There are various things I could do to help, but I'm not going all-out speed; I'm trying to get it using "24/7" settings really.

My 1700 can run at 4.0GHz quite happily but I keep it at 3.9GHz for the substantially lower vcore, for example. I could even enable 3DMark mode on the motherboard, but I'm going to try it with the gfx card only









EDIT: Who am I kidding...you just know if I can't do it normally I'll force it


----------



## spyshagg

It's easily doable, man.

That score is my 24/7 setting when I'm playing in VR. For regular games I drop the clocks down to 1680 for about 80mV less.

1752 @ 1180mV (1700MHz on full load).
1680 @ 1100mV (1660MHz on full load).

HBM is really the key. Unfortunately I need 1100mV for it to work @ 1100MHz at about 65°C. Although, yesterday it crashed at 1100MHz when it didn't a day before. Hope it's not degradation so soon.

The stock cooler can handle it, but at the expense of noise. As for myself, the computer is in another room and the keyboard/mouse/monitor cables cross to it through a small hole. Full power and no noise!

good luck


----------



## betaflame

Post-flash and powerplay-modded, my 56 still refuses to exceed exactly 300W.

Everything up to +50% scales as expected, but anything above +50% has no effect on the 300w power draw. Even though 0 should be 300w as it has a 64 bios on it.

I hope there's not a new hardware revision or something...

Edit: 17.10.3 or whatever the most recent drivers are


----------



## pmc25

Quote:


> Originally Posted by *0451*
> 
> Run GPU-Z. Samsung memory is molded.


It's entirely random.

I have a Vega64, it's Samsung, and it's unmoulded.

Also has very low HotSpot and HBM temperatures compared to most, given relatively modest (water) cooling.

IMO unmoulded is better if you're going to do a good TIM job and put a block on.

The moulded dies have an extra layer of thermal insulation, which in my view is not helpful.


----------



## Particle

Quote:


> Originally Posted by *diabetes*
> 
> Some Math:
> I(180W - Stock) = 180/1.15 = 156.5A
> I(264W - like LC) = 264/1.15 = 229.5A
> 
> 229.5/156.5 = 1.466 (Current ratio)
> 80C/55C = 1.454 (Temp ratio)
> 
> As you can see, the current ratio and the temp ratio are nearly identical. Is there a law of physics that says that heat output of a wire raises linearlily with current flow? If yes, this might be the solution.


Waste heat in a conductor rises with current, not voltage or power. It's a function of current and resistance. Higher temperatures generally raise resistance which will cause a drop in current provided the applied voltage stays constant, so there is some degree of negative feedback. Higher temperatures also cause semiconductors to become leakier though and thus bleed more power that way, so there is some degree of positive feedback as well. It depends on if you're interested in talking about waste heat in conductors or semiconductors.
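The negative-feedback part of that can be sketched numerically. This is only a toy model: the 5mΩ trace resistance and the copper-like temperature coefficient are made-up illustration values, not measurements from any Vega board.

```python
# Toy model of the conductor feedback described above:
# waste heat P = I^2 * R, where R itself rises as the trace warms.
# r20_ohm and alpha are assumed illustration values (copper-like alpha).
def trace_power(voltage_v, r20_ohm, temp_c, alpha=0.0039):
    """Joule heating in a conductor fed from a fixed voltage."""
    r = r20_ohm * (1 + alpha * (temp_c - 20))  # resistance grows with temp
    i = voltage_v / r                          # so current drops...
    return i * i * r                           # ...and so does waste heat

p_cool = trace_power(1.15, 0.005, 40)
p_hot = trace_power(1.15, 0.005, 90)
# at fixed voltage, the hotter trace dissipates slightly LESS power
```

The semiconductor leakage Particle mentions runs the other way (hotter means leakier, so more power), which is why the two effects are described as negative and positive feedback respectively.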


----------



## betaflame

1) Physically install Vega 56
2) Install 17.10.3
3) Use wattman, it's terrible, reset to balanced
4) Use watttool, it's terrible, reset
5) Use Overdriventool, works fine, use that to mess with voltages and settings
6) GPU-Z shows 200W at +0% PL
7) GPU-Z shows 300W at +50% PL
8) Reset
9) Flash 64 bios
10) Go into Safe mode, DDU
11) Install 17.10.3
12) Use Overdriventool to mess with voltages and settings
13) Verify all the default clocks changed to Vega 64's clocks. Can't see HBM voltage.
14) HBM artifacts like mad if 985 is exceeded
15) GPU-Z shows 200W at +0% PL
16) GPU-Z shows 300W at +50% PL
???
17) Apply the modded powerplay tables.
18) GPU-Z shows 200W at +0% PL
19) GPU-Z shows 300W at +50% PL
20) GPU-Z shows 300W at +60% PL
21) GPU-Z shows 300W at +70% PL
22) GPU-Z shows 300W at +80% PL

Did I skip a step or something? I've done bios modding before, I feel like I'm taking crazy pills


----------



## TrixX

Quote:


> Originally Posted by *betaflame*
> 
> 1) Physically install Vega 56
> 2) Install 17.10.3
> 3) Use wattman, it's terrible, reset to balanced
> 4) Use watttool, it's terrible, reset
> 5) Use Overdriventool, works fine, use that to mess with voltages and settings
> 6) GPU-Z shows 200W at +0% PL
> 7) GPU-Z shows 300W at +50% PL
> 8) Reset
> 9) Flash 64 bios
> 10) Go into Safe mode, DDU
> 11) Install 17.10.3
> 12) Use Overdriventool to mess with voltages and settings
> 13) Verify all the default clocks changed to Vega 64's clocks. Can't see HBM voltage.
> 14) HBM artifacts like mad if 985 is exceeded
> 15) GPU-Z shows 200W at +0% PL
> 16) GPU-Z shows 300W at +50% PL
> ???
> 17) Apply the modded powerplay tables.
> 18) GPU-Z shows 200W at +0% PL
> 19) GPU-Z shows 300W at +50% PL
> 20) GPU-Z shows 300W at +60% PL
> 21) GPU-Z shows 300W at +70% PL
> 22) GPU-Z shows 300W at +80% PL
> 
> Did I skip a step or something? I've done bios modding before, I feel like I'm taking crazy pills


Looks like you might have had a bad flash. Try re-flashing with the 8730 BIOS for a V64 Air card and see whether it was a bad flash.

Though I have seen a few cards that can't cope with HBM clocks above 950MHz, probably why there is a 945MHz starting level for the HBM.


----------



## diabetes

Quote:


> Originally Posted by *Particle*
> 
> Waste heat in a conductor rises with current, not voltage or power. It's a function of current and resistance. (...) It depends on if you're interested in talking about waste heat in conductors or semiconductors.


That's why I compared the two currents. Power has to rise when current rises and voltage is constant, as P = U*I. When comparing the current ratio and the temp ratio, I assume they differ in my calculations because of measurement error. Assuming they would be equal in an environment where flawless measurements could be made, this suggests that hotspot temp rises linearly with current flow.

The question now is whether this allows the assumption that the hotspot heat is emitted either from the traces in the PCB plus the solder blobs on the package, or from within the chip itself. I assume that waste heat generation in a chip does not scale linearly with current but perhaps polynomially or exponentially. That in turn would mean the hotspot heat can only be emitted from the PCB, and can be brought down by adding more thermal pads to the backplate, for example.


----------



## betaflame

Quote:


> Originally Posted by *TrixX*
> 
> Looks like you might have had a bad flash. Try re-flashing with the 8730 BIOS for a V64 Air card and see whether it was a bad flash.
> 
> Though I have seen a few cards that can't cope with HBM clocks above 950MHz, probably why there is a 945MHz starting level for the HBM.


It was hitting 950 no sweat on the 56 BIOS.

I've flashed the 64 BIOS (both versions) about 4 times with no luck. I'll try again and make sure it's that version.

Edit: The versions I got were from TechPowerUp (8706/8707). I'll look for it, but where's 8730?

Edit: ****. I didn't even realize that TechPowerUp's BIOS site was divided by manufacturer. I bet that BIOS fixes it. I'll try when I get home from work.


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> I cannot find it on AMD's site, however this link below appears valid, i did download it but i didn't use it.
> 
> http://drivers.softpedia.com/dyn-postdownload.php/2895b37e0b33912c5e91294cb277c079/59fa36f2/8c9af/4/1
> 
> I thought it would be HERE but thats apprently only regular video gpu drivers i guess.


Hello dagget!

I would like to report that I have successfully managed to enable Crossfire on both of my cards.

What is interesting is that I got a really weird Graphics Score, considering both of them are overclocked 5%, as you can see here: https://www.3dmark.com/fs/14018046 Could it be because they are different variants? One is the LC Edition and the other is the Air Edition.

Also, I would like to tell you about the Radeon Pro Crimson - Windows Fall Update Beta, the one that came before the Q4 Enterprise Driver -

__ https://twitter.com/i/web/status/921381001338560513
This one allowed me to Crossfire the FE just as the 17.6 did. The difference I noticed is that this one is much more stable, despite being a Beta.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> Hello dagget!
> 
> I would like to report to you that i have successfully managed to enable Crossfire on both of my Cards.
> 
> What is interesting is that i got really weird Graphics Score, considering both of them are Overclocked - 5%, as you can see here: https://www.3dmark.com/fs/14018046 Could it be because they are different variants? One is the LC Edition and the other is the Air Edition.
> 
> Also, i would like to tell you that the Radeon Pro Crimson - Windows Falls Update Beta, the one that came before the Q4 Enterprise Driver -
> 
> __ https://twitter.com/i/web/status/921381001338560513
> This one allowed me to Crossfire the FE just as the 17.6 Did. The difference i noticed is that this one is much more stable, despite being a Beta.


Can you give me a step-by-step rundown of what you did so I can try this? It's possible I didn't try that driver. I'll investigate now.

Also, what hardware/software setup are you using, i.e. Windows build number?


----------



## hyp36rmax

Have you guys tried removing the driver and installing the latest version from AMD's site? Heard Vega users were experiencing crossfire issues after Windows Fall Creators Update pushing the driver back to 17.6 Gaming RX when it should have been 17.9.2 (Crossfire enabled for RX). Maybe something similar is happening to FE users?


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> Have you guys tried removing the driver and installing the latest version from AMD's site? Heard Vega users were experiencing crossfire issues after Windows Fall Creators Update pushing the driver back to 17.6 Gaming RX when it should have been 17.9.2 (Crossfire enabled for RX). Maybe something similar is happening to FE users?


I tried many things, and I suspected that may be the case. However, with Vega FE you have "Radeon Pro UI", "gaming mode" or "driver options" in newer drivers. For me the issue is that with 2 Vega FE installed I can never get prompted for multiple drivers. This causes gaming mode, i.e. "driver options" in newer drivers, to not even show up. I get stuck with the Radeon PRO UI and that's all I get (which has no WattMan or gaming options like Crossfire).

I am about to just do a fresh Win10 load, I think. I may have something wrong with my Windows 10 Pro, or maybe Windows 10 Pro isn't supported, I don't know (seems unlikely for Win10 Pro not to be supported).


----------



## SpecChum

Quote:


> Originally Posted by *spyshagg*
> 
> Heres a 18813 for you to compare
> 
> https://www.3dmark.com/fs/13992401
> 
> Combined is tricky. Between computer reboots it either scores 5200 or 6300. I haven't found what triggers the lower score.


Oddly enough, I beat your physics and combined scores.

https://www.3dmark.com/fs/14015474

Weird, your 1800X is clocked higher. Could be my 3200MHz 14-14-14 memory.


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> Can you give me a rundown on what you did step by step so i can try this? Its possible i didnt try that driver. Ill investigate now.
> 
> Also setup your using hardware /software i.e. windows build number


I ran DDU before installing this Fall Update Beta driver. Once the installation finished, I rebooted the system and checked whether the CrossFire option was there. Surprisingly, it was.

I then enabled the option, but I noticed some strange behavior: my OS froze, so I had to manually reboot the system. It then booted normally, and the CrossFire switch said it was on, as you can see in the picture.




System Specs:

AMD Ryzen Threadripper 1950X @4.15GHz
Corsair H115i AIO
ASUS X399 Zenith Extreme
G.Skill Ripjaws V 3200MHz CL14 (4x8GB) 32GB @3600MHz CL15
Samsung 850 EVO 250GB SSD
Vega Frontier Edition - Liquid
Vega Frontier Edition - Air
Thermaltake Toughpower 1050W PSU

OS: Windows 10 Education, build 16299.

I am also about to do a proper clean install of Windows, as I haven't done one since switching from Ryzen 7 to Threadripper. I have thought about installing Windows Server 2016, as it seems to have less bloatware running in the background, but some people say it can be tricky to get drivers working properly there.


----------



## AmateurExpert

Quote:


> Originally Posted by *Chaoz*
> 
> I decided not to use it when I saw all the images of reviewers and the images EKWB posted with the stock backplate and no tension plate.
> 
> So decided not to use it. Just to be sure.
> 
> I might add thermal pads aswell, if I ever decide to change or flush my loop.


I have a Sapphire ref air V64 on which I installed an acrylic EK block some time ago. At the time I considered using the original tension spider, but if I remember correctly the original screws and the EK screws are different, and the EK screws wouldn't fit through the spider's screw holes. I'd be wary of damaging threads by using screws that don't fit properly.


----------



## Particle

Quote:


> Originally Posted by *diabetes*
> 
> Thats why I compared the two currents. Power has to rise when current rises and voltage is constant as P=U*I. When comparing the current ratio and the temp ratio, I assume that these differ in my calculations because of margins of error when measuring. When assuming that they are the same in an environment where flawless measurements can be made, I have just proven that Hotspot temp raises linearily with current flow.
> 
> The question now is whether this allows the assumption that hotspot temp is emitted from either the traces in the PCB+solder blobs on the package or from within the chip itself? I assume that waste heat generation in a chip does not scale linearily with current but maybe polynomial or exponential. That in turn would mean that hotspot temp can only be emitted from the PCB and can be brought down by adding more thermal pads to the backplate for example.


I wasn't arguing with you. I was adding confirmation since you asked for it.
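For the record, the linearity claim checks out arithmetically: at a fixed voltage, P = U * I, so the power ratio between two load states is exactly the current ratio. A quick sanity check in Python (the currents are made-up example values, not measurements from the card):

```python
# Sanity check of the claim above: at (roughly) constant core voltage,
# P = U * I, so the power ratio equals the current ratio. The currents
# below are illustrative values, not measured data.

V_CORE = 1.0  # volts, held constant for the comparison


def power(voltage, current):
    """Electrical power in watts from voltage (V) and current (A)."""
    return voltage * current


i_low, i_high = 150.0, 225.0  # amps, illustrative
ratio_current = i_high / i_low
ratio_power = power(V_CORE, i_high) / power(V_CORE, i_low)
print(ratio_current, ratio_power)  # both 1.5: power tracks current linearly
```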


----------



## TrixX

Quote:


> Originally Posted by *Kyozon*
> 
> I have ran DDU before instaling this Falls Update Beta Driver. Once the installation was finished, i have then rebooted the System and checked if the CrossFire Option was there. Suprisingly, it was.
> 
> I have then enabled the Option, but i noticed a strange behavior, which was my OS freezing, therefore i had to manually reboot the System. It has then turned on normally, and then the CrossFire Switch there said it was On. As you can see by the Picture.
> 
> 
> 
> 
> System Specs:
> 
> AMD Ryzen ThreadRipper 1950X @4.15Ghz
> Corsair H115i AIO
> ASUS X399 Zenith Extreme
> G.Skill RipJaws V 3200Mhz CL14 (4x8) 32GB - @3600Mhz CL15
> Samsung 850 EVO 250GB SSD
> VEGA Frontier Edition - Liquid
> VEGA Frontier Edition - Air
> Thermaltake ToughPower 1050W PSU
> 
> OS: Windows 10 Educational - Compilation (16299).
> 
> I am also about to do a proper Clean Install of Windows, as i haven't done prior switching from Ryzen 7 to TR. I have thought about installing the Windows Server 2016 Edition, as it seems to have less Bloatware running on the Background. But some people said it can be tricky to get Drivers working properly there.


Check out the MyDigitalLife forums for Win 10 ISOs. You can get Win N and pretty much any version required, really. LTSB included.


----------



## Kyozon

Quote:


> Originally Posted by *TrixX*
> 
> Check out MyDigitalLife forums for Win 10 ISO's. Can get the Win N and pretty much any version required really. LTSB's included


Thank you!! I am definitely going to check it out! I am looking to extract the most out of Threadripper. It seems that Microsoft announced Windows 10 Pro for Workstations, a new edition that seems to be optimized for the latest Core i9 and Threadripper.

But unfortunately, to date I have found no news on that front, which is one of the reasons I decided to look at other currently available versions.


----------



## TrixX

Quote:


> Originally Posted by *Kyozon*
> 
> Thank you!! I am definitely going to check it out! I am looking to extract the most out of ThreadRipper. It seems that Microsoft announced a Windows 10 Pro For Workstations, a new Edition that seems to be optimized for the latest Core i9 and ThreadRipper.
> 
> But unfortunately, up to this date i found no news on that front. Which is one of the reasons i decided to look for other, current available versions.


Must admit that's why I'm using Enterprise; you can also disable most of the stupid **** with GPEdit, as it's not hamstrung.


----------



## Kyozon

Quote:


> Originally Posted by *TrixX*
> 
> Must admit it's why I'm using Enterprise, can also disable most of the stupid **** with GPEdit as it's not hamstrung.


I am curious to know the key differences between the Enterprise and the Server Edition.


----------



## hyp36rmax

It's official! Added another to the family. Picking up another one soon. I also have GTX 1080Ti's in SLI in another system. Can't wait to put them up against one another.


----------



## TrixX

Quote:


> Originally Posted by *Kyozon*
> 
> I am curious to know the key differences between the Enterprise and the Server Edition.


Mostly compatibility: if the Server designation is picked up, there's a fairly large amount of software out there that will just refuse to work on Server editions. Gets annoying fast. MDL has much more info on the differences.


----------



## gupsterg

@betaflame

I tried OverdriveNTool and for me it didn't seem to work as well as WattMan. So I guess it may just depend on the user/settings/card which works best for them.

I had just been gunning to get Vega on a par with an MSI GTX 1080 EK X, which boosted to ~1975MHz "out of the box". I managed that in SP 4K and 3DM FS/TS. Now I'm really just tweaking P6/P7 mV and GPU/HBM clocks; I've got it where I want, with the performance I wanted. Bonus: I got FreeSync back after dumping the GTX 1080.

It may be that the driver is blocking some aspect of the mods on V56. There were some posts on Reddit that I saw at one time; how true, no idea. I have seen above 300W in HWiNFO for my V64 with 65% PL. Currently in the PP reg I only mod the PL limit to 100% for the slider.

@SpecChum

WHASsss UPP! Dude, did you get rid of the T-shirts?


----------



## geriatricpollywog

Quote:


> Originally Posted by *hyp36rmax*
> 
> It's official! Added another to the family. Picking up another one soon. I also have GTX 1080Ti's in SLI in another system. Can't wait to put them up against one another.


Nice. Is Sapphire still covering the backplate and fan shroud with those ugly stickers?


----------



## Chaoz

Quote:


> Originally Posted by *AmateurExpert*
> 
> I have a Sapphire ref air V64 on which I installed an acrylic EK block on some time ago - at the time I considered using the original tension spider, but if I remember correctly the original screws and the EK screws are different, and the EK screws wouldn't fit through the spider's screw holes. I'd be wary of damaging threads by using screws that don't fit properly.


The original EKWB screws work fine with the tension plate. I installed it that way in the beginning without any issues.

As you can see in the pic, it works perfectly with the EKWB screws.

I removed the tension plate because EKWB didn't use it in their images. So I removed it and used the same screws to mount the waterblock.


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> @SpecChum
> 
> WHASsss UPP! Dude, did you get rid of the T-shirts?


Haha, hiya mate, thought I'd join the Vega crew!

T-shirts didn't sell too well lol

Need one for Vega, 1.099 under full load! He's not here is he? lol


----------



## hyp36rmax

Quote:


> Originally Posted by *0451*
> 
> Nice. Is Sapphire still covering the backplate and fan shroud with those ugly stickers?


Not on this one. I remember my Sapphire FuryX's had stickers on it.


----------



## Chaoz

Quote:


> Originally Posted by *0451*
> 
> Nice. Is Sapphire still covering the backplate and fan shroud with those ugly stickers?


I have the same one and it only has 1 green sticker on the backplate.


----------



## lowdog

Quote:


> Originally Posted by *Chaoz*
> 
> The original EKWB screws work fine with the tension plate. I've installed it in the beginning without any issues.
> 
> As you can see in the pic, it's work perfectly with the EKWB screws.
> 
> 
> 
> I removed the tension plate because EKWB didn't use it in their images. So I removed it and used the same screws to mount the waterblock.


No, the screws DON'T work fine! They don't even fit through the holes in the tension bracket thing... don't know how you can say they do when in actual fact they don't fit; they are larger in diameter than the holes in the tension bracket.


----------



## Reikoji

Two brands on Newegg now at MSRP, XFX and Sapphire, for both Vega 64 and Vega 56 black versions. The Sapphire Limited Edition air 64 is only $20 more.

Tempted to get the 2nd 64 right now....


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> I have ran DDU before instaling this Falls Update Beta Driver. Once the installation was finished, i have then rebooted the System and checked if the CrossFire Option was there. Suprisingly, it was.
> 
> I have then enabled the Option, but i noticed a strange behavior, which was my OS freezing, therefore i had to manually reboot the System. It has then turned on normally, and then the CrossFire Switch there said it was On. As you can see by the Picture.
> 
> 
> 
> 
> System Specs:
> 
> AMD Ryzen ThreadRipper 1950X @4.15Ghz
> Corsair H115i AIO
> ASUS X399 Zenith Extreme
> G.Skill RipJaws V 3200Mhz CL14 (4x8) 32GB - @3600Mhz CL15
> Samsung 850 EVO 250GB SSD
> VEGA Frontier Edition - Liquid
> VEGA Frontier Edition - Air
> Thermaltake ToughPower 1050W PSU
> 
> OS: Windows 10 Educational - Compilation (16299).
> 
> I am also about to do a proper Clean Install of Windows, as i haven't done prior switching from Ryzen 7 to TR. I have thought about installing the Windows Server 2016 Edition, as it seems to have less Bloatware running on the Background. But some people said it can be tricky to get Drivers working properly there.


Well, I am fairly certain I checked this many times, but I'm not sure I used that exact driver. +rep. I am going to go look and see if I have that driver on the machine now lol... I am gonna feel silly if it works, but not surprised if it doesn't for me. My kind of luck, I'll probably have to reload the system.


----------



## TrixX

Quote:


> Originally Posted by *Reikoji*
> 
> 2 Brands on newegg now at MSRP, XFX and Sapphire for both Vega 64 and Vega 56 black versions. Sapphire Limited edition air 64 for only $20 more.
> 
> Tempted to get the 2nd 64 right now....


Me too, tempted to get an LE 64...


----------



## Reikoji

Quote:


> Originally Posted by *TrixX*
> 
> Me too, tempted to get an LE 64...


The LC ones, however, are never going to be worth it @ $700... Maybe if they were $600.

Oh, the Sapphire LE card is actually on backorder, but there is the XFX LE card for $30 more, which is also wonderful.


----------



## Chaoz

Quote:


> Originally Posted by *lowdog*
> 
> No the screws DON'T work fine! They don't even fit through the holes in the tension bracket thing....don't know how you can say they do when they in actual fact don't fit, they are larger in diameter then the holes in the tension bracket.


They actually do fit perfectly fine, you just push them through a bit. How would I have managed to use the tension plate otherwise, with the EKWB screws as seen in the pic?

The original stock cooler screws are too narrow for the EKWB block, so please enlighten me as to how I managed to not dethread the screws and mounted the tension plate without any issues.

Please tell me, seeing as you're so sure about this when I actually tested it.


----------



## hyp36rmax

Quote:


> Originally Posted by *lowdog*
> 
> No the screws DON'T work fine! They don't even fit through the holes in the tension bracket thing....don't know how you can say they do when they in actual fact don't fit, they are larger in diameter then the holes in the tension bracket.


EK may have added the correct screws in later batches. They did the same thing for my GTX 1080 Ti FTW3 EK blocks: the original screws didn't fit the FTW3 factory plate either, but later revisions had no issues. I can actually test this today since I just got my Vega 64 EK block.


----------



## Chaoz

Quote:


> Originally Posted by *hyp36rmax*
> 
> EK may have added the correct screws in later batches. They did the same thing for my GTX 1080TI FTW3 EK blocks. The original screws didn't fit the FTW3 factory plate either, but later revisions had no issues. I can actually test this today since I just got my Vega 64 EK block.


I got one of the first batches of the EKWB block. I ordered mine a couple of weeks before August 14th (release date of the Vega EKWB block) and it got delivered on the 18th.

I seriously have no issues pushing the EKWB screws through the tension plate.

I would test this myself and prove him wrong, but I'm not home for the next couple of days.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Me too, tempted to get an LE 64...


Probably best to just get an AIB... when they finally arrive.

XFX has just put up images of one:

https://www.reddit.com/r/7aeoiw/first_look_at_xfx_vega_partner_board/

I'm tempted to get one. I really like Vega, but this blower inhibits performance too much for me unless you put it on hairdryer mode.


----------



## AmateurExpert

Quote:


> Originally Posted by *Chaoz*
> 
> The original EKWB screws work fine with the tension plate. I've installed it in the beginning without any issues.
> 
> As you can see in the pic, it's work perfectly with the EKWB screws.
> 
> 
> 
> I removed the tension plate because EKWB didn't use it in their images. So I removed it and used the same screws to mount the waterblock.


Hmm - just dug out my spider again. I took a spare EK screw (M2.5x4 AX1) and had to use some force to fit it through the hole. Fair enough - EK supplies M2.5x6 AX1 screws, which are the minimum length I'd try to make sure there's enough thread engagement (and there are a couple of M2.5x8 in the pack too) - the EK block doesn't protrude through the PCB like the original heatsink, IIRC. The tapered springs won't fit on the EK screws - you must have left those off, as they're not that important.


----------



## AmateurExpert

Quote:


> Originally Posted by *Chaoz*
> 
> I have the same one and it only has 1 green sticker on the backplate.


The same here (Sapphire ref air V64), and I was able to peel it off and restick it on the appropriately marked area for it on the PCB - hey presto, no untidy sticker on the backplate!


----------



## Reikoji

Quote:


> Originally Posted by *SpecChum*
> 
> Probably best to just get an AIB...when they finally arrive.
> 
> XFX has just put images of one on
> 
> https://www.reddit.com/r/7aeoiw/first_look_at_xfx_vega_partner_board/
> 
> I'm tempted to get one. I really like Vega but this blower inhibits performance too much for me unless you put it on hairdryer


IMO, unless it's a waterblocked card, or an LC AIB card with a radiator bigger than 120mm, I wouldn't even bother. I'd just rip the air cooler off for a waterblock anyway.

So far it seems they aren't smart enough to waterblock/LC them, or they're just trolling the AMD community /shrug.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> IMO unless its a Waterblocked or LC with greater than a 120mm radiator AIB card then i wouldnt even bother. I'd just rip the air crap off for a Waterblock anyway.
> 
> So far ive seen that they aren't smart enough to water block / LC them or they are just trolling the AMD community /shrug.


My bad, forgot where I was for a second there lol

Yeah, good shout.


----------



## Chaoz

Quote:


> Originally Posted by *AmateurExpert*
> 
> Hmm - just dug out my spider again - I took a spare EK screw (M2.5x4 AX1) and had to use some force to fit fit through the hole. Fair enough - EK supplies M2.5x6 AX1 screws which are the minimum length I'd try to make sure there's enough thread engagement (and there are a couple of M2.5x8 in the pack too) - the EK block doesn't protrude through the PCB like the original heatsink IIRC. The tapered springs won't fit on the EK screws - you must have left these off as they're not that important.


I even managed to use the springs as well. Those were a bit harder to push over the EKWB screws, but not impossible.

That's why I removed the tension plate, as it didn't seem necessary for the EKWB block.
Quote:


> Originally Posted by *AmateurExpert*
> 
> The same here (Sapphire ref air V64), and I was able to peel it off and restick it on the appropriately marked area for it on the PCB - hey presto, no untidy sticker on the backplate!


I didn't need to peel off the sticker either. The sticker sits a bit under my 4 DIMMs, so it isn't that visible. It didn't bother me that much, so I didn't mind leaving it alone.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Haha, hiya mate, thought I'd join the Vega crew!


Sweet!
Quote:


> Originally Posted by *SpecChum*
> 
> T-shirts didn't sell too well lol


Yep, we need an updated VEGA edition!
Quote:


> Originally Posted by *SpecChum*
> 
> Need one for Vega, 1.099 under full load! He's not here is he? lol


LOL! Blue should be about; not really seen him post in a while. Might PM him to see what he's up to, he's defo got VEGA.


----------



## Reikoji

Quote:


> Originally Posted by *gupsterg*
> 
> Sweet!
> 
> Yep, we need an updated VEGA edition!
> 
> LOL! Blue should be about, not really seen him post in a while. Might PM him to see what he's up to, he's defo got VEGA.


Unfortunately we have a new low voltage under load from TrixX :3


----------



## gupsterg

Quote:


> Originally Posted by *Reikoji*
> 
> Unfortunantly we have a new low voltage under load from TrixX :3


True that! I think it's time.


----------



## SpecChum

Cheers!


----------



## Reikoji

Nom nom nom


----------



## gupsterg

We just need @dorbot to get VEGA and we'll have the C6H crew back here.

Well, back on topic.

So I've been using WattMan. The only PP mod I have done is to allow the slider to go to 100%. I use 65%, but 40% seems right for the clocks.

I initially settled on:

P6: 1557MHz 1000mV
P7: 1652MHz 1112mV
HBM: 1100MHz 1000mV

Today I have snagged P6/HBM down to 975mV. Passed a 1hr loop of 3DM FS Demo, ~45min Valley, ~40min Heaven, ~1hr SWBF and ~30min LOTF plus 30min RB. Will probably throw some F@H and BOINC at the profile soon. Then try some more games and see if I can tweak the mV lower. I get ~1600MHz in 3D loads, ~1700MHz in compute.
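As a rough illustration of why shaving P6/P7 mV pays off: to first order, dynamic power scales with frequency times voltage squared. A quick sketch in Python using my P6 voltages (the 220W baseline is an assumed number for illustration, not a measurement):

```python
# Rough, illustrative estimate of how an undervolt affects GPU dynamic power.
# Dynamic power scales roughly with frequency * voltage^2. The 220 W baseline
# below is an assumed figure for illustration only.

def scaled_power(base_watts, f_old, v_old, f_new, v_new):
    """Scale a baseline board power by the f * V^2 ratio (first-order model)."""
    return base_watts * (f_new / f_old) * (v_new / v_old) ** 2


# P6 at 1557 MHz: 1000 mV profile vs. the 975 mV undervolt
baseline = 220.0  # W, assumed
undervolted = scaled_power(baseline, 1557, 1.000, 1557, 0.975)
print(f"{undervolted:.1f} W")  # ~209 W, roughly a 5% saving from -25 mV
```

Leakage and clock stretching mean the real-world saving differs, but it gives a feel for why even 25mV matters.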


----------



## AmateurExpert

Quote:


> Originally Posted by *Chaoz*
> 
> I didn't need to peel off the sticker either. The sticker is a bit under my 4 dimms, so it isn't that visible. It didn't bother me that much, so I didn't mind to leave it alone.


The sticker comment was really for the likes of 0451, who prefer not to have them visible. I used a spudger and found it peeled off intact fairly easily.

Anyhow - having stuck with the original reference backplate (yes, it's metal) for my waterblock conversion, I don't see the point in shelling out for the EK backplate at all, especially if you can't get at the switches for the LEDs.

Currently looking at how high I can get the HBM clocks on the 17.10.3 drivers - previously I'd stuck to 1095MHz, as I didn't see any improvement or stability beyond that.


----------



## VicsPC

Hey guys, new here. Would love to join. Here's my Vega 64, bought on release day, with an EKWB block added. I used the screws furnished with the EKWB block and used the factory backplate; had zero issues with the screws. The short ones are a bit short, but they had no issues securing the backplate to the PCB.


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> We just need @dorbot to get VEGA and we'll have the C6H crew back here.
> 
> Well, back on topic.
> 
> So I've been using WattMan. Only PP mod I have done is to allow slider to 100%. I use 65% but 40% seems right for clocks.
> 
> I initially settled for:
> 
> P6: 1557MHz 1000mV
> P7: 1652MHz 1112mV
> HBM: 1100MHz 1000mV
> 
> Today have snagged P6/HBM down to 975mV. Passed 1hr loop 3DM FS Demo, ~45min Valley, ~40min Heaven, ~1hr SWBF and ~30min LOTF plus 30min RB. Will probably throw some F@H and BOINC at profile soon. Then try some more games and see if can tweak lower mV. I get ~1600MHz 3D loads, ~1700MHz Compute.


I'm still playing, doing loud benchmark runs at the minute, but 1000mV at 1702 P7 seems nice and gets me 1548MHz constant clocks with a 2400RPM max fan, not too bad.


----------



## hyp36rmax

Quote:


> Originally Posted by *VicsPC*
> 
> Hey guys new here. Would love to join. Heres my Vega 64 bought on release day and added an ekwb on it. I used the screws furnished with the EKWB and used the factory backplate. had zero issues with the screws, the shorts ones are a bit short but they had no issues securing the backplate to the PCB.


Copper love!


----------



## Chaoz

Quote:


> Originally Posted by *AmateurExpert*
> 
> The sticker comment was really for the likes of 0451 who prefer not to have them visible. I used a spudger and found it peeled off intact fairly easily.
> 
> Anyhow - having stuck with the original standard reference backplate (yes, it's metal) for my waterblock conversion, I don't see the point in shelling out for the EK backplate at all, especially if you can't get at the switches for the LEDs.
> 
> Currently looking at how high I can get the HBM clocks on the 17.10.3 drivers - previously I'd stuck to 1095MHz as I didn't see any improvement or stability beyond that.


Yeah, didn't want to pay that much for a generic EKWB backplate when the original looks great with my acetal nickel block.

Got my HBM at 1050 with 950mV, which is fine for me. I'm only at 75Hz with FreeSync, which my 64 doesn't even strain much to drive; usage is usually around 70-80%. Got my P6 at 1580 and P7 at 1650 with 1V and +50% power limit; temps are amazing and never go over 36°C.

These are the clocks I started from and gradually upped them a bit until I could get a stable undervolt.

Performance still amazes me; even in BF1 I can get 150fps without FreeSync, which my GTX 1070 could never reach.


----------



## VicsPC

Quote:


> Originally Posted by *hyp36rmax*
> 
> Copper love!


Absolutely, wouldn't have it any other way. It's just so beautiful, especially when mounted vertically. My fans blow directly over the card (fresh intake air) and my temps are as follows.

This is with around 25°C ambient or so. Now that it's around 21°C it's a bit cooler.
Core: 39°C
HBM: 43°C
Hotspot: ~58°C

The card has been FANTASTIC and I've had zero issues with it. The drivers are still quite buggy, especially in some games, and Radeon resets my FPS cap at random times. Otherwise it runs everything I throw at it. Recently got a fiber connection as well, so the PC is fully decked out now. Went from 12/1 to 300/100-ish.


----------



## betaflame

Quote:


> Originally Posted by *betaflame*
> 
> 1) Physically install Vega 56
> 2) Install 17.10.3
> 3) Use wattman, it's terrible, reset to balanced
> 4) Use watttool, it's terrible, reset
> 5) Use Overdriventool, works fine, use that to mess with voltages and settings
> 6) GPU-Z shows 200W at +0% PL
> 7) GPU-Z shows 300W at +50% PL
> 8) Reset
> 9) Flash 64 bios
> 10) Go into Safe mode, DDU
> 11) Install 17.10.3
> 12) Use Overdriventool to mess with voltages and settings
> 13) Verify all the default clocks changed to Vega 64's clocks. Can't see HBM voltage.
> 14) HBM artifacts like mad if 985 is exceeded
> 15) GPU-Z shows 200W at +0% PL
> 16) GPU-Z shows 300W at +50% PL
> ???
> 17) Apply the modded powerplay tables.
> 18) GPU-Z shows 200W at +0% PL
> 19) GPU-Z shows 300W at +50% PL
> 20) GPU-Z shows 300W at +60% PL
> 21) GPU-Z shows 300W at +70% PL
> 22) GPU-Z shows 300W at +80% PL
> 
> Did I skip a step or something? I've done bios modding before, I feel like I'm taking crazy pills


Quote:


> Originally Posted by *TrixX*
> 
> Looks like you might have had a bad flash. Try re-flashing with the 8730 BIOS for a V64 Air card and see whether it was a bad flash.
> 
> Though I have seen a few cards that can't cope with HBM clocks above 950MHz, probably why there is a 945MHz starting level for the HBM.


Nope. Still locked at 300W in Superposition. Furmark gets it to 330W, but Furmark is known for being weird.

Edit: Tried both W7 and W10. 220W normally, 300W at +50% PL.

Edit 2: Going to try the powerplay mod again.

Edit 3: It must have been that BIOS, because it's working with the powerplay mod now. Thanks!
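For anyone following along, the GPU-Z readings in the quoted steps are consistent with the power-limit slider simply scaling a base TDP by a percentage. A minimal sketch of that mapping (the 200W base is inferred from the +0% reading above; before the working powerplay mod the card just clamped at 300W regardless of the slider):

```python
# The power-limit slider applies a percentage on top of the card's base TDP.
# With the readings quoted above (200 W at +0%, 300 W at +50%), the mapping
# is a simple linear scale. BASE_TDP_W is inferred, not read from the BIOS.

BASE_TDP_W = 200  # watts, inferred from the +0% GPU-Z reading


def power_cap(pl_percent):
    """Board power cap in watts for a given +PL% slider setting."""
    return BASE_TDP_W * (1 + pl_percent / 100)


for pl in (0, 50, 60, 70, 80):
    print(f"+{pl}% -> {power_cap(pl):.0f} W")
# Pre-mod the card clamped at 300 W, i.e. anything past +50% had no effect;
# post-mod the higher settings actually raise the cap (e.g. +80% -> 360 W).
```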


----------



## lowdog

Quote:


> Originally Posted by *Chaoz*
> 
> They actually do fit perfectly fine, you just push a bit through. How did I manage to use the tension plate otherwise, with the EKWB screws as seen in the pic?
> 
> The original stock cooler screws are too narrow for the EKWB block, so please enlighten me how I managed to actually not dethread the screws and managed to mount the tension plate without any issues.
> 
> Please tell me. Seeing as you're so sure about this when I actually tested it.


Because I tried to fit the EK screws through the holes in the tension plate, and the diameter of the screws was larger than the holes, so they would not go through, and I certainly wasn't going to try and force them through. Not rocket science.

Furthermore, if force is needed in order to push a screw through a hole, then it can't be said they fit perfectly, can it?


----------



## VicsPC

Quote:


> Originally Posted by *lowdog*
> 
> Because I tried to fit the EK screws through the holes in the tension plate and the diameter of the screws was larger than the holes in the tension plate so they would not go through and I certainly wasn't going to try and force them through = not rocket science.


You shouldn't need the tension plate for the EK block. It's a full-cover block so there's no need; the tension comes from mounting the block onto the PCB. They're totally different. I didn't use it and my temps are more than fine. Playing some Siege with 22°C ambient, here are my temps.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> I have ran DDU before instaling this Falls Update Beta Driver. Once the installation was finished, i have then rebooted the System and checked if the CrossFire Option was there. Suprisingly, it was.
> 
> I have then enabled the Option, but i noticed a strange behavior, which was my OS freezing, therefore i had to manually reboot the System. It has then turned on normally, and then the CrossFire Switch there said it was On. As you can see by the Picture.
> 
> 
> 
> 
> System Specs:
> 
> AMD Ryzen ThreadRipper 1950X @4.15Ghz
> Corsair H115i AIO
> ASUS X399 Zenith Extreme
> G.Skill RipJaws V 3200Mhz CL14 (4x8) 32GB - @3600Mhz CL15
> Samsung 850 EVO 250GB SSD
> VEGA Frontier Edition - Liquid
> VEGA Frontier Edition - Air
> Thermaltake ToughPower 1050W PSU
> 
> OS: Windows 10 Educational - Compilation (16299).
> 
> I am also about to do a proper Clean Install of Windows, as i haven't done prior switching from Ryzen 7 to TR. I have thought about installing the Windows Server 2016 Edition, as it seems to have less Bloatware running on the Background. But some people said it can be tricky to get Drivers working properly there.


Alright, so this worked!! I followed the link on Twitter to that driver. Apparently I had every other driver but this one; it doesn't help that some of the version numbers overlap between RX and Pro...
I still don't see WattMan, but I'll use other software for now.


----------



## Chaoz

Quote:


> Originally Posted by *lowdog*
> 
> Because I tried to fit the EK screws through the holes in the tension plate and the diameter of the screws was larger than the holes in the tension plate so they would not go through and I certainly wasn't going to try and force them through = not rocket science.
> 
> Furthermore if force is need in order to push the screw through the hole then it can't be said they fit perfectly can it.


It's not like I used a hammer to get them in. The only one that was a bit difficult was the one with the sticker over the hole (because, well, there was a sticker over the hole); the rest went in fairly easily.

If they hadn't gone in that well, I wouldn't have forced it anyway. But seeing as they did manage to go in and come back out with ease, I thought I'd give it a try.

I didn't destroy the threads, as I removed the screws and used them on my block. Plus I got like 5 extra screws, so even if it had dethreaded the screws, I couldn't care less as I have spares anyway.

So yeah, back to you. We can argue about this all day; you won't convince me it doesn't work. It did for me. Period.

Also, how would I have been able to take a picture with the tension plate on with the EKWB screws if the screws didn't fit?

I would show you, but I'm not home and won't be for a little while.


----------



## SpecChum

Yay, broke my 18k duck!

https://www.3dmark.com/3dm/23048764?


----------



## VicsPC

Quote:


> Originally Posted by *Chaoz*
> 
> It's not like I used a hammer to get them to go in, the only one that was a bit difficult was the one with the sticker over the hole, because well, there was a sticker over the hole, the rest went in fairly easy.
> 
> If they didn't go in that well I wouldn't have forced it anyways. But seeing as they did manage to go in and come back out with ease I thought I'd give it a try.
> 
> I didn't destroy the thread as I removed the screws and used them on my block. Plus I got like 5 extra screws, so even if it did de-thread the screws I would care less as I have spares anyways.
> 
> So yeah back to you. We can argue about this all day, you won't convince me it doesn't work. It did for me. Period.
> 
> Also, how would I be able to take a picture with the tension plate on with the EKWB screws if the screws didn't fit?
> 
> I would show you but I'm not home and I won't be for a little while.


I used mine as well and had no issues with the EK screws and factory backplate. They are a bit short and maybe only go in a full turn, but since they're only there to hold the backplate, it's not an issue at all.


----------



## Chaoz

Quote:


> Originally Posted by *SpecChum*
> 
> Yay, broke my 18k duck
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.3dmark.com/3dm/23048764?


Not bad, this is mine with stock settings.

https://www.3dmark.com/fs/13388664

Not sure how I scored higher overall, seeing as your GFX and Physics scores are a lot higher than mine.

This was another test I ran with a mild OC:
https://www.3dmark.com/fs/13393335

Not sure what happened and why it shows that error but it completed the benchmark perfectly.
Quote:


> Originally Posted by *VicsPC*
> 
> I used mine as well and had no issues with the EK screws and factory backplate. They are a bit short and maybe only go in a full turn, but since they're only there to hold the backplate, it's not an issue at all.


Finally someone else who also got it to work fine.


----------



## VicsPC

Quote:


> Originally Posted by *Chaoz*
> 
> Not bad, this is mine with stock settings.
> 
> https://www.3dmark.com/fs/13388664
> 
> Not sure how I scored higher overall, seeing as your GFX and Physics scores are a lot higher than mine.
> 
> This was another test I ran with a mild OC:
> https://www.3dmark.com/fs/13393335
> 
> Not sure what happened and why it shows that error but it completed the benchmark perfectly.
> Finally someone else who also got it to work fine.


Well, the screws are going into the EK block, so the thread shouldn't be an issue. All they're doing is holding the backplate onto the block, so no problem. As for using the tension plate: if EK doesn't say to reuse it, then it shouldn't be used. I found it a bit weird that they were short, but it worked no problem and I've had it since day one.


----------



## Chaoz

Quote:


> Originally Posted by *VicsPC*
> 
> Well, the screws are going into the EK block, so the thread shouldn't be an issue. All they're doing is holding the backplate onto the block, so no problem. As for using the tension plate: if EK doesn't say to reuse it, then it shouldn't be used. I found it a bit weird that they were short, but it worked no problem and I've had it since day one.


I removed mine shortly after I took the pics, because nobody else seemed to have the tension plate on theirs with the EK waterblock, so I figured it wasn't necessary.


----------



## TrixX

Quote:


> Originally Posted by *Chaoz*
> 
> I removed mine shortly after I took the pics, because nobody seemed to have the tension plate on theirs either with the EK waterblock, so figured it wasn't necessary.


I think I may put the tension plate on for my Aquacomputer block. Hotspot's hitting 68C under load and it's annoying me, as the core is 48C and mem is 54C. I think I may have messed up the Kryonaut application, though; will find out in a few days when the CPU block arrives and I expand the loop.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> Hello dagget!
> 
> I would like to report to you that i have successfully managed to enable Crossfire on both of my Cards.
> 
> What is interesting is that i got really weird Graphics Score, considering both of them are Overclocked - 5%, as you can see here: https://www.3dmark.com/fs/14018046 Could it be because they are different variants? One is the LC Edition and the other is the Air Edition.
> 
> Also, i would like to tell you that the Radeon Pro Crimson - Windows Falls Update Beta, the one that came before the Q4 Enterprise Driver -
> 
> https://twitter.com/i/web/status/921381001338560513


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> I think I may put the tension plate on for my Aquacomputer block. Hotspot's hitting 68C under load and it's annoying me as the core is 48C and Mem 54C. I think I may have messed up the Kryonaut application though, will find out in a few days when the CPU block arrives and I expand the loop


I used the stock tension plate with my EK-FC. Core and mem are 40/42 and hotspot is 60 after a couple hours of gaming.


----------



## Chaoz

Quote:


> Originally Posted by *TrixX*
> 
> I think I may put the tension plate on for my Aquacomputer block. Hotspot's hitting 68C under load and it's annoying me as the core is 48C and Mem 54C. I think I may have messed up the Kryonaut application though, will find out in a few days when the CPU block arrives and I expand the loop


That's quite high, tbh. My core and HBM stay around 36-40°C. Not sure about the hotspot; I need to keep an eye on it but keep forgetting to monitor it.

I find the best application is the triple X method. An X on every die and that's it. Also used Kryonaut.

I have a 360 and 480 rad so maybe that's why my temps are lower, dunno, tbh.


----------



## SPLWF

Question: what's causing me to not hit my stock clocks?

Settings are:

mV: 1025 on both
Clocks are stock at: 1537 low and 1590 high
Temps hover around 75c
memory clock: 950mhz
PL: 50%

Hitting peak clocks of: 1498/1500/1510 ranges.


----------



## TrixX

Quote:


> Originally Posted by *Chaoz*
> 
> That's quite high, tbh. My core and HBM stay around 36-40°C. Not sure about the hotspot; I need to keep an eye on it but keep forgetting to monitor it.
> 
> I find the best application is the triple X method. An X on every die and that's it. Also used Kryonaut.
> 
> I have a 360 and 480 rad so maybe that's why my temps are lower, dunno, tbh.


Yeah I figured it was running a bit hot, but at the same time ambient is 28C so not too bad









Also my fans are slaved to my CPU currently so need to adjust how they are working. Though CPU is air cooled so they do spin up a fair bit...


----------



## dagget3450

Quote:


> Originally Posted by *VicsPC*
> 
> Hey guys new here. Would love to join. Heres my Vega 64 bought on release day and added an ekwb on it. I used the screws furnished with the EKWB and used the factory backplate. had zero issues with the screws, the shorts ones are a bit short but they had no issues securing the backplate to the PCB.
> 
> 
> Spoiler: Warning: Spoiler!


Added you to club!
Quote:


> Originally Posted by *hyp36rmax*
> 
> It's official! Added another to the family. Picking up another one soon. I also have GTX 1080Ti's in SLI in another system. Can't wait to put them up against one another.


Added also!


----------



## VicsPC

Quote:


> Originally Posted by *TrixX*
> 
> I think I may put the tension plate on for my Aquacomputer block. Hotspot's hitting 68C under load and it's annoying me as the core is 48C and Mem 54C. I think I may have messed up the Kryonaut application though, will find out in a few days when the CPU block arrives and I expand the loop


Quote:


> Originally Posted by *Chaoz*
> 
> That's quite high, tbh. My core and HBM stay around 36-40°C. Not sure about the hotspot; I need to keep an eye on it but keep forgetting to monitor it.
> 
> I find the best application is the triple X method. An X on every die and that's it. Also used Kryonaut.
> 
> I have a 360 and 480 rad so maybe that's why my temps are lower, dunno, tbh.


I used Kryonaut as well, but my case is a Core X5 so my fans sit right above the card (I'm using the top fan slots as intakes, so cool air comes right in). Even at 28°C ambient, though, I didn't hit 48°C on the core. I'm running a 360 in push/pull and a 240 in push with one fan in pull (I need another one for push/pull; it's just one I had lying around).

I'm guessing that having a couple of fans above my card is what's keeping the hotspot much cooler than most people here, even though I'm on water.

I'm on stock clocks and stock voltages as well. Consuming around 200W core power and 22W memory power, making chip power 222W. Was around 250ish or so on air. I have everything on stock and haven't messed with undervolts and overclocks yet.


----------



## TrixX

Quote:


> Originally Posted by *SPLWF*
> 
> Question: what's causing me to not hit my stock clocks?
> 
> Settings are:
> 
> mV: 1025 on both
> Clocks are stock at: 1537 low and 1590 high
> Temps hover around 75c
> memory clock: 950mhz
> 
> Hitting peak clocks of: 1498/1500/1510 ranges.


Clocks are ceilings, not fixed values. Think of it this way: only if temperature, power limit and actual power draw all line up perfectly will the card hit those clocks.

In your case I'd guess thermals and power are both limiting factors; however, I don't have a full picture of your settings to go on. What power target is set, what max fan speeds, etc.?

For my Vega 64 on air, with 1050mV on P7, the card locked to P7 for testing (using WattMan, OverdriveNTool or ClockBlocker), a 70C max and a 65C target, I'd get temps up to 62C with 4956 points in SuperPosition (1080p Extreme) and an actual clock of ~1580MHz during the test. HBM was set to 1050MHz and 950mV, and power was set to +100% using a PowerPlay Tables registry entry.

I actually should do a bunch of undervolting runs with water to see max clocks per mv for my card.
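Once I've done those runs, something like this is how I'd tabulate them afterwards; the `best_efficiency` helper and every (mV, MHz) pair below are made-up placeholders for illustration, not measurements from my card:

```python
# Hypothetical sketch: given manually recorded (voltage, achieved clock)
# pairs from undervolting runs, pick the most efficient point in MHz/mV.

def best_efficiency(samples):
    """Return the (mV, MHz) pair with the highest MHz-per-mV ratio."""
    return max(samples, key=lambda s: s[1] / s[0])

if __name__ == "__main__":
    runs = [  # (P7 voltage in mV, sustained clock in MHz), placeholder data
        (1200, 1630),
        (1100, 1610),
        (1050, 1580),
        (1000, 1500),
    ]
    mv, mhz = best_efficiency(runs)
    print(f"Most efficient point: {mhz} MHz at {mv} mV")
```

You'd feed it whatever clocks your own card actually sustains at each voltage step.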

Quote:


> Originally Posted by *VicsPC*
> 
> I used Kryonaut as well, but my case is a Core X5 so my fans sit right above the card (I'm using the top fan slots as intakes, so cool air comes right in). Even at 28°C ambient, though, I didn't hit 48°C on the core. I'm running a 360 in push/pull and a 240 in push with one fan in pull (I need another one for push/pull; it's just one I had lying around).
> 
> I'm guessing that having a couple of fans above my card is what's keeping the hotspot much cooler than most people here, even though I'm on water.
> 
> I'm on stock clocks and stock voltages as well. Consuming around 200W core power and 22W memory power, making chip power 222W. Was around 250ish or so on air. I have everything on stock and haven't messed with undervolts and overclocks yet.


Yeah I'm going to mount a fan over the back of the card to see if that alleviates the hotspot issue. Running an EK XS360 with 3 fans in Push at the moment. If I have to move to Push Pull the rad is gonna be hanging out the back of the case


----------



## Soggysilicon

Quote:


> Originally Posted by *gupsterg*
> 
> Yesterday I went to HBM 1150MHz. What I noted was I got repeatable small gains in SuperPosition 4K preset, 3DM FS and large memory copy gains in AIDA64 GPGPU, but I lost FPS/points in 3DM TS
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Example of SP 4K left 1150MHz, right 1100MHz.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> Example of 3DM FS left 1150MHz, right 1100MHz.
> 
> AIDA64 GPGPU
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Example of 3DM TS 1150MHz vs 1100MHz.
> 
> I noted no artifacting, etc.
> 
> I had also run Folding@home on the GPU for ~3hrs at HBM 1150MHz without errors.
> 
> I also tried the SOC clock mod of 1200MHz, even though it's not needed on v17.10.3 drivers, and TimeSpy with HBM 1150MHz still tanks. I can only assume the higher-speed HBM is having errors and those errors create performance loss in that test.
> 
> Has anyone also noted this issue?


HBM timing and Vega's core frequency seem to share an interdependence. I suspect there are timings within certain frequency ranges that suffer a latency penalty (wait states or a re-fetch request), which in turn could impact an overall score. This would be even more pronounced in the so-called RPM scenario, where there is a decode ASIC in the core and the fetch packet represents 2 bits per on/off state as opposed to one (or more), dependent on the coding sequence.

I have seen this play out (or, more accurately, I suppose it's what's happening): increasing core frequency lowers a bench/FPS, and increasing HBM lowers scores (or leaves them more or less the same) while holding core frequency constant, and vice versa while holding HBM constant; supposing then that there exist combinations which yield better scores overall. I further suspect this will remain a trend in future video cards from both AMD and Nvidia as they move to multi-chip dies and fabric-style architectures to get better yields / lower cost.
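If you wanted to hunt for those combinations systematically, a small sweep harness is all it takes. In this sketch, `run_benchmark` is a stand-in for whatever benchmark you'd actually launch; the stub scoring function just fakes the non-monotonic behaviour described above and is not real data:

```python
# Sketch of the "sweep combinations" idea: score each (core, HBM) pair
# and keep the best one.
from itertools import product

def sweep(core_clocks, hbm_clocks, run_benchmark):
    """Score every (core, HBM) combination and return the best plus all scores."""
    scores = {(c, h): run_benchmark(c, h)
              for c, h in product(core_clocks, hbm_clocks)}
    best = max(scores, key=scores.get)
    return best, scores

def fake_benchmark(core, hbm):
    # Toy model: score rises with clocks, but HBM past 1100 MHz loses
    # points, mimicking the suspected error/re-fetch penalty.
    penalty = 300 if hbm > 1100 else 0
    return core * 3 + hbm * 2 - penalty

if __name__ == "__main__":
    best, _ = sweep([1550, 1600, 1650], [1000, 1100, 1150], fake_benchmark)
    print("Best combo (core MHz, HBM MHz):", best)
```

With a real benchmark plugged in, the sweep would surface exactly the kind of "best overall" combinations I'm describing.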
Quote:


> Originally Posted by *Chaoz*
> 
> I removed mine shortly after I took the pics, because nobody seemed to have the tension plate on theirs either with the EK waterblock, so figured it wasn't necessary.


You made the right play here. The FC block doesn't really need a linear spring setup to ensure proper pressure on the die. TIM application is a little more important, though, but if I recall, your setup seemed legit; let the haters hate.








Quote:


> Originally Posted by *SPLWF*
> 
> Question: what's causing me to not hit my stock clocks?
> 
> Settings are:
> 
> mV: 1025 on both
> Clocks are stock at: 1537 low and 1590 high
> Temps hover around 75c
> memory clock: 950mhz
> 
> Hitting peak clocks of: 1498/1500/1510 ranges.


Bump that power limit; Vega won't clock up if the power isn't there to sustain it. Hence the never-ending quest to tune your particular card.
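For intuition, the power-limit slider just scales the card's base GPU power budget. A quick sketch; the 220W base figure used for an air Vega 64 here is an assumption for the example, not a spec I'm quoting:

```python
# Rough illustration of what the power-limit (PL) slider does:
# it scales the card's base GPU power budget by the chosen percentage.

def power_budget(base_watts, power_limit_pct):
    """Effective GPU power budget after applying the PL slider."""
    return base_watts * (1 + power_limit_pct / 100)

if __name__ == "__main__":
    print(power_budget(220, 50))   # +50% slider
    print(power_budget(220, -25))  # -25% slider
```

So at +50% the card is allowed to draw half again its base budget, which is why clocks that throttle at stock can suddenly be sustained.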


----------



## SPLWF

Quote:


> Originally Posted by *TrixX*
> 
> Clocks are ceilings, not fixed values. Think of it this way: only if temperature, power limit and actual power draw all line up perfectly will the card hit those clocks.
> 
> In your case I'd guess thermals and power are both limiting factors however I don't have a full picture of your settings to go on. What power target is set, what max fan speeds etc...
> 
> For my Vega64 on air with 1050mv on P7 and locking it to P7 for testing (using Wattman, OverdriveNTool or ClockBlocker) and a max of 70C and target of 65C I'd get temps up to 62C with 4956 points in SuperPosition (1080p Extreme) and an actual clock of ~1580MHz during the test. HBM was set to 1050MHz and 950mv and power set to +100% using a PowerPlay Tables registry entry.
> 
> I actually should do a bunch of undervolting runs with water to see max clocks per mv for my card.
> Yeah I'm going to mount a fan over the back of the card to see if that alleviates the hotspot issue. Running an EK XS360 with 3 fans in Push at the moment. If I have to move to Push Pull the rad is gonna be hanging out the back of the case


I'm using WattMan; fan speed is set to 2800rpm, aka 50%.

PL is set to max at 50%

HBM is set to stock mV


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> You made the right play here. FC block doesn't really need a linear spring setup to insure proper pressure on the die. TIM'ing is a little more important though, but if I recall your setup seemed legit let the haters hate.


Thanks, mate. Appreciated







.
My setup still amazes me sometimes. The temps are so low, imho.

Plus it's still quite a nice space heater. I don't even have the heater on in my room, and the temps outside drop to 10°C this time of the year. So I'm happy. Even my idle is damn low; my 64 idles at 23ish°C when the weather is colder.

Although idle temps don't mean anything.

Quote:


> Originally Posted by *VicsPC*
> 
> I used Kryonaut as well, but my case is a Core X5 so my fans sit right above the card (I'm using the top fan slots as intakes, so cool air comes right in). Even at 28°C ambient, though, I didn't hit 48°C on the core. I'm running a 360 in push/pull and a 240 in push with one fan in pull (I need another one for push/pull; it's just one I had lying around).
> 
> I'm guessing that having a couple of fans above my card is what's keeping the hotspot much cooler than most people here, even though I'm on water.
> 
> I'm on stock clocks and stock voltages as well. Consuming around 200W core power and 22W memory power, making chip power 222W. Was around 250ish or so on air. I have everything on stock and haven't messed with undervolts and overclocks yet.


I have my 480 on the bottom of my case with 4 fans in push and the 360 at the top with 3 fans in pull; temps are quite good, especially with my UV. CPU never goes over 50°C and GPU barely goes over 36°C. Got my 7 Noctua Industrials spinning at a static ±900rpm, which makes 'em quite silent. Also got 2 D5 pumps in series running at a static 1900rpm.

When I'm back at home, I'll do a test run to see how high my Hotspot goes.


----------



## geriatricpollywog

My power limit is 150% and 500 amps. Hotspot temp below 60. And actual core speed is always about 30-40 mhz below target, whether I set it to 1600 or 1780.


----------



## TrixX

Quote:


> Originally Posted by *SPLWF*
> 
> I'm using WattMan; fan speed is set to 2800rpm, aka 50%.
> 
> PL is set to max at 50%
> 
> HBM is set to stock mV


First up, your fan is limiting cooling. When testing, bump it to 4900RPM; during normal operation you can drop it back as required, and since it's a PWM fan it will spin down as needed.

Grab one of the PowerPlay Tables (the one designed for your card) from the Vega BIOS thread. It'll allow up to 142% power. It just removes a restriction during testing.

Also, I'd recommend using OverdriveNTool as it's a little more reliable at applying settings for power etc...

In WattMan, if you still use it, you can set the minimum power state by right-clicking on the power state number at the top of the sliders; useful for locking to P7. The same can be done in OverdriveNTool by left-clicking the power state number to the left of the MHz box.


----------



## JasonMZW20

Okay, as I'm now playing Wolfenstein 2, Vega64 has redeemed itself; I'm happy to move away from OpenGL games (beat The Old Blood finally). It's buttery smooth like Doom and FPS is around 140-200fps with everything maxed at 1080p (using publicbeta build with Async Compute). Also using Enhanced Sync, which is pretty decent actually. Compute usage does show up in Win10 GPU profiler in task manager, but only on Compute 0.

I was running my HBM at 1100MHz (1005mV), but HBM temperature is a limiting factor for me on air. If it exceeds 80C, it starts artifacting heavily; I did bump my fan profile up, but it's rather noisy, so I may just run at stock clocks; they do have better timings, IIRC. Water cooling is the best option, and I'll consider getting a block soon. I did some GPU-Z logging while playing, and Vega didn't do too badly.

Average GPU only power draw was around 185w (max at 207w) with clocks averaging 1565MHz and VDDC ranging from 1.025v-1.0313v pretty steadily. Memory usage was a steady 5GB as well. Should note that this particular level I'm on wasn't very demanding (still on the airship at the beginning).

After exiting Wolf 2, max clocks and voltages get stuck at the WattMan-specified values, at least. Need to restart.
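For anyone wanting to crunch their own GPU-Z logs the same way, here's a minimal sketch. The column name and the sample rows are assumptions standing in for a real sensor log, so adjust them to match your file:

```python
# Sketch: summarise a GPU-Z sensor log (CSV-style text).
# The header "GPU Chip Power Draw [W]" and the rows below are synthetic
# placeholders, not values from an actual log.
import csv
import io

SAMPLE_LOG = """Date,GPU Chip Power Draw [W],GPU Clock [MHz]
2017-11-02 20:00:01,180.0,1560
2017-11-02 20:00:02,190.0,1570
2017-11-02 20:00:03,185.0,1565
"""

def summarise(log_text, column):
    """Return (average, peak) of one numeric column in the log."""
    values = [float(row[column])
              for row in csv.DictReader(io.StringIO(log_text))]
    return sum(values) / len(values), max(values)

if __name__ == "__main__":
    avg, peak = summarise(SAMPLE_LOG, "GPU Chip Power Draw [W]")
    print(f"avg {avg:.1f} W, peak {peak:.1f} W")
```

In practice you'd read the real log file with `open(...)` instead of the inline sample, and watch for stray whitespace in GPU-Z's column headers.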


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> My power limit is 150% and 500 amps. Hotspot temp below 60. And actual core speed is always about 30-40 mhz below target, whether I set it to 1600 or 1780.


Sounds about right. It will rarely, if ever, hit the set speed; hence upping the core can massively improve performance without any real extra cost power-wise. Testing the MHz available using undervolting helps to see the max for a card. If anything, you'll see the set speeds only when the GPU is at 0% load. On air it would run almost 100MHz below the set MHz under 100% load, even with heat and power not being factors.


----------



## TrixX

Quote:


> Originally Posted by *JasonMZW20*
> 
> Okay, as I'm now playing Wolfenstein 2, Vega64 has redeemed itself; I'm happy to move away from OpenGL games (beat The Old Blood finally). It's buttery smooth like Doom and FPS is around 140-200fps with everything maxed at 1080p (using publicbeta build with Async Compute). Also using Enhanced Sync, which is pretty decent actually. Compute usage does show up in Win10 GPU profiler in task manager, but only on Compute 0.
> 
> I was running my HBM at 1100MHz (1005mV), but HBM temperature is a limiting factor for me on air. If it exceeds 80C, it starts artifacting heavily; I did bump my fan profile up, but it's rather noisy, so I may just run at stock clocks; they do have better timings, IIRC. Water cooling is the best option, and I'll consider getting a block soon. I did some GPU-Z logging while playing, and Vega didn't do too badly.
> 
> Average GPU only power draw was around 185w (max at 207w) with clocks averaging 1565MHz and VDDC ranging from 1.025v-1.0313v pretty steadily. Memory usage was a steady 5GB as well. Should note that this particular level I'm on wasn't very demanding (still on the airship at the beginning).
> 
> After exiting Wolf 2, I am getting stuck max clocks and voltages at WattMan specified values, at least. Need to restart.


Test that undervolt, my friend. You'll probably be able to get similar clocks in-game with 1050mV instead of the stock 1200mV. It reduces power and heat in proportion and increases efficiency.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> Sounds about right. It will rarely, if ever, hit the set speed; hence upping the core can massively improve performance without any real extra cost power-wise. Testing the MHz available using undervolting helps to see the max for a card. If anything, you'll see the set speeds only when the GPU is at 0% load. On air it would run almost 100MHz below the set MHz under 100% load, even with heat and power not being factors.


I am referring to the P7 state vs the live clock speed during superposition. I'll try setting minimum power state like you mentioned in your previous post. I didn't know you could do that.


----------



## JasonMZW20

Quote:


> Originally Posted by *TrixX*
> 
> Test that Undervolt my friend. You'll probably be able to get similar clocks ingame with 1050mv instead of stock 1200mv. Reduces power and heat in proportion and increases efficiency


I'm not running at 1200mV. It's at 1025mV P6 with jumps to 1031.3mV; I don't hit P7 often, but it's set to 1065mV (had instability at 1050mV). Stock air clocks and +50% PL.


----------



## TrixX

Quote:


> Originally Posted by *JasonMZW20*
> 
> I'm not running at 1200mV. It's at 1025mV P6 with jumps to 1031.3mV; I don't hit P7 often, but it's set to 1065mV (had instability at 1050mV). Stock air clocks and +50% PL.


Try locking it to P7; perf should improve in games and testing. Also, it should be stable at 1050mV with it locked to P7 (I have any 3D or testing program locked to P7 now, as the jumping power states cause merry hell with stability). If it runs at 1025mV in P6 it'll happily do that at P7 too. ACG does my head in!


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Chaoz*
> 
> I got one of the first batches of the EKWB block. Ordered mine couple of weeks before August 14th (release date of Vega EKWB block) and it got delivered on the 18th.
> 
> I seriously have no issues pushing the EKWB screws through the tension plate.
> 
> I would test this myself and prove him wrong, but I'm not home for the next couple of days.






Yeah, umm...
Why did you use the tension plate? I installed mine without it, as per the pics and the instructions.
I'm assuming you mean the X plate that was used with the stock cooler...


----------



## Mr.N00bLaR

My Gigabyte Vega 56 should arrive tomorrow


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Chaoz*
> 
> Not bad, this is mine with stock settings.
> 
> https://www.3dmark.com/fs/13388664
> 
> Not sure how I scored higher overall, seeing as your GFX and Physics scores are a lot higher than mine.
> 
> This was another test I ran with a mild OC:
> https://www.3dmark.com/fs/13393335
> 
> Not sure what happened and why it shows that error but it completed the benchmark perfectly.
> Finally someone else who also got it to work fine.






I just got that and a WattMan crash because I tried the overclock tool.
Killed the tool, redid the settings, rebooted; no more error.
That said, I also tried the previous version of the tool and never got the error, so maybe it's the newest version doing it.

I just went back to ol' faithful WattMan for now.


----------



## VicsPC

Quote:


> Originally Posted by *Chaoz*
> 
> Thanks, mate. Appreciated
> 
> 
> 
> 
> 
> 
> 
> .
> My setup still amazes me sometimes. The temps are so low, imho.
> 
> Plus it's still quite a nice space heater. I don't even have the heater on in my room, and the temps outside drop to 10°C this time of the year. So I'm happy. Even my idle is damn low; my 64 idles at 23ish°C when the weather is colder.
> 
> Although idle temps don't mean anything.
> I have my 480 on the bottom of my case with 4 fans in push and the 360 at the top with 3 fans in pull; temps are quite good, especially with my UV. CPU never goes over 50°C and GPU barely goes over 36°C. Got my 7 Noctua Industrials spinning at a static ±900rpm, which makes 'em quite silent. Also got 2 D5 pumps in series running at a static 1900rpm.
> 
> When I'm back at home, I'll do a test run to see how high my Hotspot goes.


I have my fans at 1100rpm and it's plenty quiet for me; with headphones you don't even hear it, so it's fine. My 1700X stays at around 47°C peak and probably ~38°C depending on the game. In most non-demanding games my GPU stays at around water temp, with HBM 3-4°C warmer. I honestly was expecting it to run hotter than my R9 390, but the temps are identical. I flushed my system for the first time in a year and cleaned the dust off the rads as well, and the temps are fantastic. I run my pump at a set 3600rpm and you can't even hear the damn thing. The GPU blocks are quite restrictive, so I run the pump a bit faster to get good flow. My Alphacool 390 block wouldn't do well with less than 50% pump speed. I'm pretty sure I have this one at either 60 or 75%; I haven't been in my BIOS in ages.

Here's a quick and dirty FPS comparison I did in Siege with my 390 vs the Vega 64. The settings weren't even the same, and the 64 just destroys the 390. Left is the 390, right is the 64.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> First up, your fan is limiting cooling. When testing, bump it to 4900RPM; during normal operation you can drop it back as required, and since it's a PWM fan it will spin down as needed.
> 
> Grab one of the PowerPlay Tables (the one designed for your card) from the Vega BIOS thread. It'll allow up to 142% power. It just removes a restriction during testing.
> 
> Also, I'd recommend using OverdriveNTool as it's a little more reliable at applying settings for power etc...
> 
> In WattMan, if you still use it, you can set the minimum power state by right-clicking on the power state number at the top of the sliders; useful for locking to P7. The same can be done in OverdriveNTool by left-clicking the power state number to the left of the MHz box.


Just tried both wattman and overdriventool. Neither had the desired effect.


----------



## Hanjin

Nice upgrade over my RX 560


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Just tried both wattman and overdriventool. Neither had the desired effect.


So you couldn't lock the state to P7???


----------



## Chaoz

Quote:


> Originally Posted by *tarot*
> 
> yeah umm
> why did you use the tension plate I installed mine without it. as per pics and the instructions.
> I'm assuming you mean the x plate that was used with the stock cooler...


Because I used the stock backplate and there was a space for it. I'd never had a GPU with a tension plate, so I wasn't sure.

The manual didn't say you didn't have to use it, so I just mounted it, took the pictures, then changed my mind and removed it.

Yes, the X plate is called a tension plate, as it puts tension on the stock cooler.

What's with the spoiler tags on a normal non-image quote?


----------



## TrixX

Ooooh 17.11.1 driver is out!


----------



## spyshagg

People who went from AIR to WATER

How did your HBM overclock respond? mhz and mv if possible please.


----------



## Naeem

17.11.1 Drivers

50% Power target
HBM2 1120mhz

https://www.3dmark.com/3dm/23056067


----------



## Chaoz

Quote:


> Originally Posted by *spyshagg*
> 
> People who went from AIR to WATER
> 
> How did your HBM overclock respond? mhz and mv if possible please.


Pretty good. I got mine to 1050MHz with 950mV and a +50% power target. It can go higher, to 1150MHz with 1V and a +50% power target as well, but it's not necessary for me as I'm using FreeSync.


----------



## webhito

Howdy fellas!

Recently picked up a Vega 64; great GPU, not as good as the 1080 Ti I sold but still very good overall.

I am having issues when my screen turns off: for some reason it's messing with my resolution. Mind you, this only seems to happen when I leave it idle for several hours; to fix it, all I do is unplug the DP cable and plug it back in. It's not that much of an issue, but it does get kind of annoying.

Anyone else have this?


----------



## SpecChum

Do you leave HBCC on all the time?


----------



## steadly2004

Quote:


> Originally Posted by *webhito*
> 
> Howdy fellas!
> 
> Recently picked up a vega 64 card, great gpu, not as good as the 1080ti I sold but still very good overall.
> 
> I am having issues when my screen turns off: for some reason it's messing with my resolution. Mind you, this only seems to happen when I leave it idle for several hours; to fix it, all I do is unplug the DP cable and plug it back in. It's not that much of an issue, but it does get kind of annoying.
> 
> Anyone else have this?


Doesn't happen to me. Have you tried just turning the monitor's power off and on when it does that?

The only other thing I can think of that might alleviate it is changing the screen power setting to never turn off? Maybe.


----------



## webhito

Quote:


> Originally Posted by *steadly2004*
> 
> Doesn't happen to me. Have you tried just hitting the power on the monitor off and on when it does that?
> 
> The only other thing I can think of that might alleviate it is changing the screen power setting to never turn off? Maybe.


Nope, turning the screen off and on doesn't fix it; I have to pull out the DP cable or reset my monitor.

Yeah, modding the power saving feature to never turn off the screen also works, but I would rather keep it on as I'm forgetful at times and don't want to come home to a burnt-in image, lol.


----------



## Chaoz

I turned off hibernation and my screensaver, as they screw up my ultrawide resolution when the system comes out of hibernation/standby/screensaver.

Since I turned everything off except monitor standby, I've never experienced those problems again.


----------



## Ne01 OnnA

AMD RX Vega Custom Card From XFX Pictured: Backplate, Dual Fan Design & 8+6 PCIe Power Connectors

This has been long overdue, but it's finally here. Fully custom Radeon RX Vega graphics cards, this particular one, which is the first we've seen since Asus debuted its custom design nearly two months back, comes from AMD's venerable add-in-board partner XFX.

XFX is the very first Radeon exclusive graphics card maker to actually showcase any custom AMD Radeon RX Vega graphics card design to date, and what better way to tease its new creations than on the popular /r/AMD subreddit. Which is exactly what XFX did. So these photos you're about to see haven't been sourced via some precarious leak, they actually come directly from the source.

With that said, we actually know very little about these new custom Vega cards from XFX. We haven't seen this particular design before from the company, so it appears to be a brand new creation developed just for Vega. We can clearly see from the photos that it's an open air design that features two large fans, a sexy backplate and an 8+6 pin PCIe power configuration.





https://wccftech.com/amd-rx-vega-custom-card-xfx-pictured-features-backplate-dual-fan-open-air-design-86-pcie-power-connectors


----------



## Chaoz

Quote:


> Originally Posted by *Ne01 OnnA*
> 
> AMD RX Vega Custom Card From XFX Pictured- Backplate, Dual Fan Design & 8+6 PCIe Power Connectors
> 
> This has been long overdue, but it's finally here. Fully custom Radeon RX Vega graphics cards, this particular one, which is the first we've seen since Asus debuted its custom design nearly two months back, comes from AMD's venerable add-in-board partner XFX.
> 
> XFX is the very first Radeon exclusive graphics card maker to actually showcase any custom AMD Radeon RX Vega graphics card design to date, and what better way to tease its new creations than on the popular /r/AMD subreddit. Which is exactly what XFX did. So these photos you're about to see haven't been sourced via some precarious leak, they actually come directly from the source.
> 
> With that said, we actually know very little about these new custom Vega cards from XFX. We haven't seen this particular design before from the company, so it appears to be a brand new creation developed just for Vega. We can clearly see from the photos that it's an open air design that features two large fans, a sexy backplate and an 8+6 pin PCIe power configuration.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://wccftech.com/amd-rx-vega-custom-card-xfx-pictured-features-backplate-dual-fan-open-air-design-86-pcie-power-connectors


Looks like a Nano, PCB is really short with a massive heatsink going over the PCB.


----------



## webhito

Quote:


> Originally Posted by *Chaoz*
> 
> I turned off my hibernate and screensaver as it screws up my Ultra wide resolution when it comes out of hibernation/standby/screensaver.
> 
> Since I turned everything off but monitor standby I never experienced those problems again.


Cheers!


----------



## Chaoz

Quote:


> Originally Posted by *webhito*
> 
> Cheers!


Np, hope it works out for you.
Could be an issue with Win 10. Never had those issues with Win 7.


----------



## Newbie2009

Quote:


> Originally Posted by *spyshagg*
> 
> People who went from AIR to WATER
> 
> How did your HBM overclock respond? mhz and mv if possible please.


No real difference for me

Quote:


> Originally Posted by *webhito*
> 
> Howdy fellas!
> 
> Recently picked up a vega 64 card, great gpu, not as good as the 1080ti I sold but still very good overall.
> 
> I am having issues when my screen turns off, for some reason its messing with my resolution, mind you, this only seems to happen when I leave it several hours idle, to fix it all I do is unplug the DP cable and plug it back in. Its not that much of an issue, but it does get kind of annoying.
> 
> Anyone else have this?


I have run into that issue specifically with DP. Running dual screens, on the main screen I switch channels to HDMI to play some PS4 for a bit; sometimes when I switch back to the PC's DP channel it is at a resolution as if no drivers were installed (meanwhile the other screen is functioning fine).

I found just turning off the monitor and turning back on fixes it instantly.


----------



## webhito

Quote:


> Originally Posted by *Newbie2009*
> 
> No real difference for me
> I have run into that issue specifically with DP. Running dual screen, and on main screen I switch channel to hdmi to play some PS4 for a bit, sometimes when I switch back to pc dp channel it is @ a resolution as of no drivers installed (in the mean time the other screen is functioning fine)
> 
> I found just turning off the monitor and turning back on fixes it instantly.


Yea, turning off mine doesn't work at all; I have to unplug it from the wall, reset to factory settings or just remove the DP cable. Sadly HDMI doesn't do 60+ Hz so I am stuck with DP.

The 1080ti I had did not do this.


----------



## Trender07

Guys, is there any way to change the settings in the BIOS so they stick permanently? As in making the mV, MHz and fan RPM values I set the defaults?


----------



## Newbie2009

Quote:


> Originally Posted by *webhito*
> 
> Yea, turning off mine doesn't work at all, I have to unplug it from the wall, reset to factory settings or just remove the dp, sadly hdmi doesn't do 60+ hz so I am stuck with dp.
> 
> The 1080ti I had did not do this.


Hmm, have you tried a different dp cable?


----------



## webhito

Quote:


> Originally Posted by *Trender07*
> 
> Guys any way you can change the settings on the bios so they're forever? In like making the defaults settings as I set mV MHz and fans rpm


You would have to flash your card with a custom voltage/fan profile. I have never flashed an amd card so I am unaware of what programs to use. Pretty sure if you search in this thread something will come up.
Quote:


> Originally Posted by *Newbie2009*
> 
> Hmm, have you tried a different dp cable?


Yea, same thing. Bought a new one just in case, no difference.


----------



## Newbie2009

Quote:


> Originally Posted by *webhito*
> 
> You would have to flash your card with a custom voltage/fan profile. I have never flashed an amd card so I am unaware of what programs to use. Pretty sure if you search in this thread something will come up.
> Yea, same thing. Bought a new one just in case, no difference.


I don't like DP. Miss my dvi-d connection.


----------



## Chaoz

Quote:


> Originally Posted by *Newbie2009*
> 
> I don't like DP. Miss my dvi-d connection.


Meh, it's alright. I have to use it to be able to use the FreeSync option.


----------



## Sickened1

Thinking of moving from my R9 Fury to a Vega 56 with the 64 bios+UV+OC. Anyone do this? Looks like I could stand to gain 40-45% from that upgrade.


----------



## webhito

Quote:


> Originally Posted by *Chaoz*
> 
> Meh, it's alright. I have to use it to be able to use the FreeSync option.


Ditto, well, cough cough, when I get my freesync monitor as right now I have a gsync predator, need to sell this one and pick up a freesync.


----------



## Mandarb

I will be doing a Morpheus II conversion on my Vega 64.

From different photographs I'd say the big VRM chokes are 10×10mm while the smaller ones are 7×7mm.

Can anyone confirm? Trying to get the right sinks. The C-Type sinks from the Raijintek kit - good enough for the MOSFETS?

Will be using two NF-F12 fans. Good choice?


----------



## SpecChum

Quote:


> Originally Posted by *Sickened1*
> 
> Thinking of moving from my R9 Fury to a Vega 56 with the 64 bios+UV+OC. Anyone do this? Looks like I could stand to gain 40-45% from that upgrade.


Kinda.

I went unlocked Fury non-x to Vega 64.

Some nice synthetic gains, not played many games yet tho, only Crysis 3


----------



## dagget3450

Quote:


> Originally Posted by *Chaoz*
> 
> Looks like a Nano, PCB is really short with a massive heatsink going over the PCB.


I'd say it's more like the R9 Fury; the Nano had a small heatsink along with a small PCB.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Do you leave HBCC on all the time?


I do. The only issue I found: before, I could set RealBench to use 16GB (my rig has that); now it will bomb if I use 16GB. [email protected], BOINC and other things I've used have been a non-issue.


----------



## Chaoz

Quote:


> Originally Posted by *dagget3450*
> 
> Id say its more like r9 fury , nano had a small heatsink along with small pcb.


That's what they did for a Fury Nano. Doesn't mean they would do the same for Vega.

It definitely looks like a Nano with that small PCB. It's a 300W GPU, so a decently large heatsink is necessary.

What kind of heatsink were you expecting for such a high-powered GPU?


----------



## cplifj

Installed driver 17.11.1, leading to an instant crash with a garbled screen in the very first game I started, which was BF1...








congratulations AMD.....again.

Also, hardware reserved memory shot up from 4.3 to 5.2 GB with this new driver. I also got a free extra spook monitor that is in no way attached to my PC.

GREAT JOB AMD, kudos.


----------



## spyshagg

Did you do it the right way?


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> I do. Only issue I found was before I could set RealBench to use 16GB (rig has that), now it will bomb if I use 16GB. [email protected], Bionic and other things I've used non issue.


Cool, cheers gup


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> installed driver 17.11.1 , leading to an instant crash with garbled screen the very first game i start which was BF1...
> 
> 
> 
> 
> 
> 
> 
> 
> congratulations AMD.....again.
> 
> Also, hardware reserved memory shot up from 4.3 to 5.2 MB with this new driver. also got a free extra spook monitor that is in no way attached to my PC.
> 
> GREAT JOB AMD, kuddos.


you come up with some real unique problems with driver installs.


----------



## dagget3450

Quote:


> Originally Posted by *Chaoz*
> 
> That's what they did for a Fury Nano. Doesn't mean they would do the same for Vega.
> 
> It looks definately like a Nano with that small PCB. It's a 300W GPU so a decent large enough heatsink is necessary.
> 
> What kind of heatsink were you expecting for such a high powered GPU?


Yes, the Nano was a small PCB with a small heatsink because they limited power usage and clocks. The R9 Fury was almost a Fury X, using more power than the Nano, thus it had the same size PCB and a longer, bigger heatsink.

If this GPU is 300W then it's basically a Vega with a smaller PCB and a sufficient (larger) heatsink.

If you're basing it on the size of the PCB only, then yeah, it could resemble the Nano or any of the Fury GPUs. If you're basing it on a small PCB with a large, long heatsink, then the R9 Fury?

Wasn't saying anyone is wrong here, was just thinking it could resemble either?


----------



## cplifj

Quote:


> Originally Posted by *Reikoji*
> 
> you come up with some real unique problems with driver installs.


Well, not so unique really; they are quite repeatable with every new AMD driver iteration they release.

The problem lies entirely with the software they use for their installer, which is, frankly put, a piece of bum.

After many installs I have noticed the installer can go about 1 of 3 or 4 ways when installing itself, and only 1 way is the right one.

And all too often it just borks itself up. I even notice a difference in the time it takes to extract itself from time to time... meaning, what the hell is it doing really? Anything else that needs unpacking is so consistent you could set your clock to it. But with AMD, driver unpacking seems to be a game of chance in itself already. Yes, this only happens with AMD driver packages.

No wonder the installs often tend to fail. The installer itself is not a trustworthy tool. Maybe talk to Microsoft some more, AMD.

This has been going on for years if not decades. It's also the reason why tools like DDU can exist. Mostly for AMD; NVIDIA drivers seem to have way less trouble installing correctly.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Sickened1*
> 
> Thinking of moving from my R9 Fury to a Vega 56 with the 64 bios+UV+OC. Anyone do this? Looks like I could stand to gain 40-45% from that upgrade.


What games are not running at 60+ FPS max settings on a Fury and 1080p monitor?


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> Well not so unique really, they are quite repeatable with every new amd driver itteration they release.
> 
> The problem lies entirely with the software they use for their installer which is frankly put , a piece of bum.
> 
> After many installs i have noticed the installer can do about 1 out of 3 or 4 ways to install itself , and only 1 way is the right one.
> 
> And all too often it just borks up itself. I even notice a difference in time it takes to extract itself from time to time...meaning ...what the hell is it doing really, anything else that needs unpacking can be used to set your clock to. But with amd , driver unpacking seems to be a game of chance in itself allready. Yes , this only happens on amd driverpackages.
> 
> No wonder the installs often tend to fail. The installer itself is not a trustworthy tool. Maybe talk to microsoft some more AMD.
> 
> This has been going on for years if not decades. Also the reason why tools as DDU can exist. Mostly for amd, nvidia drivers seem to have way less trouble installing correctly.




Well, this is what my install folder looks like right now after many driver installs. It looks pretty well organized to me and I've never run into an issue where the install picks the wrong folder. Having to extract all the files from the large file you just downloaded before installing is annoying, yes.


----------



## VicsPC

Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> well, this is what my install folder looks like right now after many driver installs. It looks pretty well organized to me and i've never run into such an issue where the install picks the wrong folder. Having to extract all the files from the large file you just downloaded to install is annoying yes.


Yea, same. The past 4 years I've used nothing but AMD GPUs in my personal PC and my test rigs and I've had no issues whatsoever installing GPU drivers. I do however unplug my ethernet (so Windows and Bitdefender don't mess with it), uninstall using the AMD Cleanup Utility, run CCleaner, then restart and reinstall. I also turn off Windows' automatic driver updates and I've had no problems. Once in a while Windows will reinstall an older version, but that's easily solved.


----------



## Chaoz

Quote:


> Originally Posted by *dagget3450*
> 
> Yes, nano was a small pcb with small heatink becuase they limited power usage and clocks. R9 fury was almost a fury x using more power than nano thus it had same size pcb and longer bigger heatsink.
> 
> If this gpu is 300w then its basically a vega with smaller pcb and sufficient heatsink(larger heatsink)
> 
> If you basing it on size of pcb only then yeah it could resemble nano or any of the fury gpus. If you basing it on small pcb with large long heatsink then r9fury?
> 
> Wasnt saying anyone is wrong here, was just thinking it could resemble either?


I know. Was just basing it solely on PCB dimensions.

We'll see what the official release holds, could also be a prototype.


----------



## cplifj

Har Har









Whereas a reinstall fixed the crashing in BF1, now when I start Wolfenstein II, IT CRASHES instantly.

I LOVE YOU AMD.

But this does it, I'm going back to an earlier version. Not even a decent clean will help this 17.11.1.


----------



## Chaoz

That's the reason why I'm still on 17.9.3.
The drivers work fine. No need for me to install a newer driver for games I don't play. I also play BF1 and 17.9.3 is perfectly fine even with an undervolt to 1V.


----------



## spyshagg

I've never had a failed AMD install by running DDU (in safe mode) and deleting the AMD folder. I think this Windows installation is 6 months old and had the 290X up until a week ago.

It's been smooth sailing. The bugs that exist are known and relate more to overclocking than general usability.


----------



## Sickened1

Quote:


> Originally Posted by *0451*
> 
> What games are not running at 60+ FPS max settings on a Fury and 1080p monitor?


I'm on a 1440P 144hz monitor.

EDIT: Just realized I forgot to add that to my rig.


----------



## Chaoz

Quote:


> Originally Posted by *spyshagg*
> 
> I never had a failed amd install by doing DDU (into safe mode) and deleting the AMD folder. I think this windows installation is 6 months old and had the 290X up until a week ago.
> 
> Its been smooth sailing. The bugs that exist are known and relate more to overclocking than general usability.


Yeah, me too. Haven't had a driver issue ever since updating from 17.8.1 to the 17.9.3 I have now.


----------



## SpecChum

Well damn, I've had a 1V / 1702MHz undervolt for a couple of days now, running 3DMark and Superposition without an issue.

I decide to have 10 minutes of CS:GO and I crash to desktop and back to default settings lol

PCs do baffle me sometimes...

EDIT: I'm assuming this means I need a couple more mV?


----------



## spyshagg

or less clock.

I had the same issue Wednesday. Instant Firestrike lock-ups when it had never failed the day before. The difference being I spent all Wednesday playing Wolfenstein before trying Firestrike.


----------



## By-Tor

I have been thinking about finally replacing my old reliable 290X that has served me well for a long time. I have read a lot of reviews on many cards from both camps and it's hard to choose which path to take.

I'm not a fan of the green team, but have been looking at their offerings as well as AMD's. At the moment I'm playing BF1, BF4, Rise of the Tomb Raider, Witcher 3, and would like to play Destiny 2, but I'm not sure my current card would push it very well.

Is it worth replacing my aging 290X with a Vega 56?

For those that have both AMD and NVIDIA: how well does it stack up to the 1070/1080 cards?

Thank you


----------



## SpecChum

Quote:


> Originally Posted by *spyshagg*
> 
> or less clock.
> 
> I had the same issue wednesday. Instant firestrike lock ups when it never failed a day before. Difference being I spent all wednesday playing Wolfenstein before trying firestrike.


I updated for the new driver couple hours ago too, so I guess it could be that.

I'll add 20mv and see how I get on. Should give slightly higher clocks too


----------



## Chaoz

I've got mine running at 1630MHz with 1V and a +50% power target. It's stable af and the max clock it reaches is 1580MHz, when it's needed.


----------



## SpecChum

Quote:


> Originally Posted by *Chaoz*
> 
> I got mine running on 1630MHz with 1v +50% Power target. It's stable af and max clock it reaches is 1580MHz, when it's needed.


Yeah, I was actually somewhat surprised 1702MHz @ 1V worked at all - gets me 1530MHz in benchmarks.

I'll keep 1702 and try 1020mV.

If that fails I'll try 1V again and lower P7.
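For a rough sense of why undervolting pays off so much: dynamic power scales roughly with frequency times voltage squared. A minimal sketch, assuming illustrative baseline numbers (the 290W / 1750MHz / 1.2V figures below are hypothetical, not measured from any card):

```python
# Rule-of-thumb dynamic power scaling: P ~ f * V^2.
# All baseline figures are assumptions for illustration, not measurements.
def scaled_power(p_base_w, f_base_mhz, v_base, f_new_mhz, v_new):
    """Estimate new power draw after a clock/voltage change."""
    return p_base_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

# Hypothetical stock P7 (~290 W at 1750 MHz / 1.2 V) vs a 1702 MHz @ 1.0 V undervolt:
print(round(scaled_power(290, 1750, 1.2, 1702, 1.0)))  # ~196 W, roughly a 30% drop
```

The square on voltage is why a ~17% voltage cut saves far more power than the ~3% of clock speed it costs.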


----------



## spyshagg

Quote:


> Originally Posted by *By-Tor*
> 
> I have been thinking about finally replacing my old relieable 290X that has served me well for a long time.. I have read a lot of reviews on many cards from both camps and its hard to choose what path to take.
> 
> I'm not a fan of the green team, but have been looking at there offerings as well as AMDs. At the moment I'm playing BF1, BF4, Rise of the Tomb Raider, Witcher 3, and would like to play Destiny 2 but not sure my current card would push it very well.
> 
> Is it worth replacing my aging 290X with a Vega 56?
> 
> For those that have both AMD and NVidia how well does it stack up to the 1070/1080 cards?
> 
> Thank you


Well, it's just a bit of a coincidence that half the games you mentioned favor Vega, particularly Destiny.

I just came from the 290X myself. But I must confess I only jumped ship to the reference Vega 64 because my PC is in another room and the noise doesn't cross walls.

Performance-wise, it's close to what my CrossFire 290Xs used to do.

I would wait for AIB cards. A 1080 is no match for a cool and power-unconstrained Vega, if you don't mind twice the electric bill.


----------



## Kyozon

Just a particular curiosity.

I have tried to overclock my Vega FE LC.

It seems the highest I've managed so far is around 1685MHz, with 1.2V on the core and +50% power limit.

Anything higher than that crashes, despite temps peaking at 45°C.

Could it be because it needs more power, like a 100%-142% power limit? Or is it just that the RX Vega is capable of going higher?

Thanks.


----------



## SPLWF

Thank you everyone for the help. I was able to get my clocks rectified. Turns out it was my fan speed, and setting P6 to minimum really helped. Thanks TrixX







[email protected] and [email protected] [email protected], [email protected]. Hitting 1544MHz while gaming with temps around 72-75°C at a 2850RPM fan speed (I like peace and quiet, well not really quiet but good enough).

One thing: I'm not going to flash my card to V64 because I'm using a 2560x1080 FreeSync monitor. I don't need the power unless I plan on benching. My Ryzen 5 1600 isn't even OC'd. No time to tinker right now; work is always keeping me busy.


----------



## By-Tor

Quote:


> Originally Posted by *spyshagg*
> 
> well its just a bit of a coincidence that half the games you mentioned favor vega, particularly destiny.
> 
> I just came from the 290x myself. But I must confess I only jumped ship to the reference Vega64 because my pc is in another room and the noise doesn't cross walls.
> 
> Performance wise, its close to what my crossfire 290x's used to do.
> 
> I would wait for AIB cards. A 1080 is no match for a cool and power unconstrained vega if you don't mind twice the electrical bill.


Not worried about noise as I'll be mounting an EK block on it and adding it to my custom loop. I overclock and don't care about power draw as long as my PSU can feed it.

Thank you


----------



## Chaoz

Quote:


> Originally Posted by *SpecChum*
> 
> Yeah, I was actually somewhat surprised 1702Mhz @ 1v worked at all - gets me 1530Mhz in benchmarks.
> 
> I'll keep 1702 and try 1020mv.
> 
> If that fails I'll try 1v again and lower p7.


Yeah, 1V works great and temps are amazing; my 64 never goes over 36°C. So I'm happy, as before with the stock cooler temps were at a hot 85°C.

I tried to run 1750 with 1100mV and a +50% power target and it ran great in-game, but benchmarks made it go funky.


----------



## VicsPC

I've noticed that while playing Siege, if I run TAA the clocks will ramp all the way up to around 1630MHz and stay around 1600, with GPU usage hovering around 80-90%; but when running FXAA or MSAA the clocks will be around 1550-1580MHz and GPU usage will pin itself at 99%. Anyone else notice that?


----------



## SpecChum

CS:GO seems to be fine at 1020mv 1702 p7 now









I didn't play for hours tho.


----------



## Chaoz

It's not like CSGO is a very demanding game.


----------



## SpecChum

Quote:


> Originally Posted by *Chaoz*
> 
> It's not like CSGO is a very demanding game.


That's what confused me. I put RTSS on so I could see the clocks and it's like 40, 50, 60%, nothing high at all.

I have just installed the new driver tho, so I can't rule that out yet.

Anyway, 1000mV failed, 1020mV didn't, so that's where we are for now.


----------



## Chaoz

Quote:


> Originally Posted by *SpecChum*
> 
> That's what confused me, I put RTSS on so I could see the clocks and it's like 40, 50 60%, nothing high at all.
> 
> I have just installed the new driver tho, so I can't rule that out yet.
> 
> Anyway, 1000mv failed, 1020mv didn't, so that's where we are for now.


Yeah, exactly. It probably never got up to max clocks. It's the same when I use FreeSync: it only stays around 70-80% usage, and sometimes it still reaches 1580MHz at 75Hz.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> That's what confused me, I put RTSS on so I could see the clocks and it's like 40, 50 60%, nothing high at all.
> 
> I have just installed the new driver tho, so I can't rule that out yet.
> 
> Anyway, 1000mv failed, 1020mv didn't, so that's where we are for now.


That crash was likely the transition from the ACG active power states (P5/6/7) to the lower power states (I experienced it a few times initially before I started locking to P7). Lock the clocks in Wattman/OverdriveNTool or use ClockBlocker, as all the downclocking does at the moment is cause annoyance.

As an aside, I use iRacing a lot, and until I locked it to P7 the game barely caused enough load to push it past the P2 state. Most of the time it was in the idle state running around 200MHz. Problem was, it was causing stutters and wasn't giving enough performance. Locked to P7, performance is solid, and it also only uses about ~80W for the core (so about ~120W for the full card).
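The min/max lock described above can be pictured as clamping the driver's state choice to a window. A toy Python model (the clock table is an assumed reference-Vega-64-like layout, not read from any card):

```python
# Toy model of Vega P-state selection with a min/max "lock".
# The clock table is an assumed reference-Vega-64-like layout, for illustration only.
PSTATE_MHZ = {0: 852, 1: 991, 2: 1138, 3: 1269, 4: 1312, 5: 1474, 6: 1561, 7: 1632}

def effective_state(requested, p_min, p_max):
    """The driver picks a state for the current load, clamped to the allowed window."""
    return max(p_min, min(requested, p_max))

# A light load that would normally sit in P2 gets forced to P7 when min=max=7:
print(PSTATE_MHZ[effective_state(2, 7, 7)])  # 1632
```

Setting min=max=P7 in Wattman is exactly this clamp: the card can no longer drop into the low states that cause the transition stutters.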


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> That crash was likely the transition from the ACG active power states (P5/6/7) to the lower power states (experienced it a few times initially before i started locking to P7). Lock in Wattman/OverdriveNTool or using ClockBlocker as all it does at the moment is cause annoyance.
> 
> As an aside I use iRacing a lot and until I locked it to P7 the game barely caused enough load to push it past P2 state. Most of the time it was in idle state running around 200MHz. Problem was it was causing stutters and wasn't giving up enough performance. Locked to P7 performance is solid, also only uses about ~80W for the core (so about ~120W full card).


Oddly enough, I was just thinking it might have been some lower state issue.

So just set p7 as min/max?


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> Oddly enough, I was just thinking it might have been some lower state issue.
> 
> So just set p7 as min/max?


Yeah that works, though I use ClockBlocker so I can have default downclocking for normal usage and automatic P7 locking for 3D programs/games


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Yeah that works, though I use ClockBlocker so I can have default downclocking for normal usage and automatic P7 locking for 3D programs/games


OK, cool.

Could just set a Wattman profile for the low demanding games instead?


----------



## geriatricpollywog




----------



## Kyozon

Quote:


> Originally Posted by *0451*


Wow, Very nice!!! VEGA 64?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Kyozon*
> 
> Wow, Very nice!!! VEGA 64?


Vega 64 with EK block and i7 7700k.


----------



## webhito

Anyone here have 2 vega 64's in crossfire? What kind of power supply are you using?


----------



## steadly2004

Quote:


> Originally Posted by *webhito*
> 
> Anyone here have 2 vega 64's in crossfire? What kind of power supply are you using?


I do. I have a 1600W EVGA Titanium, but that's overkill. I haven't seen much over 1000W at the wall.


----------



## TrixX

Quote:


> Originally Posted by *webhito*
> 
> Anyone here have 2 vega 64's in crossfire? What kind of power supply are you using?


With the OC potential of those things you'll be needing 800W just for those two cards...

So 1200W if you have a TR4 system, or more if i9.

----------



## Chaoz

My current system consumes around 650W from the wall with my 64 on Turbo.

So imagine what CrossFire would pull from the wall.

So 1000W would probably not be enough if you factor in OC consumption.


----------



## geriatricpollywog

There was an article about a PC with an overclocked Radeon Pro Duo using 920 watts at the wall, and 2 Vegas would use even more power. I would say 1200 watts minimum for 2 Vegas, 1600 watts for 3, and 2000 watts plus a 240-volt circuit for 4.
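A back-of-envelope way to arrive at numbers like these; every wattage below is an assumption for illustration, not a measurement:

```python
# Rough PSU sizing for a multi-GPU build; all wattages here are assumed figures.
def recommended_psu_w(gpu_w, n_gpus, cpu_w, rest_w, headroom=1.25):
    """Worst-case DC load plus headroom for transients and capacitor aging."""
    load = gpu_w * n_gpus + cpu_w + rest_w
    return load, load * headroom

# Two overclocked Vegas (~400 W each), a beefy CPU (~180 W), ~75 W for the rest:
load, psu = recommended_psu_w(400, 2, 180, 75)
print(load, round(psu))  # 1055 W load -> ~1319 W recommended, hence "1200 W minimum"
```

The 25% headroom figure is a common rule of thumb, not a spec; overclocked cards with raised power limits can spike well above their averages.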


----------



## fursko

I have a problem with the latest driver. When I quit YouTube or any fullscreen video or game, my screen shows weird colors. Not every time, but often.


----------



## webhito

Damnit! I have a 1000W EVGA G3 PSU with an R7 1700 @ 3.7; I believe I have never seen it pull more than 450 watts from the wall though... I wonder if I can get away with them at stock...


----------



## geriatricpollywog

Quote:


> Originally Posted by *webhito*
> 
> Damnit! I have a 1000 evga g3 psu with an r7 1700 @3.7, I believe I have never seen it pull more than 450 watts from the wall though... I wonder if I can get away with them at stock...


That's a high quality unit and I'm sure you'd be fine. Just don't run a 500 amp power table with 150% power consumption without first testing at stock and 100%/400 amps.


----------



## Kyozon

I am utilizing a 1050W Thermaltake Toughpower PSU for a Vega FE LC + Vega FE Air + sTR4 1950X.

I am worried now @[email protected]

But I am also running everything at stock. Looking forward to undervolting the GPUs.


----------



## webhito

Quote:


> Originally Posted by *0451*
> 
> That's a high quality unit and I'm sure you'd be fine. Just don't run a 500 amp power table with 150% power consumption without first testing at stock and 100%/400 amps.


Cheers! Ordered a second one =).
Quote:


> Originally Posted by *Kyozon*
> 
> I am utilizing a 1050W PSU for VEGA FE LC + VEGA FE Air + sTR4 1950X.
> 
> I am worried now @[email protected]


Lol, get a Kill A Watt; that should tell you exactly how much juice your system is drinking/eating.


----------



## fursko

So this is my problem: closing a fullscreen YouTube video can cause this. Launching any game and pressing alt+tab fixes the weird colors temporarily.


----------



## SPLWF

Question: how come when I set a minimum on P6, my core clock idles at a much higher clock? For instance, I have P6 set at 1577 and P7 at 1632. It idles at 1584, but when gaming it clocks down?


----------



## poisson21

I have 2 Vega 64s fully OC'd; from GPU-Z I can see 400-410W for each card in heavy games. I bought an overkill AX1500i PSU from Corsair and I didn't regret it.


----------



## fursko

Quote:


> Originally Posted by *SPLWF*
> 
> Question, how come when I set minimum on P6, my core clock idles at a much higher clock. For instance, I have P6 set at 1577 and P7 at 1632. It idles at 1584, but when gaming it clocks down?


Because your card can find room to boost due to low stress. When gaming, your power consumption, GPU utilization and temps go high. My card reaches 1850MHz at idle and 1720MHz under load if I set the minimum to P7.


----------



## Soggysilicon

Quote:


> Originally Posted by *spyshagg*
> 
> People who went from AIR to WATER
> 
> How did your HBM overclock respond? mhz and mv if possible please.


Your sustainable clocks may not be all that different, but there seems to be some loosening of timings that occurs above 40°C and again above 70°C. So the settable frequency isn't really here or there, but the underlying performance "should" see some improvement under water, for longer, up to thermal equilibrium.
Quote:


> Originally Posted by *SpecChum*
> 
> Do you leave HBCC on all the time?


Yeap, 3440 native res... I don't see the point at 1080; waste of memory resources?
Quote:


> Originally Posted by *Chaoz*
> 
> I turned off my hibernate and screensaver as it screws up my Ultra wide resolution when it comes out of hibernation/standby/screensaver.
> 
> Since I turned everything off but monitor standby I never experienced those problems again.


Same, also noticed that the fall creators update solved some of this garbage issue.


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> Same, also noticed that the fall creators update solved some of this garbage issue.


Haven't updated in a while now. I don't even have the first Creators Update. Not risking it, tbh. Everything's working fine as is.


----------



## VicsPC

Quote:


> Originally Posted by *Chaoz*
> 
> Haven't updated in a while now. Don't even have the first creators update either. Not risking it, tbh. Everythings working fine as is.


I haven't had any issues with it; it's on 2 of my PCs, both running AMD GPUs.


----------



## SpecChum

Wow, if anyone wants a GPU burn test just run Hard Reset and leave the menu open - I've never seen my Vega get so hot so quick









It's quite a fun game to boot


----------



## spyshagg

Quote:


> Originally Posted by *Soggysilicon*
> 
> Your sustainable clocks may not be all that different, but there seems to be some loosening timings which occur > 40 C and again > 70 C. So a set-able frequency isn't really here or there, but the underlying performance "should" see some improvement under water for longer up to thermal equilibrium..


Well, I asked because most people are running HBM above 1100MHz with very little voltage, when I need 1100mV to do it.


----------



## SpecChum

Quote:


> Originally Posted by *spyshagg*
> 
> Well I asked because most people are running hbm above 1100mhz with very little voltage, when I need 1100mv to do it.


I need to work on my HBM too.

I have noticed that if I set 1000MHz at 950mV it sometimes jumps back to 800MHz, but 975mV seems OK. I had a crash at 1050MHz on the HBM, but I was also running a high core clock so it could have been either; I need to run the default core and just OC the HBM really.

Problem for me is the HBM seems to heat up more than the core; I'm on the stock air cooler tho.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Wow, if anyone wants a GPU burn test just run Hard Reset and leave the menu open - I've never seen my Vega get so hot so quick
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's quite a fun game to boot


Some game menus run at nutty FPS. GTA4 used to go to like 1000FPS+ when not capped. Some are just so poorly programmed IMO.


----------



## Reikoji

Quote:


> Originally Posted by *fursko*
> 
> 
> 
> So this is my problem. Closing fullscreen youtube video can cause this. If i launch any game and press alt+tab can solve this weird colors temporary.


That happens to me after closing Destiny 2. Turning the monitor off and back on fixes it for me.
Quote:


> Originally Posted by *Kyozon*
> 
> I am utilizing a 1050W ToughPower Thermaltake PSU for VEGA FE LC + VEGA FE Air + sTR4 1950X.
> 
> I am worried now.
> 
> But i am also running everything at Stock. Looking forward to Undervolt the GPUs.


Depends on whether you have them all at full load. A 1000W PSU is the recommendation for a single Vega LC card. Wall draw for me, with the 1950X not even going all out and the Vega LC at +50%, is around 650W. Full load on the processor at stock would increase that by about 100W; a 4GHz overclock adds around 120W or even more than that.

I have an EVGA G2 Supernova 1600w PSU.


----------



## cplifj

AMD's recommendation of a 1000-watt PSU is exaggerated. On average a system with one Vega will max out around 500-600 watts. Then you'd end up in the 50% PSU load range with a 1000-watter, meaning you would have the highest efficiency % on your PSU.

But it's not a MUST. As long as your decent power supply can deliver that amount it will work without problems.

I run one Vega 64 LC on an OC'd i7-3770 and the system pulls a max of 560 watts when everything is loaded, still not enough to make my Corsair HX650 buckle, a PSU that can peak up to 800 watts.

A 1050-watt supply should be ample for your dual Vega FE setup. If in doubt, check what your system is actually pulling from the wall socket with one of those cheap Kill A Watt meters.


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> AMD's recommendation of a 1000-watt PSU is exaggerated. On average a system with one Vega will max out around 500-600 watts. Then you'd end up in the 50% PSU efficiency range with a 1000-watter.
> 
> But it's not a MUST. Aslong as your decent powersupply can deliver that amount it will work without problems.
> 
> I run one vega 64 LC on an oc'd i7-3770 and the system pulls a max of 560 Watts when everything is loaded , still not enough to make my corsair HX650 buckle, A psu that can peak up to 800 Watts.
> 
> A 1050 Watt supply should be ample for your dual Vega FE edition. If in doubt, check what your system is actually pulling from the wallsocket with one of those cheap kill-a-watt meters.


If you managed to have nothing else running on your PC, it would draw 800W... just from the GPUs. The only case where I'd not recommend a stronger PSU is if you're not trying to run them at +50 power without an undervolt. Remember, he also has an HEDT processor, which probably differs from someone's low-power target setup.


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> AMD's recomendation of a 1000Watt psu is exagurated. On average a system with one vega will max out around 500-600 Watts. Then you'd end up in the 50% psu load range with a 1000-Watter, meaning you would have the highest efficiency % on your psu.
> 
> But it's not a MUST. Aslong as your decent powersupply can deliver that amount it will work without problems.
> 
> I run one vega 64 LC on an oc'd i7-3770 and the system pulls a max of 560 Watts when everything is loaded , still not enough to make my corsair HX650 buckle, A psu that can peak up to 800 Watts.
> 
> A 1050 Watt supply should be ample for your dual Vega FE edition. If in doubt, check what your system is actually pulling from the wallsocket with one of those cheap kill-a-watt meters.


First, that's not how PSU efficiency works...

Efficiency is the ratio between power into the PSU and power delivered to the system. A high-efficiency PSU has less disparity between power from the wall and power delivered to the system. For instance, below 600W my PSU is about 90% efficient (for every 100W drawn, 90W is used by the system and 10W is lost), but at 600W+ it increases to around 93-94% efficiency (obviously not stepped, it's a curve, but hey).

My single water-cooled Vega can happily pull north of 400W on its own, not including the rest of the system. System draw in The Division (high CPU and GPU loading) was 761W into the PSU according to my PSU's info.

That 1000W recommendation is genuine, as there's an efficiency peak usually around 80-85% load on the PSU. After that it starts to lose efficiency, hence one of the recommendations not to load PSUs to 100% of rated wattage.
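The relationship described above can be sanity-checked in a couple of lines. The 90% figure and the 540W load below are illustrative values taken from this post's example, not measurements of any particular PSU:

```python
# Wall draw vs. delivered power for a PSU at a given efficiency.
# efficiency = (power delivered to system) / (power drawn from wall)

def wall_draw(system_watts: float, efficiency: float) -> float:
    """Power pulled from the wall for a given DC system load."""
    return system_watts / efficiency

def psu_loss(system_watts: float, efficiency: float) -> float:
    """Watts dissipated as heat inside the PSU itself."""
    return wall_draw(system_watts, efficiency) - system_watts

# At ~90% efficiency, a 540W system load pulls ~600W from the wall,
# with ~60W lost in the PSU.
print(round(wall_draw(540, 0.90)))
print(round(psu_loss(540, 0.90)))
```

This is also why a kill-a-watt reading at the socket will always be higher than the sum of component power draws.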


----------



## dagget3450

Without digging for my post from a while back, I'm pretty sure I peaked around 1200W with 2x Vega FE; I think I was on a ryzen [email protected] then


----------



## webhito

Quote:


> Originally Posted by *dagget3450*
> 
> Without digging for my post a while back i am pretty sure i peaked 1200ish watts with 2x Vega FE i think i was on a ryzen [email protected] then


Did/do you have your gpu's overclocked?


----------



## Reikoji

PowerColor joins up with Vega 64 Black finally at $499. Gigabyte Vega 56 still at $409, but with a $20 rebate. Retailers are slowly putting an end to the gouging.


----------



## SpecChum

What relation does HBM voltage have to HBM stability? I know it's not the actual supply voltage (that's 1.35V?)

1GHz seems fine at 960mV but 1050MHz isn't, but my P7 is 970mV so I don't want to go above that. Is it fine to set the HBM voltage floor to 970 as well?

Also, I don't want to set the HBM voltage too high as I'm already hitting 85C on that in the Hard Reset menu (which is fast becoming my stability test of choice, it's brutal!)


----------



## Particle

The PowerColor Vega 64 I ordered came in yesterday, and I installed it. It seems to run well, and I don't get what people were talking about with it being loud.


----------



## SpecChum

Quote:


> Originally Posted by *Particle*
> 
> The PowerColor Vega 64 I ordered came in yesterday, and I installed it. It seems to run well, and I don't get what people were talking about with it being loud.


I've got the PowerColor one too









I can tolerate up to about 3000RPM for normal use. For testing I'm not too bothered about 4900RPM.

It's actually not as bad as I thought it would be, noise wise.


----------



## Particle

Quote:


> Originally Posted by *SpecChum*
> 
> I've got the PowerColor one too
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I can tolerate up to about 3000RPM for normal use. For testing I'm not too bothered about 4900RPM.
> 
> It's actually not as bad as I thought it would be, noise wise.


I'd have to agree. In my case, I'm in a noisy room already with server hardware, so it takes quite a bit to even register. I do need to move it up a slot though as I think it's inducing a buzz in my sound card under load. I can hear it in my headphones on the right side but just barely.


----------



## fursko

Quote:


> Originally Posted by *Reikoji*
> 
> that happens to me after closing destiny 2. Turning off your monitor and turning it back on fixes it for me.


It wasn't happening before. My CHG70 arrived yesterday, along with the 17.11.1 driver. Dunno which one caused this, the driver or the monitor?


----------



## gupsterg

Worst case for me currently is Bionic on CPU/GPU. This results in MAX ~600W from wall, SP 4K MAX 550W, 3DM FS Demo MAX 500W.

TR 1950X (stock)
F4-3200C14D-16GTZ (stock)
ASUS ZE

RX VEGA 64 (Factory VBIOS, DPM6: 1557MHz/975mV, DPM7: 1652MHz/1125mV, HBM: 1100MHz/975mV, PowerLimit: 65%, testing shows 40% is ample though to hold same perf/clocks)
1x SATA SSD, 2x SATA HDD

6x Arctic Cooling F12 120mm PWM
1x Be Quiet Silent Wings 3 140mm PWM
EK D5 PWM

These figures are inc screen, ASUS MG279Q / Keyb / Mouse / Headphones.

I believe the VRM on VEGA is sized not just for what the GPU could use but to cope with low/high peaks of power draw. For example, in Valley as scene changes occur and the screen goes blank I have observed drops to ~380W, then when rendering it can peak as high as ~550W. These large swings must create a lot of work for the VRM on transients, etc. I would believe a PSU experiences similar stress from the lows/highs.

All in all it seems a PSU with decent components is a must.
Quote:


> Originally Posted by *SpecChum*
> 
> What relation is HBM voltage to HBM stability? I know it's not the actual supply voltage (that's 1.35v?)
> 
> 1Ghz seems fine at 960mv but 1050Mhz isn't, but my p7 is 970mv so I don't want to go above that. Is it fine to to the HBM voltage floor to 970 as well?
> 
> Also, I don't want to set the HBM voltage too high as I'm already hitting 85C on that in the Hard Reset Menu (which is fast becoming my stability test of choice, it's brutal!)


Not much from what I saw; think of it as a GPU floor voltage limit for ACG/AVFS. Use MSI AB to graph GPU monitoring, observe the lows/peaks of voltage and you will see it. Even when we set a voltage, it seems to me ACG/AVFS still does what it wants. For example, if the "profiling" by ACG/AVFS determines that at lower clocks it needs 0.975V and your DPM6/HBM is set above that, you won't see it; but when it is set lower you may see it if it has determined it can use that, otherwise it remains above it.

Likewise for DPM 7, I have 1125mV. Generally my GPU for ~1600MHz uses 1.062V-1.075V, it may swing very occasionally above, but not continuously. If I lower DPM 7 to 1100mV I still see ~1600MHz uses 1.062V-1.075V but now the peaks don't ever go over 1.100V (when they occur rarely).

I am finding that allowing a higher DPM 7 isn't increasing power usage, as ACG/AVFS is still sticking to 1.062V-1.075V for ~1600MHz when 1125mV is in DPM7. But what this extra +25mV is doing is improving stability: when ACG momentarily creeps slightly above ~1600MHz, the extra juice stabilizes the GPU.

IMO all this is happening way faster than what monitoring is showing us. VEGA really seems a huge step forward from AMD on how they do boost/voltage control. From Hawaii to Fiji it didn't seem much of an advancement in this aspect. Polaris again seems incremental. Here's looking forward to NAVI









----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> Not much from what I saw, think of it as GPU floor voltage limit for ACG/AVFS. Use MSI AB to graph GPU monitoring, observe lows/peaks of voltage and you will see it. Even when we set voltage, to me it seems ACG/AVFS still does what it wants. For example say if the "profiling" by ACG/AVFS determines that at lower clocks it needs 0.975mV and you DPM6/HBM set above that you won't see it, but when it is set lower you may see it if it has determined it can use that, otherwise it remains above it.
> 
> Likewise for DPM 7, I have 1125mV. Generally my GPU for ~1600MHz uses 1.062V-1.075V, it may swing very occasionally above, but not continuously. If I lower DPM 7 to 1100mV I still see ~1600MHz uses 1.062V-1.075V but now the peaks don't ever go over 1.100V (when they occur rarely).
> 
> I am finding allowing a higher DPM 7 isn't increasing power usage, as ACG/AVFS is still sticking to 1.062V-1.075V for ~1600MHz when 1125mV is in DPM7. But what this extra +25mV is doing is improving stability, for when momentarily ACG may creep slightly above ~1600MHz the extra juice stabilize GPU.
> 
> IMO all this is happening way faster than what monitoring is showing us. VEGA really seems a real huge step forward from AMD on how they do boost/voltage control. From Fiji to Hawaii it didn't seem much of an advancement in this aspect. Polaris again seems incremental. Here's looking forward to NAVI
> 
> 
> 
> 
> 
> 
> 
> .


OK, cool.

Any tips for HBM stability? Or just keep the temps as low as possible?

If that's the case I'll stick to 1GHz, maybe even 950MHz, as I'm hitting 85C already - the core barely reaches 75C.


----------



## gupsterg

NP








The reference blower isn't ideal. So that may limit you, to what extent no idea.

I used my card for ~30min with it, as I saw the hotspot reach a MAX of 105C at "out of box" setup/driver defaults in SP 4K. I then decided I'd put the waterblock on ASAP. Then it was 55C; core and HBM also virtually halved in temps.

The PowerPlay has temp limits for GPU/hotspot/HBM: 105C is hotspot, 95C is HBM IIRC. Be also aware HBM could potentially drop performance before that temp; as I read in the JEDEC HBM1 PDF, it can have trip points for lowering performance before the max temp. All this is already posted in this thread and the VEGA bios thread
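The trip-point idea above (performance backed off before the hard limit is reached) can be pictured with a toy sketch. Only the 95C HBM limit comes from the post; the 85C trip threshold and the 20% back-off are made-up values purely for illustration, not actual JEDEC or PowerPlay numbers:

```python
# Toy model: the memory controller caps the HBM clock as temperature
# crosses trip points, long before the hard thermal limit.
# Thresholds and back-off factor here are illustrative only.

def hbm_clock_cap(temp_c: float, full_clock: int = 945) -> int:
    """Return the allowed HBM clock (MHz) at a given temperature."""
    if temp_c >= 95:              # hard limit from the post: hard throttle
        return 0
    if temp_c >= 85:              # hypothetical trip point: back off 20%
        return int(full_clock * 0.8)
    return full_clock             # below all trip points: full speed

print(hbm_clock_cap(70))   # full 945MHz
print(hbm_clock_cap(88))   # reduced, before the 95C limit is reached
```

This matches the behaviour SpecChum reports a few posts later: FPS drops at 85C on the HBM while the set clock still reads the same.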









----------



## Kyozon

Quote:


> Originally Posted by *gupsterg*
> 
> NP
> 
> 
> 
> 
> 
> 
> 
> .
> 
> The reference blower isn't ideal. So that may limit you, to what extent no idea.
> 
> I used my card for ~30min with it. As I saw hotspot reach MAX 105C at "out of box" setup/driver defaults in SP 4K. I then decided I'll but the waterblock on ASAP. Then it was 55C, core and HBM also virtually halved in temps.
> 
> The PowerPlay has temp limits for GPU/hotspot/HBM, 105C is hotspot, 95 is HBM IIRC. Be also aware HBM could potentially drop performance before that temp, as I had read in JEDEC HBM1 PDF it can have trip points for lowering performance before max temp. All this is already posted in this thread and the VEGA bios thread
> 
> 
> 
> 
> 
> 
> 
> .


Have you been able to push 1700Mhz+ without doing any Mods, just as it is out of the box?

I have the FE LC variant. I have increased the Power Limit to 50% but it crashes beyond 1675MHz at 1.2V on P7. What is interesting is that WattMan reported only 45C under load.


----------



## SpecChum

OK, this is interesting (although I suspect already known).

I'm just running my fave stress test and made it throttle to see what happens, and, oddly, whilst the core frequency goes down, as expected, the vCore goes up! By about 0.5v.

What's that all about? Seems counter intuitive to cool yourself down.

Could it be that during throttling the set Wattman voltages are ignored and it uses its own? As in it drops from 1.2v to 1v, even though I was under 1v already?


----------



## gupsterg

Quote:


> Originally Posted by *Kyozon*
> 
> Have you been able to push 1700Mhz+ without doing any Mods, just as it is out of the box?
> 
> I have the FE LC variant. I have increased the Power Limit to 50% but it crashes beyond 1675Mhz at 1.2mV on the P7. What is interesting is that WattMan reported only 45C under Load.


Nope.

If I leave it at "out of box" driver defaults with WC, it's approx ~1500MHz in Valley, 1440p ultra IIRC but no AA. If I only change PowerLimit to 65%, it's ~1575MHz. Then the OC profile I used last night:-

DPM6: 1557MHz 975mV
DPM7: 1652MHz 1125mV
HBM: 1100MHz 975mV

is ~1600MHz.

I like Valley as a test, as it does more swings up/down than SP 4K and 3DM.

Here are the HML files, have MSI AB installed, double click and you can see graphs.

Valley_V64_HML.zip 57k .zip file


This same profile, depending on the level of load on the GPU, can reach a peak of ~1710MHz in compute, but averages ~1675MHz. In 3D loads there's no chance of the same for me; best is ~1625MHz IIRC.


----------



## SpecChum

Ah, OK, so as soon as I hit 85C on the HBM my fps goes down by 4 or so. That'll be the timings change then. OK.

Stays at 1Ghz tho.


----------



## Reikoji

Quote:


> Originally Posted by *fursko*
> 
> It wasnt happening before. My chg70 arrived yesterday and 17.11.1 driver. Dunno which one cause this driver or monitor ?


I'd say a combination of software and monitor. For me it only happens with Destiny 2 after playing for an extended period of time. It has never happened with any other game.


----------



## Reikoji

I finally changed my avatar


----------



## AngryLobster

I dunno if it's normal or if this is just how the LC responds due to lower temps, but my card does:

DPM6: 925mv
DPM7: 975mv
HBM: 950mv

After manually setting HBM to 1100 it results in a sustained 1580/1100 with voltage sitting around 0.931v most of the time. The GPU only power draw is around 190-196w.

I wanna try going even lower voltage if possible because I'm seeing like 1.5FPS improvement per 100mhz.

I'm for the most part satisfied with acoustics as well.


----------



## SpecChum

Does VSync still introduce input lag even with FreeSync?

Enhanced Sync doesn't work, as while that stops tearing the card still outputs as many frames as it can.

Does FRTC have an impact on Freesync?

I'm just trying to use only the power I need to limit thermals, so I want to limit games to my 75Hz max refresh rate, but I'm getting conflicting info on Google.


----------



## tarot

I tried the XFX LC bios and even at balanced it crashes as it tries to boost to 1800. Tried lower volts etc. etc., but the results are exactly the same as my normal bios, so back it went.
With the settings I have I get 99.8 and 99.1 in the Fire Strike stress and Time Spy Extreme stress at under or just over 300 watts, and can play all day long without any hiccups, so my card is I guess middle of the road.

Put the results of the stress test in my little review and I'll add a few more once I can figure out OBS







big files, little files, ugly, nice... it's a nightmare









but recording at 4K with 4K output gave me a splendid file of 4 gig (that is the only one that affected the frame rate in UT3, dropping it to around 50fps average; every other one had no noticeable impact)... got to love a Threadripper


----------



## Soggysilicon

Quote:


> Originally Posted by *Chaoz*
> 
> Haven't updated in a while now. Don't even have the first creators update either. Not risking it, tbh. Everythings working fine as is.


If I had it to do over, I would have never bothered with the creators crap to begin with...








Quote:


> Originally Posted by *spyshagg*
> 
> Well I asked because most people are running hbm above 1100mhz with very little voltage, when I need 1100mv to do it.


I have in the past run 1105 with the voltage in the mid-800s (mV), but as the HBM voltage seems to set a voltage floor for the core frequency, bumping it up has improved card stability, as I went for better sustained averages rather than peak frequencies... I may look at it again, but I wasn't all that satisfied with 1105+ in benchies... and the game stability was very meh.
Quote:


> Originally Posted by *SpecChum*
> 
> I need to work on my HBM too.
> 
> I have noticed that if I set 1000Mhz at 950mV it sometimes jumps back to 800Mhz but 975mV seems OK. I had a crash at 1050Mhz on the HBM, but I was also running a high core so it could have been either; I need to run default core and just OC the HBM really.
> 
> Problem for me is the HBM seems to heat up more than the core; I'm on the stock air cooler tho.


The 800MHz HBM lockout seems induced by a recent driver; switch that up and try again.


----------



## SpecChum

Quote:


> Originally Posted by *Soggysilicon*
> 
> The 800 mhz HBM lockout seems induced by a recent driver, switch that up and try again.


It didn't lock to 800MHz, but if you ran a game or something and checked Afterburner, the graph would go _____v______v_____

Where the v is an 800MHz dip for a very short time. I've found that a light bump to 960mV stops this


----------



## Particle

There is no hardware acceleration support in Linux yet for Vega. It's scheduled for kernel 4.15 which is the branch after the current development branch. 4.13 is stable and 4.14 is going through RCs right now. Software rendering via Mesa is working so I do have video output to my monitor at least. I tried playing CS:GO for kicks and my AMD 1950X was managing about 5-8 fps at 1280x720 with minimum quality settings except for texture filtering being set to trilinear instead of bilinear. It was funny but unplayable. The card works great in Windows 10 though when I play Battlegrounds.


----------



## Ernwild108

Hello guys, new owner here (not really, I've been playing with Vega for about a month).
It is on water with a max temp of 41C.
My FS normal settings for a 27k score: 1730MHz core with 1.2V + HBM at 1190MHz (HBM floor voltage 1000mV). HBM seems to be unlocked after 17.10.2; before, I could only do 1105MHz, anything more and I had an insta-crash. Lower temp (under 45C) seems to benefit HBM OC. At 60C I could only do around 1170 without artifacts.
The core is temp sensitive too. At 60C I could only do 1670-1680MHz in the Superposition bench with the LC bios at 1.25V; after adding Vega to the loop I can do 1695-1705MHz on 1.2V with the same settings (looks like 10C = 10-15MHz more, dunno if it's from the current limit cuz I applied the PP mod for 400A).

Here's my daily settings which gives me 1630~ clock and 26k score in FS.


And some score from SP bench on full OC


----------



## Kyozon

Quote:


> Originally Posted by *Ernwild108*
> 
> Hello guys new owner here(not rly I was playing with Vega for about month).
> It is on water with temp max 41c.
> My FS normal settings for 27k score 1730 MHz core with 1.2V + HBM 1190 MHz (HBM floor volt 1000 mV). HBM seems to be unlocked after 17.10.2 before i only could do 1105 MHz anything more and I had insta crash. Lower temp (under 45) seems to benefit for HBM OC. With 60c I could only do around 1170 without artefacts.
> Core is temp sensitive too. With 60c I could only do 1670-1680 MHz in superpos bench with LC bios 1.25V after adding Vega to loop i can do 1695-1705 MHz on 1.2 V with same settings(looks like 10c = 10-15MHz more, dunno if it's from current limit cuz i applied pp mod for 400A).
> 
> Here's my daily settings which gives me 1630~ clock and 26k score in FS.
> 
> 
> And some score from SP bench on full OC


Excellent scores. I am having trouble overclocking my VEGA FE to the same extent, as I believe it requires more than just a 50% Power Limit. And I am not sure how I can give it 100% and beyond.

As for the HBM2, I remember reading somewhere that this driver must have increased the SOC voltages, allowing higher HBM clocks? But I'm still not sure if this is correct.


----------



## Ernwild108

You can do the PP mod if you want a power limit higher than 50%. On my card the hard limit is somewhere around ~1710MHz at 1.21V; anything higher doesn't make any difference.
My max OC is doing around 330ish watts.
As for the SoC, I discovered in HWiNFO that it stays at 600MHz idle and 1199MHz under load; any idea if we can overclock it and how?
Here you have a 150% power limit for Vega FE.

VEGA_FE_Soft_PP.zip 1k .zip file


----------



## Kyozon

Quote:


> Originally Posted by *Ernwild108*
> 
> You can do PP mod if u want to have power limit higher than 50%. On my card hard limit is something around 1710MHz~ on 1.21V, anything higher don't make any difference.
> My max OC is doing around 330ish watt.
> As for SoC i discovered in HWInfo that it stays at 600Mhz idle and 1199 Mhz when load, any idea if we can overclock it and how?
> Here u have 150% power limit for Vega FE.
> 
> VEGA_FE_Soft_PP.zip 1k .zip file


Thank you! Does that one also work for the FE Liquid Edition? Thanks.


----------



## Ernwild108

It should work, but with clocks from the normal version and 1.2V on P7.
Here you have 150% power limit, 400A, 1.25V on P7.

VEGA_FE_Soft_PP.zip 1k .zip file


----------



## Chaoz

Quote:


> Originally Posted by *Soggysilicon*
> 
> If I had it to do over, I would of never bothered with the creators crap to begin with...


My thoughts exactly. If it ain't broke ...


----------



## geriatricpollywog

Nice. I am also getting 27k graphics score in FS and 5300/5400 in Superposition 1080p extreme with watercooling. I don't think I can possibly bench higher though.

What does SOC voltage mean?


----------



## cplifj

Did AMD set the HBM2 to only 945MHz so it would match the 1080 Ti's bandwidth (484 GB/sec), or.....?

Kind of silly when it can do much more. Titan Xp gets 548 GB/sec, and that number gets surpassed easily by OC'ing the HBM2.
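The figures in this post fall out of the standard peak-bandwidth formula (clock × double-data-rate factor × bus width); 2048 bits is Vega 64's published HBM2 interface width across its two stacks. A quick sanity check:

```python
# Peak memory bandwidth: clock * transfers-per-clock * bytes-per-transfer.
# Vega 64: 2048-bit HBM2 bus, double data rate (2 transfers/clock).

def bandwidth_gbs(clock_mhz: float, bus_bits: int = 2048, ddr: int = 2) -> float:
    """Peak memory bandwidth in GB/s for a given memory clock."""
    return clock_mhz * 1e6 * ddr * (bus_bits / 8) / 1e9

print(round(bandwidth_gbs(945)))    # stock 945MHz -> ~484 GB/s
print(round(bandwidth_gbs(1100)))   # a 1100MHz OC -> ~563 GB/s, past Titan Xp's 548
```

So yes, at the ~1100MHz overclocks people report in this thread, the card's peak bandwidth passes the Titan Xp's 548 GB/s.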


----------



## geriatricpollywog

Quote:


> Originally Posted by *cplifj*
> 
> Did AMD set the HBM2 to only 945MHz so it would match the 1080Ti bandwidth (484 GB/sec), or.....?
> 
> Kind of silly when it can do much more. Titan Xp gets 548 GB/sec , seeing this number getting surpassed easily by oc'ing the hbm2.


What's the bandwidth on overclocked Titan Xp memory?


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> Did AMD set the HBM2 to only 945MHz so it would match the 1080Ti bandwidth (484 GB/sec), or.....?
> 
> Kind of silly when it can do much more. Titan Xp gets 548 GB/sec , seeing this number getting surpassed easily by oc'ing the hbm2.


945 is probably most stable at the temperatures the HBM is expected to be with stock cooling.


----------



## fursko

Is there any gtx 1080 and 1080 ti users who can compare benchmark results with my V64 LC ?


----------



## Chaoz

Quote:


> Originally Posted by *fursko*
> 
> Is there any gtx 1080 and 1080 ti users who can compare benchmark results with my V64 LC ?


It's probably easier and faster to ask this question in the 1080/1080ti thread.


----------



## fursko

Quote:


> Originally Posted by *Chaoz*
> 
> It's probably easier and faster to ask this question in the 1080/1080ti thread.


You're right. http://www.overclock.net/t/1641421/is-there-any-gtx-1080-and-1080-ti-users-who-can-compare-benchmark-results-with-my-v64-lc lol


----------



## Chaoz

Quote:


> Originally Posted by *fursko*
> 
> You right. http://www.overclock.net/t/1641421/is-there-any-gtx-1080-and-1080-ti-users-who-can-compare-benchmark-results-with-my-v64-lc lol


Not what I meant, but okay. I meant in the official 1080 and 1080ti threads.

http://www.overclock.net/forum/newestpost/1601288
http://www.overclock.net/forum/newestpost/1624521


----------



## bill1971

Hot spot on my Vega 56 is 88 Celsius, is that OK?


----------



## PontiacGTX

Quote:


> Originally Posted by *cplifj*
> 
> Did AMD set the HBM2 to only 945MHz so it would match the 1080Ti bandwidth (484 GB/sec), or.....?
> 
> Kind of silly when it can do much more. Titan Xp gets 548 GB/sec , seeing this number getting surpassed easily by oc'ing the hbm2.


Probably they had to do binning and the best spot to leave it was 945MHz. Also, running a lower HBM clock speed would reduce GPU performance, since it is ROP-bound (pixel fillrate).


----------



## Reikoji

So... the Vega LC card in its present state can be cooled even better if you swap out the crap fan on there for an overkill hyper fan, particularly:

https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1

But not even this thing is enough, and it has to operate at max speed to keep the GPU from surpassing 45C. It does a lot better than the stock fan however. It also affects the reported hotspot temperature.


----------



## SpecChum

Just had a system reboot with Minecraft sat on pause O_O

Clocks were like 100Mhz.

There's something a bit fishy about the low clocks


----------



## AmateurExpert

Quote:


> Originally Posted by *Reikoji*
> 
> So... the vega LC card in its present state can be cooled even better if you swap out the crap fan on there with an overkill hyper fan, particularly:
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1
> 
> But, not even this thing is enough and has to operate at max speed to keep the GPU from surpassing 45c. Does a lot better than the stock fan however. It also affects the reported hotspot temperature.


I expected a 120mm radiator (even medium/thick ones) wouldn't be enough for a >240W thermal load even with high airflow. Even though the LC edition looks great, I chose to get a regular ref edition and put an EK block on it; my single 420mm slim rad has been plenty for an OC V64 with an OC R7 too (>500W thermal load judging from my wall power draw), as my fans can dissipate this at 800-900rpm and keep the GPU temp at 42-44°C under full load. I guess AMD had to stick with a 120mm rad for widest compatibility.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> So... the vega LC card in its present state can be cooled even better if you swap out the crap fan on there with an overkill hyper fan, particularly:
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1
> 
> But, not even this thing is enough and has to operate at max speed to keep the GPU from surpassing 45c. Does a lot better than the stock fan however. It also affects the reported hotspot temperature.


66.5dB! O_O


----------



## 113802

Quote:


> Originally Posted by *Reikoji*
> 
> So... the vega LC card in its present state can be cooled even better if you swap out the crap fan on there with an overkill hyper fan, particularly:
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1
> 
> But, not even this thing is enough and has to operate at max speed to keep the GPU from surpassing 45c. Does a lot better than the stock fan however. It also affects the reported hotspot temperature.


The stock fan isn't crap at all. It's a Gentle Typhoon. Of course you can use a 5500RPM fan to keep it cooler. I keep the fan at 2185 RPM and it stays at 52C with an ambient of 65F


----------



## Reikoji

Quote:


> Originally Posted by *WannaBeOCer*
> 
> The stock fan isn't crap at all. It's a Gentle Typhoon. Of course you can use a 5500RPM fan to keep it cooler. I keep the fan at 2185 RPM and it stays at 52C with an ambient of 65F


It's going to depend on what you're playing and whether you're at +50 power. In some games you can find your GPU temp has gone above 60C, with HBM temp closer to 70C, with the stock fan at max speed. My ambient air temp is also 65F. I only just recently turned up the AC power, tho I don't think my AC can get down to 45F..

And it's crap imo :3 Crap for the radiator it is attached to, anyway. In order for it to not be crap, there would need to be 3 of them on a 360mm radiator, if using the same quality of radiator. It just doesn't have the cooling capacity for what Vega puts out. That's why I said 'in its present state'. It's better than air, but a custom loop with the correct cooling capacity would be even better.
Quote:


> Originally Posted by *SpecChum*
> 
> 66.5dB! O_O


Yep :3 I have 2 of them, and only just recently hooked one up to see if it can lower my Vega temps. Unrelated: if I hold it over my VRM, that temp drops by about 6C or more :3.


----------



## AmateurExpert

Quote:


> Originally Posted by *Particle*
> 
> There is no hardware acceleration support in Linux yet for Vega. It's scheduled for kernel 4.15 which is the branch after the current development branch. 4.13 is stable and 4.14 is going through RCs right now. Software rendering via Mesa is working so I do have video output to my monitor at least. I tried playing CS:GO for kicks and my AMD 1950X was managing about 5-8 fps at 1280x720 with minimum quality settings except for texture filtering being set to trilinear instead of bilinear. It was funny but unplayable. The card works great in Windows 10 though when I play Battlegrounds.


I'm looking forward to 4.15 too. If you can live with kernel 4.10 until then, you can use the hybrid driver (amdgpu-pro); it works but sacrifices game compatibility and performance compared to the open source stack.


----------



## Reikoji

meh


----------



## Kyozon

What Voltages are you guys utilizing for 1200Mhz on HBM2? Thanks.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Kyozon*
> 
> What Voltages are you guys utilizing for 1200Mhz on HBM2? Thanks.


Can't bench past 1180mhz on water.


----------



## Kyozon

Quote:


> Originally Posted by *0451*
> 
> Can't bench past 1180mhz on water.


Nice, what are the Voltages that you utilize for 1180?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Kyozon*
> 
> Nice, what are the Voltages that you utilize for 1180?


Either 1180 or 1190mv. Don't recall.


----------



## Reikoji

I believe that even if you change the HBM voltage setting, you aren't actually changing the HBM voltage. It stays 1.256V for Vega 56 and 1.356V for Vega 64.


----------



## geriatricpollywog

Quote:


> Originally Posted by *0451*
> 
> Either 1180 or 1190mv. Don't recall.


Actually, never mind, this was my core voltage. HBM was set to 950, but the actual voltage was above 1.1V and fluctuating.


----------



## Kyozon

Quote:


> Originally Posted by *Reikoji*
> 
> I believe even if you change the HBM voltage setting, you aren't changing the HBM voltage. It stays 1.256 for vega 56 and 1.356 for vega 64.


It seems to be the case. I found it strange that, 2 months ago, I couldn't set it even 1MHz past 1100 and my system would just crash on my Vega FE.

Now I am able to push 1200MHz, but it artifacts. I believe recent drivers must have increased either the SOC or HBM2 voltages by default. Changing from 1100mV to 1200mV in WattMan or OverdriveNTool, as you have said, seems to make no difference.


----------



## Reikoji

Quote:


> Originally Posted by *Kyozon*
> 
> It seems to be the case. I found strange that, 2 months ago, i couldn't set 1Mhz past 1100 and my System would just crash on VEGA FE.
> 
> Now i am able to push 1200Mhz, but it artifacts. I believe that recent Drivers must have increased either SOC or HBM2 Voltages by default. Changing from 1100Mv to 1200Mv on WattMan or OverdriveNTool, as you have said, seems to not make a difference.


Yeah, that's what happened. SOC speed goes up to 1200 since, I think, 17.10.2. That allows higher HBM speeds without instantly crashing, but it may not be visually stable, especially if you can't keep the HBM temperature acceptably low.


----------



## 113802

At the stock HBM voltage of 950mV, the max I can bench at is 1105MHz, and 1075MHz is 100% game stable. With 1000mV I can bench at 1170MHz, and I run 1130MHz now.


----------



## Tgrove

Sapphire Vega 64 LC

P7 1682MHz 1100mV (doesn't surpass 1050mV in game/benching)
HBM 1145MHz 950mV
Thermal target 50°C

HBM can go over 1150MHz but artifacts; before the update it ran at 1100MHz.

In a year we will be on the heels of the 1080 Ti, especially with Vulkan.


----------



## SpecChum

I'm really confused as to what the memory voltage is - it's not the base vCore, as I can't bench at 1100MHz with it on 960. So, on a whim, I thought I'd try it at 1V and it passed at 1100MHz, BUT my core voltage, and the wattage used, remained about the same, 950mV or so; it didn't increase at all. The only thing that seems to happen is my HBM gets more stable.

EDIT: My guess is it's the SOC voltage, like the one on Ryzen which contains the IMC.


----------



## SpecChum

OK, not exactly scientific, but 970mV on mem voltage is OK at 1100MHz - so in Superposition:
950mV 1000 fails (no crash, but it keeps dropping to 800MHz periodically)
960mV 1100 fails (driver crash)
970mV 1100 passes
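The three results above amount to a manual linear search for the minimum stable floor voltage. A minimal sketch of that procedure, where `is_stable` is a hypothetical stand-in for actually running a benchmark pass at a given setting:

```python
# Sketch of the floor-voltage search done by hand above. is_stable() is
# a hypothetical callback standing in for a benchmark run (e.g. one
# Superposition pass) at a given "HBM" (floor) voltage.

def find_min_stable_mv(is_stable, start_mv=950, limit_mv=1050, step_mv=10):
    """Step the floor voltage up until the benchmark passes."""
    mv = start_mv
    while mv <= limit_mv:
        if is_stable(mv):
            return mv      # first passing voltage
        mv += step_mv
    return None            # nothing in the tested range was stable

# With the results above (950 and 960 fail, 970 passes) this lands on 970:
min_mv = find_min_stable_mv(lambda mv: mv >= 970)
```

A binary search over the same range would cut the number of benchmark runs, at the cost of assuming stability is monotonic in voltage.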


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> OK, not exactly scientific but 970mV on mem voltage is OK at 1100Mhz - so in Superposition:
> 950mV 1000 fails (no crash but it keeps dropping to 800mhz periodically)
> 960mV 1100 fails (driver crash)
> 970mV 1100 passes


That's the core floor voltage you are changing. It's labelled HBM voltage, but it's not that at all. The HBM voltage is locked (as Reikoji mentioned earlier) at 1.256V for the V56 and 1.356V for the V64/LC BIOSes.

What you are finding with those results is the minimum core voltage setting your GPU is happy with. Also, locking to P3 for HBM helps prevent the 800MHz drop-down from occurring, though with one of the recent drivers my card would sometimes just discard all changes, drop to stupid levels, and pull huge power for no reason. That doesn't seem to be happening on 17.11.1 for me, though.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> That's the Core floor voltage you are changing. It's labelled HBM Voltage but it's not that at all. HBM voltage is locked (as Rekoji mentioned earlier) at 1.256 for V56 and 1.356 for V64/LC BIOS's.
> 
> What you are finding with those results are the minimum core voltage settings your GPU is happy with. Also locking to P3 for HBM helps prevent the 800MHz drop down occurring, though with one of the recent drivers my card would sometimes just discard all changes and drop to stupid levels and pull huge power for no reason. Doesn't seem to be happening on 17.11.1 for me though.


That's what I thought it was, yes, but when I set a 1v "floor" I still get 0.950v on the core - that's lower than the floor.

The 0.950v is constant regardless of whatever I set vMem to.


----------



## madmanmarz

Has anybody asked AMD or an AMD rep exactly what the hotspot temp is?


----------



## diabetes

Quote:


> Originally Posted by *madmanmarz*
> 
> Has anybody asked AMD or an AMD rep exactly what the hotspot temp is?


Steve from Gamers Nexus is trying to find that out. He asked AMD but got no answer. I asked AMDMatt on the official AMD forum and he refused to give an answer. Now Steve is asking Samsung if they know anything. I did some testing on my own, and it is likely that hotspot is either the interposer temp or the temp of the PCB under the package. All I know is that the hotspot temp rises linearly (not exponentially or polynomially) with GPU power draw, which makes me think it is caused by the electrical resistance of the power leads in the PCB. The hotspot temp can be reduced by having a fan blow at the back of the card. Those are all the clues I have.
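The "rises linearly with power draw" observation above can be sanity-checked with an ordinary least-squares fit. A sketch using made-up (power W, hotspot °C) pairs purely to show the method - the numbers are hypothetical, not measurements from the thread:

```python
# Check whether hotspot temperature rises linearly with power draw by
# fitting T = a*P + b and inspecting the residuals. The (power W,
# hotspot degC) pairs below are invented for illustration only.

def linear_fit(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

samples = [(150, 75), (200, 85), (250, 95), (300, 105)]  # hypothetical
a, b = linear_fit(samples)
# If the relationship really is linear, residuals stay near zero:
residuals = [t - (a * p + b) for p, t in samples]
```

If the residuals grew systematically with power, the relationship would not be linear and the resistive-heating explanation would be weaker.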


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> That's what I thought it was, yes, but when I set a 1v "floor" I still get 0.950v on the core - that's lower than the floor.
> 
> The 0.950v is constant regardless of whatever I set vMem to.


Yeah, that's the vdroop; it's 50mV on the core. I think gupsterg mentioned this earlier in the thread.
Quote:


> Originally Posted by *madmanmarz*
> 
> Has anybody asked AMD or an AMD rep exactly what the hotspot temp is?


Yes, answer is still unknown.


----------



## SpecChum

Ah, ok, that makes sense.

Cheers.

I can run 1550MHz core and 1100MHz HBM and still be under 200W now.

Cool.

Literally.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> Ah, ok, that makes sense.
> 
> Cheers.
> 
> I can run 1550Mhz core and 1100mhz hbm and still be under 200w now.
> 
> Cool.
> 
> Literally.


Something I've noticed with my recent game testing is that even when running in P7 mode, unless the GPU is fully loaded its power draw drops dramatically. On average, with P7 locked, I was using around 80-100W on the core during a 6hr iRacing session, as it loads the CPU far more than the GPU; with my old 290 it would have been pegged at max power draw the entire time, pulling close to 3x that.

When I relaxed the P7 lock down to P5/6/7, it only reduced that by around 5-10W, load dependent. Unlocking down to P0, however, dropped the draw to around 30W and it ran in P1 nearly the entire time! Not a nice experience though, as the stutter when the scene got busy was real.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Something I've noticed with my recent game testing is that even when running P7 mode unless the GPU is fully loaded it's power draw drops dramatically. On average with P7 locked I was using around 80-100W core during a 6hr iRacing session as it loads CPU far more than GPU, with my old 290 it would have been pegged at max power draw the entire time pulling close to 3x that.
> 
> When I relaxed the P7 lock down to P5/6/7 it only reduced it by a around 5-10W load dependant. Unlocking down to P0 however dropped draw to around 30W and it ran in P1 nearly the entire time! Not a nice experience though as stutter when the scene got busy was real.


I'm just using Superposition and FireStrike for this.

It could fail in seconds in a game, lol.

I'll test more tomorrow.

Seriously though, if you need a stress test, please do run Hard Reset and leave it on the menu. I've honestly never seen my Vega heat up so quickly - it's just constant 100% usage.


----------



## gupsterg

@SpecChum

The HBM voltage is the lower limit of the GPU voltage. Reread post 3797.

Forget idle voltage.

You are setting the floor/ceiling voltage for when the GPU is under load, i.e. DPM 6/7.

What you found in post 3847 is that your GPU needs 970mV as the floor voltage under load to stabilise 1100MHz HBM.


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> @SpecChum
> 
> HBM Voltage is lower limit of GPU Voltage. Reread Post 3797.
> 
> Forget idle voltage.
> 
> You are setting floor/ceiling voltage for when GPU under load, ie DPM 6/7.
> 
> What you have done in post 3847 is found that your GPU to stabilise with 1100MHz HBM needs 970mV as floor voltage when under load.


0.970v under full load!


----------



## gupsterg

Get to the printer!

I know I won't, ~1.075V here.

Yeah, you deserve it! LOL


----------



## dagget3450

Darn, these newer FE drivers are doing way better than the launch ones. Also, the Pro version of Eyefinity is 100 times better than the Radeon one. Awesome stuff. I am happy now that I can game with some CF.


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> Darn so these newer FE drivers are doing way better than launch ones. Also the Pro version of eyefinity is 100 times better than the radeon one. awesome sutff. I am happy now that i can game with some CF


It is really interesting how much it has improved vs 17.6.

I also would like to ask you: what were your max clocks on the Vega FE? For some reason, going beyond 1715MHz has been a hard task for me, despite being liquid cooled. HBM2 is rock solid at 1180MHz in Time Spy and Firestrike.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> It is really interesting how much it has improved vs the 17.6
> 
> I also would like to ask you, what was your Max Clocks on VEGA FE? For some reason, beyond 1715Mhz for me has been a hard task, despite being Liquid Cooled. HBM2 is rock solid at 1180Mhz, on Time Spy and FireStrike.


I haven't really done any overclocking, just undervolting. Both mine are on air, so they don't do so well. I am thinking about EK waterblocks now, since these things seem to be coming alive.


----------



## Kyozon

Quote:


> Originally Posted by *dagget3450*
> 
> I haven't done any overclocking really just undervolting, both mine are air so they don't do so well.. i am thinking about ek waterblocks now since these things seems to be coming alive.


Indeed. I kind of feel a little bad, because I decided to go with the liquid version when I was planning to go for a custom loop at some point next year. But now, with this AIO card in the system, it won't be possible.


----------



## gupsterg

Quote:


> Originally Posted by *Kyozon*
> 
> Indeed. I kind feel a little bad because i decided to go with the Liquid Version, where i was planning to go for a Custom Loop at some point next year. But now with this AIO Card on the System it won't be possible.


I luv'd the Fury X with its AIO. I bin'd 8 cards and kept the best. I had it from March '16 and absolutely caned it for hours of use. One of my concerns was what I was gonna do if the AIO failed, especially once it had become EOL and GPU blocks were nonexistent in retail channels at that time. I didn't fancy losing a nice card in RMA if the cooling failed.

So this time I went WC right at purchase.


----------



## Kyozon

Quote:


> Originally Posted by *gupsterg*
> 
> I luv'd Fury X with AIO. I bin'd 8 cards and kept best. I had it from March 16. I absolutely caned it for hours of use. One of my concerns was if the AIO failed what was I gonna do. Especially once it had become EOL and GPU blocks were non existent at that time in retail channels. I didn't fancy losing a nice card in RMA if cooling failed.
> 
> So this time I went WC right at purchase.


Yes, that's my main concern. I never owned an AIO GPU before; this is the first time. It seems to be of premium quality, but the cooling performance can't match a custom loop.

I might consider getting another air card if I truly want this custom loop to happen. Knowing that I was already planning to do it, this was a terrible mistake on my part. Regardless, it is a very nice product and I am very happy so far. But the air version gives more freedom.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Kyozon*
> 
> Yes, that's my main concern. I never owned an AIO GPU before, this is the first time. It seems to be so premium quality. But the Cooling Performance can't match a Custom Loop.
> 
> I might consider getting another Air Card if i truly want this Custom Loop to happen. But knowing that i was already planning to do it, this was a terrible mistake on my behalf. But regardless, it is a very nice Product, i am very happy so far. But the Air version gives more freedom.


No, you can still sell/trade your AIO for a hundred bucks.


----------



## LordDain

I was able to achieve close to 5000 points in Superposition 1080p Extreme with my RX Vega 56.

I managed this by flashing my Vega 56 with both the air and LC BIOS versions of the 64.

However, after the first reboot of my PC or a driver crash, I lose around 8-10% of this performance and am unable to reach this score again. I'll be stuck around 4600 points.

What's annoying the hell out of me is that after every reflash of the card's BIOS or driver reinstall with DDU, I am able to push the score towards 5k points again with 1.1V core voltage!

What the hell is going on? It's almost like something is tweaking the card's settings somewhere deep in the registry to limit its performance.


----------



## geriatricpollywog

Quote:


> Originally Posted by *LordDain*
> 
> I was able to achieve close to 5000 points in superposition 1080p extreme with my RX Vega 56.
> 
> I managed this by flashing my Vega 56 with both the air and lc BIOS versions of the 64.
> 
> However after the first reboot of my PC ór driver crash I am losing around 8-10% of this performance and after this I am unable to reach this score. I'll be stuck around 4600 points.
> 
> What's annoying the hell out of me is that after every reflash of the cards BIOS ór driver reinstall with DDU I am able to push score towards 5k points again with 1,1V core voltage!
> 
> What the hell is going on. It's almost like something is tweaking the cards settings somewhere deep in the registry as to limit it's performance.


Download this: https://www.monitortests.com/forum/Thread-Custom-Resolution-Utility-CRU

After a crash, run Restart64, then turn your monitor on/off and reset your settings and reapply your overclocks. Not sure if that's the right order or if the order matters.


----------



## AngryLobster

That's really impressive for a 56. My LC scores just under 5100 in the 1080p Extreme test.


----------



## LordDain

Quote:


> Originally Posted by *0451*
> 
> Download this: https://www.monitortests.com/forum/Thread-Custom-Resolution-Utility-CRU
> 
> After a crash, run Restart64, then turn your monitor on/off and reset your settings and reapply your overclocks. Not sure if that's the right order or if the order matters.


I can confirm that this works! Tyvm! I don't have a FreeSync monitor at the moment, so I didn't turn my monitor off and on. Sad that a tool is needed to be able to do that, though...
Quote:


> Originally Posted by *AngryLobster*
> 
> Thats really impressive for a 56. My LC scores just under 5100 in the 1080P extreme test.


Obviously it's only possible due to swapping to a custom water cooling loop and probably having good luck in the chip lottery. As soon as I start bumping the voltage over 1100mV, I do (still) see clock increases, but at 1730MHz I am actually losing fps. For my card, the sweet spot seems to be around 1080-1100mV, and I can keep the clock settings from the 'Turbo' profile. It's strange that the CU difference doesn't translate into much performance difference.

Did you also try the Restart64 trick on your LC?


----------



## geriatricpollywog

Quote:


> Originally Posted by *LordDain*
> 
> I can confirm that this works! Tyvm! I don't have a freesync monitor at this moment so I didn't turn my monitor off & on. Sad that a tool is needed to be able to do that tho..
> Obviously it's only possible due to swapping to custom water cooling loop and probably having a good chip lottery. As soon as I start bumping up voltage over 1100mv though I do (still) see clock increases but at 1730mhz I am actually losing fps. for my card the sweet spot seems to be around 1080-1100mv and I can keep the clock settings from the 'Turbo' profile. Its strange that CU difference doesn't translate into much performance difference.
> 
> Did you also try the restart64 trick on your LC?




It's possible to hit over 5100 even with a potato chip like mine. But yes, water helps.


----------



## AmateurExpert

Quote:


> Originally Posted by *LordDain*
> 
> However after the first reboot of my PC ór driver crash I am losing around 8-10% of this performance and after this I am unable to reach this score. I'll be stuck around 4600 points.
> 
> What's annoying the hell out of me is that after every reflash of the cards BIOS ór driver reinstall with DDU I am able to push score towards 5k points again with 1,1V core voltage!
> 
> What the hell is going on. It's almost like something is tweaking the cards settings somewhere deep in the registry as to limit it's performance.


I have seen a similar loss of performance on 17.11.1 (but only rarely); I have to disable and then re-enable HBCC, then reset and reapply my overclocks in Global Wattman.


----------



## cephelix

Finally received my Vega 56! Can't believe how excited I was to see the package when I got home. Fast delivery from Newegg. Everything neatly packed. Now I get to play around with it and see how it compares to my trusty ol' R9 290.



With that, can I now join the club?

Still waiting on my FC WB from EK. Till then I'll just muck around with undervolting the card and seeing how it goes. After waiting 4 years I can finally have my whole system under water.


----------



## Mandarb

I was originally planning to see how the pricing on AIB cards would pan out, but with them still missing I caved and decided to put a Morpheus II on the card.

It should be on the way soon, and I will try to improve on the VRM heatsinks compared to most other people who did the build. Or maybe I'll botch it... who knows. ^^

On a side note, I'll be streaming the process. (twitch.tv/therconjair if anyone's interested)


----------



## poisson21

Working with the PP table, I put my clock and HBM OC setting limits directly in it, in case of a driver reset.

Does anyone know if we can also put in it the power limit we want to be set after a reset (e.g. 150% in my case)?

(I don't know if I am being really clear; English is not my primary language.)


----------



## VicsPC

So I tried messing with the HBM memory in Wattman, just because, and to my surprise, as soon as I set it to custom and 1050MHz (without changing voltages or anything), it ended up getting stuck at 800MHz in Firestrike. I may have to give OverdriveNTool a go to see what happens.

I'm on water with 240/360 in push/pull, so HBM temps are a non-issue. They usually hover around 40°C.


----------



## elox

Try setting the HBM voltage to 975. I had the same issue with the memory stuck at 800MHz. If that does not help, change the memory speed to 1045MHz, for example, and try again.
Another option would be using OverdriveNTool and locking the memory to the P3 state.


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> Try to set HBM Voltage to 975. Had the same issue with mem stock at 800mhz. If that does not help, change the mem speed to 1045mhz for example and try again.
> Another option would be using OverdriveNTool and lock Memory to p3 state.


Yeah, I may give that a try; I think it's at 1.050V now. I think I'll try NTool and see what happens. Locking memory to P3 makes sense, but then it would be running 1050+ at all times. Not sure if that makes any difference, but it shouldn't be an issue at such a low voltage. I run my 1700X at full speed at all times anyway.

P.S. If anyone with a Ryzen/TR wants better combined scores in FS or in some games, try using 50% core parking in the power options (it needs to be enabled via regedit), but it does work a treat.


----------



## elox

Quote:


> Originally Posted by *VicsPC*
> 
> Yea may give that a try, i think its at 1.050v now. I tink ill try ntool and see what happens, locking memory to p3 makes sense but then it would be running 1050+ at all times, not sure if it makes any difference but shouldn't be an issue it's such low voltage. I run my 1700x at full speed at all times anyways.
> 
> P.S. If anyone with a ryzen/tr wants better combined scores in fs or in some games, try using 50% core parking in the power options (needs to be enabled via the regedit) but it does work a treat.


Hehe, my old i5 4690K has been running a constant 4.7GHz at 1.25V for years.

Do you need HBM (the GPU voltage floor) at 1.05V? HBM voltage is the minimum voltage applied to the GPU, or did that change?
For me, 975mV is more than enough on "HBM".


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> Hehe my old i5 4690k is running constant 4,7Ghz at 1,25v for years
> Do you need HBM (gpu voltage floor) at 1,05v? HBM voltage is the minimum voltage applied to the GPU, or did that change?
> For me 975mv is more then enough on "HBM".


Honestly, I haven't tried tweaking it at all yet. I'm not bothered by temps or the wattage used (it uses around the same as my R9 390, according to HWiNFO), and the only game I don't run capped (FreeSync) is Rainbow Six Siege, as I prefer the lower frametimes over FreeSync.


----------



## elox

Quote:


> Originally Posted by *VicsPC*
> 
> Honestly i haven't tried tweaking it at all yet, I'm not bothered by temps or wattage used (uses around the same as my r9 390 according to hwinfo) and the only game i don't run capped (freesync) is rainbow six siege, as i like the lower frametimes rather then freesync.


Are you on the stock air cooler? Can I ask what core frequency you see while gaming?


----------



## Naeem

Quote:


> Originally Posted by *Ernwild108*
> 
> It should work but with clocks from normal version and 1.2 volt on P7
> Here u have 150% power limit 400A, 1.25 V on P7.
> 
> VEGA_FE_Soft_PP.zip 1k .zip file


I used this one with my Vega 64 LC and it made my GPU run slower, around a 1630MHz clock. Any way to remove it? I want the power limit raised for my Vega LC, and the clocks as well.


----------



## cplifj

Did AMD really put their pump on variable speed?

I believe so, because I can hear it starting to make noise. It is not the radiator fan that rumbles; it is the cooling pump.

It starts to make noise at full load when the fan isn't even spinning faster than 1000-1100 RPM, a level at which the fan makes no noise at all.


----------



## gupsterg

Quote:


> Originally Posted by *Naeem*
> 
> i used this one with Vega 64 LC and it made my gpu run slower around 1630mhz clock anyway to remove it ? i want power limt for my vega lc ? and clock as well ?


The Vega FE has lower clocks.

To remove it, delete the key in the registry using regedit, or run DDU and reinstall the driver.

In the OP of the Vega BIOS thread there is a registry file editing tool; use that to make a custom reg for your V64 LC, then apply it.


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> Are you on the stock air cooler? Can i ask what core frequency you see while gaming?


I'm on water. In Siege, since it's uncapped, I hit around 1550MHz+ on the stock BIOS/balanced mode; that's with FXAA on. With TAA I'll see 1600-1630 no problem, with less GPU usage than with FXAA on.


----------



## Trender07

Quote:


> Originally Posted by *Tgrove*
> 
> Sapphire vega 64 LC
> 
> P7 1682mhz 1100mv (doesn't surpass 1050v in game/benching)
> Hbm 1145mhz 950mv
> Thermal target 50c
> 
> Hbm can go over 1150mhz but artifacts, before update ran at 1100mhz
> 
> In a year we will be on the heels of 1080ti, especially with vulkan


What do you set your power limit to? 50%?


----------



## Kyozon

Quote:


> Originally Posted by *gupsterg*
> 
> VEGA FE has lower clocks.
> 
> To remove, delete the key in registry using regedit or run DDU and reinstall driver.
> 
> In the VEGA bios thread is a registry file editing tool in OP, use that to make a custom reg for your V64 LC, then apply it.


I could be mistaken, but I remember Buildzoid at some point posting that a Vega FE was doing around 1750MHz. That's awesome, if possible, but I don't know how to push it that far; perhaps it was just lottery.


----------



## diggiddi

Quote:


> Originally Posted by *dagget3450*
> 
> Darn so these newer FE drivers are doing way better than launch ones. *Also the Pro version of eyefinity is 100 times better than the radeon one.* awesome sutff. I am happy now that i can game with some CF


Sorry for asking, but how so?


----------



## Kyozon

Quote:


> Originally Posted by *diggiddi*
> 
> Sorry for asking, but how so?


Well, for me the FE inaugural drivers were a total mess.

I couldn't get away with visiting this website or typing a document in Word without getting a BSOD every 5 minutes. The RX Vega inaugural drivers were much more polished when it comes to stability than the FE drivers were.

The recent ReLive Pro beta allows us to Crossfire just as the inaugural one did, but this one is light years ahead in overall stability.


----------



## diggiddi

Quote:


> Originally Posted by *Kyozon*
> 
> Well, for me the FE Inaugural Drivers were a total mess.
> 
> I just couldn't get away of visiting this Website of typing a Document on Word without getting a BSOD each 5 minutes. The RX Vega Inaugural Drivers were much more polished when it comes to Stability as the FE Drivers were.
> 
> Recent ReLive PRO Beta allows us to Crossfire just as the Inaugural One did, but this one is light years ahead when it comes to overall Stability.


Cool, but how is the Pro Eyefinity experience different from regular Radeon?


----------



## Kyozon

Quote:


> Originally Posted by *diggiddi*
> 
> Cool but how is the Pro eyefinity experience different from regular radeon


Oh, that! Sorry, I haven't experienced Eyefinity yet, unfortunately.


----------



## dagget3450

Quote:


> Originally Posted by *diggiddi*
> 
> Cool but how is the Pro eyefinity experience different from regular radeon


Here are a few screenshots. Really, my point is that it's very straightforward and works without glitching. I've had nightmares with the regular Eyefinity setup utility.

It's easy: you just hit identify, drag the corresponding number to the slots, and create. It doesn't bug out like the non-Pro setup always did for me.



Right click and open in new tab for original size

I've been testing Warframe at 8K and it runs super well and looks decent. The lowest I've seen is 53fps with vsync @ 60, max settings on everything except AA, which isn't needed at this resolution.

A few test shots.
The biggest negative is that the in-game UI scaling is wacky; some elements like the game menu are pixelated, and most of the UI is super tiny, e.g. the chat box.


Spoiler: Warning: Spoiler!












These have also been recompressed a few times for posting online, so they have lost some detail.

Edit: forgot to mention I am doing this 2x2 Eyefinity setup on a Crossover 44K 4K monitor using its 4-way PBP (picture by picture).
Here are some snaps from another user's thread, but my hookups are different.

4PBP:




So it's one monitor with 4 inputs and a seamless panel, so no bezels.


----------



## AngryLobster

Quote:


> Originally Posted by *cplifj*
> 
> did AMD really put their pump on a variable speed ???
> 
> i believe so cause i can hear it starting to make noise, it is not the radiatorfan that rumbles , it is the coolingpump.
> 
> starts to make noise at full load when the fan ain't even spinning faster then 1000-1100 rpm, a level at wich the fan makes no noise at all.


Yes it is variable. I can't find what the trigger point is but the pump definitely gets louder after extended load.


----------



## Reikoji

Quote:


> Originally Posted by *AngryLobster*
> 
> Yes it is variable. I can't find what the trigger point is but the pump definitely gets louder after extended load.


It might be the target temperature, which I see a lot of people setting to max. In my observation, the target temperature controls the fan curve, and it can possibly control the pump ramp-up too. The max temperature sets the throttle point.


----------



## futr_vision

I'm not really finding a good answer through searching, but then again maybe I'm not searching with the right keywords. I'm basically trying to understand whether a Vega 56 can be undervolted in Linux (Ubuntu), like it can on Windows 10, for mining purposes.

This covers what I would like to do on the Linux side of things:


https://www.reddit.com/r/74hjqn/monero_and_vega_the_definitive_guide/
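There is no answer in the thread, but on recent kernels the amdgpu driver exposes an overdrive file in sysfs. A hedged sketch that only composes the commands, assuming the `pp_od_clk_voltage` interface and its Vega-style `s <state> <MHz> <mV>` command format (an assumption; it is not available on all kernel versions):

```python
# Sketch: compose amdgpu sysfs undervolt commands for a Vega 56.
# Assumes the pp_od_clk_voltage interface ("s <state> <sclk MHz> <mV>" /
# "m <state> <mclk MHz> <mV>") found in later kernels; the path, states,
# and clock/voltage values below are illustrative assumptions.

CARD = "/sys/class/drm/card0/device/pp_od_clk_voltage"

def undervolt_cmds(p7_mhz=1590, p7_mv=1000, mem_mhz=945, mem_mv=950):
    """Return the strings to echo into pp_od_clk_voltage, plus a commit."""
    return [
        f"s 7 {p7_mhz} {p7_mv}",    # top GPU power state: clock + voltage
        f"m 3 {mem_mhz} {mem_mv}",  # top HBM2 state
        "c",                        # commit the modified table
    ]

if __name__ == "__main__":
    for cmd in undervolt_cmds():
        print(f"echo '{cmd}' > {CARD}")
```

Writing to the file requires root and overdrive enabled via the `amdgpu.ppfeaturemask` kernel parameter; treat the values as illustrative, not known-stable.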


----------



## The EX1

Newegg has the black Vega 64s for $465.99 on their eBay store. That is $34 under MSRP.


----------



## PontiacGTX

I wish they had a Vega 56 for $34 lower than MSRP.


----------



## The EX1

Quote:


> Originally Posted by *PontiacGTX*
> 
> I wish they had a Vega 56 for 34USD lower than MSRP


They have Gigabyte ones that are $10 below MSRP after rebate.


----------



## webhito

I believe I got my silver RX Vega 64s for $570. Shipping + tax sucks, but it was still a good $200 cheaper than what they are asking here in Mexico... Crazy expensive down here.


----------



## webhito

By the way, has anyone here who plays Rocket League been getting their frames cut in half? Occasionally, when I tab out and back a few times, it will cut my fps in half.
If I limit it to 100Hz, it will go to 50; if I limit it to 85Hz, it will go to 42. Changing in game from borderless to fullscreen and back fixes it, but it's a tad annoying.


----------



## VicsPC

Quote:


> Originally Posted by *webhito*
> 
> By the way, anyone here that plays rocket league been getting their frames cut in half? For some reason ocassionaly when I tab out and back, after a few times it will cut my fps in half.
> If I limit it to 100hz, it will go to 50, If I limit it to 85hz, it will go to 42. Changing in game from borderless to fullscreen and back fixes it, but its a tad annoying.


I'm on 17.10.1 and have no issues with fps. I still get 73 no problem, and I get a whole lot less lag since updating to Fall Creators. However, Farming Sim 17, once you alt-tab, will cut your core clock in half and then end up cutting your fps in half in busy areas. My Rocket League has been fine, but it's possible that some games DO NOT like being tabbed whatsoever. I alt-tab out of RL quite often and haven't had any issues. I will, however, get some games where, if I alt-tab a bit too much, the game will black screen and freeze; I can't even open Task Manager, but it eventually crashes and I can get back in.

If anyone is interested, humble bundle has a nice sim bundle going on right now. https://www.humblebundle.com/strategy-sim-bundle?linkID=&mcID=102:5a011da0f0c360df115777a8t:5824c958f7bb513ff19a3670:1&utm_source=Humble+Bundle+Newsletter&utm_medium=email&utm_campaign=2017_11_07_strategysim_bundle&linkID=&utm_content=logo


----------



## ducegt

I ordered an open box PowerColor 64 Liquid edition for $500 from NewEgg. So I'm sure the aftermarket cards will finally be coming out next week with my luck.


----------



## Naeem




----------



## Trender07

Guys Raja left AMD


----------



## cephelix

Yeah, just saw it in the News section. Hopefully someone just as qualified, if not more so, is put in his position and takes RTG to a better place.


----------



## boot318

Quote:


> Originally Posted by *webhito*
> 
> By the way, has anyone here that plays Rocket League been getting their frames cut in half? For some reason, occasionally when I tab out and back, after a few times it will cut my fps in half.
> If I limit it to 100Hz it will go to 50; if I limit it to 85Hz it will go to 42. Changing in game from borderless to fullscreen and back fixes it, but it's a tad annoying.


Radeon Chill feature was turning on by default for a few people. Not sure if that is your problem, but it is worth a look to see if it is on.


----------



## webhito

Quote:


> Originally Posted by *boot318*
> 
> Radeon Chill feature was turning on by default for a few people. Not sure if that is your problem, but it is worth a look to see if it is on.


Thanks for the suggestion, but no, its not active.


----------



## Reikoji

Quote:


> Originally Posted by *cephelix*
> 
> Yeah, just saw it in the News Section. Hopefully someone just as if not more qualified is put in his position and will take RTG to a better place


Someone that finds the gaming aspect of GPU's more important than Raja did.


----------



## nyk20z3

I currently have a mini-ITX rig under water using a 1080 Ti Strix, but as an enthusiast I still have an interest in Vega. On that note, I've been thinking about picking up a Vega 64 on Black Friday and putting an EK or Heatkiller water block on it. I don't know how much use I would get out of the card since it would most likely be in a secondary rig, but it's just so sexy imo, esp with the right block on it.

Drooling -

http://shop.watercool.de/15046


----------



## Reikoji

Also

https://www.newegg.com/Product/Product.aspx?Item=N82E16814202301&cm_re=rx_vega-_-14-202-301-_-Product

Newegg's been on a roller coaster with these prices. If it jumps back up, it will eventually come back down in a few days. Rinse, repeat.


----------



## webhito

Quote:


> Originally Posted by *Reikoji*
> 
> Also
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16814202301&cm_re=rx_vega-_-14-202-301-_-Product


Already snatched 2 of those, last one I got for that price, they go out of stock fast though.


----------



## nyk20z3

Quote:


> Originally Posted by *Reikoji*
> 
> Also
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16814202301&cm_re=rx_vega-_-14-202-301-_-Product
> 
> Newegg's been on a roller coaster with these prices. If it jumps back up, it will eventually come back down in a few days. Rinse, repeat.


That's the exact card i was looking at. There is no guarantee BF will bring the price down some but i am willing to wait and see what happens.


----------



## Reikoji

Quote:


> Originally Posted by *nyk20z3*
> 
> That's the exact card i was looking at. There is no guarantee BF will bring the price down some but i am willing to wait and see what happens.


I really want to see if one of these AIBs has the sense to water cool, with better cooling than the stock card, or to offer a water block, before grabbing another. So far they are disappointing me. And they are apparently still months away.

Screw it, ordered







2nd Vega 64 on the way soon. Maybe sometime next year ASUS will have that dual-Vega GPU, the ARES V, out, and I will grab one of those too!


----------



## cephelix

Quote:


> Originally Posted by *Reikoji*
> 
> Someone that finds the gaming aspect of GPU's more important than Raja did.


True, but I do understand their decisions. Based on past trends, the majority would still buy Nvidia. To shift the gaming market in their favour they would need a clean and clear win. Unfortunately, it hasn't happened yet. For the most part I've been thoroughly impressed with my 290. The only reason I got the Vega was so I could put my whole system under water. Been waiting 3 long years for that to happen.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Reikoji*
> 
> Someone that finds the gaming aspect of GPU's more important than Raja did.






I heard it was going to be The Rock... he's doing everything else, why not... and imagine him in the boardroom... "I want to go this direction." "Ok mate, no worries, whatever you want."









As for Newegg, it's around 100 less than here, but the shipping would make it close to the same.
I will just wait patiently for a cheap one to CrossFire.


----------



## fursko

I discovered a new thing about HBM overclocking, guys. As we already know, the HBM voltage setting does nothing; it's stock 950mV in WattMan. I was searching for the sweet spot on my Vega 64 LC. I did some undervolting and suddenly my HBM crashed. So HBM clocks are probably bound to core voltage, not HBM voltage.

p6=1040mV p7=1090mV: HBM 1100MHz
p6=1100mV p7=1150mV: stable HBM 1120MHz
Without the undervolt my HBM works at 1150MHz.

I found my sweet spot for daily usage, because if I push the card, the LC edition isn't enough for cooling and the power consumption is insane. At maximum it does 1720/1150, but temps easily hit 70°C and it starts throttling, plus high noise from the high power consumption, and coil whine... Now my clocks are 1700/1120, temps are awesome, and power consumption is manageable (my RM1000i runs without its fan), so I didn't lose much performance either.

My Vega can probably beat a 1080 Ti in Wolfenstein 2, but I wonder, can my Vega beat a highly overclocked GTX 1080 in AC: Origins?


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> I discovered a new thing about HBM overclocking, guys. As we already know, the HBM voltage setting does nothing; it's stock 950mV in WattMan. I was searching for the sweet spot on my Vega 64 LC. I did some undervolting and suddenly my HBM crashed. So HBM clocks are probably bound to core voltage, not HBM voltage.
> 
> p6=1040mV p7=1090mV: HBM 1100MHz
> p6=1100mV p7=1150mV: stable HBM 1120MHz
> Without the undervolt my HBM works at 1150MHz.
> 
> I found my sweet spot for daily usage, because if I push the card, the LC edition isn't enough for cooling and the power consumption is insane. At maximum it does 1720/1150, but temps easily hit 70°C and it starts throttling, plus high noise from the high power consumption, and coil whine... Now my clocks are 1700/1120, temps are awesome, and power consumption is manageable (my RM1000i runs without its fan), so I didn't lose much performance either.


Thanks for the info, remounting my block today so will get some testing done.
Quote:


> Originally Posted by *fursko*
> 
> My Vega can probably beat a 1080 Ti in Wolfenstein 2, but I wonder, can my Vega beat a highly overclocked GTX 1080 in AC: Origins?


Until Ubisoft fix that **** I doubt that we'll get very good test results from that game unfortunately. It's pretty hit or miss from a performance perspective.


----------



## Reikoji

Quote:


> Originally Posted by *fursko*
> 
> I discovered a new thing about HBM overclocking, guys. As we already know, the HBM voltage setting does nothing; it's stock 950mV in WattMan. I was searching for the sweet spot on my Vega 64 LC. I did some undervolting and suddenly my HBM crashed. So HBM clocks are probably bound to core voltage, not HBM voltage.
> 
> p6=1040mV p7=1090mV: HBM 1100MHz
> p6=1100mV p7=1150mV: stable HBM 1120MHz
> Without the undervolt my HBM works at 1150MHz.
> 
> I found my sweet spot for daily usage, because if I push the card, the LC edition isn't enough for cooling and the power consumption is insane. At maximum it does 1720/1150, but temps easily hit 70°C and it starts throttling, plus high noise from the high power consumption, and coil whine... Now my clocks are 1700/1120, temps are awesome, and power consumption is manageable (my RM1000i runs without its fan), so I didn't lose much performance either.
> 
> My Vega can probably beat a 1080 Ti in Wolfenstein 2, but I wonder, can my Vega beat a highly overclocked GTX 1080 in AC: Origins?


I wrote earlier that if you were to swap out or add a decent fan on the LC edition's crappy radiator, it would cool better. It would unfortunately need to be a really noisy fan, capable of pushing or pulling diamonds through the rad. But it sounds like you care about noise.


----------



## TrixX

Quote:


> Originally Posted by *Reikoji*
> 
> I wrote earlier that if you were to swap out or add on a decent fan to the LC editions crappy radiator, it would cool better. It would unfortunately need to be a really noisy fan, capable of pushing or pulling diamonds through the rad. But, it sounds like you care about noise


Even a second Gentle Typhoon in push/pull would improve performance, though a pair of Noctua Industrials would improve the airflow at the cost of noise (though I'm impressed by how quiet they are).


----------



## Reikoji

Quote:


> Originally Posted by *TrixX*
> 
> Even a second Gentle Typhoon in push/pull would improve performance, though a pair of Noctua Industrials would improve the airflow at the cost of noise (though I'm impressed by how quiet they are).


https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1

Take care of all your heat and dust build up problems.
Quote:


> Originally Posted by *TrixX*
> 
> Thanks for the info, remounting my block today so will get some testing done.
> Until Ubisoft fix that **** I doubt that we'll get very good test results from that game unfortunately. It's pretty hit or miss from a performance perspective.


"Ubisoft" and "fix" in the same sentence, in close proximity and in that order? Don't hold your breath :3


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> Thanks for the info, remounting my block today so will get some testing done.
> Until Ubisoft fix that **** I doubt that we'll get very good test results from that game unfortunately. It's pretty hit or miss from a performance perspective.


Are you using the tension X plate with your water block?


----------



## fursko

Quote:


> Originally Posted by *Reikoji*
> 
> I wrote earlier that if you were to swap out or add on a decent fan to the LC editions crappy radiator, it would cool better. It would unfortunately need to be a really noisy fan, capable of pushing or pulling diamonds through the rad. But, it sounds like you care about noise


Yeah, I care about noise; my PC is almost completely silent. Actually the radiator isn't bad. It's not thin, but not thick either; they wanted compatibility, so it can fit many cases and many situations. But the long tubes are ridiculous and the noise is bad. Don't know which is responsible, probably coil whine, maybe the pump. I didn't like the stock fan either; I will use Corsair magnetic fans.

Some people say Vega cards are going to get big driver updates alongside the aftermarket GPUs, and broken features like primitive shaders will start working. What do you guys think? I think this is the best we can get from Vega; I don't expect big improvements. All we can get is optimized games like Wolfenstein. I heard Volta GPUs will use rapid packed math, so we may see more rapid packed math in games, and the Vega lineup can beat Pascals. Looks like Vega is more future-proof than Pascal.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Are you using the tension X plate with your water block?


I think I will this time, see if there's any difference in hotspot temp.


----------



## VicsPC

Quote:


> Originally Posted by *TrixX*
> 
> I think I will this time, see if there's any difference in hotspot temp.


I didn't use the tension plate and my hotspot temps are fine. Right now it's around 20-21°C ambient (on the sensors, not sure if they're reliable), and my temps are as follows at idle.

GPU: 20°C
HBM: 22°C
Hotspot: 20°C

I'm using a core X5 cube case from Tt and using 3 140mm Noctuas as intake pushing air down thru the case to the rads, both fans sit above the gpu pushing air on it and thru it (using the factory backplate as well no tension plate). I may leave Siege running to see what hotspot temps get to but I'm usually around 10-12°C above core/hbm on my hotspot temps.


----------



## AngryLobster

I don't think you can find a fan better than the stock LC unless you buy one of those Delta/Sunon high RPM fans.

I tried swapping mine to a Noctua and got 3-4C worse results at like for like RPM. The problem with the GT it comes with is the bearing noise, it's whiny and rough at 800RPM+.

Thankfully my card is undervolted so much that the fan never needs to cross 500RPM to keep the card around 50-55C. Other than the coil whine you would never know it's even there which was worth the premium over the blower for me.

1580/1100 @ 0.931v.


----------



## VicsPC

Quote:


> Originally Posted by *AngryLobster*
> 
> I don't think you can find a fan better than the stock LC unless you buy one of those Delta/Sunon high RPM fans.
> 
> I tried swapping mine to a Noctua and got 3-4C worse results at like for like RPM. The problem with the GT it comes with is the bearing noise, it's whiny and rough at 800RPM+.
> 
> Thankfully my card is undervolted so much that the fan never needs to cross 500RPM to keep the card around 50-55C. Other than the coil whine you would never know it's even there which was worth the premium over the blower for me.
> 
> 1580/1100 @ 0.931v.


Problem is, for 225W a 120mm radiator is a bit too small. Granted it has no CPU on it, but I still think a 240 would do MUCH better with even less noise.


----------



## geriatricpollywog

Quote:


> Originally Posted by *AngryLobster*
> 
> I don't think you can find a fan better than the stock LC unless you buy one of those Delta/Sunon high RPM fans.
> 
> I tried swapping mine to a Noctua and got 3-4C worse results at like for like RPM. The problem with the GT it comes with is the bearing noise, it's whiny and rough at 800RPM+.
> 
> Thankfully my card is undervolted so much that the fan never needs to cross 500RPM to keep the card around 50-55C. Other than the coil whine you would never know it's even there which was worth the premium over the blower for me.
> 
> 1580/1100 @ 0.931v.


The stock fan is the legendary Scythe Gentle Typhoon. Good luck finding something better.


----------



## hyp36rmax

Quote:


> Originally Posted by *AngryLobster*
> 
> I don't think you can find a fan better than the stock LC unless you buy one of those Delta/Sunon high RPM fans.
> 
> I tried swapping mine to a Noctua and got 3-4C worse results at like for like RPM. The problem with the GT it comes with is the bearing noise, it's whiny and rough at 800RPM+.
> 
> Thankfully my card is undervolted so much that the fan never needs to cross 500RPM to keep the card around 50-55C. Other than the coil whine you would never know it's even there which was worth the premium over the blower for me.
> 
> 1580/1100 @ 0.931v.


You can never go wrong with Nidec Servo Gentle Typhoons on radiators. AMD made a wise choice going with those fans.

Quote:


> Originally Posted by *0451*
> 
> The stock fan is the legendary Scythe Gentle Typhoon. Good luck finding something better.


Exactly! People who are replacing them with other fans have no clue what they actually have.

I still use them exclusively for all my watercooling builds.


----------



## Naeem

RX Vega 64 Black for *$466* with Free Shipping on ebay Newegg store

https://rover.ebay.com/rover/1/711-53200-19255-0/1?icep_id=114&ipn=icep&toolid=20004&campid=5338212510&mpre=https%3A%2F%2Fwww.ebay.com%2Fitm%2FPowerColor-Radeon-RX-VEGA-64-DirectX-12-AXRX-VEGA-64-8GBHBM2-3DH-8GB-2048-Bit-HB-%2F382198554012%3Fhash%3Ditem58fccf1d9c


----------



## VicsPC

As promised, here's the temps after playing Siege for around an hr, clocks average around 1600/945 on balanced factory settings.


----------



## gupsterg

Quote:


> Originally Posted by *VicsPC*
> 
> I didn't use the tension plate and my hotspot temps are fine. Right now it's around 20-21°C ambient (on the sensors, not sure if they're reliable), and my temps are as follows at idle.
> 
> GPU: 20°C
> HBM: 22°C
> Hotspot: 20°C
> 
> I'm using a core X5 cube case from Tt and using 3 140mm Noctuas as intake pushing air down thru the case to the rads, both fans sit above the gpu pushing air on it and thru it (using the factory backplate as well no tension plate). I may leave Siege running to see what hotspot temps get to but I'm usually around 10-12°C above core/hbm on my hotspot temps.


Check under load, as you can see my idle is the same.



Above is worst case for me, max ~600W from the wall, with BOINC running 32 work units on the CPU and 3 work units on the GPU.

*** edit ***

Gaming I have usually lower temps.


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> Check under load, as you can see my idle is the same.
> 
> 
> 
> Above is worst case for me, max ~600W from the wall, with BOINC running 32 work units on the CPU and 3 work units on the GPU.
> 
> *** edit ***
> 
> Gaming I have usually lower temps.


I just posted my at-load gaming temps above. I've yet to see it get past 60°C, even in the summer when my ambient with no AC is around 28°C. As I said though, I do have 2 fans a couple cm away from the top of the card blowing right into it. Today it's 21°C in the room, so temps are quite a bit lower.

Siege is the only game I don't cap that I've found really hammers the GPU AND the CPU, therefore getting the water temps to full-load levels. RealBench doesn't even come close. Not sure what other program I have that would really cook it, but it's fair to say that a fan right above/behind the GPU helps hotspot temps. It's not common to have a vertically mounted GPU, so that could be why most in here, even on water, are having hotspot temps higher than I do even when they're undervolted.

Here's my 4K Superposition run on stock BIOS/stock clocks with an EKWB block. The temps are next to it.


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> Check under load, as you can see my idle is the same.
> 
> 
> 
> Above is worst case for me, max ~600W from the wall, with BOINC running 32 work units on the CPU and 3 work units on the GPU.
> 
> *** edit ***
> 
> Gaming I have usually lower temps.


Is this on air or under water?


----------



## VicsPC

Quote:


> Originally Posted by *0451*
> 
> Is this on air or under water?


I believe he's under water, but he also has a 1950X in the mix as well, which would explain the ~37°C water temp he has compared to mine, which is closer to 27°C.


----------



## spyshagg

Quote:


> Originally Posted by *fursko*
> 
> I discovered a new thing about HBM overclocking, guys. As we already know, the HBM voltage setting does nothing; it's stock 950mV in WattMan. I was searching for the sweet spot on my Vega 64 LC. I did some undervolting and suddenly my HBM crashed. So HBM clocks are probably bound to core voltage, not HBM voltage.
> 
> p6=1040mV p7=1090mV: HBM 1100MHz
> p6=1100mV p7=1150mV: stable HBM 1120MHz
> Without the undervolt my HBM works at 1150MHz.
> 
> I found my sweet spot for daily usage, because if I push the card, the LC edition isn't enough for cooling and the power consumption is insane. At maximum it does 1720/1150, but temps easily hit 70°C and it starts throttling, plus high noise from the high power consumption, and coil whine... Now my clocks are 1700/1120, temps are awesome, and power consumption is manageable (my RM1000i runs without its fan), so I didn't lose much performance either.
> 
> My Vega can probably beat a 1080 Ti in Wolfenstein 2, but I wonder, can my Vega beat a highly overclocked GTX 1080 in AC: Origins?


1100mhz hbm with 1100mv = stable
1100mhz hbm with 1050mv = Crashes

HBM voltage setting does do something for me.


----------



## Ipak

Yesterday I installed a Morpheus 2 on my Vega 64. Unfortunately there aren't enough small heatsinks for the VRM, but I found some leftovers from the Arctic Extreme 4 that I have on my 290.
With a closed case and the stock 300W power limit, benching temps reach 66°C core, 68°C HBM, and around 94°C max hotspot.
I also stuck some thick thermal pads and small heatsinks on the doublers on the back side of the card. They still get pretty toasty (touching the heatsinks hurts), but they would just burn you before, so that's an improvement.
New scores coming soon


----------



## kundica

Quote:


> Originally Posted by *fursko*
> 
> My Vega can probably beat a 1080 Ti in Wolfenstein 2, but I wonder, can my Vega beat a highly overclocked GTX 1080 in AC: Origins?


I think the poor performance in AC Origins is due to the card being underutilized. Compared to other games, Origins doesn't push my Vega to 100% utilization, and it runs about 3-4 degrees cooler in my custom loop. In extended gaming sessions in Wolfenstein 2, for example, my card heats up to 40-41 degrees, while it peaks around 37 with AC Origins.


----------



## Reikoji

Quote:


> Originally Posted by *hyp36rmax*
> 
> You can never go wrong with Nidec Servo Gentle Typhoons on radiators. AMD made a wise choice going with those fans.


Well, the radiator they chose doesn't do the fan any justice. There are fans that can easily outperform these in cooling, just not in silence. Silence isn't my top concern, so there are a vast number of fans I can choose from that would cool this radiator better. It wouldn't matter as much if they had chosen a better radiator.


----------



## ducegt

Is it best for the LC edition's radiator to be above the card/pump? I've seen air bubbles mentioned by some as a concern. I have an inverted ATX case, and if it's best to go above the card, I'll likely need to mount it outside the case next to the empty PCI slots.


----------



## Reikoji

Quote:


> Originally Posted by *ducegt*
> 
> Is it best for the LC edition radiator to be above the card/pump? I've seen air bubbles mentioned by some as a concern. I have an inverted ATX case and if its best to go above the card, I'll likely need to mount it outside of the case next to empty PCI slots.


There won't be any bubbles. Closed systems don't have any air pockets in them.


----------



## fursko

Quote:


> Originally Posted by *spyshagg*
> 
> 1100mhz hbm with 1100mv = stable
> 1100mhz hbm with 1050mv = Crashes
> 
> HBM voltage setting does do something for me.


What are your clock voltages? The HBM voltage means the minimum voltage: if you set the HBM voltage to 1100mV, your core voltage minimum is 1100mV. That's the trick. It doesn't matter if you set the core voltage to 1000mV or 900mV.
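To illustrate the rule fursko describes (a toy sketch of the observed behaviour, not anything AMD documents; the function name and sample values are made up):

```python
def effective_core_mv(core_set_mv: int, hbm_set_mv: int) -> int:
    """Sketch of the observed rule: the WattMan HBM voltage field acts as a
    floor for the core voltage, so the core never runs below it."""
    return max(core_set_mv, hbm_set_mv)

# An HBM voltage of 1100 mV overrides a deeper core undervolt:
print(effective_core_mv(900, 1100))   # 1100
print(effective_core_mv(1150, 1100))  # 1150
```

This would explain why raising the "HBM voltage" appears to stabilize HBM overclocks: it silently raises the core voltage floor.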


----------



## fursko

Quote:


> Originally Posted by *kundica*
> 
> I think the poor performance in AC Origins is due to the card being underutilized. Compared to other games Origins doesn't push my Vega to 100% utilization and it runs about 3-4 degrees cooler in my custom loop. Extended gaming in Wolfenstein 2 for example, my card heats up to 40-41 degrees while it peaks around 37 with AC Origins.


CPU usage is low too. Dunno why; is it a driver problem or a Ubisoft problem?


----------



## VicsPC

Quote:


> Originally Posted by *Reikoji*
> 
> There won't be any bubbles. Closed systems don't have any air pockets in them.


That's not entirely true; you MUST have some air in a system, even a little bit, for expansion and pressure. Even a closed-loop system has air in it, and even if it didn't, shaking during shipping would put air bubbles in it.


----------



## geriatricpollywog

Quote:


> Originally Posted by *VicsPC*
> 
> I believe he's under water but he also has a 1950x in the mix as well so would explain the ~37°C water temp he has compared to mine that's closer to 27°C.


27 degrees is legendary. My water is around 10 degrees above ambient, so 32°C in a 22°C room with fans and pumps at 50%. GPU/HBM around 40°C and hotspot 60°C. Of course I can crank fans and pumps to max and achieve a delta T of <5.
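For anyone new to loop tuning, the delta T people are comparing here is just water temperature minus room ambient; a quick sanity check with the figures from this post (helper name is mine):

```python
def loop_delta_t(water_c: float, ambient_c: float) -> float:
    # Delta T: how far the loop's water temperature sits above room ambient
    return water_c - ambient_c

print(loop_delta_t(32, 22))  # 10 -- matches the "10 degrees above ambient" figure
```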


----------



## VicsPC

Quote:


> Originally Posted by *0451*
> 
> 27 degrees is legendary. My water is around 10 degrees above ambient, so 32°C in a 22°C room with fans and pumps at 50%. GPU/HBM around 40°C and hotspot 60°C. Of course I can crank fans and pumps to max and achieve a delta T of <5.


Yea, that's today, when the ambient temp is 19°C lol. I'm usually at a delta T of 8°C, and for most games capped with the Vega 64 it's usually a delta T of 3-4°C; even F1 2016 maxed out doesn't raise the water temp much. This is with a 1700x in the loop as well. I did clean my rads and system before installing my 64, and my case has lots of fans lol.


----------



## fursko

Quote:


> Originally Posted by *VicsPC*
> 
> Yea that's today when ambient temp is 19°C lol. I'm usually at a delta t of 8°C and for most games capped with Vega 64 its usually a delta T of 3-4°C, even F1 2016 maxed out doesnt raise the water temp much. This is with a 1700x in the loop as well. I did clean my rads and system before installing my 64 and my case has lots of fans lol.


I already built a new system, so I can't change it for now, but my next build is definitely going to be a custom water loop. Actually I'm happy with my build; it looks really awesome and the cooling performance is good too. 5GHz 7700K at 45-50°C during gaming, Vega LC at 50-60°C. Besides, it's a completely silent system. My ambient temp is around 22°C. For summer I'll need to boost the fan speeds though.


----------



## By-Tor

I have heard talk of companies releasing Vega with aftermarket coolers, but have not seen any.

Anyone know if there is any truth to this?

When?

Thank you...


----------



## Redeemer

Guys any news on 3rd party RX Vega cards..when?


----------



## kundica

Quote:


> Originally Posted by *fursko*
> 
> CPU usage is low too. Dunno why; is it a driver problem or a Ubisoft problem?


Huh? My CPU is performing fine with the game. If anything usage seems a bit high.


----------



## TrixX

Quote:


> Originally Posted by *kundica*
> 
> Huh? My CPU is performing fine with the game. If anything usage seems a bit high.


Agreed, the CPU usage is high. There are a few theories as to why; most involve the VMProtect and Denuvo combination. There's also a thought that, as happened in another Denuvo-protected game, the Denuvo calls/triggers fire too often, causing excess CPU usage.


----------



## SpecChum

How does the memory frequency relate to wattage?

I'm just testing using Superposition and seeing this:
1100MHz mem, ~1520MHz core = ~200W, BUT 950MHz mem, ~1520MHz core = ~250W

Why is the higher memory frequency lowering the power?


----------



## By-Tor

Quote:


> Originally Posted by *Naeem*
> 
> 
> 
> RX Vega 64 Black for *$466* with Free Shipping on ebay Newegg store
> 
> https://rover.ebay.com/rover/1/711-53200-19255-0/1?icep_id=114&ipn=icep&toolid=20004&campid=5338212510&mpre=https%3A%2F%2Fwww.ebay.com%2Fitm%2FPowerColor-Radeon-RX-VEGA-64-DirectX-12-AXRX-VEGA-64-8GBHBM2-3DH-8GB-2048-Bit-HB-%2F382198554012%3Fhash%3Ditem58fccf1d9c


Thanks, just ordered one. My aging 290X is getting a bit old and needs a rest.

If anyone is looking for a PowerColor 290X LCS card with a factory-installed EK full-cover water block, just PM me. Once I pull it from my rig I'll post it for sale in the forums.


----------



## Reikoji

Quote:


> Originally Posted by *VicsPC*
> 
> That's not entirely true; you MUST have some air in a system, even a little bit, for expansion and pressure. Even a closed-loop system has air in it, and even if it didn't, shaking during shipping would put air bubbles in it.


I don't think you could introduce air bubbles into a system sealed without air in it by merely shaking it. Air pockets have the potential to kill pumps, so if there is air, it is not allowed to travel in the normal loop and enter the pump. It would have to sit in a chamber meant purely to counter thermal expansion, one that is part of the water system but not the water loop. Perhaps the system on the Vega LC does have such an air chamber: the block to the left of the pump where the hoses go. Technically there is nothing there to cool.


----------



## By-Tor

I'm using 17.7.2 drivers at the moment with my 290X. Once I install the Vega when it gets here will these drivers run it ok or is there another I should use?

Thank you


----------



## VicsPC

Quote:


> Originally Posted by *Reikoji*
> 
> I don't think you could introduce air bubbles into a system sealed without air in it by merely shaking it. Air pockets have the potential to kill pumps, so if there is air, it is not allowed to travel in the normal loop and enter the pump. It would have to sit in a chamber meant purely to counter thermal expansion, one that is part of the water system but not the water loop. Perhaps the system on the Vega LC does have such an air chamber: the block to the left of the pump where the hoses go. Technically there is nothing there to cool.


Possible, but I've heard of plenty of AIOs that had tiny micro air bubbles in them. An air bubble won't kill a pump; an air pocket, yeah, but that needs to be a very large pocket, and in an AIO or even a custom loop that doesn't happen.


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> I'm using 17.7.2 drivers at the moment with my 290X. Once I install the Vega when it gets here will these drivers run it ok or is there another I should use?
> 
> Thank you


Those are fine, but there are newer drivers.


----------



## Soggysilicon

Quote:


> Originally Posted by *By-Tor*
> 
> I'm using 17.7.2 drivers at the moment with my 290X. Once I install the Vega when it gets here will these drivers run it ok or is there another I should use?
> 
> Thank you


DDU before you pull that 290X out and install afresh, or winblows may crap the bed.


----------



## Reikoji

Quote:


> Originally Posted by *VicsPC*
> 
> Possible, but I've heard of plenty of AIOs that had tiny micro air bubbles in them. An air bubble won't kill a pump; an air pocket, yeah, but that needs to be a very large pocket, and in an AIO or even a custom loop that doesn't happen.


Those micro air bubbles would have to be kept to a low enough minimum to make them insignificant, as they could combine to form an air pocket. I guess that is the margin of error, but they aren't necessary for expansion reasons.


----------



## fursko

Quote:


> Originally Posted by *kundica*
> 
> Huh? My CPU is performing fine with the game. If anything usage seems a bit high.


Check this:



It's generally much lower than on Nvidia; Nvidia cards have better CPU usage.

End-of-video results:

Vega 64 = 64 fps, 15ms CPU, 15ms GPU
GTX 1080 = 70 fps, 6ms CPU, 14ms GPU

So Vega cards can't use the CPU properly. This is a problem.
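For reference, fps and per-frame time budget are just reciprocals, which is what makes those ms figures telling; a quick sketch with the numbers above (helper name is mine):

```python
def frame_time_ms(fps: float) -> float:
    # 1000 ms per second divided by frames per second gives the frame budget
    return 1000.0 / fps

# Vega 64: 64 fps -> ~15.6 ms budget; with 15 ms spent on the CPU, it is CPU-limited.
# GTX 1080: 70 fps -> ~14.3 ms budget; 6 ms CPU leaves it GPU-bound instead.
print(round(frame_time_ms(64), 1))  # 15.6
print(round(frame_time_ms(70), 1))  # 14.3
```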


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Check this:
> 
> 
> 
> It's generally much lower than on Nvidia; Nvidia cards have better CPU usage.
> 
> End-of-video results:
> 
> Vega 64 = 64 fps, 15ms CPU, 15ms GPU
> GTX 1080 = 70 fps, 6ms CPU, 14ms GPU
> 
> So Vega cards can't use the CPU properly. This is a problem.


That's a problem with game optimisation, not the card-to-CPU communication. Stinks of Gameworks to me.


----------



## dagget3450

Quote:


> Originally Posted by *TrixX*
> 
> That's a problem with game optimisation, not the card-to-CPU communication. Stinks of Gameworks to me.


Also, if you watch the numbers while it is running, the CPU usage is way higher on the Nvidia GPUs (1080/1080 Ti). So I'm not so sure the game's "meter" summary at the end makes much sense. Not to mention this game also has DRM issues with CPU usage.

Edit missed your above post:
Quote:


> Agreed, the CPU usage is high. There are a few theories as to why; most involve the VMProtect and Denuvo combination. There's also a thought that, as happened in another Denuvo-protected game, the Denuvo calls/triggers fire too often, causing excess CPU usage.


----------



## Aenra

Quote:


> Originally Posted by *Reikoji*
> 
> https://www.newegg.com/Product/Product.aspx?Item=N82E16835706015&ignorebbr=1
> 
> Take care of all your heat and dust build up problems.


57.6 watts... are you sure you can just swap the stock fan with this one and call it a day? That's a significantly higher amount of amps, possibly over the limit of what the PCB allows through; it may well just burn it. I don't have the relevant info to know for sure, but unless you do (in which case please say so), be careful with suggestions.
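The current concern is simple to sanity-check with P = V × I (assuming a 12 V fan rail; the ~1 A figure for a typical motherboard fan header is a common rating, not something from this thread):

```python
def fan_current_a(watts: float, rail_v: float = 12.0) -> float:
    # I = P / V for a DC fan on a 12 V rail
    return watts / rail_v

amps = fan_current_a(57.6)
print(round(amps, 2))  # 4.8 -- several times a typical ~1 A fan-header rating
```

So regardless of which connector is involved, a 57.6 W fan is well outside what a normal fan header is built for.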


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *fursko*
> 
> Check this:
> 
> 
> 
> It's generally much lower than on Nvidia; Nvidia cards have better CPU usage.
> 
> End-of-video results:
> 
> Vega 64 = 64 fps, 15ms CPU, 15ms GPU
> GTX 1080 = 70 fps, 6ms CPU, 14ms GPU
> 
> So Vega cards can't use the CPU properly. This is a problem.






http://www.pcgamer.com/assassins-creed-origins-performance-guide/

Seems about right; the game's using that 300-year-old DX11, which is dog's balls. Bring me Vulkan, or even crappy DX12, and we'll talk.








http://www.guru3d.com/articles_pages/wolfenstein_ii_the_new_colossus_pc_graphics_analysis_benchmark_review,4.html

like this







much more like it


----------



## Reikoji

Quote:


> Originally Posted by *Aenra*
> 
> 57.6 Watts.. you sure you can just swap the stock with this one and call it a day? That's a significantly higher amount of Amps, possibly over the limit of what the PCB allows to go through, may well just burn it; don't have the relevant info to know for sure, but unless you do (in which case please state so), careful with suggestions


No one said anything about trying to plug it into the GPU; no, this fan is powered by Molex. I believe the fan connector on the LC card under the shroud is also not a standard 3- or 4-pin fan header. Only the HAMP fan connector on select motherboards would be able to support it. It comes with a Molex power adapter and is recommended for use with a fan speed controller. It has a wide RPM range and can run very quiet at low power.


----------



## fursko

Quote:


> Originally Posted by *tarot*
> 
> 
> http://www.pcgamer.com/assassins-creed-origins-performance-guide/
> 
> seems about right the games using that 300 year old dx11 which is dogs balls.bring me vulkan or even crappy dx12 and we will talk
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.guru3d.com/articles_pages/wolfenstein_ii_the_new_colossus_pc_graphics_analysis_benchmark_review,4.html
> 
> like this
> 
> 
> 
> 
> 
> 
> 
> much more like it


I just played Wolfenstein 2 at 1440p with maxed settings. Minimum fps 120, lol. It almost never goes below that.


----------



## AngryLobster

You must still be in the starting area. Later parts of the game sees dips into the low 70's.


----------



## Reikoji

Quote:


> Originally Posted by *AngryLobster*
> 
> You must still be in the starting area. Later parts of the game sees dips into the low 70's.


I've actually made a disheartening discovery. Do you by chance still have Async Compute enabled in advanced settings after the patch re-enabled it for AMD GPUs? I've found it causes the card to, for lack of a better word, malfunction. If you are monitoring your GPU frequency, you can observe that it starts over-boosting, and once it starts doing that your max FPS drops quite substantially. I consider these soft driver crashes, as even after exiting the game the GPU frequency stays well over 1800MHz, requiring the drivers to be restarted.

I found this to be the cause because I have just recently disabled Async Compute in the advanced settings, and not only did my average FPS skyrocket, it no longer overboosts.

This may not be the case for everyone, but check it out. However once you disable Async Compute and restart the game, the toggle for it will no longer be present. I'd say the mode still has issues.


----------



## The EX1

Quote:


> Originally Posted by *By-Tor*
> 
> I have heard talk of companies releasing Vega with after market coolers, but have not seen any.
> 
> Anyone know if there is any truth to this?
> 
> When?
> 
> Thank you...


The only AIB that has released pictures so far is XFX. They tweeted some last week, and there is also a thread here on OCN about it.


----------



## cephelix

Quote:


> Originally Posted by *The EX1*
> 
> The only AIB that has released pictures so far is XFX. They tweeted some last week, and there is also a thread here on OCN about it.


Wasn't there one for Asus as well? Or am I misremembering things again?


----------



## diggiddi

Yeah Asus has one too


----------



## spyshagg

Quote:


> Originally Posted by *fursko*
> 
> What are your clock voltages? The HBM voltage acts as a minimum voltage: if you set the HBM voltage to 1100mV, your core voltage minimum is 1100mV, that's the trick. It doesn't matter if you set the core voltage to 1000mV or 900mV.


We know HBM P3 sets the GPU voltage floor, but what you were saying is that the HBM P3 voltage does nothing for the HBM overclock, and that it is down to Vcore voltage alone. I don't agree, because my tests don't reproduce your findings.

GPU @ 1180mv + HBM @ 1050mv = 1750mhz/1050mhz

GPU @ 1180mv + HBM @ 1100mv = 1750mhz/1100mhz

As you can see, in this scenario the GPU voltage is always above the HBM P3 floor. The increase in HBM overclock from 1050MHz to 1100MHz comes from the P3 voltage, not the GPU voltage.
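The floor behaviour the posters are debating can be sketched as a one-liner. This models their observations only (it is not AMD-documented behaviour): the HBM P3 voltage acts as a lower bound on the requested core voltage.

```python
# Sketch of the voltage-floor behaviour described in this exchange
# (assumption based on the posters' observations, not AMD documentation):
# the HBM P3 voltage acts as a floor under the requested core voltage.
def effective_core_mv(core_set_mv, hbm_p3_mv):
    return max(core_set_mv, hbm_p3_mv)

# Setting the core below the HBM P3 value has no effect:
print(effective_core_mv(900, 1100))   # floored to 1100 mV
# Core above the floor, matching the 1180 mV / 1050 mV case above:
print(effective_core_mv(1180, 1050))  # stays 1180 mV
```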


----------



## spyshagg

Quote:


> Originally Posted by *Reikoji*
> 
> I've actually made a disheartening discovery. Do you by chance still have Async compute enabled in advanced settings after the patch re-enabled it for AMD gpus? Ive found it causes the card to, for the lack of better words, malfunction. If you are monitoring your GPU frequency, you can observe that it starts over boosting, and once it starts doing that your max FPS drops quite substantially. I can consider them soft driver crashes, as even exiting the game the GPU frequency stays well over 1800mhz, requiring the drivers to be restarted.
> 
> I found this to be the cause because I have just recently disabled Async Compute in the advanced settings, and not only did my average FPS skyrocket, it no longer overboosts.
> 
> This may not be the case for everyone, but check it out. However once you disable Async Compute and restart the game, the toggle for it will no longer be present. I'd say the mode still has issues.


I found this to be the case as well. I just didn't know it was the game causing it because it was OK the day before after playing the game.


----------



## VicsPC

Vega 64 on Very High vs R9 390 on Medium, all at factory clocks. This thing is a BEAST.

The R9 390 is using T-AA, I believe, and the Vega 64 FXAA; using 2x MSAA knocks about 20-30 fps off the average, though.


----------



## gupsterg

Quote:


> Originally Posted by *VicsPC*
> 
> I just posted my above at load, gaming temps. I've yet to see it get past 60°C even in the summer where my ambient with no AC is around 28°C. As i said though i do have 2 fans a couple cms away from the top of the card blowing right into it. Today is 21°C in the room so temps are quite lower.
> 
> Siege is the only game i don't cap that Ive found really hammers the gpu AND the cpu therefore getting the water temps at full load level. Realbench doesnt even come close. Not sure what other program i have to really cook it up but it's fair to say that a fan right above/behind the gpu would help hotspot temps. It's not common to have a vertically mounted GPU so could be why most in here, even on water, are having hotspot temps higher then I do even when they're undervolted.
> 
> Here's my 4k superposition on stock bios/stock clocks on an ekwb. There's the temps next to that.


Mine peaks higher than yours for hotspot, etc.

"Out of box": ref cooler, driver defaults. The only change was the WB for this run.

I used Thermal Grizzly Hydronaut, spread with a card, molded die. I do not think I have a bad application; it may just be silicon-to-silicon differences.
Quote:


> Originally Posted by *0451*
> 
> Is this on air or under water?


Yes.

The loop is pump/res > GPU > CPU > top rad > front rad.

D5 PWM > EK VEGA > EK TR > MagiCool G2 Slim 360mm with 3x AC F12 PWM > MagiCool G2 Slim 360mm with 3x AC F12 PWM.

The warmed water temp sensor is just as water enters top rad, cooled water is just as it leaves front rad.


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> Mine peaks higher than yours for hotspot, etc.
> 
> "Out of box": ref cooler, driver defaults. The only change was the WB for this run.
> 
> I used Thermal Grizzly Hydronaut, spread with a card, molded die. I do not think I have a bad application; it may just be silicon-to-silicon differences.
> 
> Yes.
> 
> The loop is pump/res > GPU > CPU > top rad > front rad.
> 
> D5 PWM > EK VEGA > EK TR > MagiCool G2 Slim 360mm with 3x AC F12 PWM > MagiCool G2 Slim 360mm with 3x AC F12 PWM.
> 
> The warmed water temp sensor is just as water enters the top rad; the cooled water sensor is just as it leaves the front rad.


I used Kryonaut because I despise the consistency of Hydronaut; it's just way too thick, and mine dried out in no time as well, so I tossed it in the trash. I think the fact that I have 3 intake fans above the card helps as well. The Core X5 has a horizontal mobo tray and the fans are probably an inch or so away from the card; I think that cool air blowing onto the card and backplate is what's helping A LOT.

Right now, because it's cool in my room, HWiNFO shows this. Subtract the min difference from yours and you've got about what you'd have at idle.
Core: 19°C
HBM: 22°C
Hotspot: 19°C

My hotspot temps never got ridiculously hot; even on air I was always under 100°C, so that would be my guess. For those unfamiliar, this is what it looks like inside my case. It's not my actual case, but above the GPU I have 2 140mm Noctuas and another on the other side that cools the mobo VRMs. At idle they sit at about 35°C with full voltage going through them but no load.


----------



## kundica

Quote:


> Originally Posted by *fursko*
> 
> Check this:
> 
> 
> 
> It's generally much lower than on Nvidia; Nvidia cards make better use of the CPU.
> 
> End-of-video results:
> 
> Vega 64 = 64 fps, 15 ms CPU, 15 ms GPU
> GTX 1080 = 70 fps, 6 ms CPU, 14 ms GPU
> 
> So Vega cards can't use the CPU properly. This is a problem.


Nvidia drivers are known to have more CPU overhead which is reflected in the video. If anything, CPU usage is abnormally high on both cards but higher on the Nvidia due to what I just mentioned. Also, there is no CPU latency shown in the clip so I'm not sure where you're coming up with those numbers, not that it really matters because your claim isn't supported by the link you shared.

Edit: Another user pointed out the summary at the end of the video.


----------



## spyshagg

Never seen my hotspot (lol) with such a high delta over the core, even when testing with an 85°C target.


----------



## VicsPC

Here's my max after 30 minutes or so of Siege left running in-game, maxed out: 99% GPU usage, about 50% CPU usage. Don't mind the run time or average; it's going to be way off.


----------



## gupsterg

@VicsPC

Yes, Hydronaut was a PITA to spread. I only used it as a trial, since I got it as a freebie with the EK AM4 & TR blocks. Usually AS5 is what I use.

I have used AS5 for too many years; every time I think I'll buy another TIM, I don't. Basically, from der8auer's and THG's TIM roundups, I have deemed there's no point in paying for another one.

I believe you have way better airflow in your case; I don't. I chose the Dark Base 900 as it had the size I wanted and the features, plus it was at a ridiculous price at the time. I knew that the way I was doing the build I was compromising on airflow within the case, so I pay for it somewhat in temps, but it's in no way shabby compared to other data I have seen.

I'm surprised our default V64 runs in SP 4K differ so much, with just going to water.

Yours: min 35.64 aver 42.33 max 50.58
Mine: min 38.71 aver 45.79 max 55.85

My CPU is stock; I never tested whether SP 4K scales with the CPU, but IIRC from others' shares it doesn't much (I could be wrong).
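The gap between the two runs quoted above is easier to judge as a percentage. A quick sketch over the three posted metrics:

```python
# Percentage gap between the two Superposition 4K runs quoted above.
def pct_gap(a, b):
    return (b - a) / a * 100

# min / avg / max fps from the two posts:
vics = (35.64, 42.33, 50.58)
gups = (38.71, 45.79, 55.85)

gaps = [pct_gap(v, g) for v, g in zip(vics, gups)]
print([f"{g:.1f}%" for g in gaps])  # roughly 8-10% on every metric
```

An 8-10% spread between two water-cooled cards at defaults is large, which is why the posters go on to compare power plans and BIOSes.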
Quote:


> Originally Posted by *spyshagg*
> 
> Never seen my hotspot (lol) with such a high delta over the core, even when testing with a 85ºc target.


Plenty of members experience this aspect. Kundica is on WC, link. I have noted he several times remounted block/done TIM and has similar experience.

If I look at CSV for a run the MAX can be a momentary rise, generally even under sustained load the hotspot is lower than MAX. So I believe ref MAX as extreme value and average for "everyday" purpose.


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> @VicsPC
> 
> Yes Hydronaut was PITA to spread. I just used it as I wanted to as a trial and had it as freebie with EK AM4 & TR blocks. Usually AS5 is what I use.
> 
> I have used AS5 for too many years, every time I think I'll buy another TIM I don't. Basically from say der8auer's and THG TIM roundups I have deemed I see no point in paying for another TIM.
> 
> I believe you have way better airflow in your case; I don't. I chose the Dark Base 900 as it had the size I wanted and the features, plus it was at a ridiculous price at the time. I knew that the way I was doing the build I was compromising on airflow within the case, so I pay for it somewhat in temps, but it's in no way shabby compared to other data I have seen.
> 
> I'm surprised our default V64 runs in SP 4K differ so much, with just going to water.
> 
> Yours: min 35.64 aver 42.33 max 50.58
> Mine: min 38.71 aver 45.79 max 55.85
> 
> My CPU is stock, never tested to see if SP 4K scales with CPU, but IIRC from when I've seen others shares it doesn't much (I could be wrong).
> Plenty of members experience this aspect. Kundica is on WC, link. I have noted he several times remounted block/done TIM and has similar experience.
> 
> If I look at CSV for a run the MAX can be a momentary rise, generally even under sustained load the hotspot is lower than MAX. So I believe ref MAX as extreme value and average for "everyday" purpose.


Yeah, not sure either; that's quite a difference. It might be because I ran it in Balanced, but I'm not sure. You're not using the LC BIOS or anything else on that run, are you?


----------



## Reikoji

Finally! 7000 in this thing.....

P6 1667/1182
P7 1752/1215
1150/1150 HBM
+150% power
Real cooling fan active.

1.138 V under load


----------



## dagget3450

Quote:


> Originally Posted by *kundica*
> 
> Nvidia drivers are known to have more CPU overhead, which is reflected in the video. If anything, CPU usage is abnormally high on both cards but higher on the Nvidia due to what I just mentioned. Also, there is no CPU latency shown in the clip, so I'm not sure where you're coming up with those numbers; not that it really matters, because your claim isn't supported by the link you shared.


The CPU latency is at the very end of the bench; it's a summary done by the game.


----------



## kundica

Quote:


> Originally Posted by *dagget3450*
> 
> The CPU latency is at the very end of the bench; it's a summary done by the game.


Thanks. I'm still not sure that represents what the other user is trying to say, though. It'll be interesting to see benchmarks if the game's DRM is ever broken, so we can see if Denuvo is messing things up.


----------



## dagget3450

Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> Finally! 7000 in this thing.....
> 
> P6 1667/1182
> P7 1752/1215
> 1150/1150 HBM
> +150% power
> Real cooling fan active.
> 
> 1.138 under load


Everyone running these benchmarks: have y'all tried without any monitoring software overlays? I found better scores with all monitoring software off. I recall this being a big issue when I was benching Doom.


----------



## VicsPC

Quote:


> Originally Posted by *dagget3450*
> 
> Everyone running these benchmarks: have y'all tried without any monitoring software overlays? I found better scores with all monitoring software off. I recall this being a big issue when I was benching Doom.


I should give it a try and see what I come up with. I don't use HBCC either, so it might give me even more.


----------



## gupsterg

Quote:


> Originally Posted by *VicsPC*
> 
> Yea not sure either, thats quite a bit of a difference. It might be because i ran it in Balanced but not sure. You're not using the LC bios or anything else on that run are you?


TBH, I'm not fazed by the hotspot at present, in the range I get. I do believe it is silicon-to-silicon variance, besides other aspects, for context.

Both runs I posted were fully stock VBIOS/registry/drivers, so on Balanced. Same with the CPU, only using 3200MHz C14 The Stilt "safe" timings, dual channel.


----------



## cplifj

Quote:


> Originally Posted by *AngryLobster*
> 
> Yes it is variable. I can't find what the trigger point is but the pump definitely gets louder after extended load.


On closer examination I have found it is not the pump that starts making noise.

It just happened that at certain loads my PSU fan was ramping up, and that was giving the rumble.

I fixed this by changing out the HX650 for an HX1000i. The 650 was ample for one Vega.

With the 1000 W PSU I now run both the Vega 64 LC and my old R9 290X. Maximum power draw with everything loaded is still only about 700-760 W.

Yeah AMD, you frikkin nutters wanted everyone to buy new PSUs they don't need at all.

Another fine example of how the tech biz lies to help each other earn more where the user doesn't need it at all.
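The headroom claim above is easy to sanity-check: a 760 W peak on a 1000 W unit is a 76% load, comfortably within spec. (The single-Vega system draw below is an illustrative assumption, not a figure from the post.)

```python
# PSU headroom check for the dual-GPU setup described above.
def load_pct(draw_w, psu_w):
    return draw_w / psu_w * 100

# Vega 64 LC + R9 290X peaking at ~760 W on a 1000 W unit:
print(f"{load_pct(760, 1000):.0f}% load")  # 76%: comfortable
# A single Vega on the old 650 W unit at ~450 W system draw (assumption):
print(f"{load_pct(450, 650):.0f}% load")   # ~69%: also fine
```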


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> @VicsPC
> 
> Yes Hydronaut was PITA to spread. I just used it as I wanted to as a trial and had it as freebie with EK AM4 & TR blocks. Usually AS5 is what I use.
> 
> I have used AS5 for too many years, every time I think I'll buy another TIM I don't. Basically from say der8auer's and THG TIM roundups I have deemed I see no point in paying for another TIM.
> 
> I believe you have way better airflow in your case; I don't. I chose the Dark Base 900 as it had the size I wanted and the features, plus it was at a ridiculous price at the time. I knew that the way I was doing the build I was compromising on airflow within the case, so I pay for it somewhat in temps, but it's in no way shabby compared to other data I have seen.
> 
> I'm surprised our default V64 runs in SP 4K differ so much, with just going to water.
> 
> Yours: min 35.64 aver 42.33 max 50.58
> Mine: min 38.71 aver 45.79 max 55.85
> 
> My CPU is stock, never tested to see if SP 4K scales with CPU, but IIRC from when I've seen others shares it doesn't much (I could be wrong).
> Plenty of members experience this aspect. Kundica is on WC, link. I have noted he several times remounted block/done TIM and has similar experience.
> 
> If I look at CSV for a run the MAX can be a momentary rise, generally even under sustained load the hotspot is lower than MAX. So I believe ref MAX as extreme value and average for "everyday" purpose.


I am also using Hydronaut, and I know from a previous post that our GPU temps are identical (40/60 core/hotspot). I am hitting 7200 in SP 4K using my gaming overclocks. Try this:

First load up the liquid BIOS and apply the hellm powerplay table with a 142% power limit and 400 amps.

1) Restart the computer.

2) Once Windows boots and all programs have loaded, run Custom Resolution Utility's Restart64.exe. Then turn your monitor off/on.

3) Set the power plan to Balanced in WattMan. Hit apply. Repeat step 2.

4) In WattMan, move the slider to Custom. Don't change the core speed, core voltage, or HBM voltage. Only change the HBM speed to 1100 and slide the power limit to 142%. Hit apply. Repeat step 2.

5) Run Superposition 4K. You should hit 7200.


----------



## cplifj

OK, OC'ing is nice, but flashing new cards with a BIOS not meant for them? That is just being cheapskates... AMD fanboys who don't want to give AMD its due.

No wonder NV has a bigger R&D budget; it's mostly AMD customers who ride the cheapskate train, yet keep complaining and whining that they didn't beat Nvidia...


----------



## jbravo14

Just got my Sapphire RX Vega 56. Benched a little, then flashed straight to the Vega 64 BIOS.

For now I've just set the HBM clock to 1050 and the power limit to +50%, with stock Vega 64 voltages.

Did you guys have to raise the HBM voltage to get higher clocks?

Or would I do better undervolting first, then trying to raise the clocks after?


----------



## geriatricpollywog

Quote:


> Originally Posted by *cplifj*
> 
> ok, oc'ing is nice, but flashing new cards with bios not ment for it; That is just being cheapskates.. AMD fanboys who don't want to give amd what it's due.
> 
> no wonder NV has a bigger R&D budget, it's mostly amd customers who ride the cheapskate. but keep complaining and whining they didn't beat NVidia .....


Sorry I thought this was OCN. Am I in the wrong place?


----------



## geoxile

Quote:


> Originally Posted by *0451*
> 
> Sorry I thought this was OCN. Am I in the wrong place?


This is stockclock.net boy, you came to the wrong neighborhood


----------



## dagget3450

Quote:


> Originally Posted by *cplifj*
> 
> ok, oc'ing is nice, but flashing new cards with bios not ment for it; That is just being cheapskates.. AMD fanboys who don't want to give amd what it's due.
> 
> no wonder NV has a bigger R&D budget, it's mostly amd customers who ride the cheapskate. but keep complaining and whining they didn't beat NVidia .....


Not sure I follow you here.
AMD GPUs have a BIOS switch for dual BIOS, which makes flashing the GPU easier.

People flash Nvidia cards too, for more performance, and overclock them to match the next tier of GPU.

You honestly believe AMD's bottom line is even slightly affected by the insanely small number of people who flash their GPU? No way.


----------



## webhito

Quote:


> Originally Posted by *cplifj*
> 
> ok, oc'ing is nice, but flashing new cards with bios not ment for it; That is just being cheapskates.. AMD fanboys who don't want to give amd what it's due.
> 
> no wonder NV has a bigger R&D budget, it's mostly amd customers who ride the cheapskate. but keep complaining and whining they didn't beat NVidia .....


Err, we purchased a top-of-the-line card, not a low-range or midrange GPU. WattMan is included in Radeon software for users to put to use; flashing just makes it permanent.
Not sure where you came up with that nonsense.


----------



## gupsterg

Quote:


> Originally Posted by *0451*
> 
> I am also using Hydronaut and I know from a previous post that our GPU temps are identical (40/60 core/hotspot). I am hitting 7200 in SP 4K using my gaming overclocks. Try this:
> 
> First load up the liquid BIOS and apply the hellm powerplay table with a 142% power limit and 400 amps.
> 
> 1) Restart computer.
> 
> 2) Once windows posts and all programs have loaded, run Custom Resolution Utility Restart64.exe. Then turn your monitor off/on.
> 
> 3) Set power plan to balanced in wattman. Hit apply. Repeat step 2.
> 
> 4) In wattman, move the slider to custom. Don't change the core speed, core voltage, or HBM voltage. Only change the HBM speed to 1100 and slide the power limit to 142%. Hit apply. Repeat step 2.
> 
> 5) Run superposition 4K. You should hit 7200.


Cheers for the info. The RX VEGA 64 AIO VBIOS borks my GPU; I tried it in the past and again twice today. I believe my sample can't sustain the clocks.

When I purchased VEGA all I wanted was ~GTX 1080 performance so I could still use FreeSync. I had a MSI GTX 1080 EK X, the card from factory has higher power limit than FE model (see TPU DB). This boosted to ~1975MHz without me doing anything, just due to temps/powerlimit/nvidia boost 3.0. This is earlier SP 4K of VEGA tweaked vs MSI GTX 1080 EK X, link. Then 3DM FS, link 1, link 2 (older nV driver).

My gripe with VEGA originally was the pricing, so I went GTX 1080. As I didn't have a G-Sync monitor I couldn't use variable refresh rate, and gaming without this tech did not seem as smooth even when FPS was high enough. So as soon as I saw VEGA at what I deemed a good price I grabbed it and sold the GTX 1080.


----------



## Reikoji

Quote:


> Originally Posted by *dagget3450*
> 
> Everyone running these benchmarks have yal tried without any monitoring software overlays? For me i found better scores with all monitoring software off. I recall this being a big issue when i was benching doom.


i will see what i get with it off later today.


----------



## Reikoji

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers for the info. The RX VEGA 64 AIO VBIOS borks my GPU; I tried it in the past and again twice today. I believe my sample can't sustain the clocks.
> 
> When I purchased VEGA all I wanted was ~GTX 1080 performance so I could still use FreeSync. I had a MSI GTX 1080 EK X, the card from factory has higher power limit than FE model (see TPU DB). This boosted to ~1975MHz without me doing anything, just due to temps/powerlimit/nvidia boost 3.0. This is earlier SP 4K of VEGA tweaked vs MSI GTX 1080 EK X, link. Then 3DM FS, link 1, link 2 (older nV driver).
> 
> All my gripe originally with VEGA was pricing, so I went GTX 1080. As I didn't have G-Sync monitor I couldn't use variable refresh rate. Gaming without this tech did not seem as smooth even if FPS was high enough. So as soon as I saw VEGA at what I deemed a good price I grabbed it and sold GTX 1080.


Yeah, then as soon as they finally sell @ MSRP they become a steal. I feel like I robbed Newegg yesterday, and they deserve it!


----------



## fursko

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers for the info. The RX VEGA 64 AIO VBIOS borks my GPU; I tried it in the past and again twice today. I believe my sample can't sustain the clocks.
> 
> When I purchased VEGA all I wanted was ~GTX 1080 performance so I could still use FreeSync. I had a MSI GTX 1080 EK X, the card from factory has higher power limit than FE model (see TPU DB). This boosted to ~1975MHz without me doing anything, just due to temps/powerlimit/nvidia boost 3.0. This is earlier SP 4K of VEGA tweaked vs MSI GTX 1080 EK X, link. Then 3DM FS, link 1, link 2 (older nV driver).
> 
> All my gripe originally with VEGA was pricing, so I went GTX 1080. As I didn't have G-Sync monitor I couldn't use variable refresh rate. Gaming without this tech did not seem as smooth even if FPS was high enough. So as soon as I saw VEGA at what I deemed a good price I grabbed it and sold GTX 1080.


Is the Vega LC obviously better than the 1080? How is your experience so far? You've used both GPUs.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers for the info. The RX VEGA 64 AIO VBIOS borks my GPU; I tried it in the past and again twice today. I believe my sample can't sustain the clocks.


My card wouldn't run at stock LC BIOS clocks either, but it does fine up to 1722 with older drivers if I use stock voltage. With the newest driver I can run 1712, but my daily driver on the LC BIOS was 1702. I noticed your posts in the other thread. Correct me if I'm wrong, but it seems you were always undervolting, with an especially aggressive undervolt for p6. You also change the HBM voltage control; I'm aware it doesn't touch HBM voltage, but I'd stop messing with it for your tests.

I'd try this. Flash the LC BIOS, probably 8734, as it always seemed most stable for me. Set p7 to something like 1702, but leave all of your voltages stock and leave the p6 clock stock as well. HBM can probably be set to 1100 if that's what you prefer. See if that's stable with a +50% power limit.

Recently I stopped using the LC bios because I need the card to not default to 1752 in Linux and OSX. Instead I'm using stock Air bios and edited a powerplay table to adjust usSocketPowerLimit, usBatteryPowerLimit, and usSmallPowerLimit to match the LC bios of 264. Doing that allows me to run the same clocks I used with the LC bios in Windows but leaves the clocks stock for the other OSes. I've also experimented with various p6 and p7 clocks/voltages. I've settled on 1587/1075mv 1677/1150mv as good performance without wrecking the wattage.
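The powerplay-table edit described above can be illustrated conceptually. The real table is a binary blob applied via a registry key, so this is only a sketch: the three fields named in the post are modelled as a plain dict and raised to the LC-BIOS value of 264 W (the starting value of 220 is illustrative, not from the post).

```python
# Conceptual sketch only: the real Vega powerplay table is a binary structure
# applied via the registry. Here the three limits named above are modelled as
# a dict and raised to the LC-BIOS value of 264 W.
LC_POWER_LIMIT_W = 264

table = {
    "usSocketPowerLimit": 220,    # illustrative air-BIOS default (assumption)
    "usBatteryPowerLimit": 220,
    "usSmallPowerLimit": 220,
}

for field in ("usSocketPowerLimit", "usBatteryPowerLimit", "usSmallPowerLimit"):
    table[field] = LC_POWER_LIMIT_W

print(table)
```

The point of the edit is that only the power limits change: clocks stay at air-BIOS defaults, so other OSes that ignore the Windows-side table behave normally.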


----------



## gupsterg

Quote:


> Originally Posted by *Reikoji*
> 
> yea, then as soon as they finally sell @ msrp they become a steal. i feel like i robbed newegg yesterday. and they deserve it !.


I paid £515 with 2 games. I valued the games at ~£40, and I also snagged ~£11 cashback, so the card was ~£464. That's about as close to SEP as it gets in the UK, AFAIK. Plus, as it was the Limited Edition, I ended up with a cool-looking backplate.
Quote:


> Originally Posted by *fursko*
> 
> Is the Vega LC obviously better than the 1080? How is your experience so far? You've used both GPUs.


I have the RX VEGA 64 Limited Edition (air) on an EK-FC Radeon VEGA block. I do believe the RX VEGA 64 AIOs are the better-binned GPUs.

The GTX 1080 was more power efficient, and I think its boost tech is the better implementation. If I had a G-Sync monitor, or nVidia had FreeSync support, I may not have swapped, TBH. I did have a quick scout for a G-Sync monitor and the choice seemed limited vs FreeSync; if I changed from my MG279Q to a PG279Q, I'd be paying extra for variable refresh rate tech. I also believe the driver panel is weaker for options on the green team; the AMD panel allows much more without a 3rd-party app.

So for FreeSync and the better driver panel I went back to AMD. Gaming without looking at FPS, I believe the experience is the same. With tweaks a V56/64 delivers good performance, but at the cost of power.
Quote:


> Originally Posted by *kundica*
> 
> My card wouldn't run at stock LC bios clocks either but does fine up to 1722 with older drivers if I use stock voltage. With the newest driver I can run 1712 but my daily driver on the LC bios was 1702. I noticed your posts in the other thread. Correct me if I'm wrong but it seems you were always undervolting with an especially aggressive undervolt for p6. You also change the HBM voltage control, I'm aware it doesn't touch HMB voltage, but I'd stop messing with it for your tests.
> 
> I'd try this. Flash the LC bios probably 8734 as it always seemed most stable for me. Set p7 to something like 1702 but leave all of your voltages stock and leave p6 clock stock as well. HBM can probably be set to 1100 if that's what your prefer. See if that's stable with a +50% power limit.
> 
> Recently I stopped using the LC bios because I need the card to not default to 1752 in Linux and OSX. Instead I'm using stock Air bios and edited a powerplay table to adjust usSocketPowerLimit, usBatteryPowerLimit, and usSmallPowerLimit to match the LC bios of 264. Doing that allows me to run the same clocks I used with the LC bios in Windows but leaves the clocks stock for the other OSes. I've also experimented with various p6 and p7 clocks/voltages. I've settled on 1587/1075mv 1677/1150mv as good performance without wrecking the wattage.


I'm probably not going to use any AIO VBIOS now. I was monitoring the GPU at stock with that VBIOS and saw ~1.231V at ~1800MHz before 3DM crashed. That IMO was excessive, as I believe the GPU would have got more voltage than what monitoring captured.

I didn't think I was being too aggressive with the DPM6 voltage. Usually I set DPM6 lower than what the HBM voltage would be in WattMan, as I want the GPU to use the lower voltage I set in DPM6; I then match them. But I take on board your info/experience for use if I need it.

I have tested DPM7: 1672MHz 1150mV with DPM6 also increased, but only to 1000mV. This is stable, but yielded little clock gain over 1652MHz/1557MHz, and I need ~+25mV on each DPM. If I go for DPM7: 1682MHz 1150mV with similar DPM6, I gain a slightly higher clock but instability creeps in.

I'm just aiming to get 1557MHz/1652MHz as tightly set up for mV/power limit as I can, as these clocks match the GTX 1080 (~1975MHz) for performance in the test data I have.


----------



## kundica

Quote:


> Originally Posted by *gupsterg*
> 
> I was monitoring GPU at stock with that VBIOS and saw ~1.231V with ~1800MHz before 3DM crashed. That IMO was excessive, as I believe GPU would have got more voltage than what monitoring captured.


Damn. I've seen others mention it boosting that high but my card only boosts higher than p7 during compute tasks. At stock AIO bios it's completely unstable but at 1722 I'd see 1770ish. That's why I settled for 1702.


----------



## gupsterg

It was a shock to me as well.

Usually for:-

DPM6: 1557MHz 975mV
DPM7: 1652MHz 1125mV
HBM: 1100MHz 975mV
PowerLimit: 65%

I'd say I see ~1600MHz in 3D loads, Compute ~1700MHz. Then for said value for 3D, depending on app it's generally ~+25MHz, I'd say for Compute, depending on app it's ~-25MHz of said value. Both generally use ~1.075V under load.

I set this same profile on the AIO VBIOS today and got the same performance/voltage and stability in what I tested for a short period. What was unstable on the stock VBIOS was unstable on the AIO VBIOS as well. I used the later AIO VBIOS. Due to these aspects and others stated before, I deemed it best to stay on the factory VBIOS.

For example, I have seen Valley destabilize a profile on my GPU rather quickly; I believe it's down to how much more it "yoyos". Now, depending on how unstable the profile is, I can have a driver reset. Rendering may pause for a millisecond, I may not bomb to desktop; I know a reset occurred as the monitor re-syncs, rendering then continues, and at this point the GPU has reverted to stock. So if I were on the AIO VBIOS I could potentially be hitting 1.25V, IMO. Yeah, the GPU should crash at this point, but I really don't like zapping it this way TBH.


----------



## MAMOLII

Hi gupsterg, after 2 years of Fury X fun and bios modding I jumped to an RX 56!
I have to read 401 pages....
From what I read, I have to flash the RX 64 bios to give the RAM 1.35V, undervolt the GPU to overclock, and there is another voltage that is not the HBM voltage.
1) Is the cooler the same on the RX 56 and RX 64? I mean the heatsink inside the case, or is the RX 56 one smaller?
2) I believe it's safe to flash the RX 64 bios to an RX 56: same memory chips, same GPU...
3) Best tool for OC? WattMan, Afterburner, OverdriveNTool?
It's day one, I have to read a lot of pages....


----------



## gupsterg

Sweet, I sorta came from Fury X to VEGA and am liking it (only fly in the ointment is no bios mod).

1) Yes.

2) After what I experienced with flashing V64 AIR to V64 LIQUID I would consider monitoring voltages on a V56 flashed to V64 AIR. Especially look out for when a profile fails and the driver resets. Generally it should be OK, as V64 AIR does not have as high a default clock as V64 LIQUID.

3) I've stuck to WattMan. Resizing it currently causes the app to crash; if I only resize vertically to full height in one go, it doesn't for me. OverdriveNTool uses the same access method to the driver. I have noted some having better luck with it; for me it's no different than WattMan. Currently I use MSI AB only for monitoring when I want graphs (otherwise HWiNFO). The visual representation has aided me in grasping aspects of tweaking, etc.

For me the ref cooler reached the limits of the PowerPlay elements that can cause throttling, so it's best to aim for undervolting and whatever clocks you can gain with it, IMO. If not going water cooled, go Morpheus II; there are plenty of decent results on it in this thread/on the web.

Ref this thread; it covers HBM voltage in WattMan/OverdriveNTool plus other aspects.


----------



## MAMOLII

Thanks gupsterg! The Fury X seems to run out of RAM in the latest game titles (Assassin's Creed and Wolfenstein), so I sold it.
It's always trial and error to find the sweet spot (like tuning Ryzen memory).

I was ready to change the paste for liquid metal, but I think the capacity of the cooler is small and maybe it isn't worth it.

I have an old Zalman VF3000 from the AMD 5850 days... maybe if I can mount it while keeping the stock plate for VRM cooling it would be great.
2.77 check, bios 64 check, going to.... atiflash -p 0 rx64bios.rom... soon, and let the fun begin
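Since flashing comes up a lot in this thread, here is a small sketch of that sequence with a backup step first, wrapped so the commands can be inspected before anything runs. The -s (save) and -p (program) flags follow common ATIFlash usage; verify them against the build you actually have:

```python
# Build the two atiflash invocations implied above: back up, then program.
# Flag meanings (-s saves a copy of the adapter's ROM, -p programs it) follow
# common ATIFlash usage; check them against your own atiflash build.
def flash_plan(adapter: int, new_rom: str, backup: str = "backup.rom"):
    """Return the command lines as argument lists, backup first."""
    save = ["atiflash", "-s", str(adapter), backup]
    program = ["atiflash", "-p", str(adapter), new_rom]
    return [save, program]

# Print the plan instead of executing it; pass each list to subprocess.run
# yourself once you're happy with it.
for cmd in flash_plan(0, "rx64bios.rom"):
    print(" ".join(cmd))
```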


----------



## dagget3450

How accurate is MSI AB on VRAM usage? Reason I ask is, I am able to hit almost 13GB of VRAM usage in WoW @ 8K, max settings and max AA. It's around 6GB/10GB with 8xMSAA.


----------



## Yviena

Hmm seems i get best results with both overdriventool and msi afterburner used together.


----------



## Reikoji

Without monitoring software or overlay running, but also dropped P7 down to 1210mv.

Probably would have crashed... without this much better cooling fan.

1200mv survived?! That used to be an instant crash for me.


----------



## SpecChum

I just got 6437 on my stock cooler.
Anyway, reason for post is to ask: Is there any reason to put the memory voltage any lower than the P7 voltage?

I've tried and it doesn't seem to lower any temps or power, just causes instability if it's too low.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I just got 6437 on my stock cooler.
> 
> Anyway, reason for post is to ask: Is there any reason to put the memory voltage any lower than the P7 voltage?
> 
> I've tried and it doesn't seem to lower any temps or power, just causes instability if it's too low.


It acts like the minimum voltage floor for the GPU, so when you set P6 to 1000, setting HBM to 1050 will override the P6 state, from what we've seen. Whether or not that still holds on current drivers, I still do the same and always keep HBM at or below my P6 mV value.


----------



## Reikoji

I just keep my HBM/floor voltage somewhat below my P6 voltage. I think it should be above whatever P5 voltage is at the very least, unless you have your P6 voltage below p5 voltage.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> It acts like the minimum voltage floor for the GPU, so when you set P6 to 1000, setting HBM to 1050 will override the P6 state, from what we've seen. Whether or not that still holds on current drivers, I still do the same and always keep HBM at or below my P6 mV value.


I know it's the floor, yeah, I was just wondering if there's any point in having it lower than the lowest P6 or P7 voltage.

My limited testing is saying no, so I just wondered if anyone had found a use for it yet.

My card seems to be quite happy running 910mV HBM voltage at 1028Mhz but 1100Mhz needs 985mV, so for my "quiet" profile I set P6, P7 and HBM to 910mV.

Running 1100mV HBM and 1100mV P7 doesn't seem to do anything at all compared to having HBM at 985mV.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> I just keep my HBM/floor voltage somewhat below my P6 voltage. I think it should be above whatever P5 voltage is at the very least, unless you have your P6 voltage below p5 voltage.


P5 is 1100, I'm always below that on P6 and P7 lol


----------



## ducegt

Count me in. $500 open box from Newegg, working just fine. I'm on a 1080p 144Hz FreeSync display, so I'm probably more interested in underclocking if anything. Out of the box on the higher-TDP bios it's 2.5x faster than my previous overclocked R9 285.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I know it's the floor, yeah, I was just wondering if there's any point in having it lower than the lowest P6 or P7 voltage.
> 
> My limited testing is saying no, so I just wondered if anyone had found a use for it yet.
> 
> My card seems to be quite happy running 910mV HBM voltage at 1028Mhz but 1100Mhz needs 985mV, so for my "quiet" profile I set P6, P7 and HBM to 910mV.
> 
> Running 1100mV HBM and 1100mV P7 doesn't seem to do anything at all compared to having HBM at 985mV.


Yes, I've seen the same, though it's 950mV for me. Higher might stabilise some higher OCs, but currently I have an odd issue with my card which might result in an RMA.


----------



## The EX1

Quote:


> Originally Posted by *dagget3450*
> 
> How accurate is MSI AB and Vram usage? Reason i asked is, i am able to hit almost 13gb vram usage in WoW @ 8k max settings and max AA. its around 6GB/10GB with 8xMSAA


Have you tried setting AB to display each card's memory usage? Otherwise, it may display a combined amount.


----------



## elox

@Trixx what issue? Did you kill your silicon-lottery chip?


----------



## gupsterg

Quote:


> Originally Posted by *MAMOLII*
> 
> Thanks gupsterg! The Fury X seems to run out of RAM in the latest game titles (Assassin's Creed and Wolfenstein), so I sold it.
> It's always trial and error to find the sweet spot (like tuning Ryzen memory).
> 
> I was ready to change the paste for liquid metal, but I think the capacity of the cooler is small and maybe it isn't worth it.
> 
> I have an old Zalman VF3000 from the AMD 5850 days... maybe if I can mount it while keeping the stock plate for VRM cooling it would be great.
> 2.77 check, bios 64 check, going to.... atiflash -p 0 rx64bios.rom... soon, and let the fun begin


No problem.

PowerPlay has throttle limits of 95°C for HBM and 105°C for Hotspot. I reached the hotspot temp limit easily on the ref blower, at stock, with just a run of SP 4K. As my case uses a rad on top/front, airflow is not great, so I do believe some of my experience was affected by this. Also be aware HBM performance can drop before you reach the temp limit when you OC, so make sure you get scaling as you clock higher on it.
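Those two limits are easy to check against a HWiNFO CSV log after a run. A rough sketch; the sensor column names here are guesses and must be matched to whatever header your HWiNFO log actually emits:

```python
# Hedged sketch: count samples in a HWiNFO CSV log that exceed the PowerPlay
# throttle limits mentioned above (HBM 95C, hotspot 105C). The column names
# are HYPOTHETICAL; rename them to match your actual log header.
import csv

LIMITS = {
    "GPU Memory Temperature [C]": 95.0,
    "GPU Hot Spot Temperature [C]": 105.0,
}

def over_limit_samples(path: str) -> dict:
    """Count log rows where each watched sensor exceeds its throttle limit."""
    counts = {name: 0 for name in LIMITS}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for name, limit in LIMITS.items():
                try:
                    if float(row.get(name, "nan")) > limit:
                        counts[name] += 1
                except ValueError:
                    pass  # skip blank or non-numeric cells
    return counts
```

A non-zero count for either sensor during a benchmark run is a decent hint that the throttling described above, not the profile itself, is what capped the clocks.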

The factory TIM seems very hard. I tried teasing the cooler off after all screws were undone and it wasn't just easing off, so I warmed the heatsink with a hairdryer. Even then I had to make several attempts of warming/easing with very slight force to get the heatsink off. The factory TIM seemed like glue between the die/heatsink base.

If your die is not molded I would be hesitant to use liquid metal. You may find experience within this thread of members having used it, perhaps.
Quote:


> Originally Posted by *dagget3450*
> 
> How accurate is MSI AB and Vram usage? Reason i asked is, i am able to hit almost 13gb vram usage in WoW @ 8k max settings and max AA. its around 6GB/10GB with 8xMSAA


AFAIK all monitoring shows is an allocation of VRAM and not actual usage. I asked Mumak about it some time ago, link. I do not think AMD or NVIDIA have exposed actual VRAM usage.


----------



## TrixX

Quote:


> Originally Posted by *elox*
> 
> @Trixx what issue? Did you kill your silicon-lottery chip?


Longer description of issue.

I've had an issue with lifting the clocks above 1760MHz ever since I got the card, and I was thinking that the water block was exacerbating the issue and showing some deeper issue with the card. It underclocks like a beast, but top end it was running out of steam.

However testing over the last couple of hours has rendered some confusing results so trying to work out what's going on at the moment. Though I've not really pushed the card beyond any real boundaries of the stock BIOS with the exception of the power limit. It's not like I was running it insanely hot or anything, it's been at max of 70C for the most part.


----------



## Ne01 OnnA

Quote:


> Originally Posted by *gupsterg*
> 
> Sweet, I sorta came from Fury X to VEGA and am liking it (only fly in the ointment is no bios mod).
> 
> 1) Yes.
> 
> 2) After what I experienced with flashing V64 AIR to V64 LIQUID I would consider monitoring voltages on a V56 flashed to V64 AIR. Especially look out for when a profile fails and the driver resets. Generally it should be OK, as V64 AIR does not have as high a default clock as V64 LIQUID.
> 
> 3) I've stuck to WattMan. Resizing it currently causes the app to crash; if I only resize vertically to full height in one go, it doesn't for me. OverdriveNTool uses the same access method to the driver. I have noted some having better luck with it; for me it's no different than WattMan. Currently I use MSI AB only for monitoring when I want graphs (otherwise HWiNFO). The visual representation has aided me in grasping aspects of tweaking, etc.
> 
> For me the ref cooler reached the limits of the PowerPlay elements that can cause throttling, so it's best to aim for undervolting and whatever clocks you can gain with it, IMO. If not going water cooled, go Morpheus II; there are plenty of decent results on it in this thread/on the web.
> 
> Ref this thread, covers HBM voltage in WattMan/OverdriveNTool plus other aspects.


Now Vega!

I'm waiting for some custom badass card from XFX or Sapphire.


----------



## dagget3450

Quote:


> Originally Posted by *The EX1*
> 
> Have you tried setting AB to display each card's memory usage? Otherwise, it may display a combined amount.


Yes, I will have to test this on a single card.

Quote:


> Originally Posted by *gupsterg*
> 
> No problem.
> 
> PowerPlay has throttle limits of 95°C for HBM and 105°C for Hotspot. I reached the hotspot temp limit easily on the ref blower, at stock, with just a run of SP 4K. As my case uses a rad on top/front, airflow is not great, so I do believe some of my experience was affected by this. Also be aware HBM performance can drop before you reach the temp limit when you OC, so make sure you get scaling as you clock higher on it.
> 
> The factory TIM seems very hard. I tried teasing the cooler off after all screws were undone and it wasn't just easing off, so I warmed the heatsink with a hairdryer. Even then I had to make several attempts of warming/easing with very slight force to get the heatsink off. The factory TIM seemed like glue between the die/heatsink base.
> 
> If your die is not molded I would be hesitant to use liquid metal. You may find experience within this thread of members having used it, perhaps.
> AFAIK all monitoring shows is an allocation of VRAM and not actual usage. I asked Mumak about it some time ago, link. I do not think AMD or NVIDIA have exposed actual VRAM usage.


The only thing I can really go by is that with Fury X I had to turn a few settings down slightly because I would get VRAM-limit-like spikes in fps/movement, so I feel like it's somewhat accurate. When I use the absolute max AA setting I am not getting the VRAM-like pauses. I guess I need to test other games or benchmarks to see. I know in most cases I'll run out of GPU power before VRAM with a 16GB buffer. I also wonder how HBCC would affect this on an RX.


----------



## diabetes

Has anyone managed to make Vega sustain clocks greater than 1650MHz in Unigine Superposition "1080p Extreme"? I have tinkered with the card a lot and it seems like it refuses to go to P7 in SP, even when I set that as the minimum in WattMan. Interestingly enough, it reaches the max clock no problem in Blender Cycles and holds it.

I have a Vega56 with Vega64 LC Edition PP tables. My P6 is 1667MHz/1100mV and P7 is 1722/1125mV. Power limit does not matter: according to GPU-Z the card stays at 264W even when I set the power limit to +50% (400W).
The card is watercooled and after I fixed my block mounting, hotspot goes to 60C max at 264W.

Is it safe to disable ACG (Advanced Clock Gating) for higher P-States? I am curious if that makes a difference.


----------



## kundica

Quote:


> Originally Posted by *diabetes*
> 
> Has anyone managed to make Vega sustain clocks greater than 1650MHz in Unigine Superposition "1080p Extreme"? I have tinkered with the card a lot and it seems like it refuses to go to P7 in SP, even when I set that as the minimum in WattMan. Interestingly enough, it reaches the max clock no problem in Blender Cycles and holds it.
> 
> I have a Vega56 with Vega64LC Edition PP-Tables. My P6 is 1667Mhz/1100mV and P7 is 1722/1125mV. Power Limit does not matter. According to GPU-Z the card stays at 264W even when i set Powerlimit to +50% (400W).
> The card is watercooled and after I fixed my block mounting, hotspot goes to 60C max at 264W.
> 
> Is it safe to disable ACG (Advanced Clock Gating) for higher P-States? I am curious if that makes a difference.


You should sustain higher clocks with those pstates, at least my card does. Have you tried running those clocks with stock voltage?

@dagget3450 What settings do you run WoW at? I logged in for the first time since March and was confused that my Vega was running at low utilization/clocks. Adding a ton of AA helped the clocks rise, but I'm wondering if there are other ways.


----------



## diabetes

Quote:


> Originally Posted by *kundica*
> 
> Have you tried running those clocks with stock voltage?


Yes. The higher I set the voltage, the lower my actual frequency gets. At stock volts I get ~1540-1590MHz in SP.


----------



## 113802

What's a safe voltage to set the HBM voltage to? At 1000mV my HBM can be clocked at 1130MHz stable without artifacts. I can bench at 1145 with artifacts.


----------



## diabetes

Quote:


> Originally Posted by *WannaBeOCer*
> 
> What's a safe voltage to set the HBM voltage to? At 1000mV my HBM can be clocked at 1130MHz stable without artifacts. I can bench at 1145 with artifacts.


HBM voltage is constant 1.25V on V56 and 1.35V on V64. HBM voltage settings in Wattman actually set the minimum GPU core voltage for the IMC.


----------



## TrixX

Quote:


> Originally Posted by *diabetes*
> 
> Yes. The higher I set the voltage, the lower my actual frequency gets. At stock volts i get ~ 1540-1590Mhz in SP.


Set power target to +50%, you are being power throttled.

Really you should lower voltage and find best undervolt and then work upwards


----------



## diabetes

Quote:


> Originally Posted by *TrixX*
> 
> Set power target to +50%, you are being power throttled.
> 
> Really you should lower voltage and find best undervolt and then work upwards


Read post 4021 at the top of this page. +50% power limit makes the card stay at 264W for some reason.


----------



## spyshagg

Use the powertable reg files. Your condition is consistent with a temperature or power limit, although I have seen mine go up to 340W with +50%.


----------



## criminal

Has anyone put an AIO on Vega? I was thinking about picking one up and using a G12 bracket I already have. I am sure I will have to modify it a bit.


----------



## TrixX

Quote:


> Originally Posted by *diabetes*
> 
> Read post 4021 on the top of this page. +50% Powerlimit makes the card stay at 264W for some reason.


Sounds like some settings are being duplicated. Try to have HBM mv below or equal to DPM6 mv which is lower than DPM7 mv.


Quote:


> Originally Posted by *criminal*
> 
> Has anyone put an AIO on Vega? I was thinking about picking one up and using a G12 bracket I already have. I am sure I will have to modify it a bit.


Yes, it's quite successful when modified and gives performance similar to that of the LC Vega. The other alternative is the Raijintek Morpheus II which has proven to fit and work pretty well. Better than the stock cooler by a long way.


----------



## Naeem

Did fresh round of superposition test

RX Vega LC
50% power target
HBM2 @ 1100mhz
HBCC = ON


----------



## diabetes

I reapplied the powertable and found that my P5 had the same voltage as P6. Then I reverted to completely stock V64LC settings with ACG disabled and +50% power target. The card now reaches [email protected] in SP with DPM7 set to 1750MHz. The score scaled accordingly.



Then I re-enabled ACG and found that the card goes to 1710MHz max with otherwise the same settings, but now has terrible coil whine. Is it safe to leave ACG disabled?


----------



## TrixX

Quote:


> Originally Posted by *diabetes*
> 
> I reapplied the powertable and found that my P5 had the same voltage as P6. Then I reverted to completely stock V64LC settings with ACG disabled and +50% power target. The card now reaches [email protected] in SP with DPM7 set to 1750MHz. The score scaled accordingly.
> 
> Then I re-enabled ACG and found that the card goes to 1710MHz max with otherwise the same settings, but now has terrible coil whine. Is it safe to leave ACG disabled?


You can set the minimum to P6 to prevent downclocking to P5 and below in Wattman or in OverdriveNTool. I tend to use Clockblocker to lock P7 during gaming.


----------



## 113802

Quote:


> Originally Posted by *diabetes*
> 
> HBM voltage is constant 1.25V on V56 and 1.35V on V64. HBM voltage settings in Wattman actually set the minimum GPU core voltage for the IMC.


Anyone know the safe voltage for the IMC? Without increasing it my HBM can only run at 1070 stable and 1105mhz benchable.


----------



## TrixX

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Anyone know the safe voltage for the IMC? Without increasing it my HBM can only run at 1070 stable and 1105mhz benchable.


Been running 950mv most of the time but I've seen others running it up as high as 1100mv.


----------



## SavantStrike

Just bought a pair of Vega 64 Air cards. Will be buying full cover blocks for them and go from there.

Guess I've got 130+ pages to read through in this thread now.


----------



## SpecChum

Does anyone else find enabling HBCC lowers overclock headroom or is it me doing something wrong?

I can run P7 at 1700Mhz 1050mV with it off, but I get a crash fairly quickly with HBCC on at the same frequency.


----------



## poisson21

Don't know, with crossfire you can't use HBCC.


----------



## tarot

Quote:


> Originally Posted by *SpecChum*
> 
> Does anyone else find enabling HBCC lowers overclock headroom or is it me doing something wrong?
> 
> I can run P7 at 1700Mhz 1050mV with it off, but I get a crash fairly quickly with HBCC on at the same frequency.


Yeah, I found the benefits were small except in Superposition, but that engine sucks, so I turned it off; and I agree, it does seem to put a strain on the clock.


----------



## fursko

The CHG70 FreeSync 2 monitor got a firmware update today, and the 72-144Hz FreeSync range is now 48-144Hz. It's really awesome with the Vega 64 LC. Wolfenstein 2 also got a new update: +20% perf on Vega.


----------



## rancor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Anyone know the safe voltage for the IMC? Without increasing it my HBM can only run at 1070 stable and 1105mhz benchable.


It's not clear if we can even control that voltage or if it is just core voltage. If you are talking about the "memory voltage" my air card defaulted to 1100mV. I would guess the IMC can handle the exact same voltages as the core.


----------



## Ne01 OnnA

-> http://hexus.net/tech/reviews/graphics/111350-inno3d-ichill-geforce-gtx-1070-ti-x3/?page=11

1080 is now 3% slower than ATI VEGA 64.
V64 has gone up in price since last week, but right now the cheapest V64 on scan is £468 while the cheapest 1080 is £489.


----------



## dagget3450

Quote:


> Originally Posted by *kundica*
> 
> You should sustain higher clocks with those pstates, at least my card does. Have you tried running those clocks with stock voltage?
> 
> @dagget3450 What settings do you run WoW at? I logged in for the first time since March and was confused that my Vega was running low utilization/clocks. Adding a ton of AA helped the clocks raise but I'm wondering if there are other ways.


I don't really play WoW at the moment per se; I am using it to test things against my Fury Xs, and WoW was one of those tests. I can run max settings @ 8K with no AA enabled (it isn't needed on my monitor), and from testing I get a solid 60fps. I went to Stormwind to check and it stayed pretty consistent. If I were playing again it should suffice. I know on my Fury X I had to run 4-way to get the best fps, but I had to turn down some settings to keep it from pausing when panning the camera, and I did play back then, during launch and into Legion.

The Vegas are doing more than what 4 Furies could do in that game, so that's a plus (considering mGPU is dying on new games). I think I might resub and give it a go along with Warframe.


----------



## porschedrifter

Hey guys, does anyone know if the DisplayPorts are dual-mode or single-mode?
Do you need a passive or active HDMI adapter for HDMI-only monitors?


----------



## VicsPC

Quote:


> Originally Posted by *Ne01 OnnA*
> 
> 
> -> http://hexus.net/tech/reviews/graphics/111350-inno3d-ichill-geforce-gtx-1070-ti-x3/?page=11
> 
> 1080 is now 3% slower than ATI VEGA 64.
> V64 has gone up in price since last week, but right now the cheapest V64 on scan is £468 while the cheapest 1080 is £489.


Considering it's partly a console port on PC, my guess is DICE worked directly with AMD; it was the same for Battlefront 1.


----------



## 113802

Quote:


> Originally Posted by *rancor*
> 
> It's not clear if we can even control that voltage or if it is just core voltage. If you are talking about the "memory voltage" my air card defaulted to 1100mV. I would guess the IMC can handle the exact same voltages as the core.


That voltage defaulted to 1100mV? Mine defaults to 950mV, and when I increase it to 1000mV I can run the HBM at 1130MHz stable without artifacts. Currently I run 1060MHz without any artifacts; 1105MHz runs fine in most games but crashes in Shadow of War.


----------



## rancor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> This voltage defaulted to 1100mV? Mine is default 950mV and when I increase it to 1000mV I can run it at 1130Mhz stable without artifacts. Currently I can run 1060Mhz without any artifacts. 1105Mhz runs fine in most games but crashes in Shadow of War.


Yes, that voltage. OverdriveNTool defaulted to 1100mV; WattMan defaults to 1050mV. This is running the Air 64 bios, as I get random instability running the AIO bios downclocked, even with my watercooling.


----------



## bill1971

When I flash a bios, must I reinstall drivers?


----------



## Reikoji

The box for air-cooled Vegas is so much smaller.
Quote:


> Originally Posted by *bill1971*
> 
> When I flash a bios, must I reinstall drivers?


That shouldn't be necessary. Unless you're flashing an Nvidia GPU bios onto your card, in which case you would need to install Nvidia drivers. (joke)


----------



## dagget3450

Hold on a second...

https://www.computerbase.de/2017-11/wolfenstein-2-vega-benchmark/#diagramm-async-compute-1920-1080-anspruchsvolle-testsequenz



Crazy boost for Vega... I am guessing that in the worst-case scenario Nvidia does better due to async compute being disabled? (except 4K)

Either way, I'm liking these improvements; makes you wonder what's in store for the future.


----------



## kundica

Quote:


> Originally Posted by *dagget3450*
> 
> Hold on a second...
> 
> https://www.computerbase.de/2017-11/wolfenstein-2-vega-benchmark/#diagramm-async-compute-1920-1080-anspruchsvolle-testsequenz
> 
> Crazy boost for Vega... I am guessing that in the worst-case scenario Nvidia does better due to async compute being disabled? (except 4K)
> 
> Either way, I'm liking these improvements; makes you wonder what's in store for the future.


I think the worst case scenario is taken from that first part of New York. It's super bugged. The framerate doesn't slow down, it completely drops and all the latency goes red.


----------



## Reikoji

http://www.3dmark.com/spy/2708354
http://www.3dmark.com/fs/14102960

~1170W from the wall.


----------



## fato22

Hello everyone. I just started to OC my Vega 64 Air and I have a question. I found a pretty stable and satisfying OC: P6 1000mV, P7 1070mV, +4%, HBM [email protected]
Fans are set at 3100. Target is 75.

GPU temp at max frequency (~1620MHz) is pretty good and rarely reaches 76C. HBM temp (at full load) may move up and down from 78C to 84C.

This is where I am worried. I read that you cannot go over 80C with HBM memory. Unfortunately I can't lower the voltage, since under 1060, with my OC, it gets unstable.

I don't have any artifacts or issues at all during my gaming (mainly The Witcher 3).

Do I have to lower the frequency of the HBM, or do you guys think it's OK if I go a little bit over 80C?


----------



## Reikoji

Quote:


> Originally Posted by *fato22*
> 
> Hello everyone. I just started to OC my Vega 64 Air and I have a question. I found a pretty stable and satisfying OC: P6 1000mV, P7 1070mV, +4%, HBM [email protected]
> Fans are set at 3100. Target is 75.
> 
> GPU temp at max frequency (~1620MHz) is pretty good and rarely reaches 76C. HBM temp (at full load) may move up and down from 78C to 84C.
> 
> This is where I am worried. I read that you cannot go over 80C with HBM memory. Unfortunately I can't lower the voltage, since under 1060, with my OC, it gets unstable.
> 
> I don't have any artifacts or issues at all during my gaming (mainly The Witcher 3).
> 
> Do I have to lower the frequency of the HBM, or do you guys think it's OK if I go a little bit over 80C?


HBM at that speed is fine at those temperatures. HBM has a higher safe temperature; it's the stability temperatures that are lower.


----------



## fato22

So I am good to go with my OC?


----------



## Reikoji

If no artifacting or blackouts, I'd say you're good. It's when trying to push much higher at those temperatures that you'll start to see memory instability crop up.

Make those water loop plans :3

Also, HBM voltage isn't really HBM voltage; it's the core floor voltage. The HBM voltage on Vega 64 is always 1.356V. Having it set to 1060 means the card's core voltage will attempt not to drop below that under load.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Reikoji*
> 
> http://www.3dmark.com/spy/2708354
> http://www.3dmark.com/fs/14102960
> 
> ~1170W from the wall.


Damn! How do you even tax such a setup, run everything with supersampling?


----------



## TrixX

That's awesome to see. I'm liking where multi-Vega is headed


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> 
> http://www.3dmark.com/spy/2708354
> http://www.3dmark.com/fs/14102960
> 
> ~1170W from the wall.


WOW! Now that is impressive!
Quote:


> Originally Posted by *fato22*
> 
> Hello everyone. I just started to OC my Vega 64 Air and I have a question. I found a pretty stable and satisfying OC: P6 1000mV, P7 1070mV, +4%, HBM [email protected]
> Fans are set at 3100. Target is 75.
> 
> GPU temp at max frequency (~1620MHz) is pretty good and rarely reaches 76C. HBM temp (at full load) may move up and down from 78C to 84C.
> 
> This is where I am worried. I read that you cannot go over 80C with HBM memory. Unfortunately I can't lower the voltage, since under 1060, with my OC, it gets unstable.
> 
> I don't have any artifacts or issues at all during my gaming (mainly The Witcher 3).
> 
> Do I have to lower the frequency of the HBM, or do you guys think it's OK if I go a little bit over 80C?


In my testing, going over 80C seems to be fine anywhere up to about 1080MHz; at 1100MHz I start to get instabilities at 80C+.

Bear in mind tho, the timings drop at 85C, and that's coded into the HBM chips themselves I think. For example, on the menu of the game "Hard Reset" I usually get 93FPS, but once the HBM gets to 85C it immediately drops to 89FPS.

I like to use that menu as a test now as it 100% loads the GPU, heats it up real quick, and shows a fairly constant frame rate, so you can see any changes in realtime.
Quote:


> Originally Posted by *fato22*
> 
> So I am good to go with my OC?


I think you're fine!


----------



## Reikoji

Did the undervolt on this one.

You aren't able to enable HBCC with crossfire enabled, however.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> Did the undervolt on this one.
> 
> You aren't able to enable HBCC with crossfire enabled, however.


Still very nice.

I've disabled HBCC now anyway, I've noticed it somewhat limits my achievable core overclock.


----------



## gupsterg

Just a share on some max power testing for TR/VEGA. Room ambient ~22.3C, reading taken at start of test.

TR 1950X 3.875GHz 1.275V
F4-3200C14D-16GTZ 3200MHz 1T The Stilt Safe preset 1.35V
ASUS Zenith Extreme
6x Arctic Cooling F12 PWM
1x Silent Wings 3 140mm 1000RPM
EK D5 PWM
1x SATA SSD, 2x SATA HDD
ASUS MG279Q

RX VEGA 64, factory VBIOS, PP mod only to allow +100% on slider, settings as below with driver v17.11.1:-

DPM6: 1557MHz 975mV
DPM7: 1652MHz 1125mV
HBM: 1100MHz 975mV
PowerLimit: 65%

PC has been on since about Thursday night. When not in use it has been running BOINC and other tests for my meddling.

Wall power meter: min 121W, max 705W, and this is *including the screen*, which draws ~20W. As this is wall-side power draw, IMO the PSU would be outputting a max of ~617W on the rig side. AFAIK decent PSUs are rated on rig-side output, not wall-plug draw. We can see this in reviews that give such data, like JonnyGuru; here is the review page of the hot-box testing for the PSU I used.



Spoiler: Start









Spoiler: Mid test















Spoiler: End







HWINFO CSV log

P95_Heaven_TR_V64.zip 55k .zip file


As this was the first time doing such a test and I have yet to view the CSV: if members see large GPU clock drops at times, this may be due to when I moved the Heaven window (rendering pauses), and also how the GPU takes a "PowerTune" breather when scene changes occur in Heaven.
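For anyone who would rather pick those dips out of the log than eyeball the graph, here's a minimal sketch. The column name `GPU Clock [MHz]` is an assumption; match it to whatever header your HWiNFO export actually uses.

```python
# Hypothetical sketch: scan a HWiNFO CSV log for sudden GPU clock drops,
# e.g. the "PowerTune breather" dips seen during Heaven scene changes.
import csv

CLOCK_COLUMN = "GPU Clock [MHz]"  # assumed header name; check your log
DROP_THRESHOLD = 200              # flag drops larger than this (MHz)

def find_clock_drops(path, column=CLOCK_COLUMN, threshold=DROP_THRESHOLD):
    """Return (row_index, previous_MHz, current_MHz) for each sharp drop."""
    drops = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        prev = None
        for i, row in enumerate(reader):
            try:
                mhz = float(row[column])
            except (KeyError, ValueError):
                continue  # skip malformed rows or a missing column
            if prev is not None and prev - mhz > threshold:
                drops.append((i, prev, mhz))
            prev = mhz
    return drops
```

Running it over a log like the one attached would list the sample index and size of each dip, which makes it easy to line them up with scene changes.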

Next I used BOINC PrimeGrid on CPU/GPU.



HWINFO CSV log

Bionic_CPU_GPU.zip 92k .zip file


I saw a max of ~670W on the wall plug meter; IMO a max of ~585W is output from the PSU on the rig side.
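For reference, the wall-to-rig arithmetic in these posts can be sketched like this. The 90% efficiency figure is an assumption chosen to match the ~617W estimate; read the real value off your PSU's review curve at the relevant load.

```python
# Estimate PSU DC output (rig side) from a wall-plug meter reading.
# Assumptions: screen draw ~20W (as in the post) and ~90% PSU efficiency
# at this load -- take the real figure from a review of your unit.
def rig_side_watts(wall_watts, screen_watts=20.0, efficiency=0.90):
    return (wall_watts - screen_watts) * efficiency

print(rig_side_watts(705))  # ~616.5W, in line with the ~617W quoted above
print(rig_side_watts(670))  # ~585W, matching the ~585W figure above
```

The screen subtraction matters only because the meter is on the whole desk; a meter on the PC's plug alone would skip that term.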

I have 2x MagiCool G2 Slim 360mm rads, purchased based on a review on extremerigs. I believe that for the price and the loading I tested, they held up well. The kit was used in a case with all panels, a Dark Base 900. I have removed the front door, done the mesh mod, and also removed the excess plastic molding near the mesh on the sides of the top panel.
Quote:


> Originally Posted by *dagget3450*
> 
> Yes i will have to test this on a single card
> The only thing i can really go by is with furyx i had to turn a few settings down slightly because i would get vram limit like spikes in fps/movement. So i feel like its somewhat accurate. When i use the absolute max aa setting i am not the vram like pauses. I guess i need to test other games or benchmarks to see. I know most cases ill run out of gpu power before vram with 16gb buffer. I also wonder how hbcc would affect this on an RX


I concur that Fury X would have run out of VRAM (one reason I sold mine). My link to Mumak's post was just an FYI, as you had asked about the accuracy. As we have no real data on actual vs allocated, it's pretty much anyone's guess as to the discrepancy IMO.
Quote:


> Originally Posted by *dagget3450*
> 
> Hold on a second...
> 
> https://www.computerbase.de/2017-11/wolfenstein-2-vega-benchmark/#diagramm-async-compute-1920-1080-anspruchsvolle-testsequenz
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> crazy boost for vega... i am guessing in worst case scenario nvidia does better due to asynch disabled? (except 4k)
> 
> Either way liking these improvements, makes you wonder whats in store for the future


Wolfenstein 2 works extremely well on VEGA. In general I see ~100FPS+, and more often than not it's ~125-144FPS. When I didn't cap to 144 I saw screen tearing at 1440P on the Mein Leben! preset. I have only occasionally noted drops to ~60FPS, where I believe smoke was being rendered, and in some of the outside areas. These occurrences are few and far between, so I have put them down to bugs/optimizations that need addressing.


----------



## fato22

Quote:


> Originally Posted by *Reikoji*
> 
> If no artifacting or blackouts, I'd say you're good. It's when trying to push much higher at those temperatures that you'll start to see memory instability crop up.
> 
> Make those water loop plans :3
> 
> Also, HBM voltage isn't HBM voltage, it's the core floor voltage. HBM voltage on Vega 64 is always 1.356V. Having it set to 1060 means the card's core voltage will attempt not to drop below that under load.


Yes I've heard about the core floor voltage
Quote:


> Originally Posted by *SpecChum*
> 
> WOW! Now that is impressive!
> 
> 
> 
> 
> 
> 
> 
> 
> In my testing, going over 80C seems to be fine anywhere up to about 1080MHz; at 1100MHz I start to get instabilities at 80C+.
> 
> Bear in mind, though, that the timings loosen at 85C, and I think that's coded into the HBM chips themselves. For example, on the menu of the game "Hard Reset" I usually get 93FPS, but once the HBM hits 85C it immediately drops to 89FPS.
> 
> I like to use that menu as a test now, as it loads the GPU 100%, heats it up quickly, and shows a fairly constant frame rate, so you can see any changes in real time.
> 
> 
> 
> 
> 
> 
> 
> 
> I think you're fine!


Thanks! I rarely see 85C. It happened only a couple of times in The Witcher 3, but I didn't notice any drop in FPS, and it's usually between 82 and 83. In PUBG it stays well under 80.


----------



## ducegt

So happy this open-box 64 LC card wasn't a dud. I changed my airflow around to appease Vega's radiator and went with 2 PCIe cables instead of the daisy-chained one. No OverdriveNTool. Wattman set to Balanced. Just -75mV in Afterburner, HBM 1095, and a custom fan curve. Max VDDC 1.1V. HBM temp is normally only 1-5C higher than core.

Fast enough, I think, and rather quiet. Not sure yet how this stacks up against the results others have got.


----------



## By-Tor

My Vega 64 will be here today, and I need to get either a DisplayPort cable or a DisplayPort-to-DVI-D adapter.

My main monitor is an Asus 144Hz, 1ms screen, and I need help with which cable/adapter would allow me to run it at 144Hz.

Thank you


----------



## 113802

Quote:


> Originally Posted by *By-Tor*
> 
> My Vega 64 will be here today and need to get either a video display port cable or display port to DVI-D adaptor.
> 
> My main monitor is an Asus 144hz, 1ms screen and need help with what cable/adaptor would allow me to run it at 144hz.
> 
> Thank you


Get a DisplayPort 1.2 cable in case you get a FreeSync monitor in the future.


----------



## By-Tor

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Get a DisplayPort 1.2 cable in case you get a FreeSync monitor in the future.


Yes, after I get the EK water block for it I plan on picking up a FreeSync monitor.

Thanks for the heads up...

Newegg has this one that says it's compliant with DisplayPort 1.2.

https://www.newegg.com/Product/Product.aspx?item=N82E16812423118


----------



## Trender07

Guys, HBCC is probably useless for 1080p, right?


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> Just a share on some max power testing for TR/VEGA. Room ambient ~22.3C, reading taken at start of test.
> 
> TR 1950X 3.875GHz 1.275V
> F4-3200C14D-16GTZ 3200MHz 1T The Stilt Safe preset 1.35V
> ASUS Zenith Extreme
> 6x Arctic Cooling F12 PWM
> 1x Silent Wings 3 140mm 1000RPM
> EK D5 PWM
> 1x SATA SSD, 2x SATA HDD
> ASUS MG279Q
> 
> RX VEGA 64, factory VBIOS, PP mod only to allow +100% on slider, settings as below with driver v17.11.1:-
> 
> DPM6: 1557MHz 975mV
> DPM7: 1652MHz 1125mV
> HBM: 1100MHz 975mV
> PowerLimit: 65%
> 
> The PC has been on since about Thursday night. When not in use it has been running BOINC and other tests for my meddling.
> 
> Wall power meter: min 121W, max 705W, and this is *including the screen*, which draws ~20W. As this is wall-side power draw, IMO the PSU would be outputting a max of ~617W on the rig side. AFAIK decent PSUs are rated on rig-side output, not wall-plug draw. We can see this in reviews that give such data, like JonnyGuru; here is the review page of the hot-box testing for the PSU I used.
> 
> 
> 
> Spoiler: Start
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Mid test
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: End
> 
> 
> 
> 
> 
> 
> 
> HWINFO CSV log
> 
> P95_Heaven_TR_V64.zip 55k .zip file
> 
> 
> As this was the first time doing such a test and I have yet to view the CSV: if members see large GPU clock drops at times, this may be due to when I moved the Heaven window (rendering pauses), and also how the GPU takes a "PowerTune" breather when scene changes occur in Heaven.
> 
> Next I used Bionic PrimeGrid on CPU/GPU.
> 
> 
> 
> HWINFO CSV log
> 
> Bionic_CPU_GPU.zip 92k .zip file
> 
> 
> I saw a max of ~670W on the wall plug meter; IMO a max of ~585W is output from the PSU on the rig side.
> 
> I have 2x MagiCool G2 Slim 360mm rads, purchased based on a review on extremerigs. I believe for the price and loading I tested, they held up well. Kit was used in case with all panels, Dark Base 900. I have removed front door and done mesh mod and also removed excess plastic molding near mesh on sides of top panel.
> I concur that Fury X would have run out of VRAM (one reason I sold mine). My link to Mumak's post was just an FYI, as you had asked about the accuracy. As we have no real data on actual vs allocated, it's pretty much anyone's guess as to the discrepancy IMO.
> Wolfenstein 2 works extremely well on VEGA. In general I see ~100FPS+, and more often than not it's ~125-144FPS. When I didn't cap to 144 I saw screen tearing at 1440P on the Mein Leben! preset. I have only occasionally noted drops to ~60FPS, where I believe smoke was being rendered, and in some of the outside areas. These occurrences are few and far between, so I have put them down to bugs/optimizations that need addressing.


Looks like you are throttling from 1630 to 1567MHz. Temps are awesome though. Try 150% power limit and stock voltage, 1150/1200 P6/P7.


----------



## gupsterg

Quote:


> Originally Posted by *0451*
> 
> Looks like you are throttling from 1630 to 1567MHz. Temps are awesome though. Try 150% power limit and stock voltage, 1150/1200 P6/P7.


The throttling is down to how Heaven behaves: when scene changes occur (black screen), the GPU will aim to drop P-state. I have seen this on Hawaii, Fiji and VEGA. If I ran something like 3DM it would be AOK.

Here is where I did some stability testing one evening for the profile, without the CPU being battered, at stock CPU / OC GPU. These are just HML files for GPU monitoring. This has Heaven for ~24min, then I used the PC for something for ~30min, and then pretty much back-to-back the 3DM Stress Test, FSE then FS and lastly SD. The 3DM stress test presets passed.

Stability_testing.zip 123k .zip file


I also did ROG Furmark with artifact scanning today. In the past I had never used Furmark/Kombustor/OCCT on a GPU. I tried this as a test, as @porschedrifter had highlighted that it picked up artifacts for him on a profile.

So, same profiles as before for CPU/GPU. The CPU had been under load for 2hrs from BOINC; Furmark I ran for ~10mins.



I saw a max of ~633W on the wall plug meter, so ~550W output from the PSU on the rig side IMO. At one point during this testing I inadvertently ran Furmark full screen, which showed a max of ~750W at the wall plug, IMO ~657W output from the PSU on the rig side. That may be the clock drops in the HWiNFO GPU clocks graph. Without any GPU loading, for those BOINC work units I see a max of ~334W, IMO ~283W output from the PSU on the rig side.


----------



## Trender07

Quote:


> Originally Posted by *gupsterg*
> 
> The throttling is down to how Heaven behaves: when scene changes occur (black screen), the GPU will aim to drop P-state. I have seen this on Hawaii, Fiji and VEGA. If I ran something like 3DM it would be AOK.
> 
> Here is where I did some stability testing one evening for profile without CPU being battered and at stock CPU/OC GPU. These are just HML files for GPU monitoring. This has Heaven ~24min, then used PC for something ~30min and then pretty much back to back 3DM Stress test, FSE then FS and lastly SD. The 3DM stress test presets passed.
> 
> Stability_testing.zip 123k .zip file
> 
> 
> I also did ROG Furmark with artifact scanning today. In the past I have never used Furmark/Kombustor/OCCT on GPU. I tried this as test as @porschedrifter had highlighted it did pick up artifacts for him on a profile.
> 
> So same profiles as before for CPU/GPU. CPU had been under load for 2hrs from Bionic, Furmark I did ~10mins.
> 
> 
> 
> I saw a max of ~633W on the wall plug meter, so ~550W output from the PSU on the rig side IMO. At one point during this testing I inadvertently ran Furmark full screen, which showed a max of ~750W at the wall plug, IMO ~657W output from the PSU on the rig side. That may be the clock drops in the HWiNFO GPU clocks graph. Without any GPU loading, for those BOINC work units I see a max of ~334W, IMO ~283W output from the PSU on the rig side.


Hm, I don't know if my GPU is stable or not, because if I start then stop NiceHash CryptoNight it crashes at my current volts, but if I set 1100mV, for example, it doesn't crash.
Currently running 1010mV, and I passed Furmark, Time Spy, Fire Strike and Superposition without problems :/


----------



## gupsterg

Quote:


> Originally Posted by *Trender07*
> 
> Hm, I don't know if my GPU is stable or not, because if I start then stop NiceHash CryptoNight it crashes at my current volts, but if I set 1100mV, for example, it doesn't crash.
> Currently running 1010mV, and I passed Furmark, Time Spy, Fire Strike and Superposition without problems :/


The way the GPU works, it uses the Adaptive Clock Generator (ACG) / Adaptive Voltage Frequency Scaling (AVFS) for DPM 5/6/7.

So let's say I use:-

DPM6: 1557MHz 975mV
DPM7: 1652MHz 1125mV
HBM: 1100MHz 975mV
PowerLimit: 65%

I will see ~1600MHz in 3D and ~1700MHz in compute; depending on how the app loads the GPU, on average it will be ~+25MHz from the stated MHz for 3D and ~-25MHz for compute.

So the profile you wish to use 24/7, for all cases, may need testing at length in various apps IMO.
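As a rough illustration of the behaviour described above, the offsets here are just the approximate +/-25MHz averages quoted in the post, not anything read from the hardware:

```python
# Hypothetical sketch: expected average observed clock under AVFS, given
# the stated DPM clock and workload type. Offsets are the rough averages
# from the post above, and will vary card to card and app to app.
AVFS_OFFSET_MHZ = {"3d": +25, "compute": -25}

def expected_clock(stated_mhz, workload):
    return stated_mhz + AVFS_OFFSET_MHZ[workload]

print(expected_clock(1557, "3d"))       # 1582
print(expected_clock(1652, "compute"))  # 1627
```

The point is simply that the clock you type into the DPM state is a target the firmware scales around, not a fixed frequency.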


----------



## By-Tor

Sexy... And Heavy...


----------



## Leons

Greetings to all.
There is always a lot of useful information in this community, thank you.
I own a Sapphire RX Vega 56:



I would ask for clarification to use 10 bpc mode:
My monitor (Asus PB328Q) has a 10-bit panel, but if I set 10 bpc in the Radeon settings, the setting reverts to 8 bpc.



Maybe I have to change the monitor settings?
Many thanks for the help.


----------



## cplifj

I have that happen when the monitor is not on 60Hz but only 59Hz.

Changing that frequency in Windows enables the 10-bit color support. I have an iiyama Gold Phoenix.


----------



## Leons

Quote:


> Originally Posted by *cplifj*
> 
> I have that happen when the monitor is not on 60Hz but only 59Hz.
> 
> Changing that frequency in Windows enables the 10-bit color support. I have an iiyama Gold Phoenix.


My monitor is running at 2560x1440 @ 75Hz.

Thanks for the reply.

Edit: Setting the monitor to 60Hz, 10 bpc works.


----------



## dagget3450

Quote:


> Originally Posted by *ducegt*
> 
> Count me in. $500 open box from Newegg, working just fine. I'm on a 1080p 144Hz FreeSync display, so probably more interested in underclocking if anything. Out of the box on the higher-TDP BIOS, it's 2.5x faster than my previous overclocked R9 285.


Quote:


> Originally Posted by *MAMOLII*
> 
> Hi gupsterg, after 2 years of Fury X fun and BIOS modding I jumped to an RX 56!
> I have to read 401 pages....
> From what I've read, I have to flash the RX 64 BIOS to give the RAM 1.35V, and undervolt the GPU to overclock, and there is another voltage that is not the HBM voltage
> 1) Is the cooler the same on the RX 56 and RX 64... I mean the heatsink inside the case, or is the RX 56's smaller?
> 2) I believe it's safe to flash the RX 64 BIOS to an RX 56, same memory chips, same GPU...
> 3) Best tool for OC? Wattman, Afterburner, OverdriveNTool?
> # It's day one, I have to read a lot of pages....


Quote:


> Originally Posted by *SavantStrike*
> 
> Just bought a pair of Vega 64 Air cards. Will be buying full cover blocks for them and go from there.
> 
> Guess I've got 130+ pages to read through in this thread now.


Quote:


> Originally Posted by *fato22*
> 
> Hello everyone. I just started to OC my Vega 64 Air and I have a question. I found a pretty stable and satisfying OC: P6 1000v, P7 1070v, +4%, HBM [email protected]
> Fans are set at 3100. Target is 75.
> 
> GPU temp at max frequency (~1620MHz) is pretty good and rarely reaches 76C. HBM temp (at full load) may move up and down from 78C to 84C.
> 
> This is where I am worried. I read that you cannot go over 80C with HBM memory. Unfortunately I can't lower the voltage, since under 1060, with my OC, it gets unstable.
> 
> I don't have any artifacts or issues at all during my gaming (mainly The Witcher 3).
> 
> Do I have to lower the frequency of the HBM, or do you guys think it's OK if I go a little bit over 80C?


Quote:


> Originally Posted by *By-Tor*
> 
> My Vega 64 will be here today and need to get either a video display port cable or display port to DVI-D adaptor.
> 
> My main monitor is an Asus 144hz, 1ms screen and need help with what cable/adaptor would allow me to run it at 144hz.
> 
> Thank you


Quote:


> Originally Posted by *Leons*
> 
> Greetings to all.
> Always many useful information in this community, thank you.
> I own a Sapphire RX Vega 56:
> 
> 
> 
> I would ask for clarification to use 10 bpc mode:
> My monitor (Asus PB328Q) has a 10 bit panel but if you set 10 bpc in the Radeon settings the setting reverts to 8 bpc.
> 
> 
> 
> Maybe I have to change the monitor settings?
> Many thanks for the help.


Added to list! Welcome on in!


----------



## By-Tor

I'm running the 17.7.2 drivers with my 290X now, and I see the new 17.11.1 drivers are on the AMD site. Anyone having any problems with the new drivers?
I can't use my Vega yet, as I'm waiting on a DisplayPort cable that I ordered from Newegg.

Should I uninstall my current drivers and use the new ones?

Thank you


----------



## IvantheDugtrio

So I'm back with a Powercolor Vega 56 in the same dual-rad loop with the ekwb block.

Considering how most guides for Vega suggest undervolting first, it was surprising that this card would just downclock the moment voltages were reduced. I have flashed a Vega 64 AIO BIOS onto the card and can now stress my water loop. Core voltage and power limit are maxed out at 1250mV and +50%. The GPU now draws a staggering 360 watts at load. It still can't hit 1700MHz in Superposition.


----------



## fursko

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> So I'm back with a Powercolor Vega 56 in the same dual-rad loop with the ekwb block.
> 
> Considering how most guides for Vega suggested undervolting first it was surprising that this card would just downclock the moment voltages were reduced. I have flashed a Vega 64 AIO BIOS onto the card and can now stress my water loop. Core voltage and power limit are maxed out at 1250 mV and +50% power limit. The GPU now draws a staggering 360 watts at load. It still can't hit 1700 MHz in Superposition.


Vega 64 LC BIOS: 1752 P7 and 1200mV P7. You should set the same clocks but lower voltages. I hit 1730 with 1180mV P7 in Superposition.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *fursko*
> 
> Vega 64 lc bios 1752 p7 and 1200mV p7. You should set same clocks but lower voltages. I hit 1730 with 1180mv p7 in superposition.


Does your card automatically adjust clock speed based on the voltage setting? Mine never follows the set clock speed unless I crank up the voltage. I'm also using the soft PowerPlay table.

I'm on the latest 17.11.1 drivers, btw.


----------



## LordDain

I actually hit 1740MHz on my Vega 56 with the LC BIOS 'by accident', because Wattman reset the settings without me noticing. I usually just have P6/P7 at 1025/1100mV and easily make 1680MHz. I can do 1700MHz at around 1125mV. Try that?
Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Does your card automatically adjust clock speed based on voltage setting? Mine never follows the set clockspeed unless I crank up the voltage. I'm also using the soft powerplay.
> 
> I'm on the latest 17.11.1 drivers btw.


Mine does. The trick is finding the best undervolt and GPU clock combo. I can push my voltages above 1150mV and gain clocks, but the FPS actually suffers and the overall score is lower.

This is on my Powercolor 56 on 64 AIR BIOS. P6 [email protected], P7 [email protected] HBM [email protected], PL: +40%


























At 1080p power draw is around 250W. On 4K Optimized it's 270W.


----------



## fursko

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Does your card automatically adjust clock speed based on voltage setting? Mine never follows the set clockspeed unless I crank up the voltage. I'm also using the soft powerplay.
> 
> I'm on the latest 17.11.1 drivers btw.


Yes. You can't adjust clocks, so just play with the voltages. Launch Superposition in windowed mode, adjust your voltages, and watch your clocks. I get the highest clocks with 1180mV, which is 1730MHz. All my clocks are stock: 1752 P7, 1668 P6.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Yes. You cant adjust clocks. Just play with voltages. Launch superposition windowed mode and adjust your voltages. Watch your clocks. I get highest clocks with 1180mV. Which is 1730 mhz. My all clocks stock 1752p7, 1668p6


You can adjust clocks, but some cards react better than others. Mine doesn't mind going down in clocks, but it doesn't like going above 1759MHz for some reason. It's much easier to use voltage for clock frequency management.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> You can adjust clocks, but some cards react better than others. Mine doesn't mind going down in clocks, but it doesn't like going above 1759MHz for some reason. It's much easier to use voltage for clock frequency management.


It's really rare and useless; I mean adjusting clocks is pointless. Your enemies are power, temp and voltage limits, not clocks. The Vega LC is already pushed too far; Vega was not designed for high clocks. The best you can do is adjust the voltages.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> It's really rare and useless; I mean adjusting clocks is pointless. Your enemies are power, temp and voltage limits, not clocks. The Vega LC is already pushed too far; Vega was not designed for high clocks. The best you can do is adjust the voltages.


Well, my V64 Air wouldn't be getting 1700+ if I were running stock Air clocks. I'm running LC clocks, which are a fair bit higher, and it's fine. So yes, they do matter, especially if you have the hardware to handle it.


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> Well my V64 Air wouldn't be getting 1700+ if I was running stock Air Clocks. I'm running LC clocks and it's fine, which are a fair bit higher. So yes they do matter especially if you have the hardware to handle it.


Actually, I was talking about the Vega LC BIOS.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Actually, I was talking about the Vega LC BIOS.


Not all Vega can run those clocks, but for those that can, definitely the way to go. I've seen a few that can run 1800+ but they are few and far between.


----------



## MAMOLII

Does anyone know if just the heatsink can be removed, without having to remove the plate that cools the VRMs?


----------



## Razkin

Quote:


> Originally Posted by *MAMOLII*
> 
> Does anyone know if just the heatsink can be removed, without having to remove the plate that cools the VRMs?


Yes, it can.


----------



## MAMOLII

Thanks man!!! I'm thinking of a custom mod with an AIO water cooler, but I want to keep the stock plate as it is.


----------



## gupsterg

@MAMOLII



As said before, the stock TIM was like glue IMO, so it will feel as if the heatsink is stuck on. Aim to use the least force; I used a hairdryer just to warm the heatsink, to help ease it off. I just took my time, slow and easy does it IMO.


----------



## bobnoho

So this is my first time posting on this subject; maybe somebody can help. I have 4 RX 64's and these cards are pieces of ****!!!!
I purchased them on the original release date and have been trying to mine Ether, but these cards just CONSTANTLY crash!! Sometimes they will go for a week or so; sometimes I'm lucky if they run for an hour. The PC doesn't crash, but Wattman resets itself and then they only get about 30MH/s @ 350 watts.

I've tried different settings; so far what seems to be the best is:

core- 1127
mem-1100
voltage- auto
fans- 80%
temp- 65-85
power limit- -40

These settings have been getting me 39-44MH/s depending on the DAG, at 160-180 watts per card.


----------



## allenwr1505

Quote:


> Originally Posted by *bobnoho*
> 
> So this is my first time posting on this subject; maybe somebody can help. I have 4 RX 64's and these cards are pieces of ****!!!!
> I purchased them on the original release date and have been trying to mine Ether, but these cards just CONSTANTLY crash!! Sometimes they will go for a week or so; sometimes I'm lucky if they run for an hour. The PC doesn't crash, but Wattman resets itself and then they only get about 30MH/s @ 350 watts.
> 
> I've tried different settings; so far what seems to be the best is:
> 
> core- 1127
> mem-1100
> voltage- auto
> fans- 80%
> temp- 65-85
> power limit- -40
> 
> These settings have been getting me 39-44MH/s depending on the DAG, at 160-180 watts per card.


Your settings are not optimized.

I'm using the newest November release drivers. Uninstall the old ones using DDU and delete the registry files with the card settings. The first post in the BIOS mod thread on this forum outlines Helmm's registry tweak and which files specifically you need to delete from the registry.

You can easily hit 44MH/s per card, and can actually go up to 48 depending on how good your HBM is.

I run my cards at
core - 1000
mem - 1100

I just expanded from two Vega 64s to four, and am in the process of optimization. First, I will optimize them for the lowest undervolt possible. I am currently running my primary card (the one connected to the monitors) at 0.8562V, and the other three cards at 0.8250V. My net power draw is 785W for the system, which equates to about 150W per card. I expect to be able to shave off another 10-40W depending on how low they can stably undervolt while mining.

Undervolting is tricky. The only way to get under 0.875V is to apply registry-modded files. I've created ten custom edits of Helmm's PowerPlay table with varying voltages. I delete the old PowerPlay table and apply the new one with the voltage states edited. I restart the computer, then use OverdriveNTool to apply the lower core clock to the P6 and P7 states. The core clock needs to be applied to both P6 and P7 (it needs to be identical) for this to work.

Once I finish power optimization I'll try pushing the HBM on each card to see how far I can get it. I tried a few weeks ago with only two cards: one card could hash stably at 1160 HBM, and the other couldn't pass 1100 without crashing.
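The per-card power and efficiency figures above work out roughly like this. The ~185W allowance for the rest of the system (CPU/board/fans) is my assumption, chosen so that four cards at ~150W fit the 785W total from the post; it is not a measured value.

```python
# Back-of-envelope mining numbers from the post above. The overhead
# figure is an assumption, not a measurement.
def per_card_watts(system_watts, n_cards, overhead_watts=185.0):
    return (system_watts - overhead_watts) / n_cards

def mh_per_watt(mh, watts):
    """Hashrate efficiency in MH/s per watt."""
    return mh / watts

w = per_card_watts(785, 4)          # -> 150.0W per card
print(round(mh_per_watt(44, w), 3)) # ~0.293 MH/s per watt at 44MH/s
```

Tracking MH/s per watt like this is what makes the undervolting effort comparable across cards with different hashrates.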


----------



## SavantStrike

Quote:


> Originally Posted by *bobnoho*
> 
> So this is my first time posting on this subject; maybe somebody can help. I have 4 RX 64's and these cards are pieces of ****!!!!
> I purchased them on the original release date and have been trying to mine Ether, but these cards just CONSTANTLY crash!! Sometimes they will go for a week or so; sometimes I'm lucky if they run for an hour. The PC doesn't crash, but Wattman resets itself and then they only get about 30MH/s @ 350 watts.
> 
> I've tried different settings; so far what seems to be the best is:
> 
> core- 1127
> mem-1100
> voltage- auto
> fans- 80%
> temp- 65-85
> power limit- -40
> 
> These settings have been getting me 39-44MH/s depending on the DAG, at 160-180 watts per card.


Sounds like those are good settings; you're getting what's expected for tweaked VEGA cards on Ethereum. The instability is probably the mining client or the beta blockchain drivers.
Quote:


> Originally Posted by *TrixX*
> 
> Not all Vega can run those clocks, but for those that can, definitely the way to go. I've seen a few that can run 1800+ but they are few and far between.


So I probably shouldn't flash the Vega 64 AIO BIOS to my Air 64's straight out of the box then? It looked like the AIO version only had a mild speed bump and I was putting full cover blocks on the cards.


----------



## TrixX

Quote:


> Originally Posted by *SavantStrike*
> 
> So I probably shouldn't flash the Vega 64 AIO BIOS to my Air 64's straight out of the box then? It looked like the AIO version only had a mild speed bump and I was putting full cover blocks on the cards.


I'd test higher clocks with the stock Air BIOS, though the only way to really find out is to flash the AIO BIOS and see. There are some advantages; the negative is the lower temp targets if you want to go above 70C, but the flip side is that keeping temps below 70 increases the longevity of the card and is less likely to cause damage through heat.

Can end up with LOUD NOISES from the fan though









P.S. The perf boost is a bit more than just mild, I should add


----------



## owntecx

Hi, after 3 months I finally put the Morpheus on my Vega 56. Some preliminary testing with fans at 1100RPM/7.5V.


----------



## porschedrifter

Hey guys, for anyone who recently bought the newer 64 cards, could you please upload your BIOS using GPU-Z so we can catch any new BIOS versions coming out?
At the time of this posting, the latest liquid-cooled BIOS version we have is v8774 and the air-cooled BIOS is v8730.
Thanks!


----------



## Reikoji

Quote:


> Originally Posted by *porschedrifter*
> 
> Hey guys, for anyone who recently bought the newer 64 cards, could you please upload your bios using GPU-Z so we can catch any new bios versions coming out?
> At the time of this posting the latest Liquid Cooled bios version we have is v8774 and Air Cooled bios is v8730
> Thanks!


If they go by ASUS numbering schemes, the highest isn't the latest :3

but,

SapphireLimitedAir8737.zip 135k .zip file


----------



## 113802

Some fun with my RX Vega 64 XTX


----------



## PontiacGTX

Quote:


> Originally Posted by *dagget3450*
> 
> Hold on a second...
> 
> https://www.computerbase.de/2017-11/wolfenstein-2-vega-benchmark/#diagramm-async-compute-1920-1080-anspruchsvolle-testsequenz
> 
> 
> 
> crazy boost for vega... i am guessing in worst case scenario nvidia does better due to asynch disabled? (except 4k)
> 
> Either way liking these improvements, makes you wonder whats in store for the future


Worst case scenario comes from Manhattan


----------



## MAMOLII

Quote:


> Originally Posted by *gupsterg*
> 
> @MAMOLII
> 
> 
> 
> As said before, the stock TIM was like glue IMO, so it will feel as if the heatsink is stuck on. Aim to use the least force; I used a hairdryer just to warm the heatsink, to help ease it off. I just took my time, slow and easy does it IMO.


Yep, I wonder if the hairdryer trick works for the AMD (warranty) sticker, to remove it nicely and use it again.








My plan is to unmount the front casing and the heatsink only, leaving the VRM card plate as it is, and then mount a modded AIO CPU water cooler! After the Fury X, I think I have the "GPU water cooling is a must" syndrome.


----------



## gupsterg

Quote:


> Originally Posted by *MAMOLII*
> 
> Yep, I wonder if the hairdryer trick works for the AMD (warranty) sticker, to remove it nicely and use it again.


It sure does. I heated the sticker, used a thin plastic sheet to pick the edge up, applied slight heat again and the sheet slid under. Then I had some of that backing paper for stickers and placed it on that to reuse.


Quote:


> Originally Posted by *MAMOLII*
> 
> My plan is to unmount the front casing and the heatsink only, leaving the VRM card plate as it is, and then mount a modded AIO CPU water cooler! After the Fury X, I think I have the "GPU water cooling is a must" syndrome.


Should work. I know of members on OCUK that modded an AIO on; I will find the posts and link ASAP.


----------



## geriatricpollywog

Quote:


> Originally Posted by *MAMOLII*
> 
> Yep, I wonder if the hairdryer trick works for the AMD (warranty) sticker, to remove it nicely and use it again.
> 
> My plan is to unmount the front casing and the heatsink only, leaving the VRM card plate as it is, and then mount a modded AIO CPU water cooler! After the Fury X, I think I have the "GPU water cooling is a must" syndrome.


I called XFX and they said I could install a waterblock without voiding my warranty.


----------



## Soggysilicon

Quote:


> Originally Posted by *TrixX*
> 
> Not all Vega can run those clocks, but for those that can, definitely the way to go. I've seen a few that can run 1800+ but they are few and far between.


I've been around since the early days and I can't say I have ever seen a screenshot of anyone hitting and holding 1800, let alone gaming on it. Do you have a screen cap of such an event? I would be interested in seeing how this feat was accomplished.


----------



## kundica

Quote:


> Originally Posted by *Soggysilicon*
> 
> Been around since the early days and I can't say I've ever seen a screenshot of anyone hitting and holding 1800, let alone gaming on it. Do you have a screen cap of such an event? I'd be interested in seeing how this feat was accomplished.


I think he meant setting p7 to 1800+ which AMDMatt used in a lot of his videos.


----------



## allenwr1505

Quote:


> Originally Posted by *0451*
> 
> I called XFX and they said I could install a waterblock without voiding my warranty.


Yes, XFX allows you to install a waterblock without voiding your warranty. From what I can tell, they might be the only AMD board partner that allows it. I actually just RMA'd a card I had installed a waterblock on; they didn't give me any problems with the RMA and sent back another new Vega 64.


----------



## SavantStrike

Quote:


> Originally Posted by *TrixX*
> 
> I'd test higher clocks with the stock Air BIOS, though the only way to really find out is to flash the AIO BIOS and see. There's some advantages, though the negative is the lower temp targets if you want to go above 70C, the flip side is keeping the temps below 70 increase longevity of the card and less likely to cause damage through heat.
> 
> Can end up with LOUD NOISES from the fan though
> 
> 
> 
> 
> 
> 
> 
> 
> 
> P.S. Perf boost is a bit more than just mild I should add


Looking again at the specs, the liquid is more than 100MHz faster, to the tune of about 8 percent. I'm pretty confident that's a realistic target under water, but under air it could be dicey, especially with the TDP bump. It would probably unlock hair dryer mode lol.


----------



## tarot

I have had good luck with XFX; I returned a Fury X, no muss no fuss, they sent me a new one, so I'm a fan.

now in other news.
http://www.funkykit.com/reviews/video-cards/asus-rog-strix-radeon-rx-vega-64-graphics-card-review/5/

What is wrong with this picture?
My watercooled, overclocked Vega 64 only gets 8080 graphics in 3DMark 11, while a stock Strix gets 9577.
Is my card totally borked or am I dreaming?

Also same for 3DMark Fire Strike:
http://www.3dmark.com/fs/14104505
24597 - I've had it at 25k before, but that was pushing it.
And they get 26542, what the hell.

Now Time Spy is a little closer:
http://www.3dmark.com/spy/2581065
7684 for me and they get 7047.
That makes more sense.

So what the hell is going on here?

In other other news, my undervolt overclock is not as stable as I thought, AND Diablo 3 is a mean-spirited, hard-tasking biatch.








After about an hour and a half I had a driver crash and overclock reset. I have had those before; that was generally the overboost to 170/1750, which my card seems to hate, but it didn't do that this time. So I backed it down a little and it is stable now in D3. But if you think Firestrike or even Time Spy and the stress tests are the be-all end-all, I am here to tell you they ain't.









Play D3 at 4K maxed out for an hour and see what happens to you.









Weirdly enough I did not get the same issues in Doom or any other game I have tried, but hey.

Also, with my cooling the radiator, water etc. get very hot and the GPU gets a bit warm as well.
Would adding a 140 to the 280 help, or a faster pump? (I only have a cheapish EK pump spinning at 2300 at the moment.)


----------



## geriatricpollywog

Quote:


> Originally Posted by *tarot*
> 
> I have had good luck with XFX; I returned a Fury X, no muss no fuss, they sent me a new one, so I'm a fan.
> 
> now in other news.
> http://www.funkykit.com/reviews/video-cards/asus-rog-strix-radeon-rx-vega-64-graphics-card-review/5/
> 
> What is wrong with this picture?
> My watercooled, overclocked Vega 64 only gets 8080 graphics in 3DMark 11, while a stock Strix gets 9577.
> Is my card totally borked or am I dreaming?
> 
> also same for 3dmark fs
> http://www.3dmark.com/fs/14104505
> 24597 - I've had it at 25k before, but that was pushing it.
> And they get 26542, what the hell.
> 
> now time spy is a little closer
> http://www.3dmark.com/spy/2581065
> 7684 for me and they get 7047
> that makes more sense.
> 
> So what the hell is going on here?
> 
> In other other news, my undervolt overclock is not as stable as I thought, AND Diablo 3 is a mean-spirited, hard-tasking biatch.
> 
> 
> 
> 
> 
> 
> 
> 
> After about an hour and a half I had a driver crash and overclock reset. I have had those before; that was generally the overboost to 170/1750, which my card seems to hate, but it didn't do that this time. So I backed it down a little and it is stable now in D3. But if you think Firestrike or even Time Spy and the stress tests are the be-all end-all, I am here to tell you they ain't.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Play D3 at 4K maxed out for an hour and see what happens to you.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Weirdly enough I did not get the same issues in Doom or any other game I have tried, but hey.
> 
> Also, with my cooling the radiator, water etc. get very hot and the GPU gets a bit warm as well.
> Would adding a 140 to the 280 help, or a faster pump? (I only have a cheapish EK pump spinning at 2300 at the moment.)


What are your GPU and hotspot temps? Your FS graphics score should be higher.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *0451*
> 
> What are your GPU and hotspot temps? Your FS graphics score should be higher.







under 70 usually but they can go up.


OK, I'll add the HWiNFO log.

fs.CSV 81k .CSV file


Seems to peg around 1640 and 1100 on the RAM, except in the combined test at the end where it trails off.
So not sure why the slower times in 3DMark 11 and FS for the graphics tests.
A little while ago I did a reinstall of DX9 to get UT3 to work; might go check that out, because everything Vulkan, like Doom and Serious Sam Fusion, works great.


----------



## TrixX

Quote:


> Originally Posted by *Soggysilicon*
> 
> Been around since the early days and I can't say I have ever seen a screeny' of anyone hitting and holding 1800... never-the-less gaming on it. Do you have a screen cap of such an event? I would be interested in seeing how this feat was accomplished.


Quote:


> Originally Posted by *kundica*
> 
> I think he meant setting p7 to 1800+ which AMDMatt used in a lot of his videos.


Kundica is on the money


----------



## fursko

I just saw Star Wars BF2 benchmarks. Vega 64 (1570/945) performs (almost) like a 1080 Ti lol. Besides, it's DX11 and GPU utilization is around 96%, dunno why; Nvidia cards are at 99%.


----------



## TrixX

Interesting vid, though it was showing a Vega64 running basically stock untuned. Will be interesting to see the UV/OC results with Battlefront 2.


----------



## pengs

Quote:


> Originally Posted by *fursko*
> 
> I just saw Star Wars BF2 benchmarks. Vega 64 (1570/945) performs (almost) like a 1080 Ti lol. Besides, it's DX11 and GPU utilization is around 96%, dunno why; Nvidia cards are at 99%.


I'm curious how a water cooled V64 compares to a Ti but most of these Youtube framerate comparison channels use the reference 64 at stock. 1660-1700MHz is quite a boost when compared to 1550, about 7-10% increase.

Wolf II is showing the reference leading over a Ti. Now add 7-ish percent and then overclock _that_. The future looks very promising for Vega.


----------



## TrixX

Quote:


> Originally Posted by *pengs*
> 
> I'm curious how a water cooled V64 compares to a Ti but most of these Youtube framerate comparison channels use the reference 64 at stock. 1660-1700MHz is quite a boost when compared to 1550, about 7-10% on top of whatever it's achieving to begin with.
> 
> Wolf II is showing the reference leading over a Ti. Now add 7-ish percent and then overclock _that_. The future looks very promising for Vega.


Well it's interesting that the Ti was at ~1800MHz most of the time (likely stock) and my Vega with a water block is at around ~1750MHz in most game titles. I don't have Battlefront 2 to do an indirect comparison but it would be interesting to see.


----------



## pengs

Quote:


> Originally Posted by *TrixX*
> 
> Well it's interesting that the Ti was at ~1800MHz most of the time (likely stock) and my Vega with a water block is at around ~1750MHz in most game titles. I don't have Battlefront 2 to do an indirect comparison but it would be interesting to see.


Yeah, it would be interesting. Vega will never win against the Ti in this game though. The engine is deeply embedded in DirectX 11 and DICE isn't putting any work into DX12. Quite a bummer - DICE usually had the technical advantage with Frostbite. Bethesda definitely holds that crown atm.


----------



## TrixX

Quote:


> Originally Posted by *pengs*
> 
> Yeah, it would be interesting. Vega will never win against the Ti in this game though. The engine is deeply embedded in DirectX 11 and DICE isn't putting any work into DX12. Quite a bummer - DICE usually had the technical advantage with Frostbite. Bethesda definitely holds that crown atm.


Frostbite is reportedly an incredibly clunky and difficult engine to work with. If anything, UE4 and Lumberyard/CryEngine are easier to work with, though CryEngine is known for its own quirks. There were questions about Frostbite being shoehorned into every EA game they could, and while the visuals can be pretty good, there's been a real cost, with the base mechanics of a lot of games suffering as a result of the engine choice.


----------



## fursko

Quote:


> Originally Posted by *pengs*
> 
> I'm curious how a water cooled V64 compares to a Ti but most of these Youtube framerate comparison channels use the reference 64 at stock. 1660-1700MHz is quite a boost when compared to 1550, about 7-10% increase.
> 
> Wolf II is showing the reference leading over a Ti. Now add 7-ish percent and then overclock _that_. The future looks very promising for Vega.


My watercooled UV/OC Vega 64 shows much better performance than a 1080 Ti in Wolf 2, but benchmarks generally use a stock 1080 Ti FE.

I must add this info: COD WW2 is a perfect undervolt stability test game. My undervolted GPU reaches 1800+MHz in this game (my P7 is 1752) and my daily undervolt settings crashed. If you're trying to find the sweet spot for your undervolt settings, definitely use COD WW2. I don't actually like the COD series, but this game is unexpectedly good; HDR + FreeSync 2 is damn good with this title.


----------



## jbravo14

I think I finally got to tune my Vega 56 with the stock cooler to its full potential. Vega 64-like performance for less money (-$100).

Vega 56 + Vega 64 Bios Mod

Only used Wattman; did not bother to update the PowerPlay table yet (the instructions were TL;DR).

Core clock - Stock Vega 64
Core Voltage - P6 - 950, P7 - 1050
HBM Clock - 1050
HBM floor voltage - 1075
Fan - 400 - 3200
Temp target - 70 - Max - 85

No crashes on games both DX11 and DX12.

Max temp GPU - 80C
Max temp HBM - 85C
Max GPU clock - 1622mhz

Should I set a more aggressive fan RPM, or are these max temps good enough? So far Assassin's Creed Syndicate was able to tax my GPU at ~100% all the time on the Ultra preset.

No other game was able to keep it at ~100% util; it hovers between 1590MHz and ~1610MHz.


----------



## Razkin

You'll lose memory performance above 80 degrees HBM.


----------



## geriatricpollywog

Quote:


> Originally Posted by *jbravo14*
> 
> I think I finally got to tune my Vega 56 with the stock cooler to its full potential. Vega 64-like performance for less money (-$100).
> 
> Vega 56 + Vega 64 Bios Mod
> 
> Only used Wattman; did not bother to update the PowerPlay table yet (the instructions were TL;DR).
> 
> Core clock - Stock Vega 64
> Core Voltage - P6 - 950, P7 - 1050
> HBM Clock - 1050
> HBM floor voltage - 1075
> Fan - 400 - 3200
> Temp target - 70 - Max - 85
> 
> No crashes on games both DX11 and DX12.
> 
> Max temp GPU - 80C
> Max temp HBM - 85C
> Max GPU clock - 1622mhz
> 
> Should I set a more aggressive fan RPM, or are these max temps good enough? So far Assassin's Creed Syndicate was able to tax my GPU at ~100% all the time on the Ultra preset.
> 
> No other game was able to keep it at ~100% util; it hovers between 1590MHz and ~1610MHz.


That's acceptable for air cooled.


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Core clock - Stock Vega 64
> Core Voltage - P6 - 950, P7 - 1050
> HBM Clock - 1050
> HBM floor voltage - 1075
> Fan - 400 - 3200
> Temp target - 70 - Max - 85


First up, I'd set that floor voltage to 950mV; at 1075mV it renders your P6 and P7 voltages moot, since they'll be running off the 1075 floor.

Test that it's stable at 950mV and you should be golden. Otherwise work out the minimum floor, then set P6 to the same and P7 20-30mV above, from a testing perspective.
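The floor-voltage point above can be sketched as a simple rule of thumb (an illustrative model based on this thread's observations, not AMD's documented firmware behaviour): the voltage a P-state actually runs at is the higher of its own setting and the HBM floor voltage.

```python
# Illustrative model of the Wattman floor-voltage behaviour discussed here
# (assumption, not AMD firmware): each P-state runs at
# max(P-state voltage, floor voltage).

def effective_voltage(pstate_mv: int, floor_mv: int) -> int:
    """Voltage actually applied for a given P-state, in mV."""
    return max(pstate_mv, floor_mv)

# jbravo14's original settings: a 1075mV floor overrides P6/P7
assert effective_voltage(950, 1075) == 1075   # P6 setting ignored
assert effective_voltage(1050, 1075) == 1075  # P7 setting ignored

# TrixX's suggestion: a 950mV floor lets P6/P7 take effect
assert effective_voltage(950, 950) == 950
assert effective_voltage(1050, 950) == 1050
```

On that model, the 1075mV floor made the 950/1050mV P6/P7 settings irrelevant, which is exactly the problem described above.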


----------



## SpecChum

Quote:


> Originally Posted by *Razkin*
> 
> You'll lose memory performance above 80 degrees HBM.


I'm not sure what the official temp is but I don't notice a drop in performance until 85c, then it's instant.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I'm not sure what the official temp is but I don't notice a drop in performance until 85c, then it's instant.


There's another drop at 45C and 65C IIRC


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> There's another drop at 45C and 65C IIRC


Blimey, I'm on air; I've got no chance of keeping it under 65C, let alone 45C.


----------



## MAMOLII

Every time the OC fails, my card runs at 100% load on the desktop with stock voltages! Stuck in 3D mode, I suppose...
Is that normal, or does it just happen on my system? I have to reboot every time!


----------



## SpecChum

Quote:


> Originally Posted by *MAMOLII*
> 
> Every time the OC fails, my card runs at 100% load on the desktop with stock voltages! Stuck in 3D mode, I suppose...
> Is that normal, or does it just happen on my system? I have to reboot every time!


Try downloading CRU and running reset64.exe


----------



## Paul17041993

Quote:


> Originally Posted by *TrixX*
> 
> There's another drop at 45C and 65C IIRC


Interesting, is this just the core clock throttling or does the memory latency actually increase?


----------



## SpecChum

Quote:


> Originally Posted by *Paul17041993*
> 
> Interesting, is this just the core clock throttling or does the memory latency actually increase?


If the clock throttles, RTSS doesn't pick it up.

Even when you hit 85C it stays at 1100Mhz, or whatever it's set at, the FPS just tanks.


----------



## TrixX

Quote:


> Originally Posted by *Paul17041993*
> 
> Interesting, is this just the core clock throttling or does the memory latency actually increase?


Something akin to loosening timings at those points. The miners were all complaining about it when they got their cards a month or two back.


----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> First up, I'd set that floor voltage to 950mV; at 1075mV it renders your P6 and P7 voltages moot, since they'll be running off the 1075 floor.
> 
> Test that it's stable at 950mV and you should be golden. Otherwise work out the minimum floor, then set P6 to the same and P7 20-30mV above, from a testing perspective.


Thanks, let me try that out. I think 950mV worked, but it required me to stay at 945MHz. Is it a significant gain to still increase the HBM voltage to attain 1050MHz? I guess I'm wondering if it's worth the trade-off.

I remember seeing lower FPS at 945MHz vs 1050MHz when running the ROTR benchmark.


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Thanks let me try that out, I think 950mv worked, but required me to stay at 945mhz. Is it a significant gain to still increase hbm voltage to attain 1050mhz? I guess I'm thinking if it's worth the trade off?
> 
> I remember seeing lower FPS 945mhz vs 1050mhz when running ROTR benchmark


Test in between. As it's going to be the floor, you may get away with 1000mV on the HBM at 1050MHz; that way P6 can be 1000mV and P7 1050mV. That keeps power draw down and still gives good results.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> Thanks let me try that out, I think 950mv worked, but required me to stay at 945mhz. Is it a significant gain to still increase hbm voltage to attain 1050mhz? I guess I'm thinking if it's worth the trade off?
> 
> I remember seeing lower FPS 945mhz vs 1050mhz when running ROTR benchmark


The mem frequency and the mem voltage are certainly tied in some way: I can run all the way up to 1028MHz at 910mV, but even 1029MHz downclocks to 800MHz at that voltage. I then need a decent bump up to about 960mV, and then it's fine all the way to about 1080MHz; 1110MHz needs 980mV to be stable.

That's actually my "quiet" profile: mem, P6 and P7 all at 910mV and P7 core up to 1670.
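The step-wise clock/voltage relationship reported above can be sketched from the quoted numbers (specific to this one card, not a general Vega table):

```python
# Sketch of the HBM clock/voltage steps SpecChum reports for his card
# (sample-specific observations from the thread, not a general rule).

OBSERVED_STABLE = [  # (max stable HBM MHz at this voltage, required mV)
    (1028, 910),
    (1080, 960),
    (1110, 980),
]

def min_stable_mv(target_mhz: int):
    """Lowest reported-stable voltage covering target_mhz, or None if
    the target is beyond anything tested."""
    for max_mhz, mv in OBSERVED_STABLE:
        if target_mhz <= max_mhz:
            return mv
    return None

assert min_stable_mv(1028) == 910
assert min_stable_mv(1029) == 960  # even 1 MHz past the 910mV ceiling needs a bump
assert min_stable_mv(1100) == 980
assert min_stable_mv(1200) is None
```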


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Test in between, as it's going to be the floor you may get away with 1000mv on HBM and 1050MHz, that way P6 can be 1000mv and P7 1050mv. Keeps power draw down that way and still gives good results.


I tend to keep the p6 and p7 voltage the same, have you noticed any advantages of putting p6 lower?


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I tend to keep the p6 and p7 voltage the same, have you noticed any advantages of putting p6 lower?


Yes, you don't get P7 bugging out. If P6 and P7 were the same, early drivers would bug out the core clock and get it stuck at stock (IIRC). To make sure there's an active switch between P6 and P7, I normally keep a 20mV gap.
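As a sketch, the rule of thumb above (a workaround from this thread, not a documented requirement) amounts to a simple check on the P-state table:

```python
# Sketch of TrixX's rule of thumb from this thread (his workaround for the
# stuck-at-stock bug on older drivers, not a documented requirement):
# keep P7 at least ~20mV above P6 so the driver makes a real transition.

def p7_gap_ok(p6_mv: int, p7_mv: int, min_gap_mv: int = 20) -> bool:
    """True if P7 sits far enough above P6 to force a state switch."""
    return p7_mv - p6_mv >= min_gap_mv

assert p7_gap_ok(950, 985)        # jbravo14's later settings: 35mV gap
assert not p7_gap_ok(1000, 1000)  # equal voltages risk the stuck-clock bug
```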


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Yes, you don't get P7 bugging out. If P6 and P7 were the same, early drivers would bug out the core clock and get it stuck at stock (IIRC). To make sure there's an active switch between P6 and P7, I normally keep a 20mV gap.


Oh right, I thought that was frequency. I've not noticed any oddness keeping them the same, to be fair.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> Oh right, I thought that was frequency. I've not noticed any oddness keep them the same to be fair.


Had it happen a few times on older drivers. Though I've got into the habit of using ClockBlocker or just locking to P7 for gaming. Makes life a lot easier and smoother.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Had it happen a few times on older drivers. Though I've got into the habit of using ClockBlocker or just locking to P7 for gaming. Makes life a lot easier and smoother.


I've just started using that too.

I can run CS:GO at 300fps on 3440x1440 pulling under 100W









Without CB the FPS goes up and down; it's still usually over 200 tho.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I've just started using that too.
> 
> I can run CS:GO at 300fps on 3440x1440 pulling under 100W
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Without CB the FPS goes up and down; it's still usually over 200 tho.


Exactly. The power draw in P5-P7 is dependent on GPU load, so while some circumstances benefit from the full P0-P7 spectrum for power saving, gaming isn't one of them. I get really stable clocks and performance with no stuttering when P7 is locked.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> Exactly. The power draw in P5-P7 is dependent on GPU load, so while some circumstances benefit from the full P0-P7 spectrum for power saving, gaming isn't one of them. I get really stable clocks and performance with no stuttering when P7 is locked.


Plus my p7 is 200mV less than my p5


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> Plus my p7 is 200mV less than my p5


That's all good, you can't edit P5 anyway.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> The mem frequency and the mem voltage are certainly tied in some way as I can run all the way up to 1028Mhz at 910mV but even 1029Mhz downclocks to 800Mhz at that voltage, I then need a decent bump up to about 960mV and then it's fine all the way to about 1080Mhz. 1110Mhz needs 980mV to be stable.
> 
> That's actually my "quiet" profile, mem, p6 and p7 all at 910mV and p7 core up to 1670.


What tools do you use to test stability? On my R9 390 I just use FurMark, but on Vega it will be stable in FurMark and crash when I bench games. Not sure if this is because FurMark is only DX11.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> What tools do you use to test stability? On my R9 390 I just use FurMark, but on Vega it will be stable in FurMark and crash when I bench games. Not sure if this is because FurMark is only DX11.


My initial tests are just Fire Strike and Superposition, just to see if it crashes quickly; if those are OK I use the card as normal, playing the games I usually do.

If I'm doing something else I sometimes put Heaven on and just leave it running for an hour or so.

I've not used FurMark for years.


----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> Had it happen a few times on older drivers. Though I've got into the habit of using ClockBlocker or just locking to P7 for gaming. Makes life a lot easier and smoother.


Do you have ClockBlocker run at startup, or do you start it just before a game?

Any downsides to using ClockBlocker?

Is it more stable to use ClockBlocker when doing UV/OC?


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> Do you have clockblocker at startup or just before you start a game?
> 
> any downsides of using the clockblocker?
> 
> Is it more stable to use clockblocker when doing UV/OC?


I just run it as and when; also, after FCU it always prompts for admin access on start-up, I believe.


----------



## Razkin

Quote:


> Originally Posted by *SpecChum*
> 
> I'm not sure what the official temp is but I don't notice a drop in performance until 85c, then it's instant.


It is indeed 85C, I just rechecked. I have one Vega [email protected] mining ETH, and the hashrate drops ~5 MH/s as soon as it passes 80 degrees, but recovers. When it passes 85 degrees, the hashrate drops to close to 2 MH/s and only recovers once cooled back below 85.
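The behaviour described above can be summarised as a rough two-tier throttle model (one card mining ETH; thresholds approximate, and reading "drops close to 2 MH/s" as a drop to roughly 2 MH/s):

```python
# Rough model of the HBM-temperature hashrate behaviour Razkin reports for
# his one card mining ETH (thread anecdote, not a spec): ~5 MH/s is lost
# past 80C but recovers on its own, and past 85C the rate falls to roughly
# 2 MH/s until the card cools back below 85C.

def expected_hashrate(baseline_mh: float, hbm_temp_c: float) -> float:
    """Approximate hashrate for a given HBM temperature."""
    if hbm_temp_c > 85:
        return 2.0                 # hard throttle; recovers only below 85C
    if hbm_temp_c > 80:
        return baseline_mh - 5.0   # transient dip; recovers on its own
    return baseline_mh

assert expected_hashrate(43.0, 75) == 43.0
assert expected_hashrate(43.0, 82) == 38.0
assert expected_hashrate(43.0, 87) == 2.0
```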


----------



## jbravo14

One other observation I had: whenever I bring the HBM floor voltage lower (without modifying the core mV), the core clocks also become lower.

Is there any relationship between the HBM voltage and the core clock?


----------



## spyshagg

The card is a Rubik's cube. Everything seems connected to everything lol.


----------



## ducegt

Yup. OCing just isn't what it used to be now that cards tune themselves. Think I'll just be leaving my LC edition at stock. Undervolting was stable for benches, but not games.


----------



## 113802

Quote:


> Originally Posted by *ducegt*
> 
> Yup. OCing just isn't what it used to be now that cards tune themselves. Think I'll just be leaving my LC edition at stock. Undervolting was stable for benches, but not games.


I stopped playing with the frequency. I just added +50% power limit so it runs between 1600-1720Mhz. I did overclock the HBM to 1075Mhz since it gives a huge increase.


----------



## owntecx

Hi guys, after installing the Morpheus II on my RX Vega 56, I noticed between 6 and 8 degrees difference between core and HBM. Is that still normal? The hotspot is usually 10C above the HBM temps.


----------



## ducegt

Quote:


> Originally Posted by *owntecx*
> 
> Hi guys, after installing the Morpheus II on my RX Vega 56, I noticed between 6 and 8 degrees difference between core and HBM. Is that still normal? The hotspot is usually 10C above the HBM temps.


Yes. Perfectly normal.


----------



## allenwr1505

Quote:


> Originally Posted by *owntecx*
> 
> Hi guys, after installing the Morpheus II on my RX Vega 56, I noticed between 6 and 8 degrees difference between core and HBM. Is that still normal? The hotspot is usually 10C above the HBM temps.


My cards all run with the HBM a few degrees above the hotspot temp. However, they're all running with significantly lower core clocks and core undervolts while the HBM is overclocked for mining.


----------



## porschedrifter

Quote:


> Originally Posted by *jbravo14*
> 
> What tools do you use to test stability? on my r9 390, I just use furmark to test stabillity. But on vega, it will be stable on furmark and crash when i bench on games. Not sure if this is because furmark is only DX11.


ROG's version of FurMark, use artifact scan.
http://www.geeks3d.com/dl/showd/546 thank me later


----------



## gupsterg

I read so many posts on various locations concerning HBCC benefiting SP 4K, why do I gain nothing?


----------



## Naeem

Quote:


> Originally Posted by *gupsterg*
> 
> I read so many posts on various locations concerning HBCC benefiting SP 4K, why do I gain nothing?


On old drivers I was getting 4500+ in 1080p Extreme and 5100+ with HBCC on. With the new AMD drivers I get about the same score of 5100 to 5200 with or without HBCC.


----------



## PontiacGTX

Quote:


> Originally Posted by *gupsterg*
> 
> I read so many posts on various locations concerning HBCC benefiting SP 4K, why do I gain nothing?


It mostly benefits from settings that aren't maxed out, or from lower resolutions.


----------



## fursko

Quote:


> Originally Posted by *jbravo14*
> 
> What tools do you use to test stability? on my r9 390, I just use furmark to test stabillity. But on vega, it will be stable on furmark and crash when i bench on games. Not sure if this is because furmark is only DX11.


Didn't test HBM stability yet, but you should definitely use COD WW2 for undervolt stability.


----------



## jbravo14

Retuned my card; now using the settings below for my Vega 56 with the 64 BIOS:

HBM - 1025 / 900mv
P6 - 1537 / 920mv
P7 - 1627 / 985mv (tried 980mV and got a crash; I also backed down from 1632)
Power limit +50%
Max Temp GPU - 76C
Max Temp HBM - 81C

Passed the super position test.

Now off to test other games.


----------



## Soggysilicon

Quote:


> Originally Posted by *TrixX*
> 
> Kundica is on the money


Thanks for clearing that up guys, I was worried there for a second that someone had figured out some unicorn shavings to get 1800+ sustained clocks!







This is when setting for an undervolt then?


----------



## TrixX

Quote:


> Originally Posted by *Soggysilicon*
> 
> Thanks for clearing that up guys, I was worried there for a second that someone had figured out some unicorn shavings to get 1800+ sustained clocks!
> 
> 
> 
> 
> 
> 
> 
> This is when setting for an undervolt then?


Yeah, AMDMatt ran 1812 P7 with 1100mV on the 17.10.1 driver, which I can't compete with as mine won't allow the P7 clock above 1759MHz. Dunno why either, as I've seen mine stable at 1800+ at sub-10% GPU load...


----------



## jbravo14

Quote:


> Originally Posted by *jbravo14*
> 
> retuned my card and now using below setting for my vega 56 with 64 bios
> 
> HBM - 1025 / 900mv
> P6 - 1537 / 920mv
> P7 - 1627 / 985mv (tried 980mv and i get a crash, I also backed down from 1632)
> Power limit +50%
> Max Temp GPU - 76C
> Max Temp HBM - 81C
> 
> Passed the super position test.
> 
> Now off to test other games.


Found out this fails Superposition 4K; I had to up the HBM voltage to 950 and I'm back in business.


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Found out this fails Superposition 4K; I had to up the HBM voltage to 950 and I'm back in business.


Yeah I have to run 950mv to be stable on HBM, below that I can run in low load conditions but 100% GPU load causes a crash.


----------



## gupsterg

@Soggysilicon

AMDMatt (@LtMatthere) has a golden sample IMO. Several tried AMDMatt's milder settings without success. There is a second on OCUK, TonyTurbo78, though IIRC TonyTurbo78 uses ~1200mV for ~1800MHz. He has also uploaded some videos, including a 3DM FS run in this post. Matt's card is the AIO version; Tony's is the air version converted to water cooling.


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> @Soggysilicon
> 
> AMDMatt (@LtMatthere) has a golden sample IMO. Several tried AMDMatt's milder settings without success. There is a second on OCUK, TonyTurbo78. IIRC though TonyTurbo78 uses ~1200mV for ~1800MHz. He has also uploaded some videos, a 3DM FS run in this post. Matt's card is AIO, Tony's card is AIR on WC.


1800 P7 or sustained actual clocks?


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> 1800 P7 or sustained actual clocks?


1800+ P7
I've not seen anyone who's claimed above 1800 actual sustained under load.



EDIT: the 1980 claim on that leaderboard was from before the driver fix, where you could set insane MHz levels while the card was actually running stock BIOS clocks.


----------



## geriatricpollywog

Quote:


> Originally Posted by *TrixX*
> 
> 1800+ P7
> I've not seen anyone who's claimed above 1800 actual sustained under load.
> 
> 
> 
> EDIT: the 1980 claim on that leaderboard was prior to the driver fix where you could set insane MHz levels but was actually running stock BIOS clocks.


I was just playing GTA online at 1802/1110. Upon seeing the 1800+ claim, I tried 1812/1100 but the game crashed. Actual sustained was 1750 at 1802 and 1760 at 1812.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> I was just playing GTA online at 1802/1110. Upon seeing the 1800+ claim, I tried 1812/1100 but the game crashed. Actual sustained was 1750 at 1802 and 1760 at 1812.


Yeah I get about 1750 in most games with P7 at 1752/1150mv unless I get a properly heavy load, then it drops to 1720ish


----------



## gupsterg

Quote:


> Originally Posted by *0451*
> 
> 1800 P7 or sustained actual clocks?


Tony had 1825MHz as P7; watch the linked video. GT1 is in the ~1750-1780MHz range, ~1770MHz on average; GT2's lower range is just 5MHz shy of 1800MHz and it peaks above that a fair amount of the time, so we could say ~1800MHz.

I'll fish out AMDMatt's YT channel, as he has more videos across various games. I believe he cannot sustain the same profile in every game. TBH I do not watch all his videos start to finish; perhaps he will chime in, as I mentioned him by his OCN username and he should get a notification.

These clock variations for differing loads on the same profile are just how ACG/AVFS/PowerTune behaves for any given card.


----------



## SpecChum

Seems my "quiet" profile is far from stable









That's the one with all the voltages at 910mV and HBM at 1028MHz, but I got a crash in Hellblade - crashed right after she gets out of the boat; I didn't even get to control the character yet









It was 1am tho, so I didn't test much further. Will have another play tonight.

Passes every benchmark I've tried, crashes almost instantly in game lol


----------



## Paul17041993

Quote:


> Originally Posted by *TrixX*
> 
> Something akin to loosening timings at those points. The miners were all complaining about it when they got their cards a month or two back.


Noted; guess this is part of the reason why undervolting can boost performance so significantly...
Quote:


> Originally Posted by *jbravo14*
> 
> One other observation I had was whenever I bring the HBM floor voltage lower (without modifying the core mv), the core clocks also become lower.
> 
> Is there any relationship to the hbm voltage and the core clock?


Probably to do with load balancing across the PCB, or something weird to do with the fabric...
Quote:


> Originally Posted by *SpecChum*
> 
> Seems my "quiet" profile is far from stable
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That's the one with all the voltages at 910mV and HBM at 1028Mhz, but I got a crash on Hellblade - crashed right after she gets out of the boat, I didn't even get to control the character yet
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It was 1am tho, so I didn't test too much further. Will have another play tonight.
> 
> Passes every benchmark I've tried, crashes almost instantly in game lol


Probably because games tend to load the hardware and the PCIe link like a train with square wheels; any time the hardware changes clocks or power state, or the links are suddenly stressed, there's a chance for vdroop to kick in just enough to cause transistors to stop working...


----------



## elox

Quote:


> Originally Posted by *owntecx*
> 
> Hi guys, after installing the Morpheus II on my RX Vega 56, I noticed about a 6 to 8 degree difference between core and HBM. Is that still normal? Hotspot is usually 10C above HBM temps


Did you use the x-bracket that comes with the Morpheus? Are you using the backplate? Which thermal paste?
Still looking for a "solution" to lower my hotspot on the Morpheus.. Maybe I'll reseat it for the fifth time and use the original x-bracket with a different thermal paste (Kryonaut atm)


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> Did you use the x-bracket that comes with the Morpheus? Are you using the backplate? Which thermal paste?
> Still looking for a "solution" to lower my hotspot on the Morpheus.. Maybe I'll reseat it for the fifth time and use the original x-bracket with a different thermal paste (Kryonaut atm)


I think at some point hotspot temps can't be alleviated. I think it has much more to do with case airflow than with TIM and the cooling solution used. I see people on water with lower HBM and core temps than mine, but their hotspot temps are much higher.

I'm starting to think it may be a sensor behind the core/HBM on the backside of the PCB.


----------



## elox

Quote:


> Originally Posted by *VicsPC*
> 
> I'm starting to think that it may be a sensor behind the core/hbm on the backside of the pcb.


That's what I'm starting to think too. It seems that everyone who used the x-bracket from the card has great hotspot temps. I'm pretty sure it has nothing to do with case airflow..


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> That's what I'm starting to think too. It seems that everyone who used the x-bracket from the card has great hotspot temps. I'm pretty sure it has nothing to do with case airflow..


Haha you say that, but I'm not using my x-brace and my hotspot temps are some of the lowest. I'm usually 12°C above core and 9°C above HBM. Right now playing FS17 my hotspot temp is 34°C, HBM is 30°C and core is 27°C. Ambient is around 22°C and water temp around 26°C (still a bit warm in southern France these days during the day).


----------



## elox

Quote:


> Originally Posted by *VicsPC*
> 
> Haha you say that, but I'm not using my x-brace and my hotspot temps are some of the lowest. I'm usually 12°C above core and 9°C above HBM. Right now playing FS17 my hotspot temp is 34°C, HBM is 30°C and core is 27°C. Ambient is around 22°C and water temp around 26°C (still a bit warm in southern France these days during the day).


Damn...







Did you do something special when mounting the Morpheus? What thermal paste did you use? My core and HBM are great but the hotspot can get 30 degrees higher than the core :/
Edit: just saw you're on water


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> Damn...
> 
> 
> 
> 
> 
> 
> 
> Did you do something special when mounting the Morpheus? What thermal paste did you use? My core and HBM are great but the hotspot can get 30 degrees higher than the core :/


Oh I'm on water, no Morpheus lol. But even on the factory cooler I wasn't reaching 105°C+ on the hotspot like some people were; I was closer to around 90-95°C under load. If you have a spare fan, let it sit against the back of the card as either intake or exhaust (preferably intake, blowing air over the back of the card); if it makes ANY difference then we'll know.


----------



## elox

Quote:


> Originally Posted by *VicsPC*
> 
> Oh I'm on water no morpheus lol. But even on the factory cooler i wasn't reaching 105°C+ on hotspot like some people were reaching i was close to around 90-95°C under load. If you have a spare fan let it sit against the back of the card as either intake or exhaust (preferably intake blowing air over the back of the card) if it makes ANY difference then we'll know.


I've already tried. Zero difference. With the stock cooler my hotspot was at 95°C max with a UV to 1.1V. Hottest I saw with the Morpheus was ~88°C, but that's with 1.2V and +50 PT. With my current UV the hotspot is ~70-75°C when the core is somewhere between 40-50°C. It's okay but the gap is huge.


----------



## VicsPC

Quote:


> Originally Posted by *elox*
> 
> I´ve already tried. Zero difference. With the stock cooler my hotspot was at 95max degree with uv to 1,1v. Hotest i saw with morpheus was ~88 but thats with 1,2v and +50 pt. With my current uv hotspot is ~70-75 degree when core is somewhere between 40-50 degrees. Its okay but the gap is huge.


Yea, I think on air there's not much to be done. If the fan made no difference then it could be that the hotspot sensor is under the HBM/core and it's reading the hottest temp. I don't think it's anything to worry about on Vega; if it were a throttling concern I'd worry, otherwise it's a non-issue.


----------



## bahamutzero

How do I avoid an annoying bug where the voltage won't go back down past 1.05V at idle after a round of benchmarks or gaming? After that happens I have to reset Wattman to defaults and restart the PC before I can set the voltages again.


----------



## SpecChum

Quote:


> Originally Posted by *bahamutzero*
> 
> How to avoid an annoying bug where voltage won't go down past 1.05V in idle state, after doing a round of benchmarks or playing games? After that happens I have to reset Wattman to defaults, reset the PC and then I can set the voltages again.


Have you tried grabbing CRU and running Restart64.exe?

That restarts the whole display driver.

EDIT: meant restart64.exe not reset.exe


----------



## poisson21

So today I did a little testing with my crossfire setup (RX 64s with LC BIOS 8774) in 3DMark Firestrike.

Until now I always applied the same settings to both cards.

p6 1667/1150

p7 1712/1250

For HBM I had to settle at 1105MHz for 3DMark because anything above that crashes, while I can use 1150MHz in Superposition 4K and games..

Floor voltage at 1150mV.

With these settings I have a 25-30MHz difference in the actual clocks of the cards during the bench, so I tried increasing P7 on the 2nd card to get near the same clock.

To achieve that I had to put P7 at 1737MHz, which brings the difference down to 5MHz maximum during the bench.

And it gave mixed results: I lost a bit of graphical score, ~43150 to ~42850 (within margin of error I think).

But I gained a lot in combined score, ~6450 to ~6850.

I will try now if it will also impact superposition 4K.

(Does anyone know if there is a technical reason you can't use HBCC with a crossfire setup?)

https://www.3dmark.com/compare/fs/14135557/fs/14135518/fs/14135490/fs/14135457/fs/14135423/fs/14135273


----------



## poisson21

Test with superposition 4K now.

To get the same clocks during the bench, I have to set the second card's P7 to 1762MHz.

And I saw a strange thing: even with this boost I can see a gap of up to 15MHz during the bench, card 2 always below card 1, except 1 or 2 times where it was above by ~5MHz max for 1-2 seconds. This was with HBM at 1105MHz.

Superposition accepts a higher HBM clock for me, so I tested with 1145MHz and, surprise, the core clock gap is smaller during the bench...

Further test is needed.

I tested up to 1195Mhz on the two cards but i know i can not use it 24/7.



I'll retest 3d mark fs to see how much i can increase HBM with it.


----------



## TrixX

Card 1 will be the "controller" card so Card 2 will be waiting on Card 1. Make sure you have the faster card as Card 1.


----------



## bahamutzero

Quote:


> Originally Posted by *SpecChum*
> 
> Have you tried grabbing CRU and running Restart64.exe?
> That restarts the whole display driver.
> EDIT: meant restart64.exe not reset.exe


I just did that several times and it's still stuck at 1.05V. I don't know if it's related to VBIOS, unstable display driver or Wattman settings...

However yesterday's driver fixed some problems regarding clocks and/or voltages:
Quote:


> Radeon WattMan reset and restore factory default options may not reset graphics or memory clocks.


----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> Yeah I have to run 950mv to be stable on HBM, below that I can run in low load conditions but 100% GPU load causes a crash.


Did not realize that 4K taxes the GPU much more than 1080p on ultra/extreme.

My temps were higher in Superposition 4K vs 1080p Extreme.

I had to set my max RPM to 3000 and a target temp of 70C to keep HBM temps from going past 84C.

I have a hunch that going past 84C causes instability on the HBM.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Have you tried grabbing CRU and running Restart64.exe?
> 
> That restarts the whole display driver.
> 
> EDIT: meant restart64.exe not reset.exe


restart64.exe saved me a lot of time from system reboots. Thanks for this tip.


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Did not realize that 4K taxes the GPU much higher than 1080P on ultra/extreme.
> 
> my temps were higher in super position 4K vs 1080 extreme.
> 
> I had to set my max RPM to 3000, and target temp of 70C to avoid HBM temps to go past 84C.
> 
> I have a hunch that going past 84C causes instability on HBM.


Anything at 1080p can have some CPU bottlenecking, but in Superposition that doesn't seem to be the case. The 4K run, however, is incredibly tough; I was pulling ~15W more in the 4K run than in 1080p Extreme.

Though I was maxing out at 40C both Core and Mem. Hotspot hit 60C though.


----------



## poisson21

So end of my 2nd run of 3d mark fs.

The cards didn't behave like my first run at all; during it, HBM above 1105MHz was a no-go, completely crashing the system at the very beginning of the bench.

Two hours later I restarted a run, increasing HBM gradually, and, surprise, it went up to 1195MHz this time, with really different results than the first run.

https://www.3dmark.com/compare/fs/14136477/fs/14136418/fs/14135557/fs/14136662/fs/14136633/fs/14136602/fs/14136574/fs/14136710/fs/14136536/fs/14136505#

I'll try something else later. If someone has an idea to test, I can give it a go.

I already tested undervolting but my card seems to dislike it...

edit: @TrixX, my rig is under water so I can't easily swap my 2 cards









----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> Anything 1080p can have some CPU bottlenecking, but in Superposition that doesn't seem the case. However the 4K run is incredibly tough, I was pulling ~15W more in the 4K run than 1080p Extreme.
> 
> Though I was maxing out at 40C both Core and Mem. Hotspot hit 60C though.


Water cooled?

I forgot to mention that my temps (75-84C on HBM) were still on the stock reference cooler.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> Quote:
> 
> 
> 
> Originally Posted by *TrixX*
> 
> Anything 1080p can have some CPU bottlenecking, but in Superposition that doesn't seem the case. However the 4K run is incredibly tough, I was pulling ~15W more in the 4K run than 1080p Extreme.
> 
> Though I was maxing out at 40C both Core and Mem. Hotspot hit 60C though.
> 
> 
> 
> Water cooled?
> 
> I forgot to mention that my temps (75 - 84C) HBM were still using stock reference cooler.

Keeping the HBM cool is proving tricky for me and my quiet profile lol

Settings that pass 4k SP fail on games tho


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Keeping the HBM cool is proving tricky for me and my quiet profile lol
> 
> Settings that pass 4k SP fail on games tho


What are your quiet profile settings? mV and fan profile? Also, are you on a Vega 64 or 56?

I wanted to compare whether I got a chip that's within the normal range of temps/operation.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> What are your quiet profile settings? mV and fan profile? Also, are you on a Vega 64 or 56?
> 
> I wanted to compare whether I got a chip that's within the normal range of temps/operation.


It's my overly optimistic profile running 910mV on all of mem, p6 and p7 with a fan of 2200RPM max on an air cooled Vega 64.

Clocks are 1652 on P7 and 1028Mhz on HBM.

Works fine for 3DMark and Superposition, including 4k, but the HBM gets up to like 80 to 84C which some games don't seem to like and I get driver resets. I'm fairly sure it's the HBM temperature.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> It's my overly optimistic profile running 910mV on all of mem, p6 and p7 with a fan of 2200RPM max on an air cooled Vega 64.
> 
> Clocks are 1652 on P7 and 1028Mhz on HBM.
> 
> Works fine for 3DMark and Superposition, including 4k, but the HBM gets up to like 80 to 84C which some games don't seem to like and I get driver resets. I'm fairly sure it's the HBM temperature.


Wow, thats amazing.

Wow, that's amazing.

I thought I read somewhere in this forum that setting the HBM voltage all the same may become problematic?

I got around 6k points in Superposition 4K with the Vega 56 I have, aside from the fact that it's running at 3000RPM when in 4K.

Under normal gaming loads at 1080p I don't go beyond 75C.

Does that also mean the Vega 64 chips can operate at much lower voltages/temps compared to the 56? I'm now thinking of flashing back to the stock 56 BIOS, using the same UV/OC settings, and seeing if it retains the performance with lower temps.


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Quote:
> 
> 
> 
> Originally Posted by *TrixX*
> 
> Anything 1080p can have some CPU bottlenecking, but in Superposition that doesn't seem the case. However the 4K run is incredibly tough, I was pulling ~15W more in the 4K run than 1080p Extreme.
> 
> Though I was maxing out at 40C both Core and Mem. Hotspot hit 60C though.
> 
> 
> 
> Water cooled?
> 
> I forgot to mention that my temps (75 - 84C) HBM were still using stock reference cooler.

Yeah I'm watercooled now.

For daily on air I was running the LC BIOS so max temp of 70 and target of 65 and the following:
P7 1752MHz @ 980mv
P6 1668MHz @ 950mv
HBM 1050MHz @ 950mv
Fan was set to max of 3600RPM and in games usually hung around 2200RPM.

Benching was just whatever I could get away with up to 4700RPM as the fan wouldn't go any harder!

Current watercooled daily is this though:
P7 1752MHz @ 1150mv
P6 1702MHz @ 1120mv
HBM 1096MHz @ 950mv

Performance is high 1680s to 1720s in games for the most part. Firestrike's a PoS and crashes on my system all the time, but other than that everything runs fine.


----------



## gupsterg

Quote:


> Originally Posted by *TrixX*
> 
> Performance is high 1680's to 1720's in games for the most part. Firestrike's a PoS and crashes on my system all the time, but other than that everything else runs fine.


I use 3DM as a stability/stress test and have used it extensively since getting Vega, so I would assume it's crashing due to instability from the GPU/current profile.

Level 1 OC for me is:-

DPM6: 1557MHz 975mV
DPM7: 1642MHz 1125mV
HBM: 1100MHz 975mV
Clocks seen: ~1600MHz and up to +25MHz in 3D loads depending on the app.

Versus Balanced for performance/power jump I got:-



*Some setup info:*

TR 1950X Stock, F4-3200C14D-16GTZ using The Stilt's safe timings, ASUS ZE UEFI 0801, Win10 Pro x64 Fall Creators Update

VBIOS: RX VEGA 64 AIR 016.001.001.000.008769
Cooling: EK-FC Radeon Vega with Thermal Grizzly Hydronaut (spread fully over die)
Driver: v17.11.1, HBCC: Off, FreeSync: On, ASUS MG279Q modded to 57-144Hz range with LFC: On

*Raw data info:*

3DM TS runs, link.

3DM FS runs, link.

SP 4K Preset



Hoping to find time to do a Level 2 OC and undervolt; may have to throw a game in there, and perhaps get a wall plug meter with average W readings!


----------



## TrixX

Quote:


> Originally Posted by *gupsterg*
> 
> I use 3DM as a stability/stress test and used it extensively since getting VEGA. So I would assume it's crashing due to creating instability on GPU/current profile.


I think that may be true; however, I've had issues with it crashing like crazy since I built this rig, even using stock settings and undervolts etc...

I've got another drive for dedicated testing that I'm setting up tomorrow, so I'll do some testing with that, looking at CPU/GPU stability and stock vs OC/UV clocks too.


----------



## ducegt

Same here. 3DMark crashes/freezes before a test has even started, or while it's gathering info before a test. It doesn't seem to like settings being changed while it's loaded. Same issues when only using Wattman profiles.


----------



## gupsterg

@TrixX

Originally that profile with DPM7 at 1652MHz/1100mV used to crap out in the 3DM menu/changes between tests. Upping to 1112mV helped somewhat, but 1125mV nailed it for that SW. Heaven and Valley still eluded me for repeated lengthy testing (~1hr). I never got artifacts, but the GPU just overboosted: 1125mV on DPM7 was not sufficient for what ACG/AVFS targeted, the driver reset, and the GPU was back at stock. That got resolved by knocking DPM7 down to 1642MHz.

Compute loads never really seemed to have issues on profiles; I clocked well in excess of 24hrs of F@H/BOINC usage and the GPU sustained ~1675MHz on average.

@ducegt

I have only used WattMan so far. I apply settings without any apps open. On Hawaii/Fiji, 3DM would bounce GPU states even while in the menu. I don't think Vega does this, if memory serves me correctly, but it could well be that while the driver is trying to stop the GPU bouncing between DPMs, changing settings could lead to issues.


----------



## SpecChum

Here's my 4K result from my quiet profile - not as quiet now tho; I decided to up the memory voltage to 915mV and the fan to the default 2400RPM. Still fairly quiet IMHO tho



69C core is good, HBM got to 75C

EDIT: That was also with Chrome and all my usual taskbar apps running, such as TeamViewer and Plex server (which is serving a TV show as we speak).

EDIT2: Here are ONT settings:


And yes, I know the 910mV on P6 and P7 won't go lower than 915mV; I just changed the memory mV and left those to change on their own


----------



## Reikoji

undervolt


No undervolt x2.



https://www.3dmark.com/spy/2731978

If only games could utilize these GPU as well as these benchmarks could.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> 
> 
> 
> No undervolt.
> 
> https://www.3dmark.com/spy/2731978
> 
> If only games could utilize these GPU as well as these benchmarks could.


Seems to scale very well on that benchmark


----------



## poisson21

Yeah, the score scales very well with HBM frequency and is very consistent.

I got up to 13694 with my crossfire setup and HBM at 1195MHz.


----------



## SpecChum

Well, I seem to have fixed my stability issues on my quiet profile.

Lowering the memory from 1028Mhz to 1020Mhz seems to have done the trick









Just played Hellblade, which didn't even get past the intro on 1028Mhz, for well over an hour without issue. It's pegged at 100% usage too.

Nice.


----------



## tarot

My problem is that 3DMark works fine with the other settings I tried, even the FS stress and Timespy stress tests, BUT load up Diablo 3 at 4K maxed and after an hour and a half, boom. So I dropped mine a bit so it sits around 1638 with 1075 on the RAM, and no problems apart from the temps going up a lot.

I might try some of these ultra low volts people are running but I doubt it will work for me.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Here's my 4k result from my quiet profile, not as quiet now tho, decided to up memory voltage to 915mV and the fan to the default 2400 - still fairly quiet IMHO tho
> 
> 
> 
> 69C core is good, HBM got to 75C
> 
> EDIT: That was also with Chrome and all my usual taskbar apps running, such as TeamViewer and Plex server (which is serving a TV show as we speak).
> 
> EDIT2: Here are ONT settings:
> 
> 
> And yes, I know the 910mV on P6 and P7 won't go lower than 915mV; I just changed the memory mV and left those to change on their own


You are lucky to have a cool card. I can't lower the P7 mV below 990. I could try lowering the P7 clock, but I'm not sure if I'd take a huge performance hit. I know I get more FPS from the HBM clock, not so much from the P7 core clock.


----------



## jbravo14

Was trying to undervolt further and had a crash. Since recovering from the crash, the previously stable UV/OC is not working anymore. Weird; has anyone had similar issues recovering from a crash?


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> Was trying to undervolt further and had a crash. Since recovering from the crash, the previously stable UV/OC is not working anymore. Weird; has anyone had similar issues recovering from a crash?


Yes, I've done a few driver re-installs as a result. If you use a custom PowerPlay Table you'll need to delete it from the registry too. May not always be necessary, but I found I needed to a couple of times.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> You are lucky to have a cool card. I can't lower the p7 mv lower than 990. I can try lowering the p7 clock but not sure if I will take a huge performance hit. I know I get more FPS on hbm clock, not so much on core clock p7.


Yeah, thanks, it does seem to undervolt reasonably well.

It's obviously not going to win any speed awards, but that's not what I'm looking for anyway.

I upgraded from a Fury Tri-X and I'm very happy so far.


----------



## SpecChum

Actually, now that I know the previous HBM speed of 1028MHz wasn't 100% stable, I might try and go lower on the volts...

Also, I've switched to Afterburner for fan speeds, as it gives me a bit more control - the default fan curve is a little conservative, which sometimes allows the HBM to creep up.


----------



## Moshenokoji

Hey all,

I got my Sapphire Vega 56 last week, installed the beta 17.11.1 drivers and ran a few benchmarks at stock speeds to get an idea of what I was dealing with. After everything was good, I went ahead and flashed the Vega 64 BIOS onto it and everything was working fine. I was able to get about 1620-1640mhz core pretty easily on 1150mv and 1030mhz HBM2 at 1050mv. Everything was stable. After I installed the WHQL 17.11.1 from Nov 13th my PC locks up once the driver loads. I flicked back to the 56 BIOS, rebooted and it was fine. Uninstalled the drivers and rebooted back to 64 BIOS and it was fine until I installed the WHQL 17.11.1 driver again.

I can get OK overclocks on the stock 56 BIOS, about 950mhz HBM2 and close to 1600mhz core, but not being able to push it a little further is annoying considering it worked before, has anyone else had any experience with this or know what I can do to fix the issue?


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Yeah, thanks, it does seem to undervolt reasonably well.
> 
> It's obviously not going to win any speed awards, but that's not what I'm looking for anyway.
> 
> I upgraded from a Fury Tri-X and I'm very happy so far.


I came from an r9 390, so yeah it's a big jump for me as well.

With all the tuning I'm doing and benchmarking, I got used to 3000rpm noise.

I was able to game with core 990mv and hbm 950mv but still hitting 82-84c on benchmarks.

I've only been using Wattman, but I'm thinking of switching over to the other tool; it seems less buggy than Wattman.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> I came from an r9 390, so yeah it's a big jump for me as well.
> 
> With all the tuning I'm doing and benchmarking, I got used to 3000rpm noise.
> 
> I was able to game with core 990mv and hbm 950mv but still hitting 82-84c on benchmarks.
> 
> I've only been using wattman, but thinking of switching over to the other tool, seems less buggy compared to wattman.


Those temps do seem kinda high for that fan and voltage. I get 70C on 2400RPM.

I'm not sure if my card is cooler than average or yours is warmer, though?

EDIT: just noticed you put 990mV core, I'd probably go up in temp on that.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Those temps do seem kinda high for that fan and voltage. I get 70C on 2400RPM.
> 
> I'm not sure if my card is cooler than average or yours is warmer, though?
> 
> EDIT: just noticed you put 990mV core, I'd probably go up in temp on that.


I haven't tried doing a full reinstall of the drivers; I'll give that a shot and retest. Also not sure if this is because I flashed my Vega 56 to 64. Are there any recommended versions of the V64 air BIOS for flashing to a Vega 56?


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> I haven't tried doing a full reinstall of the drivers; I'll give that a shot and retest. Also not sure if this is because I flashed my Vega 56 to 64. Are there any recommended versions of the V64 air BIOS for flashing to a Vega 56?


8730 was the last one I saw; however, I've seen mention of a few newer ones, with later IDs, from the recent batch of cards. Not sure which of those would be best though, as I don't know where they are available.

8730 however is available from the Techpowerup bios cache.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Those temps do seem kinda high for that fan and voltage. I get 70C on 2400RPM.
> 
> I'm not sure if my card is cooler than average or yours is warmer, though?
> 
> EDIT: just noticed you put 990mV core, I'd probably go up in temp on that.


I reinstalled the drivers and redid my UV.

One thing I did differently: I did not move the core clocks, leaving the slider at 0% rather than specifying the actual clocks.

HBM - 910mv - 1020mhz
P6 - 910mv - stock mhz
P7 - 930mv - stock mhz
RPM - 400 - 2600RPM
MAX temp was 78C - big improvement; gaming was at 75C max

However, my FPS dropped from 117 to 112 in ROTR.

And my SP 4K scores came down from 6000 to 5800.

Maybe the HBM (floor) mV and core mV pull the core clocks down automatically.

Still debating between keeping the 3000RPM, 117FPS, 6k SP4K scores or the quiet profile.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> I reinstalled the drivers and redid my UV.
> 
> One thing different I did not move the core clocks, left it at 0% on the slider rather than specifying the actual clocks.
> 
> HBM - 910mv - 1020mhz
> P6 - 910mv - stock mhz
> P7 - 930mv - stock mhz
> RPM - 400 - 2600RPM
> MAX temp was 78 - big improvement, gaming was at 75 max
> 
> However, my FPS dropped from 117 to 112 in ROTR.
> 
> And my SP 4k scores came down from 6000 to 5800.
> 
> Maybe the HBM mv (floor) and core mv voltage pulls down core clocks automatically.
> 
> Still debating between keeping the 3000RPM, 117FPS, 6k SP4K scores or the quiet profile.


The clocks do lower, yeah; at 0% increase (1632MHz P7 on a 64) I get 1460MHz.


----------



## SpecChum

OK, so what's the deal with HBM freq and total power?

At HBM 1100MHz, when I set P7 to 1682MHz at 1000mV I get ~1550MHz and ~220W power, all good; but when I set the HBM to 950MHz the core goes up to ~1630MHz and power shoots up to ~280W.

Memory is 990mV on both.

Anyone know why?


----------



## CCoR

Quote:


> Originally Posted by *SpecChum*
> 
> OK, so what's the deal with HBM freq and total power?
> 
> At HBM 1100Mhz when I set P7 to 1682Mhz at 1000mV I get ~1550Mhz and ~220W power, all good, but when I set HBM to 950Mhz the core goes up to @1630Mhz and power shoots up to ~280W.
> 
> Memory is 990mV on both.
> 
> Anyone know why?


That's really odd, try utilizing P6 along with P7 (only) instead and see if that normalizes it for you.


----------



## SpecChum

Utilise p6?

I'm not locking any states out if that's what you mean?


----------



## CCoR

Maybe try locking it to just P6 and P7; same goes for your mem, lock it to the last P-state and see if that fixes your issue.

what driver are you using?


----------



## SpecChum

It's not an issue as such, I don't run the memory at 950Mhz, I just wondered if anyone else noticed the same.

I'm on 17.11.1


----------



## tarot




Quote:


> Originally Posted by *SpecChum*
> 
> OK, so what's the deal with HBM freq and total power?
> 
> At HBM 1100Mhz when I set P7 to 1682Mhz at 1000mV I get ~1550Mhz and ~220W power, all good, but when I set HBM to 950Mhz the core goes up to @1630Mhz and power shoots up to ~280W.
> 
> Memory is 990mV on both.
> 
> Anyone know why?






Just a blind shot in the dark, but I have noticed the same sort of thing.
Maybe they're sharing a power budget: lower one and the other goes up.
So to test, drop the RAM to 900 if you can and push the core up to 1700, then reverse it: push the RAM up to 1100 or 1150 and drop the core to, say, 1540. I believe they are related, so the tradeoff is... what gives better performance, faster RAM or a faster core?

As for the power: the faster the core, the way more juice you pull; the RAM is not going to pull anywhere near the juice the core does up high.

I might give that a whirl although mine is a bit mental
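The core-vs-RAM tradeoff above can be roughed out with the usual dynamic-power relation, P roughly proportional to f x V^2. This is only a back-of-envelope sketch: the scaling constants and voltages below are made-up illustration values, not measured Vega figures.

```python
# Rough dynamic-power comparison: core vs HBM clock/voltage changes.
# P_dyn scales ~ f * V^2; K_CORE/K_HBM are arbitrary illustration
# constants, NOT measured Vega numbers.

def dyn_power(k, freq_mhz, volts):
    """Relative dynamic power for a domain at freq_mhz and volts."""
    return k * freq_mhz * volts ** 2

# Hypothetical constants chosen so the core dominates total draw.
K_CORE, K_HBM = 0.10, 0.03

# "RAM up, core down" vs "RAM down, core up" (tarot's two test cases).
low_core  = dyn_power(K_CORE, 1540, 1.00) + dyn_power(K_HBM, 1100, 0.99)
high_core = dyn_power(K_CORE, 1700, 1.15) + dyn_power(K_HBM,  900, 0.99)

# Raising core clock AND voltage costs far more than the RAM saves,
# because voltage enters squared.
print(f"high-HBM/low-core: {low_core:.0f}")
print(f"low-HBM/high-core: {high_core:.0f}")
```

Which roughly matches what SpecChum saw: dropping the HBM let the core boost higher, and the power shot up with it.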


----------



## madmanmarz

Interesting read


http://imgur.com/3RoFD


Goes to show how hotspot temp can be improved by covering the spaces in between the GPU/HBM


----------



## tarot

Quote:


> Originally Posted by *madmanmarz*
> 
> Interesting read
> 
> 
> http://imgur.com/3RoFD
> 
> 
> Goes to show how hotspot temp can be improved by covering the spaces in between the GPU/HBM


I did that full cover as mine is a molded GPU/HBM; I still get a 20-odd degree offset from the GPU temps, but never really over 70 degrees, and it doesn't throttle (well, not that I have noticed).


----------



## SavantStrike

Quote:


> Originally Posted by *madmanmarz*
> 
> Interesting read
> 
> 
> http://imgur.com/3RoFD
> 
> 
> Goes to show how hotspot temp can be improved by covering the spaces in between the GPU/HBM


I'm not sure why that person went to so much effort instead of using a full-cover block, but it's a pretty neat mod. I can see that my original strategy of just putting three dots of TIM on the die and the HBM might not be the best, so I'll use the spread method when I put full-cover blocks on in a week.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> OK, so what's the deal with HBM freq and total power?
> 
> At HBM 1100Mhz when I set P7 to 1682Mhz at 1000mV I get ~1550Mhz and ~220W power, all good, but when I set HBM to 950Mhz the core goes up to @1630Mhz and power shoots up to ~280W.
> 
> Memory is 990mV on both.
> 
> Anyone know why?


That's when a UV/OC has gone bad and you're back at stock settings without the software confirming it. It's pretty easy to tell what's working and what isn't by the power draw.


----------



## jbravo14

Quote:


> Originally Posted by *TrixX*
> 
> That's when an UV/OC has gone bad and you are back to stock settings without it confirming it via the software. It's pretty easy to note what's working and what isn't by the power draw.


How do you validate power draw? I have HWMonitor but I'm not sure where to check the total power draw correctly


----------



## TrixX

Quote:


> Originally Posted by *jbravo14*
> 
> How do you validate for power draw? I have hwmonitor but not sure where to check the total power draw correctly


Just using the sensor readout from Afterburner. It's not accurate for the full card draw, but it's accurate enough to see when the GPU's draw is outside what's expected.

Basically, I did a ton of testing at different voltages to see the power scaling and compared it to stock. Then you test an unknown OC, and if you see stock power usage it's likely the OC failed.

I use other markers too to double-check, but it's the first check on my list.
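That power-draw check can be automated: log the average board power for a run and compare it to a known stock baseline. A minimal sketch; the wattage figures and the helper below are illustrative stand-ins, not real card data:

```python
# Rough sketch: flag a silently-reset undervolt by comparing logged power
# draw against a known stock baseline. All numbers here are hypothetical
# examples, not measurements from a real card.

STOCK_AVG_W = 280.0   # assumed stock draw under this workload
TOLERANCE_W = 15.0    # anything within this band of stock looks suspicious

def undervolt_applied(measured_avg_w: float) -> bool:
    """Return False if draw is back near stock, i.e. settings were dropped."""
    return abs(measured_avg_w - STOCK_AVG_W) > TOLERANCE_W

print(undervolt_applied(223.0))  # draw near the UV baseline -> True
print(undervolt_applied(278.0))  # draw back at stock -> False
```

Same idea as eyeballing the Afterburner sensor graph, just with an explicit threshold.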


----------



## barbz127

Hi all,

Vega 64 air owner here. I'm not pushing too hard, but I can't get my settings to save in Wattman; am I best to move on to OverdriveNTool or WattTool?

Also, is there a link or list of good starting values for the V64?

Thank you


----------



## TrixX

Quote:


> Originally Posted by *barbz127*
> 
> Hi all,
> 
> Vega64 air owner here; I'm not pushing too hard but can't get my settings to save in wattman, am I best to move onto overdriventool or watttool?
> 
> Also is there a link or list of good starting values for the v64?
> 
> Thank you


I personally prefer OverdriveNTool, but Wattman is pretty serviceable from what other members have posted. I haven't used WattTool. The other thing I like about ONT is that it's much quicker to change values compared to Wattman.

A good starting test is to leave clocks stock and find the minimum GPU floor voltage (HBM voltage) that your card is happy with. Some cards are happy at 900mV, others can't seem to cope with less than 1000mV to be stable. Depends on your sample.

Then, once that's done, match the P6 voltage to the HBM voltage and raise P7 by 20mV over P6 and see if it's stable. Basically you're trying to find the minimums your GPU is OK with.

I then start increasing P7 until I reach thermal or power throttling (in Superposition you'll see the actual MHz start to fluctuate; if the fan isn't maxed out, it's likely power). A good starting point for Power Target is +25%, and on air you'll likely never need more than +40%.

Raising the HBM clock to 1000MHz+ is also an easy way to gain performance, though the exact point yours starts having issues will again depend on the sample. Some have issues at 985MHz and some can go to 1200MHz (with the latest drivers).

Just do the testing incrementally. Superposition is a good check for safe desktop overclocks, and Firestrike/Timespy tend to highlight failed OC/UV settings even if they're stable elsewhere.
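The tuning order above amounts to a simple downward search. A sketch of the logic: `is_stable()` is a stand-in for an actual Superposition or game run, and the 950mV floor is a pretend sample, not a recommendation:

```python
# Illustrative sketch of the tuning order described above: find the lowest
# stable floor (HBM) voltage first, then derive P6/P7 from it.

def is_stable(mv: int) -> bool:
    # Stand-in for a real stress-test run; pretend this sample
    # happens to be stable down to 950 mV.
    return mv >= 950

def find_min_voltage(start_mv: int = 1000, step_mv: int = 10) -> int:
    mv = start_mv
    # Step down only while the next lower setting still passes testing.
    while mv - step_mv >= 800 and is_stable(mv - step_mv):
        mv -= step_mv
    return mv

floor_mv = find_min_voltage()   # minimum stable HBM/floor voltage
p6_mv = floor_mv                # match P6 to the floor, per the post
p7_mv = p6_mv + 20              # raise P7 by 20 mV over P6
print(floor_mv, p6_mv, p7_mv)   # -> 950 950 970
```

In practice each `is_stable()` call is a lengthy bench/game session, so the "loop" runs over days, but the search order is the same.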


----------



## Chaoz

Quote:


> Originally Posted by *barbz127*
> 
> Hi all,
> 
> Vega64 air owner here; I'm not pushing too hard but can't get my settings to save in wattman, am I best to move onto overdriventool or watttool?
> 
> Also is there a link or list of good starting values for the v64?
> 
> Thank you


You could try my settings; they should be stable for an air-cooled GPU and should drop a couple of degrees as well.



With these clocks and volts my P7 clocks to around 1580MHz and HBM to 1000MHz. These are slightly older settings, as I flashed an LC BIOS not long ago, so I now run my HBM at 1050MHz.

If those settings are stable for you, you could work your way up from there and try and find your GPU's stable clocks.

I tend to use OverdriveNTool, as WattTool/Wattman reset my settings after a reboot for some reason.


----------



## barbz127

Thanks for the responses, I'll give this a go this evening.

In terms of applying thermal paste to non-molded dies, what is the best method?

My V64 was under water with great results but is no longer; hotspot temps are 17-20 degrees higher than HBM/core temps and I can't remember if this is correct.

It could be down to poor application (not enough in the gap); either way, I'm sure I'll need new screws if I have to strip the card again.


----------



## Chaoz

Np, happy to help.

The best method for molded or unmolded is the triple-X method: put an X on each die and you should be good.

My Sapphire has a molded package and is under water so can't really tell you if the Hot Spot temp is good or not.


----------



## TrixX

Quote:


> Originally Posted by *Chaoz*
> 
> Np, happy to help.
> 
> Best method for molded or unmolded is the triple X method. Put an X on each die and you should be good.
> 
> My Sapphire has a molded package and is under water so can't really tell you if the Hot Spot temp is good or not.


Going off some other information I've seen, it might be better to treat the GPU and HBM as a single die and cover the entirety of it evenly, or use the X method. Obviously LM is a different matter, but that's always an alternative option.


----------



## barbz127

I have used the x method as that worked for me under water.

Would anyone know what thread the screws on the Vega are?


----------



## fursko

Quote:


> Originally Posted by *TrixX*
> 
> I personally prefer OverdriveNTool, but Wattman is pretty serviceable from what other members have posted. I haven't used Watttool. The other thing I like about ONT is that its much quicker to change values compared to Wattman.
> 
> A good starting test is to leave clocks stock and find the minimum GPU Floor voltage (HBM voltage) that your card is happy with. Some cards are happy at 900mv others can't seem to cope with less than 1000mv to be stable. Depends on your sample.
> 
> Then once that's done match P6 voltage to the HBM and raise P7 by 20 over P6 and see if stable. Basically trying to find the minimum's your GPU is ok with.
> 
> I then start increasing P7 until I reach thermal or power throttling (in Superposition you'll see the actual MHz start to fluctuate and if the fan isn't maxxed out it's likely to be Power). A good starting point for Power Target is +25% and if on air you'll likely never need more than +40%.
> 
> Raising HBM MHz to 1000+ is also an easy way to gain performance, though the exact point yours starts having issues will be based on the sample again. Some have issues at 985MHz and some can go to 1200MHz (with the latest drivers).
> 
> Just do the testing incrementally, Superposition is a good testing point to check for safe desktop overclocks and Firestrike/Timespy tend to highlight failed OC/UV settings even if stable outside them.


Actually, Superposition isn't a good test, man. You should find sensitive games. I can use 1000mV on P7 and 1190MHz HBM in Superposition, but games differ too much. In CoD WW2 I can't go below 1160mV on P7, same with Overwatch, and only 1090MHz HBM works without artifacts in CoD WW2.

I thought my tweaks were stable until I played CoD WW2.


----------



## TrixX

Quote:


> Originally Posted by *fursko*
> 
> Actually, Superposition isn't a good test, man. You should find sensitive games. I can use 1000mV on P7 and 1190MHz HBM in Superposition, but games differ too much. In CoD WW2 I can't go below 1160mV on P7, same with Overwatch, and only 1090MHz HBM works without artifacts in CoD WW2.
> 
> I thought my tweaks were stable until I played CoD WW2.


I don't have CoD WW2 or Overwatch. The only programs giving me issues are Firestrike (well, Timespy too) and I think I've found the root cause of that now.

I run all kinds of games like TW: WH2, PUBG (quite a stressful lobby), Ashes of the Singularity, etc.

I haven't found a litmus-test game that I own yet, though.


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> That's what happens when a UV/OC has gone bad and you're back at stock settings without the software confirming it. It's pretty easy to tell what's working and what isn't by the power draw.


It's not that, no. I'm putting these settings in manually.

Once I put the memory at 950mhz it does it, even without a failure.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> It's not that, no. I'm putting these settings in manually.
> 
> Once I put the memory at 950mhz it does it, even without a failure.


Interesting. It sounds like when I set my HBM to 1100MHz: all tests just crash, while at 1096MHz it's all good...

I should note it's fine running 1120MHz HBM though


----------



## SpecChum

I've only gone up to 1100MHz on HBM, which is fine with a high fan speed but artefacts and eventually crashes on a lower fan due to heat.

1040MHz seems about the limit where the temps go up to about 87C and it doesn't artefact or crash.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> I've only gone up to 1100Mhz on HBM which is fine with a high fan but artefacts and eventually crashes on a lower fan due to heat.
> 
> 1040MHz seems about the limit where the temps go up to about 87C and it doesn't artefact or crash.


Yeah, no worries. If you're still on air, going above 1100 can induce massively increased HBM temps. It's why I still run 1096MHz for my daily.


----------



## geriatricpollywog

Watchdogs 2 is an extremely sensitive game that hates overclocks and makes the perfect stability test. And Ubisoft practically gave it out in cereal boxes, so it's not hard to find a cheap download code.


----------



## SpecChum

I almost bought all the watercooling bits last week, but I'm going to hold off until the AIB cooling solutions come out and see what they're like first; I might just sell this reference card and get one of those instead.

My H110i is more than adequate for the CPU so, if I'm being honest, it would be a bit of a waste to do a full loop at the minute.

I could do a loop just for the Vega I suppose and extend it if/when I upgrade the CPU; this case has room for 2 x 360mm radiators so having a 360 and the H110i wouldn't be an issue for now.

That, and I've not watercooled for well over 10 years and it's all changed since I did it lol


----------



## SpecChum

Quote:


> Originally Posted by *0451*
> 
> Watchdogs 2 is an extremely sensitive game that hates overclocks and makes the perfect stability test. And Ubisoft practically gave it out in cereal boxes, so it's not hard to find a cheap download code.


Hellblade seems to be quite unforgiving too, yeah.

I can bench at 1100MHz HBM, even 4K Superposition is fine, but after 5 minutes of Hellblade (great game, BTW) it artefacts and dies. I had to go all the way down to 1040MHz with the fan at 2800RPM.

The driver doesn't crash, just the game.


----------



## geriatricpollywog

Quote:


> Originally Posted by *SpecChum*
> 
> Hellblade seems to be quite unforgiving too, yeah.
> 
> I can bench at 1100Mhz HBM, even 4k Superposition is fine, but 5 minutes of Hellblade (great game, BTW) it artefacts and dies. I had to go all the way down to 1040Mhz with the fan on 2800RPM.
> 
> The driver doesn't crash, just the game.


My card could hardly go over 1000mhz on air. On water, I am game stable at 1120mhz and can bench at 1180. HBM voltage is 950. Curious, why don't you WC anymore? I started with my current build and can't see myself going back to air.


----------



## SpecChum

Quote:


> Originally Posted by *0451*
> 
> My card could hardly go over 1000mhz on air. On water, I am game stable at 1120mhz and can bench at 1180. HBM voltage is 950. Curious, why don't you WC anymore? I started with my current build and can't see myself going back to air.


I only did it the once









I had a Hydor pond pump, some B&Q tubing and fittings, and a 120mm Thermaltake rad and fan; it cooled about as well as the cheap air cooler it replaced lol

And the pump didn't turn on with the PC









I just liked the convenience of AIO and I've been using those for the past 5 years or so.

Really tempting to WC this whole thing, but as I say:
1: I'd have to wing it as I've not got a clue any more lol
2: The CPU is already more than adequately cooled


----------



## jbravo14

Looks like the problems I'd been having with UV/OC were resolved when I switched to using OverdriveNTool. Wattman is just plain buggy and cannot save profiles.

I was able to find the sweet spot of my P7 by setting it to 915mV. When I set it to 910mV, I lose around 7 FPS in the ROTR benchmark on Ultra.

I was able to find a quiet profile, as seen below. (Temps below were after playing AC Syndicate, which keeps GPU utilization at 100%, at 1080p.)

SP 4K score of ~6000 and game stable with ROTR (DX12), AC Syndicate, AC Origins, and The Division (DX12).

Anyone know how to autostart a profile from OverdriveNTool with Windows?


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> looks like the problems I have been having with UV/OC was resolved when i switch to using overdrive N Tool. Wattman is just plain buggy and cannot save profiles.
> 
> I was able to find the sweet spot of my P7 by setting it to 915mv. When i set it to 910mv, i lose around 7 FPS in ROTR benchmark on ULTRA.
> 
> I was able to find a quiet profile as seen below. (Temps below was after playing AC Syndicate - keeps GPU util at 100% - 1080P)
> 
> SP 4k score of ~6000 and game stable with (ROTR-DX12, AC Syndicate, AC Origins, The Division -DX12)
> 
> Anyone knows how to autostart with windows a profile from overdrive N tool?


Those settings look familiar









Wattman can save profiles, sort of: you can set up a profile per game, and this includes Wattman settings.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> Those settings look familiar
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Wattman can save profiles, sort of, you can setup a profile per game and this includes wattman settings.


Have they fixed the problem of it loading the last profile used rather than the current game profile?


----------



## gupsterg

Quote:


> Originally Posted by *0451*
> 
> I started with my current build and can't see myself going back to air.


Same here and agree.
Quote:


> Originally Posted by *SpecChum*
> 
> Really tempting to WC this whole thing, but as I say:
> 1: I'd have to wing it as I've not got a clue any more lol
> 2: The CPU is already more the adequately cooled


It seems more daunting than it is IMO; take your time planning and implementing and you will be fine. It was my first time on WC, with TR, so it was quite expensive kit to kill with a leak, etc.

Personally, I wouldn't hold out for AIB cards if you're considering WC. The ref PCB has a good VRM and good components. In the past, out of a mix of ref/AIB cards, it was not a given that an AIB card would OC better than ref; it's pure silicon lottery as to what GPU is on the PCB and how it OCs. 2x 8-pin can supply a lot of watts.



I reckon most GPUs on ambient cooling will conk out before power delivery becomes an issue. The ref PCB is also sound in that it draws very little power through the PCI-E slot, like past AMD cards; it's not like the Polaris ref PCB was.

I also reckon, based on TPU data, that an air-cooled AIB Vega is going to saturate its cooling solution more than, say, an air-cooled Fury, etc.



Source link.

The only reason I believe Vega used less power in Furmark is that the driver may have intervened more to throttle the GPU.

In the past few gens of AMD GPUs, only the ref PCB gained waterblock support as well.


----------



## SpecChum

Cheers Gup









The reference cooler actually has quite good cooling capacity, IF you can handle the noise.

It would be cheaper for me to sell this Vega for a reasonable price and buy an AIB Vega 64 instead of doing a WC solution; that was my thinking. But yes, I know what you mean about the AIB cooling solutions possibly being quite saturated and the VRM components possibly not being up to par with the reference design.

This is of course assuming custom Vegas appear at all


----------



## SpecChum

I think you live quite close to me Gup, fancy popping round to help with my watercooling? lol


----------



## barbz127

Playing around with OverdriveNTool; I'm pretty sure I have it set to start at boot, but how would I confirm that?

Are there reg keys I can check ?


----------



## SpecChum

Quote:


> Originally Posted by *barbz127*
> 
> Playing around with overdriventool, pretty sure I have it to start at boot but how would I confirm?
> 
> Are there reg keys I can check ?


It sets the actual GPU profile, so you don't actually need to run the tool on startup.

Opening up ONT will show you the current settings.
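For anyone who does want a profile re-applied at every logon anyway (e.g. via Task Scheduler or a Startup shortcut), OverdriveNTool has a command-line mode. A minimal sketch; the `-p<gpu index><profile name>` switch is recalled from ONT's readme and the install path is made up, so verify both against your own setup:

```python
# Build the command line you'd paste into Task Scheduler or a Startup
# shortcut to re-apply an OverdriveNTool profile at logon.
# Assumptions: ONT's "-p<gpu index><profile name>" switch (from memory of
# its readme) and a hypothetical install path — check both yourself.

from pathlib import Path

def ont_command(exe: Path, gpu_index: int, profile: str) -> str:
    """Quote the exe path and profile name so spaces survive the shell."""
    return f'"{exe}" -p{gpu_index}"{profile}"'

cmd = ont_command(Path(r"C:\Tools\OverdriveNTool.exe"), 0, "Daily UV")
print(cmd)  # -> "C:\Tools\OverdriveNTool.exe" -p0"Daily UV"
```

The profile itself has to exist in OverdriveNTool.ini first (saved from the GUI).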


----------



## barbz127

Quote:


> Originally Posted by *SpecChum*
> 
> It sets the actual GPU profile, you don't actually need to run the tool on startup.
> 
> Opening up OTN will show you the current settings.


Thanks. And the settings won't be overwritten?


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> I think you live quite close to me Gup, fancy popping round to help with my watercooling? lol


I would luv to, though it may place more strain on my marriage! Can you imagine the conversation with the wife!?

Gup: "I was wondering if I could go help my online buddy with a WC setup? luv"

Gup's wife: "As if you don't spend enough time tinkering with your own PC and glued to the web! Now you're even going around your forum buddies' homes!"

(Gup, astonished at this, has to quickly don the "Milk Tray Man" outfit to appease the wife. As this results in EPIC fail, for the next several weeks he is cooking and cleaning to make atonement for his request of time away from family!)


----------



## SpecChum

Hahaha

Class


----------



## SpecChum

Interesting reading:


https://www.reddit.com/r/7d9eb8/vega_56_with_64_bios_doe_for_optimized_settings

Confirms what I found out about lower memory clocks giving more power to the core.

Looks like this guy has been doing the same testing as me, just much more thoroughly.


----------



## gupsterg

Only my opinion.

I believe with Vega we really need to look at the average MHz under load (i.e. sustained) and also power usage to draw a better conclusion on where we are with profiles. If I look at my wall plug meter, it seems to peak and drop more often than when I had a Fury X. This is also the case in the MSI AB HML file for GPU power usage.
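One way to get those sustained averages is to post-process a logged run rather than eyeballing peaks. A sketch; the CSV columns below are a simplified stand-in for an Afterburner-style log (a real .hml file has its own header and field names you'd adapt to), and the sample values are invented:

```python
# Compute sustained average clock and power from a logged run instead of
# quoting peak values. The log layout here is a simplified, hypothetical
# stand-in for an MSI Afterburner-style history log.

import csv
import io

log = io.StringIO(
    "time,core_mhz,power_w\n"
    "0,1602,231\n"
    "1,1558,224\n"
    "2,1540,219\n"
)

rows = list(csv.DictReader(log))
avg_mhz = sum(int(r["core_mhz"]) for r in rows) / len(rows)
avg_w = sum(int(r["power_w"]) for r in rows) / len(rows)
print(round(avg_mhz), round(avg_w))  # sustained averages, not peaks
```

Two profiles with the same peak clock can have very different sustained averages, which is the whole point of comparing this way.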

You'll find 3DM tests in this zip, which don't have the wilder PowerTune effects of Heaven/Valley:

104_Final_PASS.zip 522k .zip file


By my comment I do not mean to detract from the value of the reddit member's share. A most interesting share of data for sure.


----------



## SpecChum

My "everyday" profile is:
P6 - stock - 1000mV
P7 - 1652 - 1000mV
Mem - 1040Mhz - 990mV
Fan - 2800RPM max

That gives me a 1520MHz average and keeps HBM below 85C even after prolonged 100% activity, except in that damn Hard Reset menu (I'm obsessed!) where HBM reaches 87C.

Core temp isn't normally an issue; it's the damn HBC temp. I don't like it going above 85C as I lose FPS.


----------



## ducegt

Quote:


> Originally Posted by *SpecChum*
> 
> Interesting reading:
> 
> 
> https://www.reddit.com/r/7d9eb8/vega_56_with_64_bios_doe_for_optimized_settings
> 
> Confirms what I found out about lower memory clocks giving more power to the core.
> 
> Looks like this guy has been doing the same testing as me, just much more thoroughly.


His optimized settings get 95.6 FS points per watt, which is about the same as I get on the stock Balanced profile on my 64 LC. I don't see where he mentions his stock performance... just curious how much the 56 gains after his testing.


----------



## Kyozon

Quote:


> Originally Posted by *gupsterg*
> 
> Only my opinion.
> 
> I believe with VEGA we really need to look at average MHz at load (ie sustained) and also power usage to draw a better conclusion on where we are with profiles. If I look at my wall plug meter it seems to peak and drop more, more often than when I had Fury X. This is also the case seen in MSI AB HML file for GPU power usage.
> 
> You'll find 3DM tests in this zip which don't have the wilder PowerTune effects as Heaven/Valley:-
> 
> 104_Final_PASS.zip 522k .zip file
> 
> 
> By my comment I do not detract value from the share by the reddit member
> 
> 
> 
> 
> 
> 
> 
> . Most interesting share of data for sure
> 
> 
> 
> 
> 
> 
> 
> .


I think I am hitting a wall with my Vega FE LC at 1685MHz and 1.2V on the core, power limit +50%.

It's definitely not a temp issue, as it seems to top out at 48C max. Is there anything that must be done to increase those clocks? Or is RX Vega better suited to higher clocks?

Thanks.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> My "everyday" profile is:
> P6 - stock - 1000mV
> P7 - 1652 - 1000mV
> Mem - 1040Mhz - 990mV
> Fan - 2800RPM max
> 
> That gives me 1520Mhz average and keeps HBM below 85C even after prolonged 100% activity. Except in that damn Hard Reset menu (I'm obsessed!) where HBM reaches 87C.
> 
> Core temp isn't an issue normally, it's the damn HBC temp - I don't like it going above 85C as I lose FPS.


I'll have to test whether my FPS drops after HBM hits 85C; the quiet profile setting I tried hit a max temp of 86C (but not sustained).
Quote:


> Originally Posted by *SpecChum*
> 
> Those settings look familiar
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Wattman can save profiles, sort of, you can setup a profile per game and this includes wattman settings.


I was inspired by the quiet profile you created. I couldn't achieve low temps using Wattman, so I gave ONT a try and made my UV/OC more effective/sustainable.

If anyone is starting to UV/OC tune their Vega, I would recommend having the tools below in hand:

- OverdriveNTool
- restart64
- Unigine Superposition (4K, to test stability) and Unigine Heaven (paused on a moving scene, e.g. the waving flag); these help you monitor temps and FPS as you tune your UV/OC.
- A DX12 game with a benchmark, to test stability and performance (FPS)
- ATIFlash, if you are planning to flash the BIOS


----------



## Razkin

Quote:


> Originally Posted by *SpecChum*
> 
> OK, so what's the deal with HBM freq and total power?
> 
> At HBM 1100Mhz when I set P7 to 1682Mhz at 1000mV I get ~1550Mhz and ~220W power, all good, but when I set HBM to 950Mhz the core goes up to ~1630Mhz and power shoots up to ~280W.
> 
> Memory is 990mV on both.
> 
> Anyone know why?


Check the voltage on the core. Sometimes when you alter a frequency, whether it is core or HBM, it messes with the core voltage.


----------



## Reikoji

I have a feeling 'auto' voltages are completely different from the default manual voltages shown when you first switch to manual. Vega 64 air default manual values are P7/1200, P6/1150, HBM/1100; the LC card's defaults are P7/1200, P6/1150, HBM/950.

In SP4K, setting the air card's voltage control to manual and leaving the values at their defaults costs 500-600 points versus leaving voltages on auto. Max core clock is visibly lower as well, dipping down into the 1400s, close to 1300, as opposed to getting above 1600 when left on auto. Temperatures are no better despite the reduced performance: allowing the fans to rev up to max, both modes put the card at ~72C.

The LC card is basically undervolted at its default manual voltages. It usually survives SP4K, but in 3DMark and games it crashes real fast. Auto definitely applies the real voltages.

I am curious how the HBM/floor voltage is handled on auto. It clearly causes a performance hit if left at 1100 when switched to manual, so I can't see it being that on auto.


----------



## SpecChum

Quote:


> Originally Posted by *Reikoji*
> 
> i have a feeling 'auto' voltages are completely different from the default manual voltages shown when you first switch to manual. vega 64 air default manuals are p7/1200 p6/1150 hbm/1100. lc card defaults are p7/1200 p6/1150 hbm/950.
> 
> In SP4K, setting the air card voltage to manual and leaving the numericals at their defaults costs 500-600 points vs leaving voltages on auto. max core clock is visually lower as well, dipping down into the 1400's, close to 1300, as opposed to getting above 1600 left on auto. temperatures are also not any better with the reduced performance. allowing the fans to rev up to max, both modes put the card at ~72c.
> 
> the lc card is basically undervolted at its default manual voltages. it usually survives SP4K, but 3d mark and games it crashes real fast. auto definitely applies the real voltages.
> 
> i am curious how hbm/floor voltage is handled when it is on auto. it clearly causes a performance hit if left at 1100 when switched to manual, so i cant see it being that on auto.


I suspected this last night, as my BIOS says the default HBM voltage is 1050mV, but when you select manual mode it goes to 1100mV.


----------



## madmanmarz

Quote:


> Originally Posted by *TrixX*
> 
> Going off some other information I've seen it might be better to treat the entire Die of GPU and HBM as a single die and cover the entirety of it evenly or using the X method. Obviously LM is a different matter but that's always an alternative option.


Agree. X method or spread-it-out method, but you should definitely make sure TIM gets in between the core/HBM areas, molded or not. It will definitely help hotspot temps.


----------



## PontiacGTX

I wonder if anyone has found out whether the P-states can be arranged to stabilize the clock speed, make it constant, and achieve better performance?


----------



## VicsPC

Quote:


> Originally Posted by *madmanmarz*
> 
> Agree. X method or spread it out method but you should definitely make sure TIM gets in between the core/hbm areas molded or not. It will definitely help hotspot temps


I did an X on the core and put a small blob on my HBM. I think everyone is getting too into hotspot temps; I don't think thermal paste is going to make any difference to them.


----------



## madmanmarz

Quote:


> Originally Posted by *VicsPC*
> 
> I did X on the core and put a small blob on my hbm. I think everyone is getting too into hotspot temps, don't think thermal paste is going to make any difference into it.


It makes a difference because you need to cover the area right in the middle between the core and HBM. Hotspot temps do seem to vary from card to card, though, and it will only make so much of a difference. Look at this example, and check out his GPU-Z screens.


http://imgur.com/3RoFD


Also, I know it works because I initially used a dot on each piece, and upon redoing the paste and spreading it out over the entire area, my hotspot temps dropped around 10C while nothing else really changed.


----------



## madmanmarz

^^^112C!!!

vs

64C

Big difference. I'm convinced that the hotspot is right in the middle there. Reaching your max temp limit (70 or 85C) doesn't seem to cause throttling, but over 100C I have heard people say it does.


----------



## geriatricpollywog

Quote:


> Originally Posted by *madmanmarz*
> 
> 
> 
> 
> 
> ^^^112C!!!
> 
> vs
> 
> 
> 
> 
> 64C
> 
> big difference. I'm convinced that the hotspot is right in the middle there, although reaching your max temp limit (70 or 85c) doesn't seem to cause throttling, but over 100c I have heard people say it does.


Wait, was it 64C during the run, or at the time the screenshot was taken?


----------



## tarot

Yes, I agree with this; mine dropped with the same treatment. One thing I just tested, which I found interesting, is playing with the temp sliders:

Drop them both down (critical to say 60, or even 50 if you are watercooled, and the other slider to say 30 or 40, assuming those are the temps you are getting with hotspot included), run Firestrike, then max them out and run it again.

My thought is that the hotspot may be what is being watched more than the core temps, and it does throttle. If you have it maxed it can cause overboosting like I got, so a range of say 70 for critical and 60 for normal may help.


----------



## PontiacGTX

Quote:


> Originally Posted by *0451*
> 
> Wait, was it 64 C during the runnor at the time the screenshot was taken?


The maximum temperature read while GPU-Z was open


----------



## tarot

Quote:


> Originally Posted by *PontiacGTX*
> 
> Maximum temperature read while GPUz was open


It would have been better if they had max temps on all the tabs; the way it is, it looks like the cores are 28 degrees and the hotspot is 60-odd.
Mine, on the last little run I did just then, was 64 on the hotspot and 39/40 on the core and RAM.


----------



## diabetes

Quote:


> Originally Posted by *madmanmarz*
> 
> big difference. I'm convinced that the hotspot is right in the middle there, although reaching your max temp limit (70 or 85c) doesn't seem to cause throttling, but over 100c I have heard people say it does.


I think it was proper mounting that made the difference there, not the TIM on the epoxy. Epoxy is a poor heat conductor, 0.21W/(m*K) up to 6.0W/(m*K) for special mixtures, and considering its thickness, its ability to transfer heat to the block is very limited. I have an unmolded card with no TIM left between the core and the HBM, as I cleaned that out. When remounting my EK block, I evenly spread a thin layer of Gelid Extreme over the core and the HBM stacks. It was not enough material to fill in the gap when the block applied mounting pressure, yet my hotspot only reaches 67C when pumping 320W into the chip, and low 50s at 220W.

Before that, I had way higher hotspot temps because I used the wrong screws (6mm ones instead of 4mm) for my block. Proper cooler mounting is key for Vega.

Besides that, congrats on your cool mod! I'd already seen the pics on Reddit yesterday.
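As a rough sanity check on those conductivity numbers: the temperature drop across a flat layer is delta_T = P*t/(k*A). The power, layer thickness, and contact area below are guessed round figures purely for illustration, not measurements from a Vega package:

```python
# Back-of-envelope 1-D conduction: delta_T = P * t / (k * A).
# All inputs are illustrative guesses, not measured Vega values.

def delta_t(power_w: float, area_m2: float, thickness_m: float, k: float) -> float:
    """Temperature drop in kelvin across a flat layer of conductivity k."""
    return power_w * thickness_m / (k * area_m2)

area = 25e-6        # ~5 mm x 5 mm patch between core and HBM (assumed)
thickness = 0.2e-3  # ~0.2 mm layer (assumed)

# Plain epoxy (~0.21 W/mK) vs a filled mixture (~6 W/mK), 5 W through it:
print(round(delta_t(5, area, thickness, 0.21), 1))  # -> 190.5
print(round(delta_t(5, area, thickness, 6.0), 1))   # -> 6.7
```

Even at a few watts, plain epoxy over that small an area supports a huge temperature gradient, which is consistent with the point that the epoxy layer itself barely moves heat and mounting pressure/contact matters more.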


----------



## Sunsoar

Anyone else rocking a Vega 64 LC and have that whine happen when they boot a game?

I just got my Gigabyte 64 LC and it runs great, but a whine happens when I get into a game. I dropped the power limit to -50% and it has pretty much gone away, but that is quite a hit in performance.

Edit: After getting much closer to the PC with my ear, the whine is emanating from my PSU...


----------



## madmanmarz

Quote:


> Originally Posted by *0451*
> 
> Wait, was it 64C during the run, or at the time the screenshot was taken?


If you look at GPU-Z, you'll see they just left the HBM reading showing its max.

Quote:


> Originally Posted by *diabetes*
> 
> I think it was proper mounting that made a difference there and not the TIM on the epoxy. Epoxy is a poor heat conductor 0.21W/(m*K) up to 6.0W/(m*K) for special mixtures and considering the thickness of it, it's ability to transfer heat to the block is very limited. I have an unmolded card with no TIM left between the core and the HBM as I have cleaned that out. When remounting my EK block, I evenly spread a thin layer of Gelid Extreme over the core and the HBM stacks. It was not enough material to fill in the gap when the block was applying mounting pressure. Yet my hotspot only reaches 67C when pumping 320W into the chip. Low 50s at 220W.
> 
> Before that, I had way higher hotspot Temps because I used the wrong screws (6mm ones insted of 4mm) for my block. Proper cooler mounting is key for Vega.
> 
> Besides that, grats to your cool mod! I've already seen the pics on Reddit yesterday.


NOT MY MOD! lol, just showing this as some evidence for hotspot temps. I think many people with unmolded dies have actually reported better hotspot temps (maybe since the epoxy isn't there).

Regardless, I can say that in my situation it was not mounting but how the paste was spread that helped my hotspot, and it appears there was plenty of good contact in this person's mod the first go-around as well.

I also tried setting a fan on the front of the card to help cool the passive part of the NexXxoS better, and it didn't make one lick of difference in reported temps.


----------



## Soggysilicon

Quote:


> Originally Posted by *SavantStrike*
> 
> I'm not sure why that person decided to go to so much effort instead of using a full cover block, but it's a pretty neat mod. I can see that my original strategy of just putting three dots of TIM on the die and the HBM might not be the best strategy, so I'll do the spread method when I put full cover blocks on in a week.


I might have done the same thing if RX Vega (at the time of my purchase, which was day 1) had been more plentiful. One "rational" reason to use a GPU-only block is flow rate when using a performance pump and large-ID tubing: FC blocks are notoriously restrictive. My old EK HF Supreme VGA universal blocks are fantastic. In this application, however, a custom bracket was on the menu, and in my workaday life these last few months have been such a slog that the EK FC block was a strong alternative: brand familiarity, the nickel plating seemed sorted out... no garage engineering required...









In short... flow rates and that "do it yourself" pride.
Quote:


> Originally Posted by *gupsterg*
> 
> Only my opinion.
> 
> I believe with VEGA we really need to look at average MHz at load (ie sustained) and also power usage to draw a better conclusion on where we are with profiles. If I look at my wall plug meter it seems to peak and drop more, more often than when I had Fury X. This is also the case seen in MSI AB HML file for GPU power usage.
> 
> You'll find 3DM tests in this zip which don't have the wilder PowerTune effects as Heaven/Valley:-
> 
> 104_Final_PASS.zip 522k .zip file
> 
> 
> By my comment I do not detract value from the share by the reddit member. Most interesting share of data for sure.


Been in this camp for some time now: the "sustained" 10-minute average clock camp.

For me, 1700~1710 over 10 minutes is realizable. I would have liked to have seen the 1750s, but without seriously pouring on the coals with >50% PL on the LC BIOS, it's just not feasible. I can't say the wattage is really here or there, but I suspect the diminishing returns are REAL.
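For anyone who wants to join the sustained-clock camp, averaging a monitoring log is trivial. The sketch below assumes the log has been exported as CSV with a `gpu_clock` column; the column names are my invention, and an MSI AB .hml file would need converting/renaming first.

```python
# Sketch of the "sustained average clock" idea: instead of quoting peak
# MHz, average the logged core clock over the whole window.
# Assumes a CSV export with "timestamp" and "gpu_clock" columns
# (hypothetical names -- adjust for whatever your logger writes).
import csv
from io import StringIO

def sustained_clock(csv_text, column="gpu_clock"):
    rows = list(csv.DictReader(StringIO(csv_text)))
    clocks = [float(r[column]) for r in rows]
    return sum(clocks) / len(clocks)

sample = "timestamp,gpu_clock\n0,1700\n1,1712\n2,1688\n"
print(f"sustained: {sustained_clock(sample):.0f} MHz")
```

Run it over a full 10-minute log and the number you get is far more honest than the peak WattMan ever flashed at you.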
Quote:


> Originally Posted by *Sunsoar*
> 
> Anyone else rocking a Vega 64 LC and have that whine happen when they boot a game?
> 
> I just got my Gigabyte 64 LC and it runs great - but a whine happens when I get into a game. I dropped the power limit to -50% and it has pretty much gone away, but that is quite a hit to performance.
> 
> Edit: After getting much closer to the PC with my ear - the whine is emanating from my PSU....


Welcome to Vega ownership... the coil whine... is strong with this one.


----------



## Chaoz

Coil whine is just as bad on mine as well. Luckily I undervolted it a bit and I play with FreeSync at 75Hz, so my 64 doesn't go to 100% and whine like crazy.


----------



## fursko

In Shadow of War my card can run 1185MHz HBM, lol. My in-game benchmark scores at maxed settings: 1080p: 116, 1440p: 84, 2160p: 50. Vega needs a little push. Better drivers or enabled features could make the Vega 64 = 1080 Ti. Right now it's just a little better than a GTX 1080, and that is really bad. The Crimson redux drivers are coming soon. Hope AMD has some surprises for Vega users.


----------



## VicsPC

Quote:


> Originally Posted by *diabetes*
> 
> I think it was proper mounting that made a difference there and not the TIM on the epoxy. Epoxy is a poor heat conductor, 0.21 W/(m*K) up to 6.0 W/(m*K) for special mixtures, and considering its thickness, its ability to transfer heat to the block is very limited. I have an unmolded card with no TIM left between the core and the HBM as I have cleaned that out. When remounting my EK block, I evenly spread a thin layer of Gelid Extreme over the core and the HBM stacks. It was not enough material to fill in the gap when the block was applying mounting pressure. Yet my hotspot only reaches 67C when pumping 320W into the chip. Low 50s at 220W.
> 
> Before that, I had way higher hotspot temps because I used the wrong screws (6mm ones instead of 4mm) for my block. Proper cooler mounting is key for Vega.
> 
> Besides that, grats to your cool mod! I've already seen the pics on Reddit yesterday.


Yeah, I think so as well. I mounted mine just right the first time and my hotspot temp at load is always 12°C above core and 9°C above HBM. I tighten all the screws (in a pattern) around the die first, making sure the studs are flush with the PCB, then continue with the rest of the card. Since pressure shifts as the waterblock settles, I go around again and give each a bit more of a tighten. After that I mounted the backplate and had no issues. I used the X method for the core and a tiny X method for the HBM and haven't had any issues. I didn't spread it or use too much TIM like I see in some pics, but proper mounting is key in ANY installation.

People need to remember that less is BETTER when it comes to TIM. You want the gap between the waterblock/heatsink and the die to be as minute as possible, and I mean that literally.
Quote:


> Originally Posted by *Chaoz*
> 
> Coil whine is just as bad on mine aswell. Luckily I undervolted it a bit and I play with FreeSync at 75hz so my 64 doesn't go to 100% and whine like crazy.


Honestly i need to stick my head into my case next to my card to even BARELY hear it. I even did it with all my fans off and was barely audible. This was during Siege so 99% usage and getting like 200fps. It's another issue that varies per card and per power supply.


----------



## barbz127

Am I right to assume that memory clocks are limited by voltage?

From all the testing I have done (not touching memory speed), the lower the voltage, the slower the memory speed.

Is there a way to undervolt but still hit factory speeds, or is it more about finding the sweet spot between performance and power draw?


----------



## gupsterg

@Soggysilicon

Here is SWBF, Ultra preset, FreeSync: On, FRTC: 140. Each time I snuck a peek at the in-game overlay it was ~140.

SWBF.zip 53k .zip file


I will do the same for Wolfenstein 2, viewing a HML when I had 1652MHz as DPM 7, sustained GPU clock was ~1620MHz.

@Chaoz

I think my sample is mighty good in this aspect. My rig is on the floor to the side of the desk; there are only 2-3 cases where I notice some coil whine / buzz from the GPU.

i) Heaven/Valley: on exiting to the credits screen, noticeable sitting in my chair.

ii) A sort of low-frequency buzz from the GPU under load, noticeable only if I have my ear ~5-10cm from the case side panel.

I think each case is normal; the only GPU I have had which didn't was an ASUS DCUII 290X. I believe they used concrete in the core of the chokes. Other than that, some AIB cards have been worse than reference, for example the Vapor-X 290X / Fury Nitro I owned.

@VicsPC

I used to be in the camp of using the least TIM. Even then I don't think I used as little as some, nor as much as others. What made me leave the minimal-TIM camp was what occurred repeatedly on Ryzen with 2 differing CPUs and a ThermalRight Archon SB-E/IB-E X2.

In this image it is the same HS; the top is one CPU and the lower another, and on the lower I used slightly less TIM, link. Lapped HS and yet another CPU, link. Even though the lapped HS had a similar amount of TIM as the very first CPU/unlapped HS mount, as it was a flatter base, with mount pressure and the natural process of "pumping out" I have better contact; the TIM has naturally thinned out. Between all these CPUs I also noted the IHS is not perfectly flat: if I placed a metal rule edge on the HS base or IHS I might find it was not flat vertically or horizontally, or even both.

So either we take more time assessing the cooling solution base and CPU/GPU surface, or we repeat mount testing to see what is best. The gains I got from the lapped HS were not much vs an appropriate amount of TIM with an unlapped base, the caveat being that the HS had good contact with the CPU in the primary location of the IHS where the die is.

GN just did a video on this. In situations where the base is not making good contact, using the minutest TIM will create more issues.

IMO we may lose a few degrees of temperature by using slightly more TIM, but the potential damage from not using enough could be costly. I believe spreading the TIM by hand is the best method, as regardless of mount pressure and the natural pumping-out of TIM, we will have created the best contact from the get-go.


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> @Soggysilicon
> 
> Here is SWBF, Ultra preset, FreeSync: On, FRTC: 140. In game overlay, each time I snuck a peak at was ~140.
> 
> SWBF.zip 53k .zip file
> 
> 
> I will do the same for Wolfenstein 2, viewing a HML when I had 1652MHz as DPM 7, sustained GPU clock was ~1620MHz.
> 
> @Chaoz
> 
> I think my sample is mighty good in this aspect. My rig is on the floor to the side of the desk, there are only 2-3 case points I notice some coil whine / buzz from GPU.
> 
> i) Heaven/Valley on exit the credits screen, noticeable sitting in my chair.
> 
> ii) A sorta of low frequency buzz from GPU when under load, noticeable only if have ear ~5-10cm from case side panel.
> 
> Each case I think is normal, only GPU I have had which didn't was an ASUS DCUII 290X. I believe the used concrete in core of chokes. Other than that some of the AIB cards have been worse than reference, for example Vapor-X 290X / Fury Nitro I owned.
> 
> @VicsPC
> 
> I used to be in the camp of using the least TIM. Even then I don't think I used as little as some and not as much as others. What made leave the camp of the minutest TIM application was what occurred repeatedly on Ryzen with 2 differing CPUs and a ThermalRight Archon SB-E/IB-E X2.
> 
> In this image is same HS, top is differing CPU and lower another, lower I used slightly less TIM, link. Lapped HS and another differing CPU, link. Even though the lapped HS had similar amount of TIM as very 1st CPU/unlapped HS mount, as it was a flatter base and with mount pressure/natural process of "pumping out" I have better contact. TIM has naturally thinned out. Between all these CPUs I also noted the IHS is not perfectly flat. If I placed a metal rule edge on HS base or IHS I may find on vertical or horizontal it was not flat, or even both.
> 
> So either we take more time assessing cooling solution base and CPU/GPU surface or repeat mount testing to see what is best. The gains I got from lapped HS were not much vs an appropriate amount of TIM with unlapped base. Caveat being that HS had good contact with CPU in primary location of IHS where die is.
> 
> GN just did
> 
> 
> 
> . In situations where base is not making good contact, using minutest TIM will create more issues.
> 
> IMO we may lose some degrees of temperature by using slightly more TIM, but potential damage from not using enough could be costly. I believe the spreading by hand of TIM is best method, as regardless of mount pressure and natural pumping out of TIM, we will have created best contact from get go.


Shocking that you're having issues with Ryzen, but then again you're using a TR Archon; it's a bit more massive than my 1700X's cooler. I used a tiny amount on my 1700X and again my temps are fantastic, same with my Vega 64 EK. I think it all comes down to how people are installing things.

For example, because of how the spring sits on the EKWB Supremacy, I use a small metal washer so the spring sits ON and not IN the waterblock. I seem to be one of the few who didn't have any issues with my Supremacy and 1700X using the EKWB gasket (and on top of that, I'm only using the center portion as I didn't get a whole gasket with my AM4 mounting kit). I've been building long enough to know when something will make a difference and when not. Then again, when it comes to my PC I'm utterly anal about installing components. I probably took an hour or so installing my block onto my GPU just to make sure it was perfect.

I agree that if the mounting pressure or installation is done poorly, you'll need more TIM, but it does hurt temps regardless. I think in the end it all comes down to installation.

P.S. In that bottom pic you used WAY too little TIM. I'm guessing that was the Hydronaut? I noticed it as well in my testing; that stuff does not like to spread whatsoever, and I ended up with much worse temps.


----------



## gupsterg

In each photo it is AS5 and Ryzen; none were of Threadripper, nor did I mention that. I just used the Ryzen mounts as examples as I had no VEGA photos to share.

I do believe the GN video has covered well how we too are seeing no vast difference in temps between unmolded and molded dies. It has shown members with unmolded dies may need a tad more TIM, which again has been discussed here before. The lower height of the HBM has been highlighted before by media/AMD.



I too took time and care excessively with my GPU WB block install.

My copper block had some oxidation on delivery, link. I measured each area I applied a thermal pad to and cut it with just enough excess so that when the block was mounted and the pad molded, it was neither too short nor excessively long. I too tightened screws on opposing sides and then the others in rotation, equally, until seated.

Whichever way we look at it, TIM application is part of installation, so there is a need to use the right amount.

Even if I have assessed both surfaces, I just think it is better to use slightly too much TIM than too little. I am no longer advocating minute TIM usage at all. I believe my temps on VEGA are no higher than others', roughly accounting for possible variances.


----------



## SpecChum

Anyone tried new drivers yet?

I'm at work sadly.


----------



## Trender07

Quote:


> Originally Posted by *gupsterg*
> 
> @Soggysilicon
> 
> Here is SWBF, Ultra preset, FreeSync: On, FRTC: 140. In game overlay, each time I snuck a peak at was ~140.
> 
> SWBF.zip 53k .zip file
> 
> 
> I will do the same for Wolfenstein 2, viewing a HML when I had 1652MHz as DPM 7, sustained GPU clock was ~1620MHz.
> 
> @Chaoz
> 
> I think my sample is mighty good in this aspect. My rig is on the floor to the side of the desk, there are only 2-3 case points I notice some coil whine / buzz from GPU.
> 
> i) Heaven/Valley on exit the credits screen, noticeable sitting in my chair.
> 
> ii) A sorta of low frequency buzz from GPU when under load, noticeable only if have ear ~5-10cm from case side panel.
> 
> Each case I think is normal, only GPU I have had which didn't was an ASUS DCUII 290X. I believe the used concrete in core of chokes. Other than that some of the AIB cards have been worse than reference, for example Vapor-X 290X / Fury Nitro I owned.
> 
> @VicsPC
> 
> I used to be in the camp of using the least TIM. Even then I don't think I used as little as some and not as much as others. What made leave the camp of the minutest TIM application was what occurred repeatedly on Ryzen with 2 differing CPUs and a ThermalRight Archon SB-E/IB-E X2.
> 
> In this image is same HS, top is differing CPU and lower another, lower I used slightly less TIM, link. Lapped HS and another differing CPU, link. Even though the lapped HS had similar amount of TIM as very 1st CPU/unlapped HS mount, as it was a flatter base and with mount pressure/natural process of "pumping out" I have better contact. TIM has naturally thinned out. Between all these CPUs I also noted the IHS is not perfectly flat. If I placed a metal rule edge on HS base or IHS I may find on vertical or horizontal it was not flat, or even both.
> 
> So either we take more time assessing cooling solution base and CPU/GPU surface or repeat mount testing to see what is best. The gains I got from lapped HS were not much vs an appropriate amount of TIM with unlapped base. Caveat being that HS had good contact with CPU in primary location of IHS where die is.
> 
> GN just did
> 
> 
> 
> . In situations where base is not making good contact, using minutest TIM will create more issues.
> 
> IMO we may lose some degrees of temperature by using slightly more TIM, but potential damage from not using enough could be costly. I believe the spreading by hand of TIM is best method, as regardless of mount pressure and natural pumping out of TIM, we will have created best contact from get go.


I don't know what it could be, but for about half a month I've had a weird and RANDOM coil-whine-like sound, though it's actually more of a strange buzzing, which it didn't make a month or so ago. It happens even at idle; it's just a random, weird sound. When I've heard coil whine from this card before it was in the normal coil-whine cases, like some menus with crazy-high FPS, where if I listened at the back of the card where it blows the air out I could hear normal coil whine. But since half a month ago it sometimes makes this random buzzing sound it didn't before.


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> In each photo it is AS5 and Ryzen, none were of ThreadRipper and nor did I mention that. I just used the Ryzen mounts as example as had no VEGA photos to share.
> 
> I do believe the GN video has covered well how we too are seeing not vast difference in unmolded die vs molded for temps. It has shown members with unmolded dies may need a tad more TIM, which again has been discussed here before. The lower height of HBM has been highlighted before by media/AMD.
> 
> 
> 
> I too took time and care excessively with my GPU WB block install.
> 
> My copper block had some oxidation on delivery, link. I measured each area I applied a thermal pad to, cut it with just enough excess so when block was mounted and pad molded it was not too short or excessively long. I too tightened screws on opposing sides and then others in rotation and equally, until seated.
> 
> Whichever way we look at it TIM application is part of installation, so there is a need to use right amount.
> 
> Even if I have made assessment on both surfaces I just think it is better to use a little excessive TIM then too little. I am no longer advocating minute TIM usage at all. I believe my temps on VEGA are no higher or no greater than others, roughly accounting for possible variances.


It really is, honestly, a per-build basis. There is SUCH huge variation in manufacturing that it all depends. I checked my Ryzen and Vega with a straight edge and both were unbelievably flat.

It's why I keep a tube of cheap TIM for customer builds and do a test mount with it to check the imprint; toothpaste would work as well. I also checked my Vega and Ryzen with pressure paper, so I know for a fact I got good mounting pressure on mine.


----------



## TrixX

Quote:


> Originally Posted by *Trender07*
> 
> I don't know what could be but since half a month I have a weird and RANDOM looks-like-coil-whine but it actually is like a weird buzzing, which didn't make a month~ a go. It happens even when idle its just random and weird sound, when I've heard coil whine from this card before were well in normal coil whine cases like some menus with crazy high fps if I heard on the back of the card where it blows the air I could heard normal coil whine but since this half month ago it makes sometimes this random buzzing sound it didn't before


I have a buzz under 100% load with 1150mV+ applied on my card. Not sure why either, but it seems to be coming from the VRM array. Bloody annoying really.


----------



## fato22

OK, something weird happened with the OC on my Vega 64. I've been playing for almost 2 weeks with my perfectly stable OC (P6 1000, P7 1080, +4%, 1000MHz RAM @ 1060).

I am not sure what happened. I never had any crash, but yesterday my OC wasn't there anymore.

So, while I was at it, I decided to update the drivers. Now when I try to OC with the same settings I had before, it is not stable anymore.

Is this normal behavior? I was able to bring the GPU clock back to where it was, but if I touch the HBM it crashes. I have to leave it at the 945 stock.


----------



## poisson21

JayzTwoCents did a video testing whether the amount of TIM on a GPU die matters; he literally flooded a die with a full 5g syringe and there wasn't any difference in temps.


----------



## VicsPC

Quote:


> Originally Posted by *poisson21*
> 
> JayzTwoCents did a video testing whether the amount of TIM on a GPU die matters; he literally flooded a die with a full 5g syringe and there wasn't any difference in temps.


Linus did one as well; too little made a difference but too much didn't. I still say it's a per-build basis, and it's why pressure paper is SO important, as is mounting pressure, more so than lapping.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Anyone tried new drivers yet?
> 
> I'm at work sadly.


Using them.

SP 4K, 3DM FS/TS same for same clocks.

WattMan resizing still borked.

Now doing some gaming.
Quote:


> Originally Posted by *Trender07*
> 
> I don't know what could be but since half a month I have a weird and RANDOM looks-like-coil-whine but it actually is like a weird buzzing, which didn't make a month~ a go. It happens even when idle its just random and weird sound, when I've heard coil whine from this card before were well in normal coil whine cases like some menus with crazy high fps if I heard on the back of the card where it blows the air I could heard normal coil whine but since this half month ago it makes sometimes this random buzzing sound it didn't before


I have to be so close, in a silent room, to hear anything, as stated in point (ii). I consider this normal, as well as point (i). So no idea what to suggest.


----------



## kundica

Quote:


> Originally Posted by *SpecChum*
> 
> Anyone tried new drivers yet?
> 
> I'm at work sadly.


I tried them yesterday but ended up rolling back. I noticed some microstutter that wasn't there before as well as HBM temp not being read by HWInfo64. GPU-Z seemed to read the HBM temp fine but I use HWInfo to monitor my stuff since it interfaces with my Aquaero.


----------



## madmanmarz

Quote:


> Originally Posted by *VicsPC*
> 
> Linus did one as well and too little made a difference but too much didn't. I still say its per build basis and it's why pressure paper is SO important, so is mounting pressure, more so then lapping.


Yeah, generally anything extra will just get pushed out. Maybe not as much on some caps/surfaces that are concave, but as far as naked dies/GPUs go - go ham.


----------



## Reikoji

Did you make sure to disable GPU VRM voltage monitoring? I found that to be a major cause of stuttering when using HWiNFO.


----------



## jbravo14

Quote:


> Originally Posted by *kundica*
> 
> I tried them yesterday but ended up rolling back. I noticed some microstutter that wasn't there before as well as HBM temp not being read by HWInfo64. GPU-Z seemed to read the HBM temp fine but I use HWInfo to monitor my stuff since it interfaces with my Aquaero.


I also rolled back due to the HBM temps not being visible.


----------



## Trender07

Guys, I can't be the only one: since about 3 drivers ago, the HBM gets locked at 800MHz with 950mV set. I have to set 951mV or 955mV or whatever, because at 950mV it gets locked at 800MHz, which didn't happen before. Anyone else?


----------



## 113802

Quote:


> Originally Posted by *Trender07*
> 
> Guys I can't be the only one that since about 3 drivers a go , the HBM mem gets locked at 800 MHz with 950 mV? I have to set 951 mv or 955 mv whatever but it gets locked at 800 mhz with 950 mv which didn't happened before, anyone else?


I've seen WattMan show 800MHz, but after a few seconds it jumps up to the correct frequency. GPU-Z always shows the correct HBM frequency.

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.2-Release-Notes.aspx
Quote:


> Radeon WattMan user interface may not reflect overclocked or underclocked values for GPU memory.


----------



## gupsterg

Quote:


> Originally Posted by *Trender07*
> 
> Guys I can't be the only one that since about 3 drivers a go , the HBM mem gets locked at 800 MHz with 950 mV? I have to set 951 mv or 955 mv whatever but it gets locked at 800 mhz with 950 mv which didnt happened before, anyone else?


Happened to me when seeing how low I could undervolt.

If I set HBM voltage in WattMan higher than 950mV all is well, but then the GPU will not use as low a voltage as I'm targeting in DPM 6.


----------



## porschedrifter

Has anyone lost GPU HBM Temperature with latest AMD driver 17.11.2 that was just released?


----------



## gupsterg

Yes, PM'd Martin; sure to be fixed ASAP IMO.

*** edit ***

Temporary solution until the next HWiNFO build is released:

Apply a multiply factor of 1000 to the HBM temperature sensor in HWiNFO.

*Once the new build is released, undo the change prior to updating.*


----------



## jbravo14

Quote:


> Originally Posted by *jbravo14*
> 
> I'll have to test if my FPS drops after hitting HBM 85C, the quiet profile setting I tried had hit a max temp of 86C (but not sustained).
> I was inspired with the quiet profile you created, couldn't achieve low temps using wattman, so I gave ONT a try and made my UV/OC more effective/sustainable.
> 
> If anyone would start on UV/OC tuning their vega, I would recommend they have below tools in-hand:
> 
> - Overdrive NT Tool
> - restart64
> - Unigine Superposition (4K - to test stability), Unigine Heaven (paused on a moving scene - e.g. waving flag) - this helps with temp and FPS monitoring as you tune your UV/OC.
> - DX12 game with a benchmark - test stability and performance(FPS)
> - ATIflash if you are planning to flash the BIOS


I switched back to WattMan from ONT; my temps suddenly got high and my HBM clocks were not going as high. Later I found that HBM was reaching 90C and that core voltage was at 1000mV max (the 915mV setting not taking effect).

Copied my ONT settings over to WattMan and I'm back in business. Not sure what caused it though.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> I switched back to using wattman from OTN, my temps suddenly got high and my HBM clocks were not going as high. Later found that HBM was reaching 90C and that core mv was at 1000mv max (the 915mv setting not taking effect).
> 
> Copied over my OTN settings to wattman and back in business. Not sure what caused it though.


This happens quite often. When it does, just increase the P6 or P7 clock by 2 or so, press apply, then decrease it again and press apply.


----------



## Yviena

Has the protection on modified BIOSes been cracked yet? I was thinking maybe I could overvolt the HBM to 1.4V as I'm watercooling my Vega.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Yviena*
> 
> Has the protection on modified bios been cracked yet? was thinking maybe i can overvolt the HBM voltage to 1.4 as I'm watercooling my vega.


What clocks are you trying to hit?


----------



## SavantStrike

Quote:


> Originally Posted by *Yviena*
> 
> Has the protection on modified bios been cracked yet? was thinking maybe i can overvolt the HBM voltage to 1.4 as I'm watercooling my vega.


It hasn't.

If it's anything like the protection on Pascal, it probably never will be - only possible by directly connecting a BIOS programmer to the card and forcing a flash.


----------



## gupsterg

Even flashing using an external tool is a no-go. These tests were done early on in the VEGA BIOS thread.

The current belief is that the digital signature (around since the HD 7xxx series) is checked by an on-die implementation to verify the VBIOS at GPU POST. As we have no method to update it, a modified VBIOS will not function.


----------



## jmoonb

So I've been playing around with my Vega 56 (64 BIOS) on water, but I just can't seem to figure this out. It seems my card has the "overboost" crash others have mentioned. I can be stable in anything at, say, 1580/1070mV P7, but will crash at 1590/1070mV. Those two settings produce a difference of roughly 10MHz (1550-1560 vs 1560-1570), which is enough to crash the card.

Now the odd bit... when I look at other people's HWiNFO pics, I notice how, regardless of their voltage settings, the current given to the card compensates to let the card run at its optimal MHz. For example, on some cards, even at 1V the card will pump enough amps to bring MAX chip power to 250W. Defeats the purpose of undervolting, but it does keep things stable. Mine, on the other hand, is locked at 200W at 1070mV no matter what P7 clock is set; this only changes once I change the voltage. It was like this with the Vega 56 BIOS and it's the same with the 64.

Am I doing something wrong here?
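The behaviour described above is consistent with a fixed chip-power cap: at constant power, current scales as I = P / V, so a card that compensates with amperage at low voltage is power-limited rather than voltage-limited. A trivial sketch using the figures from the post:

```python
# Back-of-envelope check on the power/voltage behaviour described above:
# with a fixed chip-power cap, deliverable current follows I = P / V.
def current_a(power_w, volts):
    """Current in amps for a given power budget and core voltage."""
    return power_w / volts

print(f"200 W @ 1.070 V -> {current_a(200, 1.070):.0f} A")  # ~187 A
print(f"250 W @ 1.000 V -> {current_a(250, 1.000):.0f} A")  # 250 A
```

So a card whose cap sits at 200W simply has less current headroom at a given voltage than one allowed 250W, regardless of the P7 clock that is set.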


----------



## Yviena

Quote:


> Originally Posted by *0451*
> 
> What clocks are you trying to hit?


Trying to hit around 1190MHz on memory; I'm stable up to 1150MHz atm.
Quote:


> Originally Posted by *gupsterg*
> 
> Even flashing using external tool is no go. These tests were done early on in VEGA BIOS thread.
> 
> Current belief is the digital signature (been around since HD 7xxx series) is being checked by a on die implementation to verify VBIOS at GPU post. As we have no method to update it modified VBIOS will not function.


I don't really understand why AMD did this.
Vega would probably be even cheaper without this ARM security processor. They could at least have disabled the protection on the gaming RX Vega version but kept it enabled on the FE edition...


----------



## SpecChum

Quote:


> Originally Posted by *jmoonb*
> 
> So I've been playing around with me vega 56 (64 bios) on water but I just can't seem to figure this out. It seems my card has the "overboost" crash others have mentioned. I can be stable in anything at say 1580/1070mv P7 but will crash at 1590/1070mv. Those two settings would produce a difference of roughly 10mhz (1550-1560 and 1560-1570) which is enough to crash the card.
> 
> Now the odd bit... When I see other peoples hwinfo pics, I notice how regardless of their voltage settings, the current given to the card would compensate to allow the card to function at its optimal mhz. For example, on some cards, even at 1v the card would pump enough amps which would bring MAX chip power to 250W. Defeats the purpose of undervolting but it does keep things stable. Mine on the other hand is locked at 200W at 1070mv no matter what p7 clock is set. This can only be changed once I change the voltage. It was like this with the vega 56 bios and its the same with the 64.
> 
> Am I doing something wrong here?


No, not just you.

I'm struggling to get Bulletstorm stable - it keeps boosting too high when I limit it to 75hz


----------



## ducegt

Quote:


> Originally Posted by *jmoonb*
> 
> Am I doing something wrong here?


I don't know for certain, but my thinking aligns with yours. Less voltage just amounts to more amperage and no actual performance gains, with negligible efficiency gains. I love tweaking stuff for the hell of it, but AMD's PowerTune has a firm grip on things. With Tonga (R9 285/380), increasing voltage didn't amount to anything besides wasted electricity at ambient temperatures.


----------



## gupsterg

Quote:


> Originally Posted by *Yviena*
> 
> I don't really understand why AMD did this.
> Vega would probably be even cheaper without this ARM security processor, they could atleast have disabled the protection for the gaming rx vega version but have it enabled in the FE edition...


Yep, PITA. Just tried a data change in the VBIOS. It does not matter:-

i) if CSM: On/Off, Secure Boot & Fast Boot: Off.
ii) use UEFI/GOP module with Legacy VBIOS section "digital signature" check disabled.

On POST the GPU is disabled; the OLED on the ZE shows Code: B2, Message: Load VGA BIOS.


----------



## SpecChum

OK, so I need a way of not going over a certain core speed regardless of load.

Might need 2 profiles here.

Setting 1642MHz on P7 gives me a nice steady 1530MHz on the core at full load. That's fine, but when I run an older game this boosts to 1570MHz or so and eventually locks up.

I could lower P7 but then that'll underclock on heavy loads.

Hmm.


----------



## Mumak

Here the latest HWiNFO64 build to fix reporting of HBM temperature with Crimson 17.11.2: http://www.hwinfo.com/beta/hw64_561_3287.zip


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> OK, so I need a way of not going over a certain core speed regardless of load.
> 
> Might need 2 profiles here.
> 
> Setting 1642Mhz on P7 gives me a nice steady 1530Mhz on the core on full load, that's fine, but when I run an older game this boosts to 1570Mhz or so and eventually locks up.
> 
> I could lower P7 but then that'll underclock on heavy loads.
> 
> Hmm.


What PowerLimit do you use?

I just tried Dead Space and Monkey Island. DS without a cap is ~700FPS in menus and ~200-350FPS in game. Monkey Island never made it into DPM 5/6/7. No likey the waveform on the HBM; it would be nice if, when the GPU is not in a high DPM (like in MI), it just stuck to a low HBM MHz.

DS_MI_HML.zip 21k .zip file

Quote:


> Originally Posted by *Mumak*
> 
> Here the latest HWiNFO64 build to fix reporting of HBM temperature with Crimson 17.11.2: http://www.hwinfo.com/beta/hw64_561_3287.zip


Thanks as always for the swift support.

Earlier today I played SWBF; TBH the Balanced profile is more than enough to keep FPS in the 120-140 range. I'm thinking of going per-game profiles instead of a global one.


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> What PowerLimit do you use?
> 
> I just tried Dead Space and Monkey Island. DS without cap is ~700FPS in menus and ~200-350FPS in game. Monkey Island never made it into DPM5/6/7. No likey the waveform on HBM; it would be nice if, when the GPU is not in a high DPM (like in MI), it just stuck to a low HBM MHz.
> 
> DS_MI_HML.zip 21k .zip file
> 
> 
> Earlier today played SWBF, TBH Balanced profile is more than enough to keep FPS 120-140 range. I'm thinking of going per game profile instead of a global one.


Tried it on 0% and 50%


----------



## gupsterg

Ahhh... no idea what to suggest...

I've so far only used:-

Wolfenstein 2
Crysis 2 and 3
Dead Space
Monkey Island
Lords of the fallen

on my profile, besides synthetic 3D loads and [email protected]/bionic. So far the profile is sound. I may try Mickey Mouse Castle of Illusion (don't ask why I have such titles).


----------



## SpecChum

I could just unlock the fps I guess, but I liked the idea of taming the fan where I could.

I've not tried Bulletstorm on my quiet profile yet...that's next. That's got lower clocks anyway.


----------



## gupsterg

I locked FPS to 140 on DS; I think the GPU never made it into a higher DPM, judging by the clocks, as the voltage seems flat.

DS_FRTC_140.zip 12k .zip file


GPU W halved. I may have some Fiji data; it would be nice to compare clocks/W. I used to use Power Efficiency with the same cap for that game, will have to dig into backups of the rig.


----------



## SpecChum

Yep, quiet worked


----------



## SpecChum

I'm already happy with the performance to be fair, but are you all in the "it's just a Fury with higher clocks" camp or "there's more to come"?

I can't help thinking all those extra transistors must do something!


----------



## Soggysilicon

Quote:


> Originally Posted by *Trender07*
> 
> Guys I can't be the only one that since about 3 drivers a go , the HBM mem gets locked at 800 MHz with 950 mV? I have to set 951 mv or 955 mv whatever but it gets locked at 800 mhz with 950 mv which didnt happened before, anyone else?


I "highly" suspect there was a build rollback which has "re-introduced" some issues with lower P-states locking. These drivers wreck my R9 280X, locking the P-states low in all conditions due to a janky registry hook. This was a bug introduced back in late 2014 into 2015. I have no idea if AMD/Radeon use something like ClearCase for their software management or not. It's not the first time I have said "hey look, that ole' bug is back". This 800MHz thing happened around the same time the resize / freeze-hang-crash issue in the Radeon driver reared up as well.

With the memory conditions loosening up and some folks finally being able to break the 1100/1105 barrier, I am wondering if there have been some features or functions that have been disabled.
Quote:


> Originally Posted by *Yviena*
> 
> Has the protection on modified bios been cracked yet? was thinking maybe i can overvolt the HBM voltage to 1.4 as I'm watercooling my vega.


Nope. You would need the source code and build environment to get the checksum to validate. I think in a later post you indicated you had a handle on this though...








Quote:


> Originally Posted by *Mumak*
> 
> Here the latest HWiNFO64 build to fix reporting of HBM temperature with Crimson 17.11.2: http://www.hwinfo.com/beta/hw64_561_3287.zip


Cheers!


----------



## Soggysilicon

Quote:


> Originally Posted by *SpecChum*
> 
> I'm already happy with the performance to be fair, but are you all in the "it's just a Fury with higher clocks" camp or "there's more to come"?
> 
> I can't help thinking all those extra transistors must do something!


I "believe", but cannot prove, that Vega is a pilot platform using a monolithic die to test and optimize for Navi's scalable architecture, so that multiple smaller die packages can work across a fabric while sharing the same memory address bus, which is similar to Nvidia's roadmap. I think Vega is "special" in this regard, as I also believe it to be the last "big" chip Radeon is going to produce at the consumer level. I like to remain optimistic that the software is just lagging behind the hardware. The build of the physical device is very pleasing.

That software though...


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Yep, quiet worked


Sweet.

I got the undervolt active in states lower than DPM5/6/7.



DPM_UV.zip 21k .zip file


WattMan gives no access to this.

Last time I tried OverdriveNTool it did not work.

This was with PP mod. Gotta now suss the 800MHz HBM lock.

*Warning to VEGA FE owners GFX clocks section in Vega64SoftPowerTableEditor is not correct, only do PP mod reg edit by hand.*

*** edit ***

OK, I have not got the HBM 800MHz bug when the GPU enters higher DPM; just did an SP 4K run.

So the UV of DPM 1/2/3, i.e. the clock range from idle to 1138MHz, is working via PP mod.
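For anyone following along, the PP mod being described amounts to patching bytes in the binary PowerPlay blob before writing it back to the driver's registry value (commonly named `PP_PhyDefPowerPlayTable`). Below is a minimal sketch of the byte-patching half in Python; the table layout is entirely invented, since the real layout varies per VBIOS, so every offset and field here is a hypothetical stand-in:

```python
import struct

# Hypothetical layout: each DPM entry is 8 bytes, a u32 clock (10 kHz units)
# followed by a u16 VID (mV) and u16 padding. Real PowerPlay tables differ.
DPM_ENTRY = struct.Struct("<IHH")

def patch_dpm_vid(blob: bytes, table_off: int, dpm_index: int, new_vid: int) -> bytes:
    """Return a copy of the blob with one DPM entry's VID replaced."""
    off = table_off + dpm_index * DPM_ENTRY.size
    clk, _old_vid, pad = DPM_ENTRY.unpack_from(blob, off)
    out = bytearray(blob)
    DPM_ENTRY.pack_into(out, off, clk, new_vid, pad)
    return bytes(out)

# Fake 3-entry DPM table at offset 0, then "undervolt" the top state.
table = b"".join(DPM_ENTRY.pack(clk, vid, 0)
                 for clk, vid in [(85200, 800), (113800, 900), (164200, 1200)])
patched = patch_dpm_vid(table, 0, 2, 1100)  # top state: 1200 mV -> 1100 mV
```

Writing the patched blob back would go through the Windows registry under the display adapter's class key; that part is omitted since the key path is system-specific.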


----------



## SavantStrike

Quote:


> Originally Posted by *Yviena*
> 
> Trying to hit around 1190mhz on memory I'm stable upto 1150mhz atm
> I don't really understand why AMD did this.
> Vega would probably be even cheaper without this ARM security processor; they could at least have disabled the protection for the gaming RX Vega version but kept it enabled in the FE edition...


They did it because Microsoft demanded it. They want BIOS validation for secure boot. AMD certainly delivered.

Quote:


> Originally Posted by *Soggysilicon*
> 
> I "believe" but cannot prove that Vega is a pilot platform utilizing a monolithic die to test and optimize for Navi / Scaleable architecture so that multiple smaller die packages can work across a fabric while utilizing the same memory address bus which is similar to Nvidias road map. I think Vega is "special" in this regard as I also believe it to be the last "big" chip Radeon is going to produce at the consumer level. I like to remain optimistic that software is just lagging behind the hardware. The build of the physical device is very pleasing.


There are a few snippets about infinity fabric on Vega, though nothing concrete aside from marketing mumbo jumbo.

Infinity Fabric is AMD's Hail Mary pass to get back into the CPU space; it would make sense if it's their strategy with Navi: multiple small dies across a high-speed interconnect. Nvidia is looking at it and Intel is moving that way too. If AMD gets commercially available products and experience under their belt, then maybe the "glue" really paid off.


----------



## kundica

Yes!

https://www.phoronix.com/scan.php?page=news_item&px=AMDGPU-DC-Accepted


----------



## gupsterg

Quote:


> Originally Posted by *SavantStrike*
> 
> They did it because Microsoft demanded it. They want BIOS validation for secure boot. AMD certainly delivered.


This IMO was not it, TBH. They had already secured it for that; read this post.

I also linked a UEFI org PDF in Fiji bios mod and also comment 24 of this TPU article has it.

What AMD delivered, IMO, is locking out the GPU unlocks that were doable on Hawaii/Fiji. They have made the small user base of bios modders unable to have the card as we want; instead we're dabbling with the registry.

Each driver swap with a modded VBIOS needed zero setup. Now I have to add the reg mod. If the driver reset due to an app crash, you were still back at your profile (since the stable profile was set in VBIOS). Now with WattMan you have to re-set up the profile, as it disappears. Not to mention resizing WattMan is still borked; before, I never gave that app a second glance.

Things like timings mods, RAM MHz strap changes or anything which PowerPlay reg mod or OS OC apps can't touch is all out of the window.

On my Fury X I actually tightened the VRM throttling temps, so if cooling failed it would save the GPU. I set the OCP within the VRM controller to just the right level for what the max OC of the GPU needed. In both cases these were lower than factory. We never had access to the "Fuzzy Logic" fan profile setup in an OS app (even when WattMan came along), but you could set it with a VBIOS mod. There was so much that could be done.


----------



## Reikoji

Star Wars: The Old Republic is pretty... hostile towards these drivers. The screen will black out and the fan, temperature target/max, and memory settings will revert to defaults, but it leaves the power target alone. When trying fullscreen crossfire the screen blacks out more often. After a blackout, the loading splash screen is shown before returning to what I was doing a few seconds later.

Radeon settings will also crash and restart if the screen is open at the time.


----------



## SavantStrike

Quote:


> Originally Posted by *gupsterg*
> 
> This IMO was not it TBH. They already had secured it for that, read this post.
> 
> I also linked a UEFI org PDF in Fiji bios mod and also comment 24 of this TPU article has it.
> 
> What AMD delivered IMO is locking out GPU unlocks as was doable on Hawaii/Fiji. They have made us small user base of bios modders unable to have card as we want. Instead we're dabbling with registry.
> 
> Each driver swap with modded VBIOS I had to do zero setup. Now I have to add reg mod. If driver reset due to app crash (even if profile was stable set in VBIOS) you were back at your profile. Now with using WattMan you have to re-setup profile as it disappears. Not to mention resizing WattMan is still borked, before I never gave that app a 2nd glance.
> 
> Things like timings mods, RAM MHz strap changes or anything which PowerPlay reg mod or OS OC apps can't touch is all out of the window.
> 
> On my Fury X I actually tightened VRM throttling temps, so if cooling failed it would save GPU. I set OCP with in VRM controller to just right level of what max OC of GPU needed. In both cases it was lower than factory. We never had access to "Fuzzy Logic" fan profile setup in OS app (even when WattMan came to it) but you could with VBIOS mod. There was so much that could be done.


I miss bios mods so much. I've been on Pascal for 16 months now and I just got a Vega 64 that is crippled by a lack of custom BIOS options. All of the power play tables and nonsense make me frustrated, as none of that was necessary when we could still mod our own BIOSes.

I would love to mess with timings on the HBM2. I don't think that's happening any time soon.


----------



## AlphaC

Can anyone tell me how well the latest drivers do in Specviewperf 12?

VEGA is supposed to be a compute monster but these benchmarks don't seem to reflect that.

Radeon VEGA FE , CPU comparison http://quasarzone.co.kr/bbs/board.php?bo_table=qc_qsz&wr_id=97104&sca=CPU

https://techgage.com/article/a-look-at-amds-radeon-rx-vega-64-workstation-compute-performance/4/ <---- Intel Core i9-7900X used

http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-6.html <---- Intel Core i7-6900K @4.3 GHz

https://www.gamersnexus.net/guides/2977-vega-fe-vs-fury-x-at-same-clocks-ipc <---- Intel i7-7700K 4.5GHz locked

https://www.pcper.com/reviews/Graphics-Cards/Radeon-Vega-Frontier-Edition-16GB-Air-Cooled-Review/Professional-Testing-SPEC <---- i7-5960X

If I can get an aftermarket VEGA 56 , undervolt the daylights out of it and get decent performance relative to Quadro P2000 (75W TDP...) / Radeon Pro WX 7100 (150W TDP) I will do it.


----------



## Soggysilicon

Latest drivers are unusually crash prone playing Endless Legend... any one else having issues?


----------



## jbravo14

Anyone here playing Assassin's Creed Origins?

I'm getting some performance hiccups - in the open world my HBM is at 800MHz.

Whenever I go to the menu I get the max OC of 1020MHz, or when I activate Senu (the bird) it clocks up to 1020MHz.

Other games did not have this problem.


----------



## geriatricpollywog

Quote:


> Originally Posted by *AlphaC*
> 
> Can anyone tell me how well the latest drivers do in Specviewperf 12?
> 
> VEGA is supposed to be a compute monster but these benchmarks don't seem to reflect that.
> 
> Radeon VEGA FE , CPU comparison http://quasarzone.co.kr/bbs/board.php?bo_table=qc_qsz&wr_id=97104&sca=CPU
> 
> https://techgage.com/article/a-look-at-amds-radeon-rx-vega-64-workstation-compute-performance/4/ <---- Intel Core i9-7900X used
> 
> http://www.tomshardware.com/reviews/amd-radeon-vega-frontier-edition-16gb,5128-6.html <---- Intel Core i7-6900K @4.3 GHz
> 
> https://www.gamersnexus.net/guides/2977-vega-fe-vs-fury-x-at-same-clocks-ipc <---- Intel i7-7700K 4.5GHz locked
> 
> https://www.pcper.com/reviews/Graphics-Cards/Radeon-Vega-Frontier-Edition-16GB-Air-Cooled-Review/Professional-Testing-SPEC <---- i7-5960X
> 
> If I can get an aftermarket VEGA 56 , undervolt the daylights out of it and get decent performance relative to Quadro P2000 (75W TDP...) / Radeon Pro WX 7100 (150W TDP) I will do it.


Titan Xp really puts those 12B transistors to work. I wish my big block V64 and Fury X would do the same.


----------



## diabetes

Quote:


> Originally Posted by *jbravo14*
> 
> Anyone here playing Assassin's Creed Origins?
> 
> I'm getting some performance hiccups - in the open world my HBM is at 800MHz.
> 
> Whenever I go to the menu I get the max OC of 1020MHz, or when I activate Senu (the bird) it clocks up to 1020MHz.
> 
> Other games did not have this problem.


Check if your card can reliably stay in P7 when not in the menu. HBM clocks are tied to P-states.


----------



## wellkevi01

So I've had my Vega 64 Liquid for about two weeks now and I've found that 9 times out of 10, when I adjust something in WattMan/OverdriveNTool, the card begins to idle in P State 3. Anyone know of a fix?


----------



## By-Tor

I have had my card for a few days, waiting on my water block to come in before installing. I mounted the block and installed the card tonight and am very happy with its performance. I must have gotten a good seat on the block, as my temp has not broken 28°C after 2 hours of BF1 on Ultra at stock settings.

What would be the better OC tool for this card, WattMan or OverdriveNTool?


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> I have had my card for a few days waiting on my water block to come in before installing. I mounted the block and installed the card tonight and am very happy with the performance of the card and must have gotten a good seat on the block as my temp has not broken 28c after 2 hours of BF1 on Ultra at stock settings on the card..
> 
> What would be the better OC tool for this card Wattman or Overdriventool?


Gorgeous! +rep. Wattman has been more reliable than Overdriventool for me.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Gorgeous! +rep. Wattman has been more reliable than Overdriventool for me.


Thank you.

I set the core at 1700 and memory at 1100 using WattMan, and while running Superposition the card was pulling 320 watts and went up to 29°C max temp... WOW


----------



## gupsterg

Quote:


> Originally Posted by *SavantStrike*
> 
> I miss bios mods so much. I've been on Pascal for 16 months now and I just got a Vega 64 that is crippled by a lack of custom BIOS options. All of the power play tables and nonsense make me frustrated, as none of that was necessary when we could still mod our own BIOSes.
> 
> I would love to mess with timings on the HBM2. I don't think that's happening any time soon.


Yeah, a timings mod would have been nice to try on HBM2. I had 11 Fiji cards; most did 545MHz at stock MVDDC or ~+25mV. For 600MHz I had 1 sample that managed it, slightly above say "bench stable" but not really long-term stable, and it needed ~+100mV MVDDC. HBM2 does seem to be allowing greater headroom on OC'ing.

I have just completed a round of testing of PP mods. Some interesting results have occurred.

I believe I now have a fuller idea of what the HBM Voltage slider in WattMan is.

I have also made my full GPU/HBM clock OC work via PP mod.

I also believe I have gained a better understanding of how the card is picking DPM, etc. (some may already have known).

Only snag now is trying to post this without it being a mega post.
Quote:


> Originally Posted by *wellkevi01*
> 
> So I've had my Vega 64 Liquid for about two weeks now and I've found that 9 times out of 10, when I adjust something in WattMan/OverdriveNTool, the card begins to idle in P State 3. Anyone know of a fix?


I believe the GPU is not going into a higher DPM. What is happening is that as we OC in the OS, the DPM clks/mV shift "dynamically" IMO. I had seen this on Hawaii/Fiji.

Example:-

I do an AIDA64 registers dump and get the DPM VIDs/clks, I set an OC on Fiji using OC SW, then I do another dump; how far I OC'd determined how many of the lower DPMs were affected and by how much.

I have just done testing where I placed my full WattMan OC in the registry. It gives the same performance *but* idle mV stays the same. This gives the same effect as a VBIOS mod, in that the lower DPMs are not shifting in clk/mV.

I believe the GPU tach LEDs denote GPU DPM state and not loading, etc.
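The dump-and-compare method described here is essentially a table diff. A sketch with invented example values (a real comparison would use the DPM clocks/VIDs pulled from the AIDA64 register dumps, not these numbers):

```python
# Hypothetical DPM tables dumped before and after applying an OS overclock:
# tuples of (state, clock MHz, VID mV). All values invented for illustration.
stock = [(0, 852, 800), (1, 991, 850), (2, 1138, 900), (3, 1269, 950)]
oc    = [(0, 852, 800), (1, 1015, 875), (2, 1170, 925), (3, 1312, 975)]

def shifted_states(before, after):
    """Return the DPM states whose clock or VID moved under the OS OC."""
    return [(s, (c0, v0), (c1, v1))
            for (s, c0, v0), (_, c1, v1) in zip(before, after)
            if (c0, v0) != (c1, v1)]

# Here every state above idle shifted, mirroring the Hawaii/Fiji behaviour
# described above; a registry PP mod keeps the lower states pinned instead.
for state, was, now in shifted_states(stock, oc):
    print(f"DPM{state}: {was} -> {now}")
```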


----------



## Aenra

Quote:


> Originally Posted by *gupsterg*
> 
> Only snag now is trying to post this without it being a mega post.


No snag, please post the lot of it. You guys have... 'advanced' to levels I'm entirely unfamiliar with; any further info/breakdown would be very appreciated.


----------



## gupsterg

NP.

I think I shall create sections within the OP of VEGA bios thread to cover what I believe the testing/mods show.

Here is sample of final testing result.



In GPU-Z note the GPU default clock is 1642MHz and HBM 1100MHz. If the OC was applied via OS SW it would be 1630/945 (as below).



Also this confirms, as stated/experienced on past cards, that the PowerPlay in the registry takes precedence over the VBIOS at OS load. In the 1st image there is also a clue to what the HBM voltage slider is; yes, this is a bit of teasing, but it will be revealed soon.

Now some may say they changed the HBM voltage in the PowerPlay registry and it had no effect (I have not done that testing yet, so I cannot refute it). The simple reason that mod may not work is perhaps that it is a voltage which needs its value at GPU POST for the VRM chip to apply it, and as the reg mod does not apply until the OS loads, it has no effect. So when someone on, say, a V56 flashes to V64, they gain the higher HBM voltage because the value was there at GPU POST.


----------



## pmc25

Really hope someone finds a way to mess around with HBM2 timings. For those of us with blocks, I'm pretty sure the default lowest state can be taken quite a bit lower; under full gaming / bench load in winter the temperature is barely creeping north of 30°C.

I would assume the timings floor is set for what the card can run reliably and stably at 45-50°C on a light load with the [LC] BIOS. Obviously, if that's the case, it might be possible to extract even more performance.


----------



## Trender07

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I've seen Wattman show 800Mhz but after a few seconds it jumps up to the correct frequency. GPU-Z always shows the correct HBM frequency.
> 
> http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.2-Release-Notes.aspx


Well, I mean even the RivaTuner OSD shows 800MHz locked.


----------



## Rootax

I don't know if anyone noticed or if I'm late to the party, but 17.11.2 supports Vega FE right off the bat. It's still the pro control panel, but it's nice for people who are gaming on it.


----------



## dagget3450

Quote:


> Originally Posted by *Rootax*
> 
> I don't know if anyone noticed or If I'm late to the party, but the 17.11.2 supports Vega FE right off the bat. It's still the pro control panel, but it's nice for people who are gaming with it.


If it gives a crossfire option I'll try it out.


----------



## Razkin

Quote:


> Originally Posted by *gupsterg*
> 
> NP.
> 
> I think I shall create sections within the OP of VEGA bios thread to cover what I believe the testing/mods show.
> 
> Here is sample of final testing result.
> 
> 
> 
> In GPU-Z note the GPU default clock is 1642MHz and HBM 1100MHz. If the OC was applied via OS SW it would be 1630/945 (as below).
> 
> 
> 
> Also this confirms as before stated/experienced on past cards the PowerPlay in registry takes precedence over VBIOS at OS load. In the 1st image is also a clue what HBM voltage slider is; yes, this is a bit of teasing, but it will be revealed soon.
> 
> Now some may say I changed HBM voltage in PowerPlay registry and it had no affect (not done testing yet so can not refute). The simple reasons why that mod may not work is perhaps it's a voltage that may need value at GPU post for it to apply by VRM chip and as reg mod does not apply until OS it has no affect. So when someone on say a V56 flashes to V64 they gain the higher HBM voltage as value was there at GPU post.


Does lowering the P0 voltage effectively mean that it will lower the voltage across the entire table, and as a result give access to running the P7 state at a lower vcore than 0.875V?


----------



## 113802

The latest driver had Radeon Chill on by default. I was curious why my card ran at lower framerates with very low temperatures. It's kinda useful though, since the FPS was still playable and it didn't cause coil whine from unnecessary frames.


----------



## gupsterg

Quote:


> Originally Posted by *Razkin*
> 
> Does lowering the P0 voltage effectively mean that it will lower the voltage across the entire table, and as a result give access to running the P7 state at a lower vcore than 0.875V?


Not in testing so far done.

Think of the GPU as having ~2 methods of approaching clocks/voltages.

i) non ACG/AVFS, DPM 0 to 4.

ii) ACG/AVFS, DPM 5 to 7.

The driver/GPU is making an assessment what to choose. There were posts before where I provided Dead Space case testing.

Uncapped, even though it picked an ACG/AVFS mode, it can be using 1100MHz, which would be clocks similar to a non ACG/AVFS DPM. So it seems that under ACG/AVFS mode the MHz can go lower than a non ACG/AVFS DPM without flipping over to that mode. It may make the DPM 5 set clock irrelevant. Currently I believe the DPM 5 VID is the lowest voltage ACG/AVFS can use when under load. More testing needs to be done.

Capped, Dead Space picked a non ACG/AVFS mode. Again, I think it is not just the capping that decided the mode but also the type of app and its usage of the GPU.

Idle state seems to be something we can't touch at present. It seems like a dynamic ACG/AVFS state. Again, this is just an opinion.

As the OP has been amended in the VEGA bios thread, I would urge members to read there to see the testing/results/observations and the viewpoint formed.

I thought it before, and this new testing shows IMO that VEGA is by far more advanced than any past AMD GPU in how it operates for clocks/states/voltages, etc.


----------



## Razkin

I turned ACG off for states 6 and 7, but it did not have any effect. I also tried changing the usVddcLookupTableOffset by 10, and Windows would fail to load after that change.
It is possible to run the core in P7 at ~900MHz and 0.875V core, so it doesn't look like the DPM 5 VID is the lowest it can use. You need to have the HBM voltage slider on 900 to do so, which is the lowest value that has an effect; lowering it further does nothing.

One other thing I have noticed since I upgraded to 17.11.2 is that you can lower the core clock in P7 to 852MHz (P0 clocks, but with full mem speed) if you take small enough steps in OverdriveNTool. The card doesn't really like it, as it stays stuck at 852MHz even if you reset all the settings.


----------



## gupsterg

Changing usVddcLookupTableOffset would have made the PowerPlay corrupt.



All values with a red line are pointers (i.e. offset locations) to the beginnings of table sections; basically the directory of the PowerPlay. I have just added this warning in the OP of my thread, section *Testing of PowerPlay registry mods* > *Intro to PowerPlay registry mod*.
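To illustrate why that corrupts things: fields like usVddcLookupTableOffset are absolute offsets into the blob, so bumping the value without relocating the data sends the parser to bytes that are not the table. A toy demonstration in Python with an invented layout (the real PowerPlay header has many such pointer fields):

```python
import struct

def build_blob(table_off: int) -> bytes:
    # Toy "PowerPlay": a u16 header field holding the absolute offset of a
    # lookup table, filler bytes, then the table itself (two u16 voltages).
    blob = bytearray(table_off + 4)
    struct.pack_into("<H", blob, 0, table_off)           # pointer field
    struct.pack_into("<HH", blob, table_off, 900, 1200)  # the actual data
    return bytes(blob)

def read_voltages(blob: bytes):
    (off,) = struct.unpack_from("<H", blob, 0)           # follow the pointer
    return struct.unpack_from("<HH", blob, off)

good = build_blob(16)
assert read_voltages(good) == (900, 1200)

# "Editing" only the pointer, as if adding 10 to usVddcLookupTableOffset,
# leaves the data where it was; the parse now points past the end of the blob.
bad = bytearray(good)
struct.pack_into("<H", bad, 0, 16 + 10)
try:
    read_voltages(bytes(bad))
    broken = False
except struct.error:
    broken = True  # out-of-range read: a corrupt table, much like the failed boot
```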

Perhaps disable ACG for DPM 5/6/7 and not just 6/7.

What the HBM voltage slider in WattMan represents is the DPM 5 VID.

Remove any PP mod.

Edit only the DPM 5 VID in the PP reg, apply, and once rebooted change WattMan to "Custom" and act as if changing the profile; you will see the HBM slider is set to what DPM 5 is in the PP mod. You will also see the same in OverdriveNTool.


----------



## LeadbyFaith21

Hey guys, so I just got Vega and got an EK block on it. I've got a question though: what is the "GPU Hot Spot Temp" referring to?


----------



## laczarus

Quote:


> Originally Posted by *LeadbyFaith21*
> 
> what is the "GPU Hot Spot Temp" referring to?


There is no definite answer for that yet AFAIK.
But so far people have shared that the reported hotspot temp went down after repasting (it did for me too).
So it has to be somewhere on the chip itself, but where exactly is unknown. I'd say it's either the center or the junction between GPU and HBM2.
A temperature of ~20°C above the GPU temp seems to be the norm, but better results are possible. Also the bios tells us that the hotspot max temp is 105°C, so you should be fine if it doesn't get that hot.


----------



## Reikoji

Quote:


> Originally Posted by *LeadbyFaith21*
> 
> Hey guys, so I just got Vega and got an EK block on it. I've got a question though: what is the "GPU Hot Spot Temp" referring to?


I believe it refers to the area on the card where the temperature gets the hottest. They put a sensor at that point so it can be measured.



Something like this.


----------



## ChaosCloud

I'm trying to undervolt to keep my card's temperatures in check. The problem I'm having is that every time I try to undervolt the P6 and P7 states to 1500MHz @ 955mV, with the HBM voltage set to 955mV, it will still run at around 1050mV by default. After this happens it will get stuck at that setting even after I close out of the game, so within monitoring programs such as HWiNFO or GPU-Z it will still show 1050mV.

I've tried cleaning out all my drivers and re-installing the latest. I also tried going back to 17.11.1 to see if that would fix it.

From what I read if you want the voltage on the core to be lower you have to decrease the HBM voltage with it.

I just don't understand why it's not using the voltage that I set in the P6 and P7 states.

I do have a Vega 56 flashed to a 64, if that matters. My power limit is at +50%.


----------



## LeadbyFaith21

Quote:


> Originally Posted by *laczarus*
> 
> There is no definite answer for that yet afaik.
> But so far people have shared that the reported temp on the hotspot went down after repasting (for me too).
> So it has to be somewhere on the chip itself. But where exactly is unknown. I'd say its either the center or the T junction between GPU and HBM2.
> A temperature of ~20°C above the GPU temp seems to be the norm but better results are possible. Also the bios tells us that the hotspot max temp is 105°C, so you should be fine if it doesn't get that hot.


Quote:


> Originally Posted by *Reikoji*
> 
> I believe it refers to the area on the card where the temperature gets the hottest. They put a sensor at that point so it can be measured.
> 
> 
> 
> Something like this.


Okay, I was assuming it was some power delivery component on the back. Mine's getting to around 75°C with the core and HBM around 45°C, so that's probably the spot on the back from the image. Knowing the bios max temp is 105°C is good though! Thanks guys!


----------



## dagget3450

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Latest driver had Radeon Chill on by default. Was curious why my card ran at lower framerates with very low temperatures. It's kinda useful though since they were playable FPS and didn't cause coil whine due to unnecessary frames.


I just installed it and it wasn't enabled for mine. I am building out a rig with parts I had lying around; found a cheap mobo and just popped an RX Vega 64 in to test for fun. It seems like Vega doesn't suffer from CPU bottlenecks? Maybe?

Stock run on the Vega 64, HBCC on, no voltage or clocks adjusted. CPU is a Xeon 2683 v3 @ a whopping 2.4GHz/2GHz, DDR4 2133.
17.7 driver


17.11.2


timespy
https://www.3dmark.com/3dm/23396358?
gpu 7155 cpu 8155

I guess they're okay scores; it doesn't seem as limited by the CPU as I thought it would be.


----------



## barbz127

Hi all,

How do these look?

Vega 64 air - unmolded die - nth-1 with XXX application.
17.11.1 drivers
Stock HSF has been reapplied by me following a stint under water

GPU core sits at 1500 solid in Superposition and PUBG - HBM at 1030.



Where to next for more perf without sacrificing temps/performance? Does anything look unsafe/subpar?

Thank you


----------



## ITAngel

Question: why does my Vega 64 card black out sometimes during mid-60°C temps? It does it randomly during gaming and only lasts a few seconds. Also it only happens on the monitor on the HDMI connector, not the DisplayPort one. It's on air; I have not water cooled it yet. PSU is 850 watts, fans are set from low to 4000rpm max, temp range 55 low and 85 high in WattMan, power limit 50.


----------



## hyp36rmax

Finally installed my Vega64. I'm pretty impressed with the performance at 4K 60hz. I'll have better shots once I get the test bench finalized.


----------



## ITAngel

Quote:


> Originally Posted by *hyp36rmax*
> 
> Finally installed my Vega64. I'm pretty impressed with the performance at 4K 60hz. I'll have better shots once I get the test bench finalized.


Damn nice build man, pretty clean looking too. Is that an open bench case?


----------



## geriatricpollywog

Quote:


> Originally Posted by *hyp36rmax*
> 
> Finally installed my Vega64. I'm pretty impressed with the performance at 4K 60hz. I'll have better shots once I get the test bench finalized.


Nice +rep. What clock speeds are you seeing?

BTW that is way too clean to be a test bench.


----------



## hyp36rmax

Quote:


> Originally Posted by *ITAngel*
> 
> Damn nice build man, pretty clean looking too. Is that an open bench case?


Thanks! Yes, it's the PrimoChill Praxis Wetbench.

Quote:


> Originally Posted by *0451*
> 
> Nice +rep. What clock speeds are you seeing?
> 
> BTW that is way too clean to be a test bench.


Thanks! Currently running everything stock.

As far as the wetbench, I have QDC on both sides that connect to the CPU and GPU blocks so I can easily change out GPU setups.

Here's a shot of my FURY X's on the wetbench.



*Here are my stock 3DMark Results:*


----------



## ITAngel

Yea, I have to wait for two more fittings so I can water cool my Vega 64 and add one more rad. Here is what it currently looks like.


----------



## geriatricpollywog

Quote:


> Originally Posted by *hyp36rmax*
> 
> Thanks! Yes It's the Primo Chill Praxis Wetbench.
> 
> Thanks! Currently running everything stock.
> 
> As far as the wetbench, I have QDC on both sides that connect to the CPU and GPU blocks so I can easily change out GPU setups.
> 
> Here's a shot of my FURY X's on the wetbench.
> 
> 
> 
> 
> *Here are my stock 3DMark Results:*


With some OCing, you can see 21,000 in FS. I also moved from Fury X to Vega.


----------



## hyp36rmax

Quote:


> Originally Posted by *0451*
> 
> With some OCing, you can see 21,000 in FS. I also moved from Fury X to Vega.


Dang, that's a huge jump! Yea, I can't wait to see what a 1700X @ 4.0GHz and an OC'd Vega 64 can do. I see a lot of people undervolting and increasing the clocks. Is there a definitive post that describes some basic settings to achieve this?


----------



## hyp36rmax

Quote:


> Originally Posted by *ITAngel*
> 
> Yea I have to wait for two more fittings so I can water cooled my Vega64 and put one more rad. Here is what it currently looks like.


I just responded to your PM. But this is what I would do in your setup:


----------



## geriatricpollywog

Quote:


> Originally Posted by *hyp36rmax*
> 
> Dang that's a huge jump! Yea I can't wait to see what a 1700X @ 4.0ghz and Vega64 OC can do. I see a lot of people under volting and increasing the clocks. Is there a definite post that describes some basic settings to achieve this?


First download the liquid BIOS (Google it; any of the liquid BIOSes will work). Then download the 142% power limit / 400 amp PowerPlay table and install it. See post 253 http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/200_100#post_26297003

Download CRU (Custom Resolution Utility) and run restart64 to restart your GPU. Go into WattMan and set it to custom. Don't change the GPU core clock or voltage; leave P6/P7 at default. P7 should be at 1752/1200. Change the HBM clock to 1100MHz and leave the HBM voltage at 950. Move the power limit slider to 142%. Run restart64 again, then run Firestrike and you should see at least 20,000.


----------



## hyp36rmax

Quote:


> Originally Posted by *0451*
> 
> First download the liquid bios (google it. Any of the liquid bios will work). Then download the 142% power limit 400 amp powerplay table and install. See post 253 http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/200_100#post_26297003
> 
> Download CRU (custom resolution utility) and run restart64 to restart your GPU. Go into wattman and set to custom. Don't change the the gpu core or voltage. Leave P6/P7 at default. P7 should be at 1752/1200. Change the HBM core to 1100mhz. Leave HBM voltage at 950. Move the power limit slider to 142%. Run restart64 again. Run Firestrike and you should see at least 20,000.


Holy cow thanks!


----------



## bahamutzero

Quote:


> Originally Posted by *ChaosCloud*
> 
> I'm trying to undervolt to keep my cards temperatures in check. The problem I'm having is that every time I try and undervolt the P6 and P7 states to 1500MHz @ 955mV with the hbm voltage set to 955mV it will still run at around 1050mV by default. After this happens will get stuck at that setting even after I close out of the game. So within any monitoring programs such as HWInfo or GPU-Z it will still show at 1050mV.


I have the same problem; it also happens with the Vega 56 BIOS under certain circumstances. I've tried several drivers and VBIOS versions (incl. Vega 64) but I couldn't get it to downclock properly after load. If you come up with a solution please post here. Thanks!


----------



## Grummpy

Hello, here is my Vega 64 mod.
I'm afraid to flash the BIOS; I've never done it before.
And is there any advantage to a BIOS flash if you use MSI Afterburner?


----------



## Grummpy

:thumb: I had to redo the VRM cooling and got it down from 93°C to the mid 40s.


Parts...
https://www.amazon.co.uk/gp/product/B01A9VUFGS/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1



http://imgur.com/wcASAlO

















my testing above.


----------



## Trender07

Quote:


> Originally Posted by *0451*
> 
> First download the liquid bios (google it. Any of the liquid bios will work). Then download the 142% power limit 400 amp powerplay table and install. See post 253 http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/200_100#post_26297003
> 
> Download CRU (custom resolution utility) and run restart64 to restart your GPU. Go into wattman and set to custom. Don't change the the gpu core or voltage. Leave P6/P7 at default. P7 should be at 1752/1200. Change the HBM core to 1100mhz. Leave HBM voltage at 950. Move the power limit slider to 142%. Run restart64 again. Run Firestrike and you should see at least 20,000.


With 950 mV my HBM gets locked to 800 MHz







Maybe my Limited Edition has some weird BIOS causing this issue?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Trender07*
> 
> With 950 mV my HBM gets locked to 800 MHz
> 
> 
> 
> 
> 
> 
> 
> maybe my Limited Edition have some weird bios causing this issue?


Did you try the liquid bios?


----------



## wellkevi01

Quote:


> Originally Posted by *bahamutzero*
> 
> I have the same problem, it also happens with Vega 56 bios under certain circumstances. I've tried changing several drivers and vbios versions (incl. Vega 64) but I couldn't get it to downclock properly after load. If you come up with a solution please post here. Thanks!


I had this same issue with my 64 LC. Pretty much any time I'd change clocks/voltage in WattMan, the card would jump up to P-state 3 and idle there. I solved it by doing a clean install of the AMD driver without opening WattMan to accept its terms. I then just switched to using OverdriveNTool and Afterburner for OC'ing and such.


----------



## Redeemer

https://www.asus.com/us/Graphics-Cards/ROG-STRIX-RXVEGA56-O8G-GAMING/


----------



## Reikoji

Quote:


> Originally Posted by *ITAngel*
> 
> Question, why my Vega64 card black out sometimes during mid 60C temps? It does it randomly during gaming and it just last a few seconds. Also it only happen during the monitor with the HMDI connector not the L-Diplay Port. Is on air and I have not water cooled it yet. PSU is 850watts, fans are set from low to 4000rpm max, temp range 55 low and 85 high on wattman. Powerlimit 50


What do you have your HBM clock set to?

In my experience, high clocks + high HBM temperature would cause these blackouts in certain applications, FFXIV being the primary one for me. Other cases include random color circles flashing on the screen. With the air card, it's hard to keep the HBM at a temperature where it stays stable much above stock speeds. Even the LC Vega 64 can't get too high with the cheap radiator it has...

The blackouts for me do not occur at 1100MHz if I keep the HBM temperature below 52°C, with a powerful fan added onto the stock LC Vega's radiator.


----------



## gupsterg

Yesterday I thought the HBM voltage in WattMan was the DPM 5 VID, due to some testing done with the PP mod.

What is shown as HBM voltage in WattMan is really the VID of the DPM state associated with the MCLK.

For example, in the stock VBIOS/PowerPlay:

i) the VDDC lookup table has DPM 05 as 1100mV on V64.
ii) the MCLK table has a DPM 4 clock of, say, 945MHz on V64, with an ID of 05 associated with it. This associates it with a SOC/GPU DPM.

Now look at this image.



Which one has HBM set to 975mV, aka the GPU voltage floor limit?

The left is set to 975mV and the right to 1000mV, by PP mod, emulating the WattMan HBM mV setting. So how come each shows the GPU using less mV at times, even though the GPU clock stays high when the HBM clock has dropped?

I shall be updating the OP in the Vega BIOS mod thread tomorrow.

*HBM voltage in WattMan is nonsense.

HBM voltages in OverdriveNTool are nonsense as well.*

In the PowerPlay of the VBIOS there is only one HBM voltage: 1.25V on V56 and 1.35V on V64.

What is happening in OverdriveNTool for HBM mV is this:

Each HBM clock in PowerPlay has a ucVddInd value that links to a SOC index/clock. The SOC clock also has an index, showing an ID that matches a GPU DPM. So OverdriveNTool reads these IDs and shows HBM P0 & P1 as 800mV, as their ID matches GPU P0; HBM P2 picks up its mV from GPU P2, and HBM P3 from GPU P5.

WattMan does the same, but it only shows HBM P3, and as that has the ID of GPU P5 it shows that state's mV. This is why yesterday I thought it was GPU DPM 5.

Today's further testing and PowerPlay marking have shown what these apps are doing, and why I now know it is nonsense.

Here I changed HBM P3 to use an index of 7 in the PP reg mod, as highlighted in this post, and look what OverdriveNTool shows now.



Here is an SP 4K HTML file for the profile in the image above.

SP_4K_LVL1.3.zip 8k .zip file


So, as per the previous consensus, since I was using 1125mV for 1100MHz HBM, I should not have been seeing lower mV, but I do.

I fully believe the PP mod is the way forward.

WattMan and OverdriveNTool for HBM mV play with ID bits and are probably causing issues.

My best runs in SP 4K and 3DM are now on the PP mod. It is essentially the same profile as I used in WattMan, but it works better, as the benches show when I compare. With the PP mod I also have no freaky idle issues (see linked post).
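The index chaining described above can be sketched in code. This is purely an illustrative model of what the tools appear to be doing, not real driver or tool code, and all table values below are made-up examples:

```python
# Illustrative sketch (assumed values, not real driver code) of the index
# chaining: each HBM P-state's ucVddInd points at a SOC clock entry, whose
# ID in turn names a GPU DPM state, and tools like OverdriveNTool/WattMan
# display *that* GPU DPM's VID as the "HBM voltage".

GPU_DPM_VID_MV = {0: 800, 2: 950, 5: 1100}   # GPU DPM -> VID (example mV)

SOC_TABLE = {0: {"gpu_dpm_id": 0},           # SOC index -> linked GPU DPM
             2: {"gpu_dpm_id": 2},
             5: {"gpu_dpm_id": 5}}

HBM_PSTATES = {0: {"ucVddInd": 0},           # HBM P0/P1 -> SOC 0 -> GPU P0
               1: {"ucVddInd": 0},
               2: {"ucVddInd": 2},           # HBM P2 -> GPU P2
               3: {"ucVddInd": 5}}           # HBM P3 -> GPU P5

def displayed_hbm_mv(pstate: int) -> int:
    """What the tool shows as 'HBM voltage': a GPU DPM VID, not an HBM rail."""
    soc_index = HBM_PSTATES[pstate]["ucVddInd"]
    gpu_dpm = SOC_TABLE[soc_index]["gpu_dpm_id"]
    return GPU_DPM_VID_MV[gpu_dpm]

# The actual HBM rail voltage is a single fixed value in PowerPlay
# (1.25V on V56, 1.35V on V64) and never appears in this chain.
print([displayed_hbm_mv(p) for p in range(4)])  # -> [800, 800, 950, 1100]
```

This also matches why changing the displayed "HBM voltage" only shuffles IDs around rather than touching a real memory rail.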


----------



## ITAngel

Quote:


> Originally Posted by *Reikoji*
> 
> What do you have your HBM clock set to?
> 
> In my experience, high clocks + high HBM temperature would cause these blackouts in certain applications. FFXIV being the primary one for me. Other cases include random color circles to flash on the screen. With air card, its hard pressed to keep HBM at a temperature it can remain stable at too far higher than stock speeds. Even the LC vega 64 can't get too high with the cheap radiator it has...
> 
> The blackouts for me do not occur at 1100mhz if I keep HBM temperature below 52c, having a powerful fan added on to the stock LC Vega's radiator.


Set to stock 945MHz if I recall, and I only changed the fan profile to 300-4000 RPM max with a temp range of 55°C-75°C and power limit 50. Note this happens even without a power limit set, and with a power limit of 15.


----------



## Soggysilicon

Quote:


> Originally Posted by *ITAngel*
> 
> Question, why my Vega64 card black out sometimes during mid 60C temps? It does it randomly during gaming and it just last a few seconds. Also it only happen during the monitor with the HMDI connector not the L-Diplay Port. Is on air and I have not water cooled it yet. PSU is 850watts, fans are set from low to 4000rpm max, temp range 55 low and 85 high on wattman. Powerlimit 50


I do not believe this to be temp related (I have been in the 20s °C and seen it no different than when the loop is warmed up), as I have had the same issues on my custom-looped setup. I have found the issue occurs more with the latest drivers; it had all but vanished with the Windows 10 FC patch in combination with the driver from maybe three releases ago, but it came right back once the latest drivers came out, which have the resize fail bug.

It's driver / Windows / video display (software) related. That has been the case with DP; over HDMI I have not experienced the issue. I also believe it is closely related to FreeSync / monitor features and how Windows is tasked with handling devices.

If it happens, I shut down the application, force-restart the Radeon Settings exe, re-tweak, and run again. If I am looking at WattMan and it has the HBM at 800MHz, there's a good chance the driver has broken.

Also be aware that under Windows 10 you will need to manually verify you have the correct monitor drivers installed... Windows is horrible about reverting to the generic PnP driver with new Radeon driver updates.
Quote:


> Originally Posted by *Grummpy*
> 
> Hello here is my vega 64 mod.
> im afraid to flash bios never done it before.
> and is there any advantage in doing the bios flash if you use msi after burner ?


Yes, going from the air to the LC BIOS you gain more wattage overhead, which isn't something AB is going to do for you. You can get the same result with PowerPlay tables. If you don't want to deal with the registry, then flash to the LC BIOS and proceed in WattMan.

Vega has 2 BIOS configurations, one of which is factory locked, so there is little concern you would brick your card; just toggle the DIP switch to the first BIOS, boot up, and reflash the failed second in the unlikely event of an error (also back up your factory BIOS with the tool).

I run the LC bios on my reference air card.

Be aware the extra power will require more consideration when cooling. There is some indication that the LC cards are better binned, so be prepared to up the voltage to maintain the clocks.


----------



## Trender07

Quote:


> Originally Posted by *0451*
> 
> Did you try the liquid bios?


Nope, as I'm currently using the air cooler, so it would be overkill.


----------



## astrixx

Quote:


> Originally Posted by *Soggysilicon*
> 
> Vega has 2 bios configurations, one is factory locked so there is little concern you would brick your card; just toggle the dip switch to the first bios and boot up and reflash the failed second in the unlikely event of an error (also back up your factory with the tool).
> 
> .


Matt at AMD told me both GPU BIOSes are unlocked, so be careful; one uses less power.


----------



## gupsterg

Quote:


> Originally Posted by *astrixx*
> 
> matt at AMD told me both GPU BIOSES are unlocked so be careful, one uses less power.


I would concur, some seem protected and others not.


----------



## cplifj

GODDA........ AMD.

driver 17.11.2

The clocks on my Vega stay stuck at their highest clock after shutting down games or anything else that uses the gfx card.

DJEZUS, how much fun is it when working stuff gets killed by patches and updates....

AMATEURS.

I never even overclocked the thing yet; right now it's stuck at a 1788MHz core clock with HBM at 945MHz. I have to reboot to get back the basic idle clock.

GPU load just stays at 100% no matter what I just shut down.


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> GODDA........ AMD.
> 
> driver 17.11.2
> 
> The clocks on my vega stay stuck on their highest clock after shutting down games or anything that uses the gfx card..
> 
> DJEZUS, how much fun is it when working stuff gets killed by patches and updates....
> 
> AMATEURS.
> 
> I never even overclocked the thing yet, right now , stuck on 1788MHz core clock and hbm on 945MHz. i have to reboot to get back the basic idle clock.
> 
> GPU LOAD just stays at 100% whatever i just shut down.


If it's stuck in the P7 state, then the driver's crashed. If you are running it at default you may want to try an undervolt, as it could be overboosting and crashing the driver. There's quite a lot of literature in this thread so far on how to undervolt successfully; however, I'd aim to match your P6 and P7 voltages to your HBM voltage as a first port of call (normally I make P7 20mV higher to avoid old bugs).
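As a rough sketch of that rule of thumb, here is the suggested starting point in code (the 950 mV input is only an example, not a recommended or guaranteed-stable value for any particular card):

```python
# Sketch of the undervolt starting point described above: match P6 to the
# HBM voltage and set P7 slightly higher to sidestep old driver bugs.
def undervolt_starting_point(hbm_mv: int) -> dict:
    return {
        "P6_mv": hbm_mv,        # match P6 to the HBM voltage
        "P7_mv": hbm_mv + 20,   # P7 ~20 mV higher, per the note above
    }

print(undervolt_starting_point(950))  # -> {'P6_mv': 950, 'P7_mv': 970}
```

From there you would still stability-test and adjust per card, since silicon quality varies.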


----------



## cplifj

Well, as far as I am concerned this is an AMD driver issue, since I'm running stock out of the box.

It also only happens with the newest driver (a reset in WattMan does nothing either).

The card is required to run flawlessly at stock,

And it did, like most games, but a week and a patch or driver later it no longer works. Talk about orchestrated trolling of the masses who just paid good money for it.....

and some wonder why wars get started .....


----------



## Soggysilicon

Quote:


> Originally Posted by *astrixx*
> 
> matt at AMD told me both GPU BIOSES are unlocked so be careful, one uses less power.


Quote:


> Originally Posted by *gupsterg*
> 
> I would concur, some seem protected and others not.


Hey guys, thanks for pointing this out. I was not aware that some cards have BIOS 1 unlocked. I highly suspect this is a variance/deviation in process at the test-to-ship stage. As a general rule of thumb it's not that difficult to unlock a flash (setting a 00 to FF at the address MSB), but yeah... one should be locked before it ships to reduce product returns.

Then again, this version of AMD Radeon seems confused about its audience; one step forward, two steps back. The software quality control seems very dodgy.
Quote:


> Originally Posted by *cplifj*
> 
> well, as far as i am concerned this is a amd driver issue since i'm running stock out of the box.
> 
> It also just happens with the newest driver only. (a reset in wattman does nothing either)
> 
> The card is required to run flawless for stock application,
> 
> And it did, like most games, but a week and a patch or driver later it no longer works. talk about orchestrated trolling of the masses who just payed good money for it.....
> 
> and some wonder why wars get started .....


The drivers since launch have been mediocre to poor... right now I could only suggest these latest drivers if you are benefiting from the loosened HBM restrictions, as these drivers allow some who were unable to get 1000+ HBM to reach 1100+ and beyond. I suppose I suffer fewer straight-up reboots as well... but the drivers crash just as much as they ever have... and the resize bug is amateur hour.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *cplifj*
> 
> well, as far as i am concerned this is a amd driver issue since i'm running stock out of the box.
> 
> It also just happens with the newest driver only. (a reset in wattman does nothing either)
> 
> The card is required to run flawless for stock application,
> 
> And it did, like most games, but a week and a patch or driver later it no longer works. talk about orchestrated trolling of the masses who just payed good money for it.....
> 
> and some wonder why wars get started .....


Most of the Vega 64 LC owners (myself included) are having the same issue; the air-cooled variants will thermal throttle before clocks get high enough to cause stability problems. The LC is definitely not the card to get if you don't like to mess with the stock clock speeds. I would agree that the drivers seem to have issues with the way the advanced clock generator works on the LC cards; the clocks will run away and then the driver crashes. Unfortunately that is the result when things are launched before they are ready for primetime.


----------



## ducegt

No issues here with 64 LC using stock settings. Messing with clocks and voltages does cause lots of problems though.

Regarding the 142% PL and 400 TDC mod... I don't think it does anything. With the stock 50% PL I can almost push 400W. Those extra mods didn't do anything, and I did try increasing clocks a little along with them.


----------



## jbravo14

Are there any brackets available for modding the Vega 56 for an AIO?

After updating to the latest drivers, reset64.exe does not work anymore.


----------



## SavantStrike

Quote:


> Originally Posted by *Grummpy*
> 
> :thumb:I had to redo the vrm cooling got it down from 93c to mid 40s.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> Parts...
> https://www.amazon.co.uk/gp/product/B01A9VUFGS/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> 
> 
> 
> http://imgur.com/wcASAlO
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> my testing above.


So are those held on with epoxy? How did you manage to reapply them?

Either way my hat is off to your warranty voiding skills and dedication.


----------



## MapRef41N93W

Hey guys, is there any reason not to buy a Frontier Edition version of Vega? Newegg just put it on sale for $699. I bought an RX 64 on eBay for $500 yesterday, but considering the Frontier has double the memory and pro driver support, is there a particular reason not to buy it? I will be using it about 50/50 for gaming/workstation applications. Are the gaming drivers for it broken or something? The card will be going under water, so the cooler is irrelevant.


----------



## orlfman

Just received my Vega 56 today to replace my RX 580.

I noticed my card has Samsung HBM2. Is there any trend regarding overclocks on the HBM2 with Samsung vs Hynix?


----------



## surfinchina

The FE BIOS is locked, but that's the case with all of them, right?
edit: I mean it requires a signed BIOS, so no modding...

Otherwise, I have the FE and it's nice. I put an EK block on it.


----------



## Naeem




----------



## pengs

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> Most of the Vega 64 LC owners (myself included) are having the same issue, the air cooled variants will thermal throttle before clocks get high enough to cause stability problems. The LC is definitely not the card to get if you don't like to mess with the stock clock speeds. I would agree that the drivers seem to have issues with the way the advanced clock generator works on the LC cards, the clocks will runaway and then the driver crashes. Unfortunately that is the result when things are launched before they are ready for primetime.


I've actually had fewer issues with the 17.11.x drivers than previous ones using the 64 LC. So far, anyhow.

The way I view the LC is that I'm way better off fine-tuning HBM settings and dialing in a power limit than trying to eke out core clocks or tighten voltage right away.

I've been slowly creeping my power limit past 20% and easing up the memory; these drivers do seem to allow me to sit solidly at 1050MHz, whereas previously I was not completely stable with stock voltage. I'm going to find the limit and then raise the voltage floor to see if it helps. Overclocking the HBM is where I've seen the largest and most immediate boost.

I may attempt to undervolt by whatever it allows me at the end, but when I first received the card I couldn't drop the voltage beyond -20mV without a driver crash. Granted, AMD has more control over the power states driver-side than before (and it shows as the drivers mature), but I'm just not expecting much if any undervolting on the liquid version.

Firestrike
PL+25
HBM 1050
GPU score is 25320 (seems decent)


Spoiler: Warning: Spoiler!






Quote:


> Originally Posted by *cplifj*
> 
> well, as far as i am concerned this is a amd driver issue since i'm running stock out of the box.
> 
> It also just happens with the newest driver only. (a reset in wattman does nothing either)
> 
> The card is required to run flawless for stock application,
> 
> And it did, like most games, but a week and a patch or driver later it no longer works. talk about orchestrated trolling of the masses who just payed good money for it.....
> 
> and some wonder why wars get started .....


Yeah, I've noticed that for whatever reason the drivers seem to degrade over the course of a week. The Radeon Settings application starts to become a bit unstable also. A clean driver install usually fixes it.


----------



## Grummpy

This is my power usage with a locked 60 fps in Alien: Isolation, maxed out at 1920x1080.
I just found it interesting that I can play while staying below 60 watts with a Vega 64.


----------



## Grummpy

I dunno, I'm just sharing my results with anyone who wants to look at them.





Would be interesting to compare my modded water-cooled temps and clocks to the real-deal water-cooled card.
Mine was an air-cooled card with a water cooler fitted.



http://imgur.com/HOwpmoq


----------



## By-Tor

My Vega 64 has been running since Friday and I've played a little with OCing in WattMan, but I'm getting nowhere near the scores some are getting in Superposition. It scores in the mid-4500 range in 1080p Extreme. It does what I need it to do in games at stock settings, but I like playing with some OCing and benching.

Is there something I need to unlock or set to allow for higher freq. and voltage adjustment?

Or will it take BIOS flashing to unlock the options I'm looking for?

Thank you

This was with WattMan set to turbo..


----------



## cplifj

Does anyone happen to be running Corsair's Link software and also get clocks stuck at max after gaming or compute loads?

It doesn't seem to happen when Corsair's Link software isn't running; it also monitors temps and fan speeds of everything in the system.

Running more monitoring tools just seems to make them conflict with each other, and then some.


----------



## SpecChum

Quote:


> Originally Posted by *Grummpy*
> 
> This is my power usage with locked 60 fps with alien isolation maxed out 1920x1080.
> i just found it interesting i can play staying below 60 watts with vega 64


God damnit, that part at 1:10 made me jump!

Great game though.


----------



## Grummpy

here is my score.


----------



## Grummpy

The stock cooler surface is a press-tool finish; it isn't ground or lapped.
For a £470 card I expect this simple process to be done without question, and I'm disappointed it wasn't.
You have to void the 2-year warranty to fix their poor workmanship......
Buy yourself a diamond plate for £7, get it flat, add some Thermal Grizzly thermal paste, and your temps will drop.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Grummpy*
> 
> i duno im just sharing my results with anyone who wants to look at them.
> 
> 
> 
> 
> 
> would be interesting to compare my moded water cooled temps and clocks to the real deal water cooled card.
> mine was a air cooled with a water cooler fittted.
> 
> 
> 
> http://imgur.com/HOwpmoq






Looks like a good setup to me. What is that software you are using to record and monitor? Is that ReLive? Never bothered with it; might have to give it a shot. Also, what are your WattMan settings when you run Superposition?


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> Stock cooler surface is a press tool finish it isn't ground or lapped.
> For a 470 pound card i expect this simple process to be done without question and im disappointed it wasn't.
> You have to void the 2 years to fix their poor workmanship......
> buy yourself a diamond plate for 7 pounds and get it flat add some thermal grizzly thermal paste and your temps will drop.


No offense, but do you also complain when Intel uses thermal paste on their $1000+ CPUs? God knows their IHS is far from flat either, and let's not forget the 99% of heatsink and cooler manufacturers whose surfaces are also convex BY DESIGN. A smooth, flat surface helps, BUT it's not necessary; most of the components that create the heat are either dead center or to the side. Why do you think running bare-die drops temperatures even more on CPUs? A convex block against a bare die is the best scenario. If the block were completely flat it wouldn't work as well.


----------



## Grummpy

The Vega 64 cooler is on the right; the AMD 290 is on the left.

Considering the 290 has a denser fin stack and a longer cooler for a lower TDP, it makes little sense why they made the Vega 64 cooler smaller.
It does have 2 extra fins, but that doesn't make up for the reduced length.
I just don't understand their reasoning for doing this. It's like they enjoy loud cards.


----------



## SpecChum

https://www.overclockers.co.uk/powercolor-radeon-rx-vega-64-devil-8gb-hbm2-pci-express-graphics-card-gx-190-pc.html

That's... a lot.

I'm not really sure what price I was expecting, but that's more than I thought it would be.

EDIT: Saying that, I'm comparing it with Black Friday prices; compared to the £520 price of a normal 64 it doesn't seem that bad.


----------






## Grummpy

Quote:


> Originally Posted by *VicsPC*
> 
> No offense but do you also complain when Intel uses thermal paste on their 1000$+ CPUs as well? God knows their IHS is far from flat either, let's not forget 99% of heatsink and cooling manufacturers that are also convex BY DESIGN. A smooth flat surface helps BUT it's not necessary, most of the components that create the heat are either dead center or to the side. Why do you think running bare die drops temperatures even more on CPUs? Convex block against bare die best scenario. If the block was completely flat wouldn't work as well.


Yes, I do. I think it's a disgrace that Intel uses toothpaste where Ryzen solders.
My Intel CPU was concave and had the same problem.
Two wrongs don't make a right.
It seems they all do this.


----------






## Grummpy

Sorry, the page wasn't loading; I reposted thinking it wasn't sent.

How do I delete posts?


----------



## 113802

Quote:


> Originally Posted by *Grummpy*
> 
> i duno im just sharing my results with anyone who wants to look at them.
> 
> 
> 
> 
> 
> would be interesting to compare my moded water cooled temps and clocks to the real deal water cooled card.
> mine was a air cooled with a water cooler fittted.
> 
> 
> 
> http://imgur.com/HOwpmoq


What overlay are you using? I'll compare it with mine when I get home.


----------



## Grummpy

That's just MSI Afterburner.


----------



## wellkevi01

Quote:


> Originally Posted by *SpecChum*
> 
> https://www.overclockers.co.uk/powercolor-radeon-rx-vega-64-devil-8gb-hbm2-pci-express-graphics-card-gx-190-pc.html
> 
> That's...a lot.
> 
> I'm not really sure what price I was expecting really, but that's more than I thought it would be.
> 
> EDIT: Saying that, I'm comparing it with Black Friday prices, when compared to the £520 price of a normal 64 it doesn't seem that bad


That'd be like ~$600 USD, right?


----------



## 113802

Quote:


> Originally Posted by *Grummpy*
> 
> thats just msi after burner


Awesome, I'll install RivaTuner when I get back home. I usually uninstall it from the MSI Afterburner installer.


----------



## Grummpy

I found this interesting.
1 frame per watt.
Low settings, 2560x1440.
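For reference, the arithmetic behind a frames-per-watt figure is simply fps divided by board power. The numbers below are just illustrative, matching the locked-60fps / sub-60W example from the earlier post:

```python
# Simple efficiency arithmetic behind the "1 frame per watt" observation.
def frames_per_watt(fps: float, watts: float) -> float:
    return fps / watts

print(frames_per_watt(60.0, 60.0))  # -> 1.0
```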


----------



## Reikoji

Quote:


> Originally Posted by *Naeem*


Why Quake Champions, though?
...


----------



## TrixX

Oh, that is tempting. Would love to directly compare FE vs RX.


----------



## fato22

Hello. Question about the Vega 64 LC (Sapphire). I just swapped my Vega 64 air for an LC. I watched videos online and it was clocking about 1750 in turbo mode (I believe the video I watched was showing Fallout 4). I just installed it and raised the power limit to 50% in custom. I clock ~1690 at full load in The Witcher 3.

If I set turbo mode without touching anything, it clocks ~1660.

Is there something I am missing?


----------



## orlfman

Anyone know the max safe hotspot temperature? I tried Googling but found conflicting information, from 90°C to 115°C. I noticed my air-cooled Vega at stock settings, BIOS, and profile hitting up to 86°C, though the average is around 80°C.


----------






## Grummpy

Just ignore hotspot.
I've done tests, and it loses its temp instantly when you close the stress test;
it's just nonsense.
You can get any result out of resistance on thermal diodes.
That's what I think anyway. Anything with instant temp drops tells me it was just an estimate.


----------



## wellkevi01

Quote:


> Originally Posted by *fato22*
> 
> Hello. QUestion about vega 64 LC (Sapphire). I was just able to get swap my vega 64 air with a LC. I watched videos online and it was clocking about 1750 in turbo mode (I believe the video I watched was showing fallout 4). I just installed it and I just raised the power limit to 50% in custom. I clock ~1690 full load in the witcher 3.
> 
> If I put turbo mode without touching anything it clocks ~1660.
> 
> Is there something I am missing?


I think 1750 MHz is the max clockspeed achievable under perfect conditions. I found that my LC, at +50% power limit, only hits close to 1750 right off the bat. After only a few seconds of 100% load, the clockspeed begins to drop, and it settles at around 1680 MHz.
Quote:


> Originally Posted by *orlfman*
> 
> anyone know what's the max, safe hotspot temperature? i tried googling but find conflicting information. from 90c to 115c. i noticed on my air cooled vega at stock settings, bios, and profile hitting up to 86c. though average is around 80c.


Earlier in this thread, people were saying that the BIOS lists 105°C as the max safe temp for the Hot Spot. I found that when I undervolt my LC by 62 mV (sadly, the lowest I can go), my Hot Spot temps drop from ~90°C down to ~82°C.


----------



## fato22

Quote:


> Originally Posted by *wellkevi01*
> 
> I think 1750 MHz is the max clockspeed achievable under perfect conditions. I found that my LC, at +50% Power Limit, only hits close to 1750 right off that bat. After only a few seconds of 100% load, the clockspeed begins to drop and it settles at around 1680 MHz.
> .


So ~1690 in The Witcher 3 with only +50% power limit is about right, I guess? Thanks.

I tried to raise the frequency 2%, but it crashed right away at stock voltage. I guess there is really no room for OC?


----------



## Soggysilicon

Quote:


> Originally Posted by *cplifj*
> 
> does anyone happen to be running corsair's link software and also happen to get clocks stuck at max after gaming or compute loads ?
> 
> it doesn't seem to happen when corsair's link software isn't running, this also monitors temps and fanspeeds of everything in the system.
> 
> more monitoring just seems to conflict with each other and then some.


I would highly recommend limiting the number of monitoring programs running, especially ones polling the GPU, as it will affect the device's overall performance and stability. It can't be helped when you're troubleshooting or tuning, but in general my humble suggestion is to avoid it. As you suggested, the different programs pull from the same resources to poll the device's state.
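The shared-resource point can be sketched in code. This is a hypothetical illustration (the `read_gpu_sensors` stub stands in for whatever driver query a real tool makes, and the numbers are made up); the idea is that one poller caching a snapshot avoids several tools hammering the same management interface on independent schedules:

```python
import threading
import time

def read_gpu_sensors():
    # Stand-in for a real driver query. Each real poll briefly occupies
    # the GPU's management interface, which is why several monitoring
    # tools polling independently can conflict.
    return {"edge_temp_c": 45, "hotspot_c": 58, "core_mhz": 1680}

class SensorHub:
    """One poller, many readers: consumers share a cached snapshot
    instead of each hitting the hardware on its own schedule."""

    def __init__(self):
        self._snapshot = {}
        self._lock = threading.Lock()

    def poll_once(self):
        data = read_gpu_sensors()  # the only hardware access
        with self._lock:
            self._snapshot = dict(data, ts=time.time())

    def latest(self):
        with self._lock:
            return dict(self._snapshot)

hub = SensorHub()
hub.poll_once()
# An OSD and a logger would both read this cache rather than re-polling:
print(hub.latest()["core_mhz"])  # prints 1680
```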
Quote:


> Originally Posted by *orlfman*
> 
> anyone know what's the max, safe hotspot temperature? i tried googling but find conflicting information. from 90c to 115c. i noticed on my air cooled vega at stock settings, bios, and profile hitting up to 86c. though average is around 80c.


I have read and heard many numbers tossed around on this topic, and to my knowledge it has never been pinned down by anyone credible on the AMD/Radeon technical side. I have called it a nothing-burger, and I believe it to be a current-based, table-referenced diode near the bottom of the die, on top of the interposer base (between the memory and the proc/SoC). It scales beat for beat with memory loading in all the testing I have done. I have seen no appreciable correlation between it and performance or stability. The caveat to that statement is that I run about a 10-15°C delta from it to my HBM temps under load, so I maybe see 45°C after an hour or two of straight gaming or benching. This is on a non-molded, first-gen reference card.
Quote:


> Originally Posted by *Grummpy*
> 
> Just ignore hotspot.
> ive done tests and it looses its temp instantly when you close stress test
> its just nonsense .
> you can apply any result from resistance on thermal diodes.
> thats i i think anyway, anything that has instant temp drops tellls me it was just a estimate


It can be a precise reading rather than an estimate, but most of these setups are designed to be more accurate within a certain range; it depends on the granularity of the lookup table. I do agree that it's load dependent; in that sense, it's perhaps an estimate of the thermal delta or gradient across the entire package.

All this reminds me of my old Phenom II: I could OC it to kingdom come, but if the temp ever read above 50°C, a crash was imminent. Mind you, at stock, 70°C+ was a non-issue. AMD chips like it cold for sure; I'm not sure if that relates to the hot-spot talk, though.


----------



## geriatricpollywog

Quote:


> Originally Posted by *fato22*
> 
> So ~1690 in the witcher 3 with only +50 power limit is about right I guess? Thanks
> 
> I tried to raise the frequency 2% but it crashed right away with stock voltage. I guess there is really no room for OC>?


Did you swap in the factory liquid all-in-one, or do you have a custom waterblock? My core clock starts at 1750 and settles at 1725 in The Witcher 3 on custom water. With the fans and pumps turned up, it settles at 1740.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Did you swap for the factory liquid all-in-one or do you have a custom waterblock? My core temp starts at 1750 and settles at 1725 in Witcher 3 on custom water. With fans and pumps turned up, it settles at 1740.


That's about right; 1680 MHz during something like Superposition is a worst-case load for the GPU, and that's where it drops to when set to 1750 MHz, running around 1150 mV in my case.


----------



## Naeem

Quote:


> Originally Posted by *fato22*
> 
> Hello. QUestion about vega 64 LC (Sapphire). I was just able to get swap my vega 64 air with a LC. I watched videos online and it was clocking about 1750 in turbo mode (I believe the video I watched was showing fallout 4). I just installed it and I just raised the power limit to 50% in custom. I clock ~1690 full load in the witcher 3.
> 
> If I put turbo mode without touching anything it clocks ~1660.
> 
> Is there something I am missing?


It will only hit 1750 MHz in compute loads, and might even reach 1800 MHz in some apps. In games it's mostly around 1700-1730 MHz; run a game like BF1 and it will go 1720+. It depends on the game.


----------



## hyp36rmax

So this happened today...



Yeaaa... a Seasonic Prime 1000 Watt Platinum PSU is *not* enough. I have a spare Seasonic 1200 Watt XP3 Platinum connected to provide additional power to the second GPU for now, while waiting on an EVGA 1600 Watt T2, hopefully arriving tomorrow.

A little more context: a single Vega 64 runs great with a 1000 Watt PSU, but as soon as I added the second, I got an instant reboot whenever I would start an app or game that put a load on the system.

This takes up as much power as my i7 5820K with SLI EVGA GTX 1080Ti FTW3's











Spoiler: Warning: Spoiler!


----------



## fato22

Quote:


> Originally Posted by *0451*
> 
> Did you swap for the factory liquid all-in-one or do you have a custom waterblock? My core temp starts at 1750 and settles at 1725 in Witcher 3 on custom water. With fans and pumps turned up, it settles at 1740.


I just have the regular liquid all-in-one. The Witcher 3 goes from a 1690 average up to 1710 in certain areas; in other places it may dip to 1660, though. Again, I didn't touch anything, just raised the power limit.


----------



## mirosso

Hey guys. I just wanted to let you know that you can install the latest update, 17.11.2, on the Vega FE. FINALLY.
I just don't understand why AMD isn't communicating such big (at least to me) news.


----------



## Kyozon

Quote:


> Originally Posted by *mirosso*
> 
> Hey guys. I just wanted to let you know that you can install the latest update 17.11.2 on the Vega FE. FINALLY
> I just don't understand why AMD is not communicating such huge (at least for me) infos.


I saw your post on Reddit, but how can I install this driver? It's not working for me.


----------



## 113802

Quote:


> Originally Posted by *hyp36rmax*
> 
> So this happened today...
> 
> is *not * enough..... I have a spare Seasonic 1200 Watt XP3 Platinum connected to provide additional power to the second GPU for now waiting on an EVGA 1600 Watt T2 incoming hopefully tomorrow.
> 
> A little more context. A single VEGA 64 runs great with a 1000 Watt PSU, as soon as I added the second i got an instant reboot when ever I would start an app or game that put a load on the system.
> 
> This takes up as much power as my i7 5820K with SLI EVGA GTX 1080Ti FTW3's
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!


When overclocking my RX Vega 64 + 6700K, my Seasonic 850-X would shut down. I bought an EVGA Supernova 1200 W P2 and haven't had an issue since.


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> So this happened today...
> 
> 
> 
> 
> Yeaaa... a Seasonic Prime 1000 Watt Platinum PSU is *not * enough..... I have a spare Seasonic 1200 Watt XP3 Platinum connected to provide additional power to the second GPU for now waiting on an EVGA 1600 Watt T2 incoming hopefully tomorrow.
> 
> A little more context. A single VEGA 64 runs great with a 1000 Watt PSU, as soon as I added the second i got an instant reboot when ever I would start an app or game that put a load on the system.
> 
> This takes up as much power as my i7 5820K with SLI EVGA GTX 1080Ti FTW3's
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!


Yes, they eat power for breakfast. I've seen right around or over 1200 watts on CF Vega FEs. I'm using a Lepa GS 1600 watt, lol. I was going to try 3-way CF with a Vega 64 and an FE, but chances are they won't work... In the name of science... I do wonder how much power they would need.


----------



## mirosso

Quote:


> Originally Posted by *Kyozon*
> 
> I saw your post on Reddit there, but how can i Install this Driver? Is not working for me.


I was on 17.Q4 and 17.9.1, and I honestly just downloaded 17.11.2 off the AMD website, installed it, and rebooted. Everything is fine on my side. What didn't work?


----------



## cmogle4

Quote:


> Originally Posted by *Kyozon*
> 
> I saw your post on Reddit there, but how can i Install this Driver? Is not working for me.


1. Download the 17.Q4 driver and install it using the custom install, choosing "Yes" when asked whether to install multiple drivers.
2. Switch to the gaming driver.
3. Download the latest RX Vega driver, 17.11.2.
4. Install 17.11.2.
5. Reboot.

The above steps worked for me.


----------



## cplifj

So funny: after asking about the Corsair Link software, today I boot into Windows and the first thing that greets me is an error report from Corsair Link crashing and failing. It never happened before, and it didn't receive any updates in the meantime. But posting anything on this forum is a joy for all the hackers reading along, doing social engineering to get all the info they need for trolling others.

Coincidence? There are no coincidences: sad mo at it again.


----------



## dagget3450

Quote:


> Originally Posted by *cplifj*
> 
> So funny, after asking about corsair link software, i today boot into windows and the first thing that greets me is an ERROR REPORT from corsair link software crashing and failing (never happened before , also did not receive any updates in the meantime, but posting anything on this forum is a joy for all the ****hackers reading along doing social engineering to get all the info they need for trolling others.
> 
> COINCIDENCES ????? There are no coincidences: sad mo at it again.


I remember a while back I had the load/clock-stuck issue on Vega with my Ryzen build. I moved my Vegas over to my Intel rig due to the better slot arrangement on X99. Can't say I've seen it since, but I haven't been on the machine lately (holiday).


----------



## hyp36rmax

Quote:


> Originally Posted by *WannaBeOCer*
> 
> When overclocking my RX Vega 64 + 6700k my Seasonic 850-X shuts down. I bought an EVGA Supernova 1200w p2 and haven't had an issue since.


Figures it was a power-load issue. I ordered an EVGA Supernova 1600 Watt T2; I probably won't have an issue after this. I haven't had much luck with the Seasonic Prime series lately.

Quote:


> Originally Posted by *dagget3450*
> 
> Yes they eat power for breafast. Ive seen over or right around 1200 watts on cf vega fe. Im using a lepa gs1600 watt lol. I was going to try 3way cf with a vega 64 and fe but chances are they wont work... In the name of science... I do wonder how much power they would need


Ha! Yeah, I didn't think I would have to resort to a 1600+ Watt PSU again after my main system required one too. Let's do it! In the name of science!!! Hahaha.


----------



## SavantStrike

Is there a dump of the Red Devil BIOS yet? I keep hoping we'll get lucky and it will be compatible and offer higher HBM voltage.

Does the FE have more OC headroom on the HBM with its additional voltage?


----------



## Grummpy

Wow, amazing value for £600.
https://www.overclockers.co.uk/gigabyte-radeon-rx-vega-64-xtx-8gb-hbm2-pci-express-liquid-cooled-graphics-card-aqua-pack-gx-19k-gi.html


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> When overclocking my RX Vega 64 + 6700k my Seasonic 850-X shuts down. I bought an EVGA Supernova 1200w p2 and haven't had an issue since.


Wow, that's weird. My RX Vega 64 Liquid at +50% power limit, with a 5 GHz 7700K, pulls at most 650-700 W from the wall (and that's an extreme example).


----------



## fursko

I returned my RX Vega 64 LC, lol. The silicon lottery was good, but the coil whine and radiator fan were bad. I will buy it again; I'm just waiting for a more reasonable price. My GPU was the Gigabyte brand. I was getting around 27k in Firestrike and around 5300-5400 in the Superposition 1080p Extreme benchmark. Can you guys share your Vega brand and experience so far? Which one should I get? XFX, Sapphire, Gigabyte, MSI... Sapphire is cheaper than the others. I heard some LC Vegas can't maintain stock clocks. That's bad...

I could actually buy a 1080 Ti: better price/perf/watt. But I have a FreeSync 2 monitor. Dunno what to do. I hope I can find a good deal on a Vega 64 LC, and if they enable the disabled Vega features, it could close the gap to the 1080 Ti.

I want to buy Vega, but AMD isn't helping the consumer; this price/perf/watt doesn't make sense. What do you guys think? (Blower V56 and V64 are out of the question.)


----------



## kundica

Quote:


> Originally Posted by *fursko*
> 
> I returned my rx vega 64 lc lol. Silicon lottery was good but coil whine and radiator fan was bad. I will buy it again. Just waiting for more reasonable price. My gpu was gigabyte brand. I was getting around 27k firestrike and around 5300-5400 superposition 1080p extreme benchmark. Can you guys share your vega brand and experience so far ? Which one should i get ? Xfx, Sapphire, Gigabyte, Msi... Sapphire cheaper than others. I heard some lc vegas cant maintain stock clocks. Thats bad...
> 
> I can buy 1080 ti actually. Better price/perf/watt. But i have freesync 2 monitor. Dunno what to do. Hope i can find good deal for vega 64 lc. And if they enable disabled vega features it can close gap between 1080 ti.
> 
> I want buy vega but amd doesnt help consumer. This price/perf/watt doesnt make sense. What you guys think ? (blower v56 and v64 out of question)


Chances are you'll have a similar experience with all the Vega LC cards except you won't find another that clocks as well as the one you just returned. People often blame coil whine on the card itself but in my experience it's a combination of things. I had really bad GPU coil whine once but after replacing my Corsair AX series PSU with a Seasonic it went away.


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> I returned my rx vega 64 lc lol. Silicon lottery was good but coil whine and radiator fan was bad. I will buy it again. Just waiting for more reasonable price. My gpu was gigabyte brand. I was getting around 27k firestrike and around 5300-5400 superposition 1080p extreme benchmark. Can you guys share your vega brand and experience so far ? Which one should i get ? Xfx, Sapphire, Gigabyte, Msi... Sapphire cheaper than others. I heard some lc vegas cant maintain stock clocks. Thats bad...
> 
> I can buy 1080 ti actually. Better price/perf/watt. But i have freesync 2 monitor. Dunno what to do. Hope i can find good deal for vega 64 lc. And if they enable disabled vega features it can close gap between 1080 ti.
> 
> I want buy vega but amd doesnt help consumer. This price/perf/watt doesnt make sense. What you guys think ? (blower v56 and v64 out of question)


What were your sustained clocks during gaming? I was able to hit 27,300 graphics score with 1180mhz HBM, but my stable game settings give me a score of 26,700. Sustained clocks are 1725/1110 on custom water.


----------



## Aenra

Quote:


> Originally Posted by *fursko*
> 
> I can buy 1080 ti actually


You'd lose both your monitor's FreeSync (money down the drain) and Radeon's Enhanced Sync feature (which does wonders where available).

Even if money weren't an issue and you went and got yourself a G-Sync monitor; for what? Ask around and see how G-Sync fares with really powerful GTXs: badly. Still.

Obviously, this would all be alleviated if you had a 2K-or-higher setup. But unless you do? The 1080 Ti is *up* to 30% faster. Up to (read: marketing). At 1080p, it's about 10-15% faster* than a 1080. Give me one title where that 10% gain is a must.

So, with all that in mind? No offense, but I fail to see the point, considering what you'd be giving up for it.

* Here:


----------



## pengs

Quote:


> Originally Posted by *0451*
> 
> What were your sustained clocks during gaming? I was able to hit 27,300 graphics score with 1180mhz HBM, but my stable game settings give me a score of 26,700. Sustained clocks are 1725/1110 on custom water.


Are you able to do 1180 with stock HBM voltage?
What other settings do you need for that?
Quote:


> Originally Posted by *fursko*
> 
> I returned my rx vega 64 lc lol. Silicon lottery was good but coil whine and radiator fan was bad. I will buy it again.


I have coil whine, but it's coming from the PSU, and it's only really terrible when running a benchmark or something I haven't put a frame-rate cap on. With FreeSync you kind of need to lock it to just under the refresh rate; alternatively, use Vsync or Enhanced Sync.
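As a quick sketch of that rule of thumb (the margin of a few FPS is a common community recommendation for staying inside the variable-refresh window, not an AMD-published figure):

```python
def freesync_fps_cap(refresh_hz, margin=2):
    # Cap a few FPS under the panel's refresh so frame times stay
    # inside the variable-refresh window instead of hitting Vsync.
    return refresh_hz - margin

print(freesync_fps_cap(144))            # 142, e.g. for a 144 Hz CHG70
print(freesync_fps_cap(60, margin=3))   # 57
```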


----------



## Miiksu

Quote:


> Originally Posted by *TrixX*
> 
> That's about right, the 1680MHz during something like Superposition is a worst case scenario for the GPU at load and that's the MHz it drops to when set to 1750MHz and for me running around 1150mv.


What temps are you getting with that Aqua Computer block?


----------



## geriatricpollywog

Quote:


> Originally Posted by *pengs*
> 
> Are you able to do 1180 with stock HBM voltage?
> What are your other settings needed for that?
> I have coil whine but it's coming from the PSU and it's only really terrible when running a benchmark or something that I've not put a frame rate cap on. With Freesync you kinda need to lock it to just under the refresh rate, alternatively Vsync or Enhanced sync


Stock HBM voltage, liquid BIOS, 150% power limit, 500 A current limit.


----------



## ducegt

Quote:


> Originally Posted by *0451*
> 
> Stock HBM voltage, liquid bios, 150% power limit, 500 amp limit.


Does Afterburner or GPU-Z ever show it going beyond 400 W?


----------



## fursko

Quote:


> Originally Posted by *kundica*
> 
> Chances are you'll have a similar experience with all the Vega LC cards except you won't find another that clocks as well as the one you just returned. People often blame coil whine on the card itself but in my experience it's a combination of things. I had really bad GPU coil whine once but after replacing my Corsair AX series PSU with a Seasonic it went away.


I can't change my PSU. It's a brand-new 1000 W Corsair, and it's really good; it works fanless for hours.
Quote:


> Originally Posted by *0451*
> 
> What were your sustained clocks during gaming? I was able to hit 27,300 graphics score with 1180mhz HBM, but my stable game settings give me a score of 26,700. Sustained clocks are 1725/1110 on custom water.


Depends. Core clock is around 1690 for Shadow of War and around 1730 for CoD WW2. HBM is at 1180 MHz for Shadow of War and benchmarks, but 1090 MHz for Overwatch; OW is really sensitive. I didn't get 27,000+, but I didn't try much, actually. My score was about 26,800, I guess.


----------



## geriatricpollywog

Quote:


> Originally Posted by *ducegt*
> 
> Does afterburner or GPU Z show it ever going beyond 400w?


The most I've ever seen is 600 W at the wall under heavy GPU and CPU load. The GPU accounts for 400 W, and the rest of the system (CPU/fans/pumps/mobo) is 200 W. During normal gaming, GPU-Z shows 290-300 W and I see 500-550 W at the wall. I could probably get similar results with a lower power limit and amperage; I just don't want power to be a limiting factor.
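The wall-vs-component arithmetic here can be sanity-checked: a wall meter reads the DC load plus the PSU's conversion losses, so the component sum should come in a bit under the wall figure. A rough sketch (the 92% efficiency figure is an assumption for an 80 Plus Platinum unit at this load level, not a measured value):

```python
def wall_draw_watts(dc_load_w, efficiency):
    # Wall reading = DC delivered to components / PSU efficiency.
    return dc_load_w / efficiency

dc_load = 400 + 200  # ~400 W GPU + ~200 W rest of system, per the post
print(round(wall_draw_watts(dc_load, 0.92)))  # ~652 W at the wall
```

So a full 600 W of DC load would actually read closer to 650 W at the wall; a 600 W wall reading implies the components are drawing a bit less than 400 + 200 combined.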


----------



## fursko

Quote:


> Originally Posted by *Aenra*
> 
> You'd lose both your monitor's Freesync (money down the drain) and Radeon's Enhanced Sync feature (which does wonders where available).
> Even if money wasn't an issue and you went and got yourself a GSync monitor; for what? Ask around, see how GSync fares with really powerful GTXs.. badly. Still.
> Obviously, this would all be alleviated if you had some 2K or higher setup. But unless you do? The 1080Ti is *up* to 30% faster. Up to (read: marketing). At 1080p, it's about 10-15% faster* than a 1080. Give me one title where that 10% gain is a must
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So all that in mind? No offense, but i fail to see the point, consdering what you'd be giving up for it.
> 
> * Here:


My monitor is a 1440p 144 Hz FreeSync 2 CHG70. Nvidia has Fast Sync, which is the same thing as Enhanced Sync. I'm sure I will not buy a G-Sync monitor, lol; my monitor is amazing. I want Vega, but the prices are bad; the 1080 Ti is superior at almost the same price.
Quote:


> Originally Posted by *pengs*
> 
> Are you able to do 1180 with stock HBM voltage?
> What are your other settings needed for that?
> I have coil whine but it's coming from the PSU and it's only really terrible when running a benchmark or something that I've not put a frame rate cap on. With Freesync you kinda need to lock it to just under the refresh rate, alternatively Vsync or Enhanced sync


High-FPS coil whine is normal, but my card whines even at 50 FPS. And yes, my card can run at even 1190 MHz (only for benchmarks); 1200 MHz crashes, lol ^^ Stock is 950 mV.


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> The most I've ever seen is 600w at the wall under heavy GPU and CPU load. The GPU accounts for 400w and the rest of the system (CPU/fans/pumps/mobo) is 200w. During normal gaming, GPU-Z shows 290-300w and I see 500-550 at the wall. I can probably get similar results with a lower power limit and amperage. I just don't want power to be a limitimg factor.


V64 LC at +50% power limit: GPU-Z says 350-390 W, and the PSU says around 660 W total with a 5 GHz 7700K (gaming load).


----------



## Sunsoar

Quote:


> Originally Posted by *fursko*
> 
> I returned my rx vega 64 lc lol. Silicon lottery was good but coil whine and radiator fan was bad. I will buy it again. Just waiting for more reasonable price. My gpu was gigabyte brand. I was getting around 27k firestrike and around 5300-5400 superposition 1080p extreme benchmark. Can you guys share your vega brand and experience so far ? Which one should i get ? Xfx, Sapphire, Gigabyte, Msi... Sapphire cheaper than others. I heard some lc vegas cant maintain stock clocks. Thats bad...
> 
> I can buy 1080 ti actually. Better price/perf/watt. But i have freesync 2 monitor. Dunno what to do. Hope i can find good deal for vega 64 lc. And if they enable disabled vega features it can close gap between 1080 ti.
> 
> I want buy vega but amd doesnt help consumer. This price/perf/watt doesnt make sense. What you guys think ? (blower v56 and v64 out of question)


I thought my Gigabyte LC had coil whine, but it's actually my PSU. I ordered a 1000 W Prime Ultra to alleviate this. Actually, I just ordered a whole new computer build.


----------



## 113802

Quote:


> Originally Posted by *Sunsoar*
> 
> I thought my Gigabyte LC had coil whine but it is actually my PSU. Ordered a 1000w Prime Ultra to alleviate this. Actually just ordered a whole new computer build.


It's not your PSU; it's your video card. When video cards render useless frames, they scream; every single GPU does it when rendering more frames than it should. Just run Radeon Chill and relax.

I am also using a Gigabyte RX Vega 64 XTX(LC)

My EVGA GTX 780, GTX 780 Ti KingPin Edition, EVGA GTX 980, EVGA GTX 980 Ti, and EVGA GTX 1070 FTW, along with my XFX R9 390X and RX Vega 64 LC, all scream at high FPS.





Quote:


> Originally Posted by *fursko*
> 
> I returned my rx vega 64 lc lol. Silicon lottery was good but coil whine and radiator fan was bad. I will buy it again. Just waiting for more reasonable price. My gpu was gigabyte brand. I was getting around 27k firestrike and around 5300-5400 superposition 1080p extreme benchmark. Can you guys share your vega brand and experience so far ? Which one should i get ? Xfx, Sapphire, Gigabyte, Msi... Sapphire cheaper than others. I heard some lc vegas cant maintain stock clocks. Thats bad...
> 
> I can buy 1080 ti actually. Better price/perf/watt. But i have freesync 2 monitor. Dunno what to do. Hope i can find good deal for vega 64 lc. And if they enable disabled vega features it can close gap between 1080 ti.
> 
> I want buy vega but amd doesnt help consumer. This price/perf/watt doesnt make sense. What you guys think ? (blower v56 and v64 out of question)


All the current cards are reference AMD designs, so pick a brand with a 3-year warranty: MSI or Gigabyte. There's no reason to buy Sapphire, XFX, or PowerColor when they only offer a 2-year warranty.

I would suggest waiting for the results of the Sapphire RX Vega 64 custom card before purchasing a new card.


----------



## Chaoz

Mine whines like crazy as well.

My previous GPUs never had coil whine. I've had two Sapphire HD 5970s, an Asus R9 390, and an EVGA GTX 1070 SC.

But now my Vega 64 does; too bad. I can't be bothered with it, though: with FreeSync it goes completely away, so it doesn't bother me.


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> V64 LC + %50 power limit gpuz says 350-390W, psu says around 660W total with 5ghz 7700k (gaming load)


That's a lot. I also have a 5 GHz 7700K at 1.35 V. I'm using a brand-new Kill A Watt P3 power meter and a 9-month-old EVGA Supernova 1200 P2 power supply.


----------



## orlfman

So, not sure if it's a coincidence or actually related, but I just had a black screen + freeze + reboot after I loaded up GPU-Z while playing PUBG, to check out temps with my Vega 56. Everything had been running great since I installed the card yesterday. And it was quick: the moment GPU-Z started to launch, after I gave Windows UAC permission to load it, it froze.

I actually thought it was going to TDR; with my 580 I had similar problems, where loading up a YouTube video while playing a game, or loading up GPU-Z, would cause a TDR. But this didn't TDR; it just rebooted. Event Viewer doesn't show anything useful: just a random warning about an unscheduled shutdown, followed by a critical-level shutdown. It doesn't list a BSOD or a TDR.

The reboot wasn't instant, though. I heard the fan on my 56 spin up and down a few times for 5-7 seconds before the reboot. Do you think it could have been GPU-Z that caused it, or something else? Has anyone else had a similar event?


----------



## Sunsoar

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It's not your PSU it is your video card. When video cards render useless frames they scream. Every single GPU rendering more frames than it should be. Just run Radeon Chill and relax.
> 
> I am also using a Gigabyte RX Vega 64 XTX(LC)
> 
> My EVGA GTX 780, GTX 780 Ti KingPin edition, EVGA GTX 980, EVGA GTX 980 Ti, and EVGA GTX 1070 FTW along with my XFX R9 390x and RX Vega 64 LC all scream with high FPS.
> 
> 
> 
> 
> 
> All the current video cards are reference AMD cards. Pick a brand that has a 3 year warranty. MSI or Gigabyte. No reason to buy a Sapphire, XFX, or PowerColor when they only offer a 2 year warranty.
> 
> I would suggest waiting for the results of the Sapphire RX Vega 64 custom card before purchasing a new card.


Sorry man, it's my PSU the noise is coming from; nothing from the card. With my ear next to the PSU, that's where the noise is.


----------



## TrixX

Quote:


> Originally Posted by *Miiksu*
> 
> What temps u are gettin with that Aqua Computer block?


I haven't done a full temp-equalisation test, but after a few hours of gaming it doesn't usually break 40°C, even with something like PUBG, which is notorious for its poor optimisation.

I've got a race in a week and a half in iRacing, so I'm avoiding messing with the PC much until then, but after that I'll be doing a full tear-down and repaste (I want to try Conductonaut instead of Hydronaut).

I also have some Arctic Ceramique (a relatively cheap paste) that I want to test, to see if there's any real gain from Hydronaut, with its horrendous application method, over the bulk cheap pastes.









Oh, and I'll be putting the retention bracket back on the back of the card. I'm not 100% sure there's enough pressure with just the block screws.

On the coil-whine topic: just set the 300 FPS limit in Radeon Settings. It stops the only situations where I've had high-pitched coil whine; I've never heard it since, as those situations were menus running at 2000-4000 FPS.









The low buzz I get under full load from my VRMs is annoying, though...


----------



## By-Tor

I was having a huge issue with black screens and lockups, and it was pissing me off to the point that I was thinking about putting my 290X back in my loop...

What I found: when I was running the 290Xs in CrossFire, I used one cable from the PSU to power both the 8- and 6-pin connectors on each card, and it worked fine for a couple of years. When I installed my Vega 64, I just swapped the 6-pin cable for another 8-pin cable, thinking that would be fine. Well, it was locking up badly tonight, so I thought about that single cable off the PSU's PCI-E output and decided to put each plug on its own output from the PSU. Now it's running fine at 1745 core and 1050 memory, and after 2 hours of BF1 the temps only got as high as 29°C (the house is at 21°C).

Loving this card....


----------



## hyp36rmax

Quote:


> Originally Posted by *By-Tor*
> 
> I was having a huge issue with black screens and lockups and it was pissing me off to the point I was thinking about putting my 290X back in my loop...
> 
> What I found was that when I was running the 290X in x-fire I used one cable out from the PSU to power both the 8 & 6 pin connectors on each card and it worked fine for a couple of years. Now when I installed my Vega 64 and just switched the 6 pin cable for another 8 pin cable thinking that would be fine. Well It was locking up bad tonight so I thought about that single cable from the PSUs PCI-E connector and decided to put each plug one its own out from the PSU and now it is running fine at 1745 core and 1050 memory and after 2 hours of BF1 the temps only got as high as 29c (the house is 21c).
> 
> Loving this card....


Yeah, high-power cards require a dedicated 6+2-pin or 8-pin cable from the PSU to each connector.

*Seasonic*



*EVGA*


----------



## GroupB

Wondering if anyone has had this problem: in HWiNFO64, my HBM temperature is now 0 all the time (it's not working anymore). I switched to 17.11.2 from the blockchain driver, and then it happened. I tried several HWiNFO versions, portable and installed, reset the sensor order, and tried multiple things, and it just refuses to work. I don't feel like switching drivers back to test it... anyone on 17.11.2 have the same problem?

BTW, GPU-Z still reports it; it's really just HWiNFO64, which is a pity because I like that program and the option to show the value in the OSD.

If it's only me, I guess I'll do without the sensor for now; the max was 45°C, so it's not like I really need to monitor it.


----------



## jearly410

Quote:


> Originally Posted by *GroupB*
> 
> Wondering if anyone had this problem... in HWinfo64 my HBM temperature is now 0 all the time ( not working anymore). I switch to 17.11.2 from blockchain driver then it happen. I try many hwinfo portable version and installed , I reset the order , try multiple thing and It just refuse to work. I dont feel like switching back driver to test that... anyone with 17.11.2 have the same problem ?
> 
> BTW gpu-z still report it, really just hwinfo64 and its a pity cause I like that program and having the option to OSD the value.
> 
> If its only me well I guess Ill do without the sensor for now, max was 45 C so its not like I really need to monitor it.


I've got the same problem


----------



## orlfman

Quote:


> Originally Posted by *By-Tor*
> 
> I was having a huge issue with black screens and lockups and it was pissing me off to the point I was thinking about putting my 290X back in my loop...
> 
> What I found was that when I was running the 290X in x-fire I used one cable out from the PSU to power both the 8 & 6 pin connectors on each card and it worked fine for a couple of years. Now when I installed my Vega 64 and just switched the 6 pin cable for another 8 pin cable thinking that would be fine. Well It was locking up bad tonight so I thought about that single cable from the PSUs PCI-E connector and decided to put each plug one its own out from the PSU and now it is running fine at 1745 core and 1050 memory and after 2 hours of BF1 the temps only got as high as 29c (the house is 21c).
> 
> Loving this card....


Oh yeah, I bet. Mine's hooked up with two independent 8-pin cables, not a single cable with two heads, so I don't think mine was related to that. I did google it and saw some people complain about their Vega systems locking up from GPU-Z too, along with 3DMark apparently. I hope it's GPU-Z and not the card.

https://community.amd.com/thread/220453


----------



## Leons

Quote:


> Originally Posted by *GroupB*
> 
> Wondering if anyone had this problem... in HWinfo64 my HBM temperature is now 0 all the time ( not working anymore). I switch to 17.11.2 from blockchain driver then it happen. I try many hwinfo portable version and installed , I reset the order , try multiple thing and It just refuse to work. I dont feel like switching back driver to test that... anyone with 17.11.2 have the same problem ?
> 
> BTW gpu-z still report it, really just hwinfo64 and its a pity cause I like that program and having the option to OSD the value.
> 
> If its only me well I guess Ill do without the sensor for now, max was 45 C so its not like I really need to monitor it.


Quote:


> Originally Posted by *jearly410*
> 
> I've got the same problem


The problem has been fixed in beta version 5.61-3287.


----------



## gupsterg

Quote:


> Originally Posted by *SavantStrike*
> 
> Is there a dump of the red devil BIOS yet? I keep hoping we'll get lucky and it will be compatible and offer higher HBM voltage.
> 
> Does the FE have more oc headroom on the HBM with it's additional voltage?


I doubt the Red Devil will have additional HBM voltage, as it uses stock HBM clocks. What will be interesting is what its PowerPlay clocks look like. In the past, comparing a standard vs an OC ROM gave me insight into the percentage increase they applied across the states up to the highest clock.

The FE uses 1.35V, the same as the V64.


----------



## SpecChum

PSA: Setting Enhanced Sync and in-game V-sync doesn't end well...


----------



## fursko

Quote:


> Originally Posted by *orlfman*
> 
> so not sure if its coincidence or actually related, but i just had a black screen + freeze + reboot after i loaded up gpu-z while playing pubg at the same time to check out temps with my vega 56. everything was running great since i installed the card yesterday before this incident. and it was quick. like the moment gpu-z started to launch after i gave windows uac permission to load it, it froze.
> 
> i actually thought it was going to tdr as with my 580 i had similar problems, like loading up a youtube video while playing a game simultaneously, or loading up gpu-z would cause a tdr. but this didn't tdr, just rebooted. event viewer doesn't show anything useful. just a random warning un-schedule shutdown followed by a critical level shutdown. doesn't list a bsod or a tdr.
> 
> the reboot wasn't instant though. i heard the fan on my 56 spin up and down a few times for like 5-7 seconds before the reboot. do you think it could have been gpu-z that caused it or something else? anyone else have a similar event?


Same here. GPU-Z freezes if I open it while playing a game. Launch it before the game instead.


----------



## Reikoji

Newegg must be regretting putting those VEGA FE's on sale right about now.


----------



## PontiacGTX

Do you think 750W would be fine for a Vega and an SB CPU?


----------



## SavantStrike

Quote:


> Originally Posted by *Reikoji*
> 
> Newegg must be regretting putting those VEGA FE's on sale right about now.


What would make you say that?

They obviously want to move these units for dinner reason.


----------



## SpecChum

Quote:


> Originally Posted by *SavantStrike*
> 
> What would make you say that?
> 
> They obviously want to move these units for dinner reason.


I'm hungry too.


----------



## The EX1

Quote:


> Originally Posted by *Grummpy*
> 
> vega 64 cooler is on the right amd 290 is on the left.
> 
> 
> Considering you have a more dense fin and longer cooler on the 290 with a lower tdp it makes little sense why they made the vega 64 cooler smaller.
> It does have 2 extra fins but that dont make up for the less length.
> I just dont understand their reasoning why they would do this, Its like they enjoy loud cards


Vega's cooler has a vapor chamber.


----------



## wellkevi01

Quote:


> Originally Posted by *PontiacGTX*
> 
> Do you think that 750w would be fine for vega and a SB cpu?


If it's a quality brand PSU it should be fine. I've got no issues, and my setup is a 5820K OC'd to 4.1GHz and a Vega 64 Liquid powered by an EVGA Supernova 750 G2. I think it's pushing it though, because my UPS shows the PC pulling ~650 watts under load. Granted, I do have my speakers and monitors hooked to it as well.


----------



## Reikoji

Quote:


> Originally Posted by *SavantStrike*
> 
> What would make you say that?
> 
> They obviously want to move these units for dinner reason.


Oh, just saying, since they sold out quickly after being dropped to what they like to charge for LC RX Vegas. That's a potential $700 lost on each one. Though... they were never worth $1500 to begin with :3

If only RX Vegas could get the same price treatment..


----------



## SavantStrike

Quote:


> Originally Posted by *Reikoji*
> 
> oh just saying since they sold out quickly after being dropped to what they like to charge for LC RX vegas. Lost potential $700 on each one. Tho.. they were never worth $1500 to begin with :3
> 
> If only RX vegas could get the same price handling..


The $465 Vega 64 air units last week were pretty sweet. They sold out in around 2-3 days too.


----------



## Trender07

lol, I'm using an EVGA G3 650W


----------



## BeetleatWar1977

Has anyone read this: http://www.planet3dnow.de/cms/35030-amds-vega-der-neue-star-am-krypto-mining-himmel/

If you don't understand German, go to http://monerobenchmarks.info/ and sort by hashes.


----------



## madmanmarz

Anyone else having issues with destiny 2 and some other games where you have to alt tab out and in because the screen freezes?

Has this issue been addressed through drivers or are the cards faulty? Obviously we've discussed the fix which involves locking in p7 state but I could only imagine if this was happening to every Vega owner. Maybe I should RMA.


----------



## ducegt

Vega's PowerTune does not take advantage of low temperatures, at least on the 64 LC.

I temporarily moved my PC to my balcony, where it's -3C. The Balanced and +50PL profiles showed no gains over my normal 22-24C ambient temps. I maxed out all the fans in my case while doing this and saw a max GPU temp of 33C. The @#$%ing AMD driver crashed when trying to OC the P7 clock or voltage.


----------



## SpecChum

Quote:


> Originally Posted by *madmanmarz*
> 
> Anyone else having issues with destiny 2 and some other games where you have to alt tab out and in because the screen freezes?
> 
> Has this issue been addressed through drivers or are the cards faulty? Obviously we've discussed the fix which involves locking in p7 state but I could only imagine if this was happening to every Vega owner. Maybe I should RMA.


I've noticed some funky things happening whilst Afterburner is running; the API test in 3DMark often crashes the entire PC with LED Code 8 while it's running, but it's fine without it.

I've got a feeling Afterburner is responsible for some other weirdness I've been having too.

Also, and this might just be me, but don't enable in-game V-sync when Enhanced Sync is enabled. This completely locks Hard Reset and I have to "Hard Reset, literally", or it reboots itself.


----------



## pengs

Quote:


> Originally Posted by *SpecChum*
> 
> I've noticed some funky things happening whilst Afterburner is running; the API test in 3DMark often crashes the entire PC with LED Code 8 with it running but it's fine without it.
> 
> I've got a feeling Afterburner is responsible for some other weirdness I've been having too.


Yeah, I've run AB for years and uninstalled it about 2 weeks ago. I also have a feeling it was causing instability just by existing in the background, OSD off and settings untouched. I have a hunch it has something to do with its software monitoring.

Overclocking is more forgiving without it, oddly.


----------



## MapRef41N93W

Just got my Gigabyte RX Vega 64, and the card just straight up will not take my display right. I get a link failure message on boot and either the display is locked to 4k at 30hz, or it doesn't even offer me anything above 1080p. Tried 3 different displayport cables. Are you telling me AMD still hasn't fixed this 3 generations later? I had minor issues with my R9 290 with this, but it could usually be fixed with a reboot.


----------



## SpecChum

Quote:


> Originally Posted by *MapRef41N93W*
> 
> Just got my Gigabyte RX Vega 64, and the card just straight up will not take my display right. I get a link failure message on boot and either the display is locked to 4k at 30hz, or it doesn't even offer me anything above 1080p. Tried 3 different displayport cables. Are you telling me AMD still hasn't fixed this 3 generations later? I had minor issues with my R9 290 with this, but it could usually be fixed with a reboot.


No issues here on my 75Hz 3440x1440 XR34CK1 or my Samsung 144Hz 2560x1440 CHG70.


----------



## madmanmarz

Nah, just GPU-Z on a second monitor, and sometimes nothing. 90% of the time you alt-tab out and in and you're good - no driver crash or nothing - but in multiplayer it's an issue. I need to try just one monitor and get that out of the equation. The issue is definitely related to fluctuating P-states, most likely with HBM speed. Many games have no issue whatsoever.
Quote:


> Originally Posted by *SpecChum*
> 
> I've noticed some funky things happening whilst Afterburner is running; the API test in 3DMark often crashes the entire PC with LED Code 8 with it running but it's fine without it.
> 
> I've got a feeling Afterburner is responsible for some other weirdness I've been having too.
> 
> Also, and this might just be me, but don't enable ingame vysnc when enhanced sync is enabled. This completely locks Hard Reset and I have to "Hard Reset, literally", or it reboots itself.


----------



## hyp36rmax

Finally got my second VEGA 64 blocked and upgraded my PSU from a Seasonic PRIME 1000 Watt PSU to an EVGA Titanium 1600 Watt T2. The only thing left are sleeved cables then this test bench is ready for battle!


----------



## barbz127

What radiators do you have cooling that setup?


----------



## hyp36rmax

Quote:


> Originally Posted by *barbz127*
> 
> What radiators do you have cooling that setup?



Praxis Wetbench
ASUS Crosshair Hero IV
AMD Ryzen 1700X (EK Block)
Corsair Dominator 16GB 3200mhz
Samsung 250GB SSD
Sapphire VEGA 64 (EK Block)
Power Color VEGA 64 (EK Block)
*Alphacool UT60 360mm Radiator*
Gentle Typhoon 1866 RPM Fans (3x)
EVGA Titanium 1600 Watt T2 PSU

At full load benching with Firestrike, I didn't see it go above 42C with an ambient of 24C.


----------



## LeadbyFaith21

Quote:


> Originally Posted by *hyp36rmax*
> 
> 
> Praxis Wetbench
> 
> 
> ASUS Crosshair Hero IV
> 
> 
> AMD Ryzen 1700X (EK Block)
> 
> 
> Corsair Dominator 16GB 3200mhz
> 
> 
> Samsung 250GB SSD
> 
> 
> Sapphire VEGA 64 (EK Block)
> 
> 
> Power Color VEGA 64 (EK Block)
> 
> 
> *Alphacool UT60 360mm Radiator*
> 
> 
> Gentle Typhoon 1866 RPM Fans (3x)
> 
> 
> EVGA Titanium 1600 Watt T2 PSU
> 
> 
> 
> At a full load benching with FirestrikeI didn't see it go above 42C with an ambient of 24C


Could you update with temps after running some benchmarks? I'm cooling a single 64 + Ryzen 7 with two slim 360 rads, am unimpressed with the 64's temps, and am considering upgrading one of them to a slightly thicker rad.


----------



## dagget3450

Quote:


> Originally Posted by *hyp36rmax*
> 
> 
> Praxis Wetbench
> 
> 
> ASUS Crosshair Hero IV
> 
> 
> AMD Ryzen 1700X (EK Block)
> 
> 
> Corsair Dominator 16GB 3200mhz
> 
> 
> Samsung 250GB SSD
> 
> 
> Sapphire VEGA 64 (EK Block)
> 
> 
> Power Color VEGA 64 (EK Block)
> 
> 
> *Alphacool UT60 360mm Radiator*
> 
> 
> Gentle Typhoon 1866 RPM Fans (3x)
> 
> 
> EVGA Titanium 1600 Watt T2 PSU
> 
> 
> 
> At a full load benching with FirestrikeI didn't see it go above 42C with an ambient of 24C


I am curious to see if you have any issues with the Ryzen and Crosshair board. I had a few issues, one of which someone else was complaining about: the GPU tach LEDs getting stuck at full load at random. Also curious how CF Vega will handle the lower PCIe bandwidth on X370. I wish I had an X399 and a 1900X to compare against X370 Ryzen in mGPU situations.


----------



## hyp36rmax

Quote:


> Originally Posted by *LeadbyFaith21*
> 
> Could you update with temps after running some benchmarks? I'm cooling a single 64 + ryzen 7 with 2 slim 360 rads and am unimpressed with the 64's temps and am considering upgrading one of them to a slightly thicker rad.


For sure! This is why this build even exists! haha. You might have to give me some time to get my testing methodology set up and get through the Thanksgiving holiday here in the States.


----------



## fursko

Quote:


> Originally Posted by *SpecChum*
> 
> No issues on here my 75Hz 3440x1440 XR34CK1 or Samsung 144Hz 2560x1440 144Hz CHG70


You have a CHG70? Can you try the FreeSync windmill demo (test pattern, red bar) with the latest monitor firmware, over both HDMI and DP?


----------



## SpecChum

Quote:


> Originally Posted by *fursko*
> 
> You have chg70 ? Can you try freesync windmill demo(test pattern red bar) with latest monitor firmware both hdmi and dp cable.


I can do DisplayPort.

What am I looking for?

Also, do you have a link to the Windmill demo? I couldn't find it last time I looked; not that I looked very hard to be honest.


----------



## fursko

Quote:


> Originally Posted by *SpecChum*
> 
> I can do DisplayPort.
> 
> What am I looking for?
> 
> Also, do you have a link to the Windmill demo? I couldn't find it last time I looked; not that I looked very hard to be honest.


https://drive.google.com/drive/folders/0B0RkAW7Y4oRSd1gtSkFPcXB6RGM

I found this link. My experience is that HDMI shows better colors, while DisplayPort colors are broken. The red bar appears orange over DisplayPort.

edit: also a lot of ghosting below 60-65 fps with both cables. The calibration report says it's calibrated over HDMI, but HDMI runs 144Hz at 8-bit (100Hz at 10-bit) while DP runs 144Hz at 10-bit.


----------



## SpecChum

Quote:


> Originally Posted by *fursko*
> 
> https://drive.google.com/drive/folders/0B0RkAW7Y4oRSd1gtSkFPcXB6RGM
> 
> I find this link. My experience is hdmi shows better colors while displayport colors broken. Red bar appear orange with displayport.
> 
> edit: also a lot of ghosting below 60-65 fps with both cable. Calibration report paper says its calibrated with hdmi cable but hdmi cable works 144hz 8bit (100hz 10bit) dp cable 144hz 10 bit.


Ah, right. I do notice the reds look a bit orange towards the bottom of the screen, but that seems to be more VA viewing-angle variance.

The rest of the screen seems fine; in fact I find the colours fine in general, especially when HDR mode and the wide gamut kick in - shame about the lack of dimming zones.

What relevance does the Windmill test have? Or is that just where you first noticed it?


----------



## fursko

Quote:


> Originally Posted by *SpecChum*
> 
> Ah, right. I do notice the reds look a bit orange towards the bottom of the screen, but that seems to be more VA viewing angle variance.
> 
> Rest of screen seems fine, in fact I find the colours fine in general, specially when in HDR mode and the wide gamut kicks in - shame about the lack of dimming zones.
> 
> What relevance does the Windmill test have? Or is just where you first noticed it?


Yeah, I just noticed it with the demo. With the HDMI cable it looks pure red. If you have the cable you can try it. This is weird though; the difference is obvious.


----------



## Naeem

Anyone else here getting a driver crash in every single game, even at stock settings? It seems to be an issue with all drivers from the past month or so. I tried a clean install as well as DDU, and also tried removing MSI Afterburner; that didn't help either. It just crashes randomly in game, sometimes after 10 minutes, sometimes after 20.

I have a Vega LC.


----------



## SpecChum

Quote:


> Originally Posted by *fursko*
> 
> Yeah i just noticed with demo. With hdmi cable its look pure red. If you have cable you can try. This is weird though. Obvious difference.


Have you double-checked that the colour settings stay the same in both the monitor and Radeon settings?

Also check the HDMI level - having this set to limited can make colours seem richer, at the expense of reduced contrast range.


----------



## SpecChum

Quote:


> Originally Posted by *Naeem*
> 
> anyone else here getting driver crash every single game even on stock settings ? seems to be the issue all drivers from past month or so i tried clean install as well as ddu also tried removing msi after burner did not help either it just crahses randomly into game some time after 10 min some time after 20 min
> 
> i have vega lc


I've only got a 64 Air but it seems pretty stable unless I'm intentionally pushing it.


----------



## fursko

Quote:


> Originally Posted by *SpecChum*
> 
> Have you double checked to make sure the colour settings stay the same on both the monitor and Radeon settings?
> 
> Also check HDMI level - having this to limited can make colour seems more rich, at the expense of reduced contrast range.


HDMI: 48-100Hz FreeSync range, 100Hz refresh rate, 10-bit full RGB. DisplayPort: 48-144Hz FreeSync range, 144Hz refresh rate, 10-bit full RGB. They're the same, except HDMI only supports 48-100 for FreeSync and HDR. I don't know what HDMI level is.


----------



## SpecChum

HDMI has 2 different levels, they're usually called "full" and "limited".

Full is the PC type 0 to 255 range for use with PCs.

Limited or "Studio" is what a TV usually uses and has a range of 16 to 235.

If you set Limited while the signal is sending full, you'll get black crush but richer colours, as the display sees 16 as 0 and 0 as, essentially, -16.

Reds will be "redder" but the overall image will be far too contrasty.
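As a concrete illustration of the two ranges, the standard full-to-limited mapping looks like this. This is a hedged sketch of the generic "studio swing" scaling, not anything Radeon-specific; the function names are made up for the example:

```python
def full_to_limited(v):
    """Map a full-range value [0, 255] into the limited/'studio' range [16, 235]."""
    return round(16 + v * (235 - 16) / 255)

def limited_to_full(v):
    """Inverse mapping; inputs below 16 or above 235 clip to black/white."""
    return max(0, min(255, round((v - 16) * 255 / (235 - 16))))

# Misinterpreting a full-range signal as limited is what causes black crush:
# a true black of 0 lands well below the limited black point of 16.
print(full_to_limited(0), full_to_limited(255))  # 16 235
```

If both ends agree on the range, the round trip is lossless at the endpoints; the crush only appears when the two ends disagree.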


----------



## fursko

Quote:


> Originally Posted by *SpecChum*
> 
> HDMI has 2 different levels, they're usually called "full" and "limited".
> 
> Full is the PC type 0 to 255 range for use with PCs.
> 
> Limited or "Studio" is what a TV usually uses and has a range of 16 to 235.
> 
> If you set Limited when the signal is sending full you'll get black crush but richer colours as it's sees 16 as 0 and 0 as, essentially, -16.
> 
> Reds will be "redder" but the overall image will be far too contrasty.


Both are full RGB. It's probably a color temperature problem. Can you set the color temperature to 6500K instead of auto in Radeon settings and try the demo again? (ultimate engine FreeSync)


----------



## SpecChum

Quote:


> Originally Posted by *fursko*
> 
> Both full rgb. Probably its color temperature problem. Can you make color temperature 6500k instead of auto from radeon settings and try demo again ? (ultimate engine freesync)


I can't try anything at the minute as I'm at work


----------



## LtAldoRaine

Hello everyone! I want to share my beautiful MSI RX Vega 56. I hope this is a good place to join the RX Vega 56 Owners Club?


----------



## Naeem

I was running Unigine Valley for about 5 minutes and the GPU hotspot temp hit 97C max and stayed around 95-96C. Could this be the cause of my games crashing 5-10 minutes in? I have a Vega Liquid.


----------



## fursko

Quote:


> Originally Posted by *Naeem*
> 
> i was running unigine valley for like 5 min and gpu hot spot temp hit 97c at max and stays around 95-96c can this be the cause of my games crashing after 5 - 10 min into game ? i have vega liquid


It's probably core clock and core voltage related.


----------



## ITAngel

Quote:


> Originally Posted by *SavantStrike*
> 
> The $465 Vega 64 air units last week were pretty sweet. They sold out in around 2-3 days too.


I picked up the PowerColor Vega 64 Air from eBay, sold by Newegg at that price.


----------



## By-Tor

Quote:


> Originally Posted by *ITAngel*
> 
> I picked the PowerColor Vega 64 Air from ebay that was sold by Newegg for that price.


Same here.. I waited a few days until my water block came in to install it. Really a nice jump from the PowerColor 290X LCS card I pulled out.

Just playing a little with OCing the card in WattMan. This is my best score in Superposition. Not sure if it's good or bad, but it did run smoothly.



GPU-Z during the above run.


----------



## ITAngel

Quote:


> Originally Posted by *By-Tor*
> 
> Same here.. I waited a few day until my water block came in to install it. Really a nice jump from my Powercolor 290X LCS card I pulled out.
> 
> Just playing a little with OCing the card with WattMan. This is my best score in Superposition.. Not sure if its good or bad, but it did run smooth.
> 
> 
> 
> GPU-Z during the above run.


Nice! I have the EK block and another 240mm rad to install in my current loop; just waiting on two more fittings. This is on a Threadripper 1920X setup.


----------



## By-Tor

Quote:


> Originally Posted by *ITAngel*
> 
> Nice! I have the ek block and another 240mm rad to install in my current loop. Just pending on two more fittings. This is one a Threadripper 1920X setup.


The card really runs nice and the temps are great... I'm looking to make my jump back to AMD with a Ryzen 7 in the near future...


----------



## ITAngel

Quote:


> Originally Posted by *By-Tor*
> 
> The card really runs nice and the temps are great... I'm looking to make my jump back to AMD with a Ryzen 7 in the near future...


I used to have a Ryzen 1700 and it was a great processor. Very cool and fast.


----------



## Grummpy

I installed some driver-update software and my machine became unstable afterwards.
I had to reset, but after I did, all was well again. Just avoid using these programs; they cause more damage than good.


----------



## LeadbyFaith21

Quote:


> Originally Posted by *hyp36rmax*
> 
> For sure! This is why this build even exist! haha. Might have to give me some time to get my testing methodology set up and get through the Thanksgiving holiday here in the states


That would be awesome! And I'm not planning on updating until after Christmas, or the new year, if I do upgrade the radiators, so I'm in no rush!


----------



## tarot

Quote:


> Originally Posted by *fursko*
> 
> Both full rgb. Probably its color temperature problem. Can you make color temperature 6500k instead of auto from radeon settings and try demo again ? (ultimate engine freesync)


Mine is DP and it looked perfectly fine to me. Tried to do an OBS capture but it kept crashing the computer... stupid programs.








But either way, no issues for me.
10-bit colour and full RGB in the control panel.


----------



## The EX1

Quick write-up and video of the Red Devil Vega 64 card. Looks like a 60MHz boost clock bump over reference and a huge cooler. I need a couple of these haha

https://overclock3d.net/reviews/gpu_displays/powercolor_rx_vega_64_devil_preview/3


----------



## SpecChum

Quote:


> Originally Posted by *The EX1*
> 
> Quick weite up and video of the Red Devil Vega 64 card. Looks like a 60mhz boost clock bump over reference and a huge cooler. I need a couple od these haha
> 
> https://overclock3d.net/reviews/gpu_displays/powercolor_rx_vega_64_devil_preview/3


Just seen this; if that cools well and is reasonably quiet I might flip this Vega 64.


----------



## PontiacGTX

Quote:


> Originally Posted by *wellkevi01*
> 
> If it's a quality brand PSU it should be fine. I've got no issues and my setup is a 5820k oc'd to 4.1GHz and a Vega 64 Liquid powered by an EVGA Supernova 750 G2. I think it's pushing it though, cause my UPS shows that my PC is pulling ~650 watts under load. Granted, I do have my speakers and monitors hooked to it aswell.


I see people saying 850W wasn't enough.


----------



## ducegt

Corsair RM850x 850W with a 64 LC and an OCed 7700K here, and no issues even when Vega draws 400W, but most of my use is at stock, ~265W.


----------



## PontiacGTX

Quote:


> Originally Posted by *ducegt*
> 
> Corsair RM850x 850w with 64LC and OCed 7700k here and no issues even if Vega draws 400w, but most of my use is stock 265ish.


What about the RX Vega 56 benchmark from GamersNexus?


----------



## Grummpy

I ran a Vega 64 on a 650W Enermax and it pulled around 450W at most at the wall.
I found Vega just isn't worth overclocking; you gain very little.
Keep stock speeds and undervolt it by 100mV.


----------



## Grummpy

This was done on a stock AMD 290 GPU, 4 years old.
AMD, just amazing longevity.....


----------



## By-Tor

Quote:


> Originally Posted by *PontiacGTX*
> 
> i see people saying 850w wasnt enough


I'm running an XFX Black 850W in my sig rig, and with the card OCed it's pulling just over 600 watts from the wall (I have my tower plugged into a watt meter). With my old pair of 290Xs OCed and benching, I've seen it spike over 1000W and drop right back down with no issue..
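One caveat with wall-meter numbers like these: the meter reads AC draw, while a PSU's rating is DC output, so the components see less than the meter shows. A rough conversion, assuming ~90% efficiency for a decent 80 Plus unit (the efficiency figure is an assumption for illustration, not a spec for this PSU):

```python
def dc_load_w(wall_watts, efficiency=0.90):
    """Estimate DC-side load from a wall (AC) reading and an assumed PSU efficiency."""
    return wall_watts * efficiency

# A 600 W wall reading is roughly 540 W of actual load on the PSU's DC rails.
print(round(dc_load_w(600)))  # 540
```

By that estimate, a 600 W wall spike leaves an 850 W unit with comfortable margin, which matches the "no issue" experience above.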


----------



## geriatricpollywog


Score should be higher for those clocks. It looks like your GPU load and HBM speed are dropping throughout the run.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Score should be higher for those clocks. It looks like your GPU load and HBM speed are dropping throughout the run.

How would I fix that?


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> How would I fix that?

Can you take a screenshot of wattman?


----------



## Grummpy

A 650 watt PSU is ample to run a Vega 64 air-cooled;
it will pull less than 450 watts.
But if you overclock, you can easily make it pull much, much more.

----------



## TrixX

Quote:


> Originally Posted by *Grummpy*
> 
> a 650 watt is ample enough to run vega 64 air cooled.
> it will pull less than 450 watts ,
> but if you over clock you can with ease make it pull much much more.


Not exactly true. It's system dependent: if you have a graphics card pulling 450W and a CPU pulling 100W, plus a few peripherals, you go past the efficiency point and into the stress point of the PSU. As a rule of thumb, I never set up a system that can go past 80% of the rated PSU load, to leave margin for adding peripherals etc...

With a 650W unit, just the two components above are already at 84% of the PSU's rated maximum. I've seen my system pull 700W with the CPU OC'd but not fully stressed and the GPU at 100% load. Though you can knock off around 85W for a normal system, as I have a Threadripper. Even then that's 615W, which is entering dangerous territory for a 650W PSU, especially cheaper ones.
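The 80% rule of thumb above is easy to sanity-check with a few lines. This is just illustrative arithmetic; `psu_headroom_ok` and the wattage figures are made up for the example, not a real measurement tool:

```python
def psu_headroom_ok(component_watts, psu_rating_w, max_fraction=0.80):
    """Apply the rule of thumb: total draw should stay under ~80% of the PSU rating."""
    total = sum(component_watts)
    return total <= max_fraction * psu_rating_w, total / psu_rating_w

# 450 W GPU + 100 W CPU on a 650 W unit, as in the post:
ok, load = psu_headroom_ok([450, 100], 650)
print(ok, f"{load:.0%}")  # False 85%
```

The same two components on an 850 W unit land at about 65% of rating, which is why the 850 W owners in this thread report no trouble.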


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Can you take a screenshot of wattman?


----------



## geriatricpollywog

Set HBM to 1100MHz/1000mV and core to 1752MHz/1200mV.


----------



## TrixX

Quote:


> Originally Posted by *0451*
> 
> Set HBM to 1100mhz/1000mv, core to 1752mhz/1200mv


On air I'd keep the core to 1100mV max, not 1200mV. It'll heat-throttle otherwise.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Set HBM to 1100mhz/1000mv, core to 1752mhz/1200mv


Will give it a go.

Quote:


> Originally Posted by *TrixX*
> 
> On air I'd keep the Core to 1100mv max not 1200mv. It'll heat throttle otherwise.


It's on water...


----------



## TrixX

Quote:


> Originally Posted by *By-Tor*
> 
> Will give it a go.
> It's on water...


My bad, getting mixed up between peeps then! Sorry mate.


Spoiler: Warning: Spoiler!







Those are my current settings; I get really good results with them. I only have P0-P4 disabled due to a game I'm running. When gaming I either lock to P7 or P5-P7; normally I'll run with all states active.


----------



## SpecChum

Kept getting crashes in Dirt Rally last night no matter what voltage I used. Then I disabled "Enhanced Sync" - and it worked perfectly.

Seems that might be the cause of a few issues; I may have to retest my profiles with it off!


----------



## Suppenkoch

Hi there,

Have you noticed that on Win10 RS3 or RS4 Insider builds, Vega won't accelerate DX9 titles?
Whichever title I try, I'm always at 0-2% utilization, with or without ClockBlocker. Frames stay fully limited by CPU power, as Vega isn't accelerating anything. The only title that takes some advantage of Vega's power is Half-Life 2, but even there it never utilizes more than 30%.

I've also noticed that Vega isn't scaling very well at all. Playing Ghost Recon Wildlands, Vega never goes above 60% utilization.

I am definitely not CPU limited (not by far).

So besides the scaling problems, the lack of DX9 acceleration bothers me a lot. Will this be fixed? I don't have this problem with another GCN card (R9 380X) in the same computer (I tested by re-slotting, using DDU, and watching fps and utilization in HWiNFO64, latest drivers). Why with Vega?


----------



## TrixX

Quote:


> Originally Posted by *Suppenkoch*
> 
> Hi there,
> 
> you noticed that on Win10 RS3 or RS4 Insider Vega won't accelerate DX9 titles ?
> Whichever title I try I am always on 0-2 % utilization, using with or without Clockblocker. Frames keep fully limited by CPU power there, as Vega isn't accelerating anything. The very only title which takes some advantage of Vega's power is HalfLife2, but even there it never utilizes more than 30%.
> 
> But anyway I noticed that Vega isn's scaling very well at all. Playing Ghost Recon Wildlands Vega never utilizes more than 60%.
> 
> I am definately not CPU limited (by far not).
> 
> So beside scaling problems, the lack of accelerating Dx9 bothers me a lot. Will this be fixed ? I do not have this problem with another GCN card (R9 380X) inside of the same computer (I tested by reslotting, DDU and watch fps and all utilization by HwInfo64, latest drivers). Why with Vega ?


GR Wildlands isn't a scaling issue; there seems to be something else in the pipeline causing it. Check out the NVIDIA videos vs AMD: all the AMD cards sit at 50-80% utilisation vs NVIDIA's 99%.

DX9 is interesting though. I'm putting together a bunch of tests over the weekend and I'll pop the results up here. Some synthetic and some gaming, from DX9 through to Vulkan.


----------



## fursko

I want to add some info on Vega OC/UV tweaks. I believe adjusting HBM voltage is a misconception. It can vary chip to chip, but in my experience leaving it at stock (950 mV) is better. When I set it to 1100 mV or so, it didn't improve HBM stability and it caused HBM clock jumps (probably the power limit).

Best way to tweak Vega (0% power limit, Vega 64 LC, my experience):

1- Find an app that is sensitive enough for ultimate daily stability (CoD WW2); otherwise your tweaks will crash eventually.
2- Do not touch core clocks.
3- Lower your core voltage until the game crashes.
4- Lower P6 and P7 the same way. My stock P6/P7 is 1150 mV/1200 mV. Keeping a 50 mV gap between them is good; otherwise you may encounter bugs.
5- When you find a stable undervolt (mine is 1110/1160 mV), start overclocking HBM without touching HBM voltage.
6- Once you find the correct HBM clocks, your game will not crash anymore.
7- Apply and forget your settings. After finding a stable UV/OC, my Wattman didn't crash and remembers my tweaks all the time. Resizing is still broken, though.
EDIT 8- These stable tweaks will not give you the best benchmark results.

If you add 50% power limit:

Watch your clocks while undervolting the core for higher clocks: if you undervolt too much, your clocks will end up low, since the power limit isn't holding you back this time.

What happens if you set your HBM voltage to 1100 mV or so? Your HBM clocks will jump between 500-800 and the OC'd value all the time, and you will not get any benefit.

Sample: 1100 MHz/950 mV = artifacts or crash >>> 1100 MHz/1050 mV = artifacts or crash >>> 1090 MHz/950 mV = no artifacts, no crash. HBM voltage didn't help.

For Vega 56 and Vega 64 air users:

Just use a better BIOS instead of adjusting clocks.

Vega overclocking is different from NVIDIA. Focus on power limit and core voltage, not clocks. If your chip is really good, you can adjust clocks, but that's a really rare and complicated thing.
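Steps 3-5 of that procedure are essentially a step-down search on the P7 voltage. As a hypothetical illustration (the `is_stable` callback stands in for an actual play session in your crash-sensitive title, and the 10 mV step and 1000 mV floor are assumptions, not values from the post), it could be sketched like this:

```python
def find_undervolt(stock_p7_mv, is_stable, step_mv=10, floor_mv=1000):
    """Walk the P7 voltage down until the stability check fails,
    then keep the last stable value. P6 is kept 50 mV below P7,
    per the 50 mV gap recommended in step 4."""
    p7 = stock_p7_mv
    while p7 - step_mv >= floor_mv and is_stable(p7 - step_mv):
        p7 -= step_mv
    p6 = p7 - 50  # maintain the 50 mV distance between P6 and P7
    return p6, p7

# Mock stability check: pretend the chip is stable down to 1160 mV
# on P7 (the undervolt reported in the post above).
p6, p7 = find_undervolt(1200, lambda mv: mv >= 1160)
print(p6, p7)  # → 1110 1160
```

In practice each `is_stable` probe is a long gaming session, so the "loop" takes days, but the logic is the same: only after the voltage search settles do you start raising HBM clocks.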


----------



## Suppenkoch

Quote:


> Originally Posted by *TrixX*
> 
> GR Wildlands isn't a scaling issue, there seems to be something else in the pipeline causing an issue. Check out the Nvidia video's vs AMD, all the AMD's are at 50-80% utilisation vs Nvidia's 99%.


Are you sure? I notice the scaling issue in other DX11 titles, too. See for yourself in Cities Skylines, for example. I don't have this issue with the R9 380X.

TW Shogun 2 in DX11 is another example. However, Shogun 2 also offers DX9, and there it utilizes the Vega up to 40% (similar to HL2).
Strange that some DX9 games don't utilize Vega at all, and others only partially.

There seems to be a problem in general.

Edit: Just ran a small test as I had time to observe utilization.

Some games utilize the card very well.
Serious Sam Fusion scales well on both DX11 and DX12 with 97-100% utilization (OpenGL and Vulkan the same).
Witcher 3 and Rise of the Tomb Raider also take the full juice.

Then there's partial utilization:
Shogun 2 up to 40% on both DX9 and DX11
HL2 DX9 up to 30%
Wildlands DX11 up to 60%
Tropico 4 DX9 up to 60%
Also DX11 titles like Assault Squad 2, War for the Overworld, Tomb Raider 2013, XCOM, Cities Skylines.

Especially older DX9 titles, but also a few newer ones, utilize like 0-2%:
Dungeon Lords Steam Edition, The Guild 2, Craft the World, Supreme Ruler (every part of the series), etc.

So in general there is a wide spread between full, partial, and no utilization. That's quite strange, and it puzzles me why. In not a single case was I even close to a CPU limit.


----------



## 99belle99

Would I be able to run a Vega 64 with a 6-core X5660 @ 4.2GHz on the X58 platform on a 700W 80 Plus Bronze PSU?


----------



## PontiacGTX

Quote:


> Originally Posted by *TrixX*
> 
> Not exactly true. It's system dependant, if you have a Gfx card pulling 450W and a CPU pulling 100W with a few peripherals you go past the efficiency point and into the stress point of the PSU, as a rule of thumb I never setup a system that can go past 80% of the rated PSU loading to prevent issues with adding peripheral's etc...
> 
> With a 650W, just the two components above are already at 84% of the rated maximum for the PSU. I've seen my system pull 700W with CPU OC'd but not fully stressed and GPU at 100% load. Though you can drop around 85W for a normal system as I have a ThreadRipper. Even then that's 615W and entering dangerous territory for the 650W PSU especially cheaper ones.


I have 40W in fans and cooling.
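TrixX's 80% rule of thumb is easy to sanity-check with arithmetic. A quick sketch using the wattages quoted in the posts above (these are the posters' figures, not measurements):

```python
def psu_loading(psu_watts, *component_watts):
    """Fraction of the PSU's rated output drawn by the listed components."""
    return sum(component_watts) / psu_watts

# GPU 450 W + CPU 100 W + fans/cooling 40 W on a 650 W unit
load = psu_loading(650, 450, 100, 40)
print(f"{load:.0%}")  # → 91%, past the 80% comfort point
```

So by that rule of thumb, a heavily overclocked Vega plus CPU plus peripherals already pushes a 650W unit out of its comfort zone, which is the point TrixX was making.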


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Set HBM to 1100mhz/1000mv, core to 1752mhz/1200mv


Quote:


> Originally Posted by *TrixX*
> 
> My bad, getting mixed up between peeps then! Sorry mate.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Those are my current settings. Get really good results with it. Only have P0-4 disabled due to a game I'm running, when gaming I either lock to P7 or P5-7. Normally I'll run with all active.


Neither setting scored any better...

Thanks anyway..


----------



## PontiacGTX

http://www.microcenter.com/product/483668/Radeon_RX_Vega56_8G_Single-Fan_8GB_HBM2_PCIe_Video_Card


----------



## TrixX

Quote:


> Originally Posted by *Suppenkoch*
> 
> Are you sure? I notice the scaling issue in other DX11 titles, too. See for yourself in Cities Skylines, for example. I don't have this issue with the R9 380X.
> 
> TW Shogun 2 in DX11 is another example. However, Shogun 2 also offers DX9, and there it utilizes the Vega up to 40% (similar to HL2).
> Strange that some DX9 games don't utilize Vega at all, and others only partially.
> 
> There seems to be a problem in general.
> 
> Edit: Just ran a small test as I had time to observe utilization.
> 
> Some games utilize the card very well.
> Serious Sam Fusion scales well on both DX11 and DX12 with 97-100% utilization (OpenGL and Vulkan the same).
> Witcher 3 and Rise of the Tomb Raider also take the full juice.
> 
> Then there's partial utilization:
> Shogun 2 up to 40% on both DX9 and DX11
> HL2 DX9 up to 30%
> Wildlands DX11 up to 60%
> Tropico 4 DX9 up to 60%
> Also DX11 titles like Assault Squad 2, War for the Overworld, Tomb Raider 2013, XCOM, Cities Skylines.
> 
> Especially older DX9 titles, but also a few newer ones, utilize like 0-2%:
> Dungeon Lords Steam Edition, The Guild 2, Craft the World, Supreme Ruler (every part of the series), etc.
> 
> So in general there is a wide spread between full, partial, and no utilization. That's quite strange, and it puzzles me why. In not a single case was I even close to a CPU limit.


Well, GPU utilisation is a funky thing with Vega.

First up, 100% GPU utilisation is rare in games, as it requires that no other bottleneck present itself.

Secondly, when gaming you'll see interesting load comparisons if you leave all power states active, vs disabling P0-P4 and keeping P5-P7 active, vs just locking to P7. Very different and also very interesting. For instance, in iRacing (DX11) I get around 20-40% GPU load with it locked to P7. Same with P5-P7, as the game isn't GPU-load heavy. If I run with all states active, it never gets out of P2 and flickers between P1 and P2 constantly with a load between 30% and 70%. I find the flickering causes the odd stutter, so I normally lock to P7 in this instance.

For something like The Division in DX12 I can get full utilisation, though I still lock to P7: when switching areas, or when areas load in, the GPU load can drop enough to go below P5, and the P5 to P0-P4 transition causes stutters.

I have fired up some old DX9 and DX11 games, and if you leave the card in normal mode, it's unlikely to leave P0 as there's just not enough load for it to switch up P-states. I thought it was a bug initially, but after further checking it's just not getting stressed. It kinda has a Piña Colada and chills while still chucking out stupidly high framerates in its idle state.









Though don't forget the single-thread kings. The Witcher 3 and a few others can't leverage the full power of the GPU because their single-main-thread design can't feed it enough work. As more Vulkan/DX12 games come out, that bottleneck should be alleviated somewhat.

Ultimately, the games that reach a good 99-100% GPU load are going to be the new stuff that isn't single threaded, allowing the bottleneck to shift to the GPU.

GR Wildlands and AC Origins excepted. Both of those are just poor optimisation.


----------



## Exposal

Quick question regarding updating the BIOS on my Vega 56; I made a thread but it doesn't seem to be getting any responses.

I own a reference XFX Vega 56 with an EKWB block.

I found 2 LC BIOSes and noticed one is branded XFX and one is branded AMD.

The XFX BIOS version is 016.001.001.000.008734
The AMD BIOS version is 016.001.001.000.008774

Any idea what the difference between these BIOS versions is? I'm assuming I should use the newer one.

Thanks for any help!


----------



## Suppenkoch

@trixx

As I said, DX9 titles don't accelerate at all. This results in very poor and choppy fps. But not every DX9 game suffers from it, which is quite weird.

In Witcher 3 I have almost 100% utilization. Like I said, I am not limited by the CPU.

Which DX9 titles did you try? We need common ground to compare. Also, which Windows version and driver version are you on?


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> Neither setting scored any better...
> 
> Thanks anyway..


I am using the liquid bios with the power play table that allows a 142% power limit and 400 amps. Have you tried this?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Suppenkoch*
> 
> @trixx
> 
> As I said, DX9 titles don't accelerate at all. This results in very poor and choppy fps. But not every DX9 game suffers from it, which is quite weird.
> 
> In Witcher 3 I have almost 100% utilization. Like I said, I am not limited by the CPU.
> 
> Which DX9 titles did you try? We need common ground to compare. Also, which Windows version and driver version are you on?


I am getting 100% utilization in GR Wildlands.


----------



## fursko

Quote:


> Originally Posted by *Exposal*
> 
> Quick question regarding updating the bios on my vega 56, i made a thread but doesn't seem to be getting any response.
> 
> I own a reference XFX vega 56 with a EKWB block
> 
> I found 2 LC bios and I noticed 1 is branded XFX and 1 is branded AMD
> 
> The XFX bios version is 016.001.001.000.008734
> The AMD bios version is 016.001.001.000.008774
> 
> Any idea what the difference between these bios versions are? Assuming I should use the newer bios version
> 
> Thanks for any help!


Dunno, but one of them may be a lower-power BIOS. The Vega 64 LC has 2 BIOSes, a low-power and a high-power one. But 8774 looks newer; you should try it.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> I am using the liquid bios with the power play table that allows a 142% power limit and 400 amps. Have you tried this?


I have not flashed a GPU in a very, very long time and would need some guidance to not brick my card.

If you can provide step by step instructions I may give it a try. Maybe a link to a how to guide..

Link to the Bios you speak of?

Thank you


----------



## Exposal

Quote:


> Originally Posted by *fursko*
> 
> Dunno, but one of them may be a lower-power BIOS. The Vega 64 LC has 2 BIOSes, a low-power and a high-power one. But 8774 looks newer; you should try it.


Thanks! Is there a way to tell if it's a low-power or high-power BIOS?

Appreciate the help.


----------



## Grummpy

I hope Wattman gets smaller and allows profile saves for global settings.
https://i.imgur.com/lzfe3fD.gifv


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> I have not flashed a GPU in a very, very long time and would need some guidance to not brick my card.
> 
> If you can provide step by step instructions I may give it a try. Maybe a link to a how to guide..
> 
> Link to the Bios you speak of?
> 
> Thank you


First download the liquid BIOS (google it; any of the liquid BIOSes will work, I don't remember which one I have). Use ATiflash to flash the BIOS. Then download the 142% power limit / 400 amp PowerPlay table and double click to install. See post 253 http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/200_100#post_26297003

Download CRU (Custom Resolution Utility) and run restart64 to restart your GPU. Go into Wattman and set it to custom. Don't change the GPU core clock or voltage. Leave P6/P7 at default; P7 should be at 1752/1200. Change the HBM clock to 1100MHz. Leave HBM voltage at 950. Move the power limit slider to 142%. Run restart64 again. Run Fire Strike and you should see at least 20,000.
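In shell form, the flash-and-restart portion of those steps might look like the sketch below. The ROM filenames are placeholders, and the `atiflash` flags should be double-checked against the version you actually download, since a bad flash can brick the card:

```shell
# Back up the current BIOS first (adapter index 0); keep this file safe
# so you can flash back if anything goes wrong.
atiflash -s 0 original_bios.rom

# Flash the liquid-cooled BIOS (placeholder filename).
# Some cards may need a force flag to get past the subsystem ID check.
atiflash -p 0 vega64_lc.rom

# After installing the PowerPlay registry table, restart the GPU
# driver with CRU's bundled restart utility instead of rebooting.
restart64.exe
```

Always keep the backup ROM somewhere off the machine; it is the only way back to stock if the new BIOS misbehaves.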


----------



## Kyozon

I noticed that there are some BIOSes for the Vega 64 LC that seem to come with default 1750MHz core clocks.

I have tried as hard as I could to push my Frontier Edition LC to 1750MHz like the 64; no matter which voltages, it refuses to go beyond 1700MHz.

My temps are under control; GPU-Z reports a max of 55C under load at 1700MHz.

Is there something I can do to make it hit 1750MHz like its RX Vega counterparts? Thanks.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> First download the liquid BIOS (google it; any of the liquid BIOSes will work, I don't remember which one I have). Use ATiflash to flash the BIOS. Then download the 142% power limit / 400 amp PowerPlay table and double click to install. See post 253 http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/200_100#post_26297003
> 
> Download CRU (Custom Resolution Utility) and run restart64 to restart your GPU. Go into Wattman and set it to custom. Don't change the GPU core clock or voltage. Leave P6/P7 at default; P7 should be at 1752/1200. Change the HBM clock to 1100MHz. Leave HBM voltage at 950. Move the power limit slider to 142%. Run restart64 again. Run Fire Strike and you should see at least 20,000.


I followed your instructions and the power limit slider in wattman still reads as +50%


----------



## astrixx

Quote:


> Originally Posted by *Kyozon*
> 
> I noticed that there are some BIOSes for VEGA 64 LC that seems to come with Default 1750Mhz Core Clocks.
> 
> I have tried as hard as i could with my Frontier Edition LC to push 1750Mhz as the 64, no matter Which Voltages, it refuses to go beyond 1700Mhz.
> 
> My Temps are under Control, GPU-Z Reports a Max of 55C under Load at 1700Mhz.
> 
> Is there something i can do to make it do 1750Mhz as the RX VEGA Counterparts? Thanks.


It could depend on the game as well, and how taxing it is.


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> I followed your instructions and the power limit slider in wattman still reads as +50%


Run the AMD driver removal software, reinstall drivers, then try again, especially if you haven't done so after replacing the 290x.


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> Run the AMD driver removal software, reinstall drivers, then try again, especially if you haven't done so after replacing the 290x.


142 is there now...


----------



## By-Tor

No improvement in Fire Strike or Superposition.


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> No improvement in Fire Strike or Superposition.


Can you post screenshots
Of wattman and GPU-z again?


----------



## By-Tor

Sure can


----------



## geriatricpollywog

Sensors tab


----------



## By-Tor




----------



## geriatricpollywog

That looks good to me. Run firestrike again


----------



## By-Tor

Quote:


> Originally Posted by *0451*
> 
> That looks good to me. Run firestrike again


Scored 4 points less than the last run... 20k sounded promising; that would have beaten my buddy's 1080 Ti score, which would have been nice, but I guess it wasn't meant to be...

Thank you for the help


----------



## Grummpy

Well, I've been using a 650W since day one.
At the wall it's telling me 450 watts max, so
I'm happy.


----------



## TrixX

Quote:


> Originally Posted by *Suppenkoch*
> 
> @trixx
> 
> as i said dx9 titles dont accelerate at all. this results in very poor and choppy fps. but not every dx9 game suffers from it. thats quite weird.
> 
> in witcher 3 i have almost 100% utilization. like i sad i am not limited by cpu.
> 
> which dx9 titles did you try ? we need common ground to compare. also which windows version and driver version are you on ?


I'll be doing proper numbers testing over the next few days. I tested some DX9 titles a few weeks back, so I can't give you hard numbers on them, and I was on different driver versions then.


----------



## dagget3450

Quote:


> Originally Posted by *Kyozon*
> 
> I noticed that there are some BIOSes for VEGA 64 LC that seems to come with Default 1750Mhz Core Clocks.
> 
> I have tried as hard as i could with my Frontier Edition LC to push 1750Mhz as the 64, no matter Which Voltages, it refuses to go beyond 1700Mhz.
> 
> My Temps are under Control, GPU-Z Reports a Max of 55C under Load at 1700Mhz.
> 
> Is there something i can do to make it do 1750Mhz as the RX VEGA Counterparts? Thanks.


What driver are you using now? I just tried 17.11.2 and CF isn't in the UI.


----------



## wellkevi01

Quote:


> Originally Posted by *By-Tor*
> 
> Scored 4 points less than the last run... 20k sounded promising and that would beat my buddy's 1080 TI score and would have been nice, but guess it wasn't meant to be,...
> 
> Thank you for the help


For one, you need to run the regular Fire Strike; you ran Fire Strike Ultra. And two, you're not going to beat a 1080 Ti with a Vega 64, so I wouldn't try too hard.


----------



## TrixX

Quote:


> Originally Posted by *wellkevi01*
> 
> For one, you need to run the regular Fire Strike; you ran Fire Strike Ultra. And two, you're not going to beat a 1080 Ti with a Vega 64, so I wouldn't try too hard.


Yeah, 1080 Tis fare much better in FS than Vega does.


----------



## AlphaC

https://videocardz.com/74175/xfx-launches-radeon-rx-vega-64-and-56-double-edition

It's hideous


----------



## The14thAssassin

Good luck finding one priced UNDER the 1080.

I got mine for MSRP ($500) and that's the lowest I've seen it.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *AlphaC*
> 
> https://videocardz.com/74175/xfx-launches-radeon-rx-vega-64-and-56-double-edition
> 
> It's hideous






looks like one of Madonna's bras with fans







Quote:


> Originally Posted by *By-Tor*


Is that the max temp? Because if it is, that's pretty damn good. What's the ambient temp there?


----------



## By-Tor

Quote:


> Originally Posted by *tarot*
> 
> 
> looks like one of Madonna's bras with fans
> 
> 
> 
> 
> 
> 
> 
> 
> is that the max temps because if it is that's pretty damn good whats the ambient temp there?


Yes they are.. It's 21C in my house at the moment..


----------



## HGooper

Finally received my Bykski Vega full-cover waterblock and installed it on my Vega 56. So, what next? Flash it to the Vega 64 LC firmware?


----------



## TrixX

Quote:


> Originally Posted by *HGooper*
> 
> Finally just received my Bykski Vega full cover waterblock and used it on my Vega56 now, so what next? Flash it to Vega64 LC firmware?


I'd test the Air 64 BIOS first, then the 64 LC, as the 56 may not like the LC's high stock clocks.


----------



## HGooper

Ok I will try it and see how it goes.


----------



## By-Tor

Normal Fire Strike.


----------



## geriatricpollywog

What is your graphics score?


----------



## Naeem

Quote:


> Originally Posted by *Suppenkoch*
> 
> Hi there,
> 
> you noticed that on Win10 RS3 or RS4 Insider Vega won't accelerate DX9 titles ?
> Whichever title I try I am always on 0-2 % utilization, using with or without Clockblocker. Frames keep fully limited by CPU power there, as Vega isn't accelerating anything. The very only title which takes some advantage of Vega's power is HalfLife2, but even there it never utilizes more than 30%.
> 
> But anyway I noticed that Vega isn's scaling very well at all. Playing Ghost Recon Wildlands Vega never utilizes more than 60%.
> 
> I am definately not CPU limited (by far not).
> 
> So beside scaling problems, the lack of accelerating Dx9 bothers me a lot. Will this be fixed ? I do not have this problem with another GCN card (R9 380X) inside of the same computer (I tested by reslotting, DDU and watch fps and all utilization by HwInfo64, latest drivers). Why with Vega ?


I noticed no issue with Wildlands; it's at 99% all the time on my Vega LC.

A few screenshots I took:


http://imgur.com/RTxTE


----------



## MAMOLII

This GPU hotspot drives me crazy... I modded the cooling, adding an Arctic Liquid Freezer 120 on the GPU while keeping the stock AMD plate on for the VRMs!
But I had to add a 3mm-thick copper plate to mount the cooler on the GPU... OK...
I run 3DMark at core 41C, HBM 45C... hotspot 75C peak (it used to be 70-72C). Paste spread on the GPU; I reseated it 3 times... same...

Someone in another forum posted that adding a fan on the backplate lowered the hotspot 5 degrees...
Maybe the copper plate I used delays heat transfer from inside
the silicon, but core and HBM temps are great!
I touch the AMD plate over the VRMs at load and it's cool, around the 40s-50s with my finger sensor.


----------



## Naeem

Quote:


> Originally Posted by *MAMOLII*
> 
> this gpu hotspot drives me crazy... i modded cooling add artic liquid freezer 120 on gpu keeping the stock amd plate on for vrms!!
> but i had to add a copper plate 3mm thick to mount the cooler on gpu....ok...
> i run 3dmark core 41 hmb 45.....hot spot 75 peak (used to have 70-72c) paste spread on gpu i reseated 3 times... same...
> 
> 
> 
> 
> 
> 
> 
> 
> someone post in another forum adding a fan on backplate lowered hot spot 5 degrees...
> maybe the copper plate i used delays heat transfer from inside
> the silicon but core and hmb temps are great!
> i touch the amd plate over vrms at load and its cool around 40s-50s with my finger sensor


It's probably reading from a sensor deep inside the GPU die; mine peaked up to 97C.


----------



## LordDain

Quote:


> Originally Posted by *HGooper*
> 
> Ok I will try it and see how it goes.


It's what I did. I flashed the LC BIOS on my 56, but I've had a few cases where Wattman reset to default clocks and the PC was cooking.
My 56 did actually run at stock LC clocks, but it wasn't stable for gaming. That's why I went 'back' to the Air 64 BIOS in the end. A bit safer.


----------



## MAMOLII

With air cooling it reached 102C while gaming! Running tests right now in 3DMark: GPU 39C, RAM 44C, hotspot 70C!
The temp difference is insane!


----------



## Chaoz

Quote:


> Originally Posted by *0451*
> 
> First download the liquid bios (google it. Any of the liquid bios will work. I don't remember which one I have).


I couldn't use just any LC BIOS; the system would crap itself with a wrong BIOS, and I needed to hard reboot. I tried 2 different BIOS versions and they would not flash.
I tried the last one and it eventually flashed.

All were Sapphire-version LC BIOSes.
So you can't just use any BIOS, imho.


----------



## Suppenkoch

Quote:


> Originally Posted by *TrixX*
> 
> I'll be doing proper numbers testing over the next few days. I tested some DX9 titles a few weeks back so I couldn't give you hard numbers on them and I was using different driver versions.


Also, here again: what particular Windows and driver version are you on? Or were on at the time?

Quote:


> Originally Posted by *0451*
> 
> I am getting 100% utilization in GR Wildlands.


Quote:


> Originally Posted by *Naeem*
> 
> i noticed no issue with wildlands its 99% all all the time on my vega lc


Thanks for the input. So there might be setups without the trouble. Your fps are awesome. Mine don't go as high, since my Vega isn't being utilized fully.
What Windows and driver version are you both on? Which resolution and graphics preset did you use in Wildlands? Did you change anything in the Crimson driver setup?

Myself, I am on 2560x1080 and the Ultra preset in Wildlands, with Win10 Insider 17035 and the 17.11.2 Crimson driver.


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> I want to add some info on Vega OC/UV tweaks. I believe adjusting HBM voltage is a misconception. It can vary chip to chip, but in my experience leaving it at stock (950 mV) is better. When I set it to 1100 mV or so, it didn't improve HBM stability and it caused HBM clock jumps (probably the power limit).
> 
> Best way to tweak Vega (0% power limit, Vega 64 LC, my experience):
> 
> 1- Find an app that is sensitive enough for ultimate daily stability (CoD WW2); otherwise your tweaks will crash eventually.
> 2- Do not touch core clocks.
> 3- Lower your core voltage until the game crashes.
> 4- Lower P6 and P7 the same way. My stock P6/P7 is 1150 mV/1200 mV. Keeping a 50 mV gap between them is good; otherwise you may encounter bugs.
> 5- When you find a stable undervolt (mine is 1110/1160 mV), start overclocking HBM without touching HBM voltage.
> 6- Once you find the correct HBM clocks, your game will not crash anymore.
> 7- Apply and forget your settings. After finding a stable UV/OC, my Wattman didn't crash and remembers my tweaks all the time. Resizing is still broken, though.
> EDIT 8- These stable tweaks will not give you the best benchmark results.
> 
> If you add 50% power limit:
> 
> Watch your clocks while undervolting the core for higher clocks: if you undervolt too much, your clocks will end up low, since the power limit isn't holding you back this time.
> 
> What happens if you set your HBM voltage to 1100 mV or so? Your HBM clocks will jump between 500-800 and the OC'd value all the time, and you will not get any benefit.
> 
> Sample: 1100 MHz/950 mV = artifacts or crash >>> 1100 MHz/1050 mV = artifacts or crash >>> 1090 MHz/950 mV = no artifacts, no crash. HBM voltage didn't help.
> 
> For Vega 56 and Vega 64 air users:
> 
> Just use a better BIOS instead of adjusting clocks.
> 
> Vega overclocking is different from NVIDIA. Focus on power limit and core voltage, not clocks. If your chip is really good, you can adjust clocks, but that's a really rare and complicated thing.


So is it better to leave the power limit at 0%? I've always undervolted with stock clocks and +50% PL (V64, air cooled).


----------



## fursko

I tweaked and benched my Vega 64 LC a lot and returned it because of coil whine. Now I will borrow my friend's 1080 Ti FTW3 and share my experience with it. After that I will decide which one I should get.

I will compare:
Out-of-the-box performance.
Tweaked performance.
Games without FreeSync (1440p 144Hz).
Power consumption.
Temps.
Noise.
Drivers.
Price difference.
Overall experience.

There is a lot of hate for Vega everywhere. Let's see.


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> So its better to leave the Power Limit at 0%? I've always undervolted with stock clocks and with 50%+ PL (v64 air cooled)


It's a matter of preference. The Vega 64 LC already has a high power limit out of the box. If I add 50% power, the GPU throttles (because of the 70C temp limit) after a long play session. My PSU fan kicks in because of the high power consumption, and I hear coil whine. Plus I'm not gaining performance in a lot of games; only some games benefit from +50% power in my situation.

Note: I'm not pushing my fans, because of the noise. That's why it's throttling.

If your temps are OK and you want the best performance, you can add 50% power and follow the steps for a stable UV/OC.

A 50% power limit means higher clocks with higher voltages. A 0% power limit means you should lower the core voltage for higher clocks, because of the power limit.

Superposition 1080p Extreme score:

0% power limit = 5k+
50% power limit = 5.3k+

Shadow of War 1440p:

0% power limit = 81 fps
50% power limit = 84 fps

Don't use the same OC/UV settings at 50% power limit:

0% power = 1160mV P7
50% power = 1180mV P7 for my GPU.

Edit: You can get a higher score with more undervolting, but your GPU will not be stable. Benchmark utilities like Superposition or Firestrike run through easily without a crash, but games can crash easily.
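For perspective, the relative gains reported above for the +50% power limit work out to only a few percent. A quick check on those numbers:

```python
def pct_gain(baseline, boosted):
    """Percentage improvement of boosted over baseline."""
    return (boosted - baseline) / baseline * 100

# Figures from the post above: Superposition 1080p Extreme score
# and Shadow of War 1440p fps, at 0% vs +50% power limit.
print(round(pct_gain(5000, 5300), 1))  # → 6.0 (% in Superposition)
print(round(pct_gain(81, 84), 1))      # → 3.7 (% in Shadow of War)
```

A few percent more performance for substantially more heat and noise is why leaving the limit at 0% can be the better daily-driver trade.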


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> It's a matter of preference. The Vega 64 LC already has a high power limit out of the box. If I add 50% power, the GPU throttles (because of the 70C temp limit) after a long play session. My PSU fan kicks in because of the high power consumption, and I hear coil whine. Plus I'm not gaining performance in a lot of games; only some games benefit from +50% power in my situation.
> 
> Note: I'm not pushing my fans, because of the noise. That's why it's throttling.
> 
> If your temps are OK and you want the best performance, you can add 50% power and follow the steps for a stable UV/OC.
> 
> A 50% power limit means higher clocks with higher voltages. A 0% power limit means you should lower the core voltage for higher clocks, because of the power limit.
> 
> Superposition 1080p Extreme score:
> 
> 0% power limit = 5k+
> 50% power limit = 5.3k+
> 
> Shadow of War 1440p:
> 
> 0% power limit = 81 fps
> 50% power limit = 84 fps
> 
> Don't use the same OC/UV settings at 50% power limit:
> 
> 0% power = 1160mV P7
> 50% power = 1180mV P7 for my GPU.
> 
> Edit: You can get a higher score with more undervolting, but your GPU will not be stable. Benchmark utilities like Superposition or Firestrike run through easily without a crash, but games can crash easily.


Well, I just want my games to stop crashing; they crash eventually even though they pass every benchmark (Superposition, FS, Time Spy, etc.).
BTW, just tried 0% power limit (stock) and now my clocks are extremely low lol. I ran ~1580MHz in Superposition before, and now they won't go above 1280MHz o.o


----------



## wellkevi01

Quote:


> Originally Posted by *By-Tor*
> 
> Normal Fire Strike.


That's a pretty good graphics score. My 64 LC with +50% power limit and 1000MHz HBM was at 24,786. It looks like your CPU is holding your overall score back; my 5820K @ 4.5GHz netted me a physics score of 17,507 and an overall score of 20,047.


----------



## By-Tor

Quote:


> Originally Posted by *wellkevi01*
> 
> That's a pretty good Graphics Score. My 64 LC with +50% power limit and 1000MHz HBM was at 24,786 for me. It looks like your CPU is holding your overall score back. My 5820k @ 4.5GHz netted me a Physics Score of 17,507 and an overall score of 20,047.


The card does run great and stays pretty darn cool.. I'm really thinking of making the jump to AMD with a Ryzen 7 now, as this CPU, though it still runs very well, is what, 5 gens old now? And I have been wanting to get back to AMD for some time.

Cranked the CPU up to 5GHz and pulled this run...


----------



## TrixX

Quote:


> Originally Posted by *Suppenkoch*
> 
> Also here again: What particular version of Windows and driver you are on ? Or was at this time ?
> 
> Thanks for the input. So there might be a case not having trouble. Your fps are awesome. Mine don't raise as high as he isn't utilizing my Vega fully.
> What Windows and driver version are you both on ? Which resolution and graphic preset you used in Wildlands ? Did you change something in the crimson driver setup ?
> 
> By myself I am on 2560x1080 and Ultra preset of Wildlands. Having Win10 Insider 17035 and 17.11.2 Crimson driver.


As I mentioned, I'm putting together a proper bench run of numerous games and drivers on Win 10. I could test Win 8 and 7 (if I can get 7 to run) later too. First I'll be doing more stability testing on Threadripper, then moving on to Vega. Should be starting tomorrow or Monday, depending on some paperwork I need to do.


----------



## Naeem

Quote:


> Originally Posted by *Suppenkoch*
> 
> Also here again: What particular version of Windows and driver you are on ? Or was at this time ?
> 
> Thanks for the input. So there might be a case not having trouble. Your fps are awesome. Mine don't raise as high as he isn't utilizing my Vega fully.
> What Windows and driver version are you both on ? Which resolution and graphic preset you used in Wildlands ? Did you change something in the crimson driver setup ?
> 
> By myself I am on 2560x1080 and Ultra preset of Wildlands. Having Win10 Insider 17035 and 17.11.2 Crimson driver.


I am at 2560x1440 on High settings. My Windows 10 version is 1709 and I am using the 17.11.1 WHQL driver. I didn't change anything in the drivers, but I run a +50% power target and raised the HBM2 to 1000MHz in MSI Afterburner.

Try going to High or Very High settings; Ultra is just too much for any GPU.


----------



## ducegt

Quote:


> Originally Posted by *By-Tor*
> 
> The card does run great and stays pretty darn cool.. I'm really thinking of the jump to AMD with a Ryzen 7 now as this CPU though still runs very well is also what 5 gens old now and I have been wanting to get back to AMD for sometime.
> 
> Cranked the CPU up to 5ghz and pulled this run...


Looking good. A stock 64 LC on the balanced profile does between 24k and 25k while GPU-Z or Afterburner reports 267W. It takes an additional 100W for 1k more points, which just isn't worth it for most use cases.


----------



## By-Tor

Quote:


> Originally Posted by *ducegt*
> 
> Looking good. Stock 64LC on balanced profile does between 24k and 25k while GPUZ or afterburner reports 267w. It takes an additional 100w for 1k points and just isn't worth it for most use cases.


I asked a buddy who owns a 1080 Ti running on an AMD 1700X at 3.8GHz to run the same Fire Strike to see what he would get. Surprisingly, he only scored in the low 18,000s with a graphics score in the mid 22,000s, but his Physics score almost hits 20,000...


----------



## geriatricpollywog

Quote:


> Originally Posted by *Chaoz*
> 
> I couldn't use any LC BIOS, system would crap itself with a wrong BIOS, which I needed to hard reboot. I tried 2 different BIOS versions and they would not flash.
> I tried the last one and it did flash eventually.
> 
> All were Sapphire version LC BIOS'.
> So you can't just use any BIOS, imho.


Quote:


> Originally Posted by *Suppenkoch*
> 
> Also here again: What particular version of Windows and driver you are on ? Or was at this time ?
> 
> Thanks for the input. So there might be a case not having trouble. Your fps are awesome. Mine don't raise as high as he isn't utilizing my Vega fully.
> What Windows and driver version are you both on ? Which resolution and graphic preset you used in Wildlands ? Did you change something in the crimson driver setup ?
> 
> By myself I am on 2560x1080 and Ultra preset of Wildlands. Having Win10 Insider 17035 and 17.11.2 Crimson driver.




I have Windows 10 Home with the Fall Creators Update. I am running GR Wildlands at Very High settings in 3440x1440 in DX12 with a 7700k at 5.0GHz. With a high res and a fast CPU I am mostly GPU limited, which may explain the 100% GPU utilization.
Quote:


> Originally Posted by *By-Tor*
> 
> The card does run great and stays pretty darn cool.. I'm really thinking of the jump to AMD with a Ryzen 7 now as this CPU though still runs very well is also what 5 gens old now and I have been wanting to get back to AMD for sometime.
> 
> Cranked the CPU up to 5ghz and pulled this run...


Nice! That's more like it. I think RAM is a factor too. I am 600 points behind Bullzoid on HWBot even though my core and HBM are set higher (watercooled). However, his 7700k is 400MHz faster than mine and he has crazy fast RAM, something like 4200 or 4400MHz at CL14.

http://hwbot.org/benchmark/3dmark_-_fire_strike/rankings?hardwareTypeId=videocard_2879&cores=1#start=0#interval=20


----------



## MAMOLII

Any suggestions? Hotspot is +40°C over the GPU, 91°C during gaming.


----------



## Reikoji

Quote:


> Originally Posted by *MAMOLII*
> 
> any suggestions? +40c over gpu 91c during gaming


Don't worry about the hotspot temp. 51°C on the core isn't bad for the LC card.


----------



## wellkevi01

So I've found that Radeon Settings likes to crash spontaneously, and sometimes after closing something that was loading the GPU at 100% (like a game, or XMR-Stak) the GPU goes down to P0, then slowly climbs back up to P7 and stays there. Anyone else having these issues?


----------



## geriatricpollywog

Quote:


> Originally Posted by *wellkevi01*
> 
> So I've found that Radeon Settings likes to spontaneously crash and sometimes after closing something that was loading the GPU 100%(like a game, or XMR-Stak) the GPU goes down to P0, but then slowly goes back up to P7 and stays there. Anyone else having these issues?


Yes. Download custom resolution utility and run restart64 to get back to P7 faster.


----------



## Grummpy

Watching this nonsense HOTSPOT temp drop instantly tells me it's just an estimated temp, so just ignore it.
It's pure nonsense, is what my observations tell me.


----------



## Grummpy

I had problems with crashing after I used some program to update the drivers on my system.
Resetting resolved my problem.
Avoid those tools; they just mess things up.


----------



## Reikoji

Quote:


> Originally Posted by *wellkevi01*
> 
> So I've found that Radeon Settings likes to spontaneously crash and sometimes after closing something that was loading the GPU 100%(like a game, or XMR-Stak) the GPU goes down to P0, but then slowly goes back up to P7 and stays there. Anyone else having these issues?


Radeon Settings crashes can be related to that, and also not be. When you see the GPU tach slowly rise from idle to max and stay there, that's because the drivers crashed. That will also lead to Radeon Settings crashing and restarting, but the latest version of Wattman has been so buggy it will crash without that happening, just out of nowhere when you are applying or changing a setting.

There is supposedly going to be a big update that changes that whole UI that will hopefully put an end to that.


----------



## Grummpy

My 2700k is still going strong, not bad for a 6-year-old CPU,
but I hear I'm losing around 3% performance with PCIe 2.0.


----------



## Naeem

Quote:


> Originally Posted by *Grummpy*
> 
> my 2700k is still going strong not bad for a 6 y old cpu..
> but im loosing around 3% performance i here with pci2.0


If you want to know how bad it is, run 720p low.


----------



## Grummpy

Why would I run 720p?
It's irrelevant.
It's using a tool for a task I'll never use it for,
like putting tractor tires on a push bike.


----------



## MAMOLII

I have modded the cooler, adding an Arctic Liquid Freezer 120 while keeping the stock AMD plate!
With air cooling I had core 84, HBM 92, hotspot 102!! Now 51 / 54 / 91!

It's winter now and I have an 18°C room, but in summer it goes up to 32°C, so the hotspot will get close to 105.

Maybe adding a fan blowing on the PCB plate would drop the PCB and hotspot temps, because the AMD fan is useless now;
the front plate is off, so the fan can't push air across the whole PCB with the case open.








The other trick is to mount a silicone thermal adhesive pad on the backplate with a small heatsink on it!

The funny thing is that I had read about high hotspot temps before, so I spread the paste over the whole GPU with a card to be sure.


----------



## geriatricpollywog

Quote:


> Originally Posted by *MAMOLII*
> 
> I have modded the cooler adding a artcic liquid freezer 120 while keeping the stock amd plate!
> with air cooling i had core 84 hmb 92 hotspot 102!! now 51- 54 -91!
> 
> now its winter i have 18c room temp but summer goes up to 32 room so the hot spot will reach close to 105
> 
> maybe if i add a fan blowing the pcb plate drop the pcb temp and hot spot cause the amd fan is useless now
> the front plate is off so the fan cant push the air to all pcb with the case open
> 
> 
> 
> 
> 
> 
> 
> 
> the other trick is mount a silicon thermal adhesive pad on the back plate with a small heatsink on!
> 
> the funny thing is that i read about high hotspot temps before and i spread the paste to all gpu with a card to be sure


Vega likes custom water.


----------



## MAMOLII

Temps are good. Keep in mind that to mount it with the stock AMD VRM plate I had to add a 3mm-thick copper plate between the GPU and the pump.








Custom cooling is always better! This is just a cheap water-cooling solution that cost 65 euros!


----------



## cg4200

Quote:


> Originally Posted by *Reikoji*
> 
> dont worry about the hotspot temp. 51c isnt bad for
> the lc card


Wow, I have the LC too and my hotspot is only about 20 degrees above the core: roughly 50-56°C gaming and 75°C hotspot. Yours seems way high. I was testing this morning before cleaning the card up and changing out the TIM. If I were you I would do the same;
your hotspot can't get worse than that. Good luck, and if that doesn't help I would maybe RMA.


----------



## fursko

Quote:


> Originally Posted by *cg4200*
> 
> wow I have lc also and my hotspot is about 20 degrees above roughly 50-56 gaming and 75 hotspot yours seems way high.. I was testing this morning before clean card up and change out tim ... if I were you I would do the same
> your hotspot can't get worse than that.. good luck if not I would maybe rma


Did you change the TIM? Is it easy? Can you share some photos?


----------



## PontiacGTX

Quote:


> Originally Posted by *Naeem*
> 
> if you wanna know how bad it is run 720p low


Quote:


> Originally Posted by *Grummpy*
> 
> why would i run 720.
> its e relevant.
> its using a tool i will never use for that task.
> like asking to put tractor tires on a push bike.


Superposition isn't CPU-bound.


----------



## Grummpy

Proof hotspot is pure BS:
an instant spike to 80+°C and an instant drop to 40°C = a nonsense resistance reading.


I completely ignore it now.


----------



## HGooper

Why does the power limit in Wattman only allow me to increase it +1 instead of +50? My card is a 56 on the air 64 BIOS, btw.


----------



## xzamples

Will be getting a RX Vega 56 soon.

Any tips or advice?

Should I flash it to Vega 64?


----------



## geriatricpollywog

Quote:


> Originally Posted by *xzamples*
> 
> Will be getting a RX Vega 56 soon.
> 
> Any tips or advice?
> 
> Should I flash it to Vega 64?


Flash to 64. Play with it for a week then add a waterblock.


----------



## cg4200

So I was getting up to 59°C on the GPU, 7-8 degrees more on the HBM, and about 20 degrees above the GPU for hotspot. When I tested I had a 120mm fan blowing on the doublers in back for all testing, and today it's 79 in here where the other day it was 72 running Firestrike, so not exactly apples to apples, but other than those 7 degrees everything else is the same.








So:

1. I used a T6 to take the backplate screws off, then removed all the screws from the back (the "void if removed" sticker too, sh..t).
2. Took the two screws off the airflow mounting bracket, like in my pic.
3. Took apart the wires carefully. I used Arctic cleaner, but rubbing alcohol works fine; be careful between the HBM stacks, I took a sewing needle with soft toilet paper to clean between them.
4. I had EK thermal pads left over from other blocks (I would have preferred some TG or better) and put pads everywhere, even where there were none before.
5. When I applied the TIM (TG paste, though I've had great luck with Gelid Extreme as well) I used more than I usually would: I filled the space between the HBM stacks and mushed it in like putty, then did the credit-card method, and put it back together.
6. Lastly I used 1mm EK pads on everything I could on the back before putting the backplate on; they made good contact.

Then some Firestrike. Again today it's 79 upstairs, and the max after 4 runs is 49 GPU, 51 HBM, 71 hotspot. So for me it made a big difference, but it won't be the same for everyone. Good luck.


----------



## Reikoji

Quote:


> Originally Posted by *Grummpy*
> 
> Proof hot spot is pure BS.
> Instant spike to 80+ c and a instant drop to 40 c = nonsense resistance reading.
> 
> 
> I completely ignore it now.


I wouldn't say it's BS, but it's not a very important value. If it's at least below the max hotspot temp in the BIOS, it's all good. The hotspot temp can also be reduced on the LC card by adding a powerful fan to the radiator in pull, or just swapping the stock fan for a stronger one. Note by strong fan I mean one that emphasizes cooling rather than staying silent; silent fans are worthless to me. The temperature of the liquid going back into the card affects not only GPU/HBM temperature but hotspot temp as well.
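A rough way to see why the return-liquid temperature matters: at steady state the die sits at roughly coolant temperature plus power times the block's thermal resistance, so every degree shaved off the coolant comes straight off core, HBM and hotspot alike. A minimal sketch (the 0.08°C/W block resistance and 250W load are illustrative assumptions, not measured Vega LC figures):

```python
# Steady-state die temperature is bounded below by the coolant temperature:
# T_die ~ T_coolant + P * R_block. A stronger radiator fan lowers T_coolant,
# which shifts every on-die reading down by the same amount.
# R_block and the 250W load are illustrative guesses, not Vega LC specs.

def die_temp(coolant_c, power_w, r_block_c_per_w=0.08):
    """Steady-state die temp: coolant temp plus the rise across the block."""
    return coolant_c + power_w * r_block_c_per_w

for coolant in (30, 35, 40):
    # at the assumed 250W load, each degree off the coolant is a degree off the die
    print(f"coolant {coolant}C -> die ~{die_temp(coolant, 250):.0f}C")
```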


----------



## Grummpy

I'm using 4 Gentle Typhoon fans in push/pull on a 240mm rad.
Temps are great, and it's silent as well.


----------



## PontiacGTX

check this


http://imgur.com/3RoFD


----------



## madmanmarz

Quote:


> Originally Posted by *Naeem*
> 
> anyone else here getting driver crash every single game even on stock settings ? seems to be the issue all drivers from past month or so i tried clean install as well as ddu also tried removing msi after burner did not help either it just crahses randomly into game some time after 10 min some time after 20 min
> 
> i have vega lc


I've been KILLING myself trying to figure out what's been going on with my game freezes (the kind you can alt-tab out of) and random crashes. It was 100% due to Enhanced Sync! It was only happening in some games, but it was a killer. With VSync on or off there are no issues whatsoever, no need to lock states or anything.


----------



## barbz127

Quote:


> Originally Posted by *Grummpy*
> 
> im using 4 gentle typhoon fans push pull on a 240 mm rad.
> temps are great, its silent aswell.


Hi Grumpy,

How did you go about mounting this cooler?

Cheers


----------



## Grummpy

I'm looking forward to seeing my Vega get faster and faster.
Quote:


> Originally Posted by *barbz127*
> 
> Hi Grumpy,
> 
> How did you go about mounting this cooler?
> 
> Cheers


I ordered 4x 2mm pins from Amazon and 8 lock nuts and bolts, plus a tube of Thermal Grizzly paste.
I used the bracket on the back to hold the cooler in place because of the restricted space around the VRM.
Using that spring bracket gave me perfect mounting pressure; just bolt it down until the nuts are level.

https://www.amazon.co.uk/gp/product/B01A9VUFGS/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
https://www.amazon.co.uk/gp/product/B00B3RICZE/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1
https://www.amazon.co.uk/gp/product/B00MJSO2Y6/ref=oh_aui_detailpage_o04_s00?ie=UTF8&psc=1
https://www.amazon.co.uk/gp/product/B00KBNWKSQ/ref=oh_aui_detailpage_o07_s00?ie=UTF8&psc=1
https://www.overclockers.co.uk/id-cooling-frostflow-240-all-in-one-cpu-water-cooler-240mm-hs-008-id.html



Get a file so you can file down whatever obstructs it from fitting.
Once the paste has set you can remove the support clip I added.
Also, you would have to drill through one of the cooler's mounts, but you don't really need to add those.


The cooler sits centred with about 1mm of clearance from the VRM;
it's perfect for this job.

I will take some photos tomorrow for you.
If I had the choice again I would just buy the Vega Liquid; it's only 600 now,
and I only saved myself 40 pounds.


----------



## cg4200

Hey man, really nice job modding your card!
What are your gaming temps, HBM and hotspot, after reseating? Thanks.


----------



## SpecChum

Quote:


> Originally Posted by *madmanmarz*
> 
> I've been KILLLING MYSELF trying to figure out what's been going on with my game freezes that you can alt/tab out of and random crashes. It was 100% due to enhanced sync! This was happening only in some games but it was killer. VSync on or off and no issues whatsoever, no need to lock states or anything.


I mentioned this a while ago


----------



## madmanmarz

Quote:


> Originally Posted by *SpecChum*
> 
> I mentioned this a while ago


hey it's a long thread


----------



## SpecChum

Quote:


> Originally Posted by *madmanmarz*
> 
> hey it's a long thread


Yeah, sorry.

I should have put a smiley on the end of that; I wasn't having a go.

But yeah, I noticed last week ES was causing issues with a few games of mine.


----------



## madmanmarz

Quote:


> Originally Posted by *SpecChum*
> 
> Yeah, sorry.
> 
> I should have put a smiley on the end of that; I wasn't having a go.
> 
> But yeah, I noticed last week ES was causing issues with a few games of mine.


No worries mate, I saw this all over Reddit and elsewhere, so I guess it's been an ongoing problem for the last 6 months or so. I'm just relieved that's all it was; I was about to RMA.


----------



## SavantStrike

The prices for Vega are really sad, if you can even get one any more.









LTT had a video that claimed the Gigabyte 56 was going to have a MIR that took it to around $380 after rebate. I never found one at that price. Across the board, all the cards shot through the roof as soon as availability dried up.

I'm glad I got my V64 when I did, but it's frustrating that no new buyers can enter the space.


----------



## SpecChum

I got lucky, I only paid £455 for my Vega 64.

You'll struggle to get a Vega 56 for this price currently.


----------



## MAMOLII

Hotspot news!!!
Repasted and... ran a 10-minute Firestrike Ultra stress test! RX 56 flashed to the RX 64 BIOS, default volts 1.2V, power +50%.

Before:

After repaste:

Some loss on HBM, some gain on hotspot! I'll leave it here... should I flash my RX 56 to the liquid BIOS?
I see more people with water stay on the air 64 BIOS at around 1.125V and 1600MHz clocks.

Also, I see that hotspot is very sensitive to the power limit: going from 0 to +50% adds about 20°C.


----------



## diabetes

Quote:


> Originally Posted by *MAMOLII*
> 
> (...)
> some loss on hmb some gain on hotspot! I leave it here....should i flash my rx 56 to liquid bios?
> i see more people with water stay on air 64 bios around 1.125volts and 1600 mhz clocks
> 
> also i see that hot spot is very sensitive to power limit from 0 to 50% +20c


Depends on whether you can get your card cool and stable with the liquid BIOS. Some cards can't do the stock clocks of the liquid BIOS, but it allows more than 1.2V vcore. If your temps and power supply allow pushing past 1.2V (power usage explodes past that voltage too), go for the liquid BIOS. If your card isn't stable with it and you want to keep power usage at a somewhat reasonable level, stay on the air BIOS.

On my card (V56 on EKWB) at least, the liquid BIOS and more than 1.2V only grant diminishing returns. I can get 1680MHz stable at 1.15V, 1720 already needs 1.2V, and 1750MHz needs 1.25V+ with GPU-only power usage exceeding 350W (that's more than 420W for the whole card).
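To put numbers on those diminishing returns, here's a quick back-of-envelope using the first-order dynamic-power rule (P scales roughly with frequency times voltage squared). The clock/voltage pairs are the ones quoted above; the scaling rule itself is a textbook approximation, not a Vega measurement:

```python
# Dynamic power scales roughly with frequency * voltage^2, which is why the
# last ~70MHz past 1.2V costs so much. Operating points below are the ones
# quoted in the post (V56 on EKWB); the formula is a generic approximation.

def relative_power(f_mhz, v, f_ref_mhz, v_ref):
    """Power relative to a reference operating point, assuming P ~ f * V^2."""
    return (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

base = (1680, 1.15)   # stable at 1.15V
push = (1750, 1.25)   # the 350W+ GPU-only point

print(f"clock gain: {push[0] / base[0] - 1:.1%}")             # roughly 4%
print(f"power cost: {relative_power(*push, *base) - 1:.1%}")  # roughly 23%
```

So by this rough rule, the last ~4% of clock costs around a quarter more power, which lines up with the 350W figure.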


----------



## MAMOLII

I have an EVGA SuperNOVA 1300W, so the card is going to hit its own limits first.

I want to see if it can reach the 1700MHz range! It runs at 1620 now. If not, back to the air BIOS...


----------



## Grummpy

I ignore hotspot temps; it's just a digitized resistance reading from a sensor.
If it were important, AMD would have put it in their Wattman temp graph, and they didn't.
Here are my temps, watt usage, latency and frame rate.
Default clock speeds, but memory clocked to 1100 at default voltage.
Core voltage -100mV.


----------



## Grummpy

Hotspot is just an estimate, ignore it.
Just look at how the temp rose and dropped in an instant.
That tells me to ignore it.


----------



## porschedrifter

So I seem to be having a very odd issue when coming back from hibernation. If I let my system wake and sit at the login screen, or even log in and go into Windows, after about a minute the screens go dark, the system becomes unresponsive and the GPU fan goes to 100%. When I restart, or shut down and boot up, I have zero problems; it only seems to happen on wake from hibernation.

I checked the logs and BlueScreenView and there are no BSODs logged at all.
Anyone else having an issue like this? It seems to have started with the last AMD driver install.


----------



## Grummpy

If you don't own this game, get it.
It's easy to run and damn scary, lol.




This is a test I did months ago, 4K on a Vega 64 with the stock cooler, to see how it performs.



stress test


----------



## jmoonb

Seeing so much hotspot talk, I figured I'd share my results.

For reference, my settings were P6 1537MHz/1020mV, P7 1592MHz/1070mV, HBM 1100MHz/960mV, +50% PL, though it's lower now as I'd rather keep my room cool than gain a few frames at ridiculous power usage.

After my first successful paste job on my Barrow block (I had plenty of initial failures), I got a max temp of 45 core, 47 HBM and 58 hotspot after letting Heaven run for about 20-30 minutes. Superposition would keep the core and HBM temps roughly the same, but the hotspot would easily kick up to 65. At full voltage and power the hotspot would hit 70 in Heaven and up to 78 in Superposition. I was satisfied with this, as I thought it was good for a cheap water block, and spent the next few days benching at various settings trying to find the sweet spot, when I noticed the HS temps creeping up.

The hotspot was now hitting 64, and 72 in Superposition, at the same settings. So I took the card apart, redid the TIM better than the first time and... well, the core and HBM dropped a bit to 42 and 44, but there was practically no change in hotspot temps. Puzzled, I looked at my card carefully. The only thing that looked off was that the 1mm thermal pads used on the MOSFETs didn't seem to be making good contact with the block. Could the VRM have anything to do with HS temps? Most here have already said no, but I was still curious. I looked around and found some users adding a fan to the back of the card to reduce HS temps a bit. My attempts at this failed, but it gave me an idea: as the doublers on the back of the card get really hot, I cut a bunch of 1x1cm heatsinks in half and attached them to all the doublers.

The result?



Compare this to:



A full 1 degree cooler... on EVERYTHING!?!? Before writing this off as margin of error, I tested it 3 times each way, with the window open for the no-heatsink runs, just to make sure. Unfortunately the tiny heatsinks were too small to do much, but something was definitely up.

At this point I really didn't want to open up my block again and thought I could live with it. It definitely wasn't as bad as others, so maybe I was being too picky. Then came 17.11.2... this driver made HWiNFO show VRM temps! And boy, were they getting hot, as if the water block was barely doing anything at all (60-80 depending on settings). The HBM VRM temp was the most interesting, though: it didn't match the HS temp at idle, but under load they almost looked like they tracked each other... Hmmm. Even if it didn't reduce hotspot temps, I decided to open the block up again in hopes of bringing VRM temps down.

I purchased a bunch of new 0.5, 1, 1.5 and 2mm thermal pads and placed them on my VRMs to make sure everything had perfect contact with the water block: 1.5mm for the MOSFETs, 0.5mm for the chokes, and 1mm on the rest. I also added 2mm pads on the doublers on the back so that they had good contact with the backplate. Did the same TIM job, closed it up and...



What a difference! Even more interesting, I could now maintain a higher clock where before it would eventually crash. I temporarily installed 17.11.2 (I don't use it day-to-day as it seems buggy to me) to check my VRM temps, and they were now in the 40-50 range, with the HBM VRM again strangely close to the HS temps...

Anyway, there might be something to this, but it would take more samples to confirm. Definitely worth a shot if you have high hotspot temps or think your VRM is running hot.
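One plausible reason the pad swap worked, sketched with the standard conduction formula R = thickness / (k × area): a thicker pad in full contact beats a thinner pad bridged by an air gap, because still air conducts heat roughly 100x worse than pad material. The conductivities and the ~1cm² MOSFET footprint below are generic assumed values, not the specs of the pads actually used:

```python
# Conduction resistance of a flat layer: R = t / (k * A).
# A 1.5mm pad that is actually seated beats a 1mm pad with an air gap,
# because air is a terrible conductor. k values and area are assumptions.

PAD_K = 3.0     # W/(m*K), typical mid-grade thermal pad (assumed)
AIR_K = 0.026   # W/(m*K), still air
AREA = 1e-4     # m^2, roughly 1cm^2 per MOSFET (assumed)

def resistance(thickness_m, k, area=AREA):
    """Thermal resistance of a flat layer, in degrees C per watt."""
    return thickness_m / (k * area)

seated = resistance(1.5e-3, PAD_K)                              # 1.5mm pad, full contact
gapped = resistance(1.0e-3, PAD_K) + resistance(0.2e-3, AIR_K)  # 1mm pad + 0.2mm air

print(f"1.5mm pad seated:      {seated:.0f} C/W")
print(f"1.0mm pad + 0.2mm air: {gapped:.0f} C/W")  # an order of magnitude worse
```

Even a 0.2mm air gap dominates the total resistance, which fits the observation that matching pad thickness to the component height mattered more than the pad brand.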


----------



## VicsPC

Quote:


> Originally Posted by *jmoonb*
> 
> (...)


Very nice. My hotspot temps on water are already only ~12°C above core and 9°C above HBM, so without changing thermal pads I already have a better delta between core and hotspot than you do, which is a bit odd. I could get new pads, but I'd hate to drain my loop, so I may try it next cycle. I hear EK uses some cheap generic pads, so who knows. I used the factory backplate as well and didn't put any pads under it.


----------



## jmoonb

Quote:


> Originally Posted by *VicsPC*
> 
> Very nice, my hotspot temps on water are already ~12°C above core temps and 9°C above HBM temp so without changing thermal pads i already have better delta between core and hotspot then you do which is a bit odd. I could change it and get new pads but id hate to drain my loop so may try it next cycle. I hear ek does use some cheap normal pads so who knows. I used the factory backplate as well and din't put any pads on it.


For me, I upgraded to EK pads from the cheap ~1.2mm ones my block came with, which didn't even work.

I think I just lost the lottery with my card; being a Vega 56 with a BIOS mod, I can't expect too much. Having said that, at the same settings on the 56 BIOS with HBM at 950MHz, my hotspot won't go over 45, so there is that...


----------



## VicsPC

Quote:


> Originally Posted by *jmoonb*
> 
> For me I upgraded to EK pads from the cheap 1.2ish mm my block came with which didn't even work.
> 
> I think I just lost the lottery with my card and being a Vega 56 with bios mod, I can't expect too much. Having said that, at the same settings on the 56 bios with HBM at 950Mhz my hot spot wont go over 45 so there is that...


Yeah, that's nice. Unfortunately I still have case temps of around 23-24°C here in southern France; if I opened the window my temps would drop like mad, but it gets a bit chilly. I've peaked at 48°C HS and 37/40°C core/HBM today, so not bad at all.


----------



## SavantStrike

Quote:


> Originally Posted by *jmoonb*
> 
> For me I upgraded to EK pads from the cheap 1.2ish mm my block came with which didn't even work.
> 
> I think I just lost the lottery with my card and being a Vega 56 with bios mod, I can't expect too much. Having said that, at the same settings on the 56 bios with HBM at 950Mhz my hot spot wont go over 45 so there is that...


Which block are you using and how much of each size did you order?

I might want to have pads on hand when I do my three V64 cards this weekend. I bought Bykski blocks, so who knows what they come with.


----------



## SpecChum

New drivers out: http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.3-Radeon-RX-Vega-Hotfix-Release-Notes.aspx

Bit vague on the fixes though: "An intermittent crash issue may be experienced on some Radeon RX Vega series graphics products".

I'm going to try DiRT Rally with Enhanced Sync on again...


----------



## Grummpy

Hotspot is nonsense, ignore it.
Stop being weak-minded, for heaven's sake, and use some logic, people. Proof:


An estimated temp with instant rises and instant drops = pure nonsense.


----------



## jmoonb

Quote:


> Originally Posted by *SavantStrike*
> 
> Which block are you using and how much of each size did you order?
> 
> I might want to have pads on hand when I do my three V64 cards this weekend. I bought Bykski blocks so who knows what they come with.


I bought:
1 x 0.5mm 120x24mm
1 x 1mm 120x16mm
1 x 1.5mm 120x24mm
1 x 2mm 120x16mm

I think the 1mm is enough to cover all 3 cards, and the 2mm to cover 2, or just under that, for the rest.


----------



## jmoonb

Quote:


> Originally Posted by *Grummpy*
> 
> Hot spot is nonsense ignore it.
> stop being weak minded for heavens sake and use some logic people
> proof.
> 
> 
> estimated temp instant temp rise instant temp decrease = its pure nonsense


I don't know about that... When I did a crappy initial job installing the block, my PC completely shut down when the hotspot reached over 110°C, and the card wouldn't turn back on for nearly a minute. I repeated this more than once, so the card itself does seem to enforce a threshold there. My GPU and HBM were around 50 when this happened.


----------



## SavantStrike

Quote:


> Originally Posted by *jmoonb*
> 
> I bought:
> 1 x 0.5mm 120x24mm
> 1 x 1mm 120x16mm
> 1 x 1.5mm 120x24mm
> 1 x 2mm 120x16mm
> 
> I think the 1mm is enough to cover all 3 cards, and the 2mm to cover 2, or just under that, for the rest.


2 of everything but the 1mm and the 2mm should cover me then?

Thanks. +Rep


----------



## Grummpy

Quote:


> Originally Posted by *jmoonb*
> 
> I don't know about that.. When I did a crappy initial job on installing the block, my PC completely shutdown when the hotspot reached over 110C and the card didn't turn back on for nearly a minute. I repeated this more then once so the card itself seems to be making sure it doesn't pass a certain threshold. My GPU and HBM were around 50 when this happened.


I don't fully understand what it is,
but when I see instant temp increases and instant temp drops, it tells me it's some kind of estimate that has nothing to do with temperature.
If it were a real temperature there would be a gradual increase and a gradual drop.
There are neither, which tells me it's an estimate:
it's estimating what it thinks the temp is, not what it actually is.
And what use is that, if that's the case?


----------



## SpecChum

Quote:


> Originally Posted by *SpecChum*
> 
> New drivers out: http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.3-Radeon-RX-Vega-Hotfix-Release-Notes.aspx
> 
> Bit vague on the fixes tho "An intermittent crash issue may be experienced on some Radeon RX Vega series graphics products"
> 
> I'm going to try DiRT Rally with Enhanced Sync on again...


Well, whatdyaknow, just tried a few DiRT Rally courses with Enhanced Sync on and no crash.

Didn't crash when loading Hard Reset like it did before, either.

Promising.

@madmanmarz try this new driver with ES on, it *seems* to be fixed. My limited testing is by no means conclusive.


----------



## SpecChum

Well, just played Hard Reset for half an hour with ES on, perfectly fine.

Nice.


----------



## madmanmarz

Quote:


> Originally Posted by *SpecChum*
> 
> Well, just played Hard Reset for half an hour with ES on, perfectly fine.
>
> Nice.


Yep, I'll give it a try. I see they fixed some intermittent crash.


----------



## jmoonb

Quote:


> Originally Posted by *Grummpy*
> 
> I dont fully understand what it is.
> but when i see instant temp increases and instant temp drops it tells me its some kind of estimate that has nothing to do with temperature.
> if it was to do with temperature their would be a gradual increase and gradual drop.
> there are none which tells me its an estimate.
> its estimating what it thinks the temps is not what it actually is.
> what use is that if that is the case.


Instant changes in temp don't really happen in large objects, but can and do happen often at the microscopic level. For example, on CPU cores like my 6800k: running IntelBurnTest will instantly get my core to go from 20 to 50 as soon as the test starts, and drop back to 20 when I turn it off. Or on my 2500k with a stock cooler, I remember it instantly going from something like 30 to 90 on all 4 cores with the same test before I shut it down in fear, which would drop it down just as fast. This led me to get a decent Noctua heatsink. The CPU package itself, however, does take longer to heat up and cool down.

With the Vega, my guess is like the others'. The area is not in direct contact with the heatsink, so its temps can jump much higher and faster than the actual core/HBM before beginning to level off. It might also be a tiny area, just like a CPU core, so the instant changes in temp aren't unheard of. All speculation at this point, as AMD haven't said anything, but for me the hot spot temp drop correlated very well with a drop in VRM temp, so I can live with that.
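The "tiny area heats almost instantly" intuition can be made concrete with a lumped-capacitance time constant, tau = m*c/(h*A): less thermal mass per unit of cooled area means faster temperature swings. A minimal sketch with illustrative values only (none of these are measured Vega numbers):

```python
# Lumped-capacitance time constant tau = (m * c) / (h * A):
# time for a body to cover ~63% of a step change toward its surroundings.
# All values below are illustrative assumptions, not measured Vega data.

def time_constant(mass_kg, c_j_per_kgk, h_w_per_m2k, area_m2):
    """Return the thermal time constant in seconds."""
    return (mass_kg * c_j_per_kgk) / (h_w_per_m2k * area_m2)

# Hypothetical "hotspot" region: ~1 mm^2 x 1 mm thick sliver of silicon.
tau_hotspot = time_constant(
    mass_kg=2.3e-6,       # silicon density ~2300 kg/m^3 over 1 mm^3
    c_j_per_kgk=700,      # specific heat of silicon
    h_w_per_m2k=10_000,   # aggressive effective cooling coefficient
    area_m2=1e-6,
)

# Hypothetical whole package: far more mass per unit of cooled area.
tau_package = time_constant(
    mass_kg=0.05,         # ~50 g of package/heat-spreader mass
    c_j_per_kgk=700,
    h_w_per_m2k=10_000,
    area_m2=1e-3,         # ~10 cm^2 cooled area
)

print(f"hotspot tau ~{tau_hotspot:.3f} s, package tau ~{tau_package:.1f} s")
```

Under these assumed numbers the hotspot's time constant comes out sub-second while the package's is several seconds, which is consistent with a near-instant hotspot reading alongside slow-moving core/HBM temps.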


----------



## By-Tor

Ran superposition to see what I would get for a hot spot temp..


----------



## Trender07

Ugh, even with much higher volts than I've always used (I'm running p6 at 1060 mV and p7 at 1125 mV with stock clocks), COD WW2 keeps crashing. Yeah, I know WW2 is hard on Vega, that's why I'm using it for stability testing, but IDK, maybe my voltage floor (HBM mV) isn't enough? Running 965 mV now.

EDIT:

Even with stock volt COD WW2 still crashes lul


----------



## Grummpy

Quote:


> Originally Posted by *By-Tor*
> 
> Ran superposition to see what I would get for a hot spot temp..


You got great gpu,
same power you got more than 50 mhz higher clock than mine. V nice


----------



## Grummpy

I really dislike this hotspot because I'm seeing more than a 50C difference in my temps.
There is no build-up of temp, it's instant, which doesn't make much sense to me.
I'm using the best paste (Thermal Grizzly) as well, so refitting won't make a difference.


----------



## Grummpy

Quote:


> Originally Posted by *jmoonb*
> 
> Instant change in temps don't really happen in large objects but can and does happen often at the microscopic level. For example, on cpu cores like my 6800k; Running intelburntest it will instantly get my core to go from 20 to 50 as soon as the test starts and drop back to 20 when i turn it off. Or on my 2500k with a stock cooler I remember it instantly going from something like 30 to 90 on all 4 cores with the same test before I shut it down in fear which would drop it down just as fast. This lead me to get a decent noctua heatsink. The CPU package itself however does take longer to heat up and cool down.

How did you measure the VRM temp?
I haven't found any software yet that measures it.
I had to buy an external thermometer to test mine.


----------



## Grummpy

43C difference.


Do I have to refit my cooler and buy another tube of Thermal Grizzly?


----------



## TrixX

Quote:


> Originally Posted by *Grummpy*
> 
> 43 C difference.
> 
> 
> Do i have to refit my cooler and buy another tube of thermo grizzly.


Use HWiNFO to get the VRM sensors to show. It doesn't show in GPU-z

Can't hurt to put some good thermal tape on the card for things like VRMs and doublers. They get damn hot. For some reason my loop is running hot at the moment, so I'm going to have to drain it and re-paste the lot. Not sure why, but loop temp has gone up massively even though ambient hasn't gone up much.


----------



## Kyozon

Have we found out what the hotspot on Vega actually is? I previously thought it was maybe the interposer, but I'm not sure.


----------



## Reikoji

My card has never taken well to having even the VRM voltage polled by older versions of HWiNFO. The newest beta adds much more VRM data to the polling; however, instead of just causing stutter, my card simply dies, leading to zero tach bars and a system lockup... Anyone else's card not take well to HWiNFO polling, particularly VRM info?


----------



## By-Tor

Quote:


> Originally Posted by *TrixX*
> 
> Use HWiNFO to get the VRM sensors to show. It doesn't show in GPU-z


I've used HWINFO64 for years and my 290X cards showed the VRM temps of the video card, but the Vega 64 I have does not. Maybe I just need a newer version... What version are you using that shows them?

v5.38 here and I see they have a v5.60 available.


----------



## madmanmarz

Quote:


> Originally Posted by *Grummpy*
> 
> 43 C difference.
> 
> 
> Do i have to refit my cooler and buy another tube of thermo grizzly.


Some of ours just get that hot. I usually run around 30-40C on core/HBM and 60-70 on hotspot, but when I push clocks/voltages I'm about 45C core/HBM and 80-90 hotspot. I am relatively positive that making sure you have the entire die/HBM and the gaps in between covered makes the biggest difference, but only up to a point.


----------



## Reikoji

I'd say my hotspot is under control, whenever I let my card be cool, that is.

Default balanced settings, complete with its horrible fan curve that lets everything boil.


with fans spinning up.

Even with everything left to boil by the silly fan curve, the hotspot didn't get to its critical 105C. All is well.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Reikoji*
> 
> I'd say my hotspot is under control, whenever I let my card be cool, that is.
> 
> Default balanced settings, complete with its horrible fan curve that lets everything boil.
> 
> 
> with fans spinning up.
> 
> even if everything is left to boil from silly fan curve, hotspot didnt get to its critical 105c. All is well.


You have some dips in GPU usage and HBM frequency. Do you get a lot of stutters in gameplay?


----------



## Reikoji

Quote:


> Originally Posted by *0451*
> 
> You have some dips in GPU usage and HBM frequency. Do you get a lot of stutters in gameplay?


With the particular game I am playing, yes. Not worried about it; ancient DX9 garbage :3


----------



## madmanmarz

Quote:


> Originally Posted by *SpecChum*
> 
> Well, just played Hard Reset for half an hour with ES on, perfectly fine.
>
> Nice.


no change for me in the division with enhanced sync on....


----------



## Grummpy

Well, I killed my card.
I have crazy pixels all over the screen after redoing my thermal paste.
Wasted 500 pounds.
Ah well, it sucks my 3-year cover is void.
I cleaned it, thinking I may have got some paste on the chips, but the card is still busted.
It was working fine, I had no reason to strip it.
No idea what went wrong, to be honest.
Just move on with life and forget it.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Grummpy*
> 
> Well i killed my card.
> i have crazy pixels all over the screen after redoing my thermal paste.
> wasted 500 pounds.
> ah well it sucks my 3 years is void,
> i cleaned it thinking i may of got some paste over them chips but card still busted.
> it was working fine i had no reason to strip it
> no idea what went wrong to be honest.
> just move on with life and forget it.


I had the same thing happen to me last night with my old Fury X. I'll count my blessings that it wasn't my Vega!


----------



## diabetes

Quote:


> Originally Posted by *Grummpy*
> 
> Well i killed my card.
> i have crazy pixels all over the screen after redoing my thermal paste.
> wasted 500 pounds.
> ah well it sucks my 3 years is void,
> i cleaned it thinking i may of got some paste over them chips but card still busted.
> it was working fine i had no reason to strip it
> no idea what went wrong to be honest.
> just move on with life and forget it.


Sorry to hear that. You have my sympathies.

Are you 100% certain that the card is dead? No half-plugged-in Displayport cable or dirt/thermal grease on the connector? Did you try the other ports too (including the HDMI one)? It could even be that the display cable just went bad.

Do you have a pic of these crazy pixels? As long as they don't look like dead VRAM, there is still a chance.


----------



## Grummpy

Yeah, I did all the checks.
I think refitting the cooler for a 4th time broke something on the PCB.
The GPU and memory are fine but the PCB isn't.



Lost a good card;
that's the risk we take when we mod and void the cover.

Got myself a £500 badge.

I will buy myself a new one and move on.
https://www.overclockers.co.uk/gigabyte-radeon-rx-vega-64-xtx-8gb-hbm2-pci-express-liquid-cooled-graphics-card-aqua-pack-gx-19k-gi.html

no more modding for me it isnt worth the risk.

anyone want these ?


----------



## diabetes

After seeing this pic, I cannot sleep. Too much gore.


----------



## Grummpy

RIP MY VEGA


http://imgur.com/vrb7Y


I feel like a pet has just died on me.
2 months old, 500 pounds.
Not happy.
I won't be modding again.


----------



## hyp36rmax

Quote:


> Originally Posted by *Grummpy*
> 
> RIP MY VEGA
> 
> 
> http://imgur.com/vrb7Y


What's the narrative here? What do you think happened? You did one hell of a custom cooled VEGA, I suppose it was bound to happen with this kind of setup.


----------



## Grummpy

Quote:


> Originally Posted by *hyp36rmax*
> 
> What's the narrative here? What do you think happened? You did one hell of a custom cooled VEGA, I suppose it was bound to happen with this kind of setup.


I think it was lack of rear support,
as I glued the VRM coolers in place after the cooler was fitted.
Removing the cooler put strain in places because the VRM was being held in place by the glue.
A PCB trace broke.
I removed it 4 times; this was one time too many.

I should have used a second bracket and not relied on the PCB to take the strain.
My own stupid fault.


----------



## hyp36rmax

Quote:


> Originally Posted by *Grummpy*
> 
> i think it was lack of rear support .
> as i glued the vrm coolers in place after the cooler was fitted.
> removing the cooler put strains in places because the vrm was being held in place with the glue.
> a pcb line broke.
> 4 x i removed it this was one time to many.
> 
> i should of used a second bracket and not rely on the pcb to take the strain.
> my own stupid fault.


Lesson learned for all of us. Now someone work on a V2 of this cooler. Curious, does NZXT or Corsair have a hybrid bracket for the VEGA 64?


----------



## jmoonb

Quote:


> Originally Posted by *Grummpy*
> 
> RIP MY VEGA
> 
> 
> http://imgur.com/vrb7Y
> 
> 
> i feel like a pet has just died on me.
> 2 mths old 500 pounds.
> not happy
> i wont be modding again.


Damn. Sorry to see that man. I feel terrible as I just can't help but feel responsible with all that hotspot talk.. Hope you have great luck with the LC card.


----------



## Grummpy

£500 lesson.


----------



## Grummpy

Quote:


> Originally Posted by *jmoonb*
> 
> Damn. Sorry to see that man. I feel terrible as I just can't help but feel responsible with all that hotspot talk.. Hope you have great luck with the LC card.


If it wasn't for the hotspot I would still be good, but it doesn't matter, it's only money, life will go on.
I think when I glued the VRM cooler on, the GPU cooler was already fitted.
The glue dried, so when I removed the GPU cooler the PCB wanted to flex back, but the glue on the VRM was stopping that from happening.
I think I pulled a VRM away from its mounting as I removed the GPU cooler.


----------



## Grummpy

I have to remove this from my mind so I can move on.
Not in the habit of wasting money I don't have.
My whole reason for fitting my own cooler was to save money.
I wish I had just purchased the WC one now.
Sucks being me.


----------



## TrixX

Quote:


> Originally Posted by *Grummpy*
> 
> i have to remove this from my mind so i can move on.
> not in the habit in wasting money i dont have.
> my hole reason for fitting my own cooler was to save money.
> i wish i just purchased the wc one now.
> sucks being me


Next time just grab one of the waterblocks and an AC card, or even the LC card.


----------



## VicsPC

Quote:


> Originally Posted by *By-Tor*
> 
> I've used HWINFO64 for years and my 290X cards showed the VRM temps of the video card, but the Vega 64 I have does not. Maybe I just need a newer version... What version are you using that shows them?
> 
> v5.38 here and I see they have a v5.60 available.


A lot of us don't get VRM temperatures, just VRM voltage; it doesn't have to do with the software but with the card in question. I messaged Martin from HWiNFO and he said the same thing: the card just doesn't have VRM temperature sensors (extremely odd to me considering my R9 390 had them). But it is what it is, I'm not worried about it; the Vega 64 VRMs are some of the best I've seen in years.


----------



## geriatricpollywog

VRM temperatures are irrelevant since he is watercooled.


----------



## jmoonb

Quote:


> Originally Posted by *By-Tor*
> 
> I've used HWINFO64 for years and my 290X cards showed the VRM temps of the video card, but the Vega 64 I have does not. Maybe I just need a newer version... What version are you using that shows them?
> 
> v5.38 here and I see they have a v5.60 available.


For me it only worked on 17.11.2 using the launch 64 BIOS on my Vega 56. 17.11.3 seems to have removed it again, which leads me to believe this was the hotfix. I got this working on 17.11.2 by accident: wanting to reset my HWiNFO, I went to regedit -> Computer\HKEY_CURRENT_USER\Software\ and deleted the HWINFO folder. Restarted v5.60 and there it was.

I think all cards have it. My guess is that it was hidden due to instability. Having HWiNFO open would occasionally shut down my card completely. Values like VRM power IN and OUT would disappear and reappear randomly in the list before the crash. Oddly, this only happened during idle. It was stable enough to run games and benchmarks though, allowing a good solid reading of my VRM temps.


----------



## cplifj

17.11.3 only works for VEGA.

My 290X drivers got removed and no new ones got installed, so the 290X and, I assume, all other non-VEGA cards are end of life (DEAD) for AMD now??

Amateurism showing off its capabilities, that's what AMD means.

AM(ateur) D(evices)


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> 17.11.3 only works for VEGA.
> 
> my 290X drivers got removed and no new got installed , so the 290X and i asume all other non VEGA cards are end of life (DEAD) for AMD now ??
> 
> amateurism showing off it's capabilities , that's what amd means.
> 
> AM(ateur) D(evices)


That's a pretty harsh leap considering the .3's are a hotfix driver, not a main release...

Seems like kneejerk reactions are still alive and well then...


----------



## SpecChum

Quote:


> Originally Posted by *madmanmarz*
> 
> no change for me in the division with enhanced sync on....












What's odd is they seem to have fixed all my intermittent issues. Weird.


----------



## Naeem

I am still facing game crashes once GPU temp goes over 60. My GPU crashed twice in BF1 today with the new drivers. Now I am using an MSI AB fan profile which is keeping it under 60C; no crash yet. I'm on a Vega LC,
using driver 17.11.3.
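A custom fan profile like the MSI AB one described above is just a piecewise-linear map from temperature to fan duty. A minimal sketch of how such a curve evaluates (the breakpoints here are hypothetical, not the actual profile):

```python
# Piecewise-linear fan curve: map GPU temperature (C) to fan duty (%).
# The breakpoints are hypothetical, not an actual Afterburner profile.
CURVE = [(30, 20), (50, 40), (60, 80), (75, 100)]  # (temp_c, duty_pct)

def fan_duty(temp_c):
    """Linearly interpolate fan duty for a temperature, clamped at the ends."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(55))  # ramps hard between 50C and 60C
```

The steep 50-to-60C segment is what keeps the card pinned under 60C: the fan ramps aggressively right before the temperature the user wants to avoid.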


----------



## MAMOLII

Quote:


> Originally Posted by *Naeem*
> 
> i am still facing game crashes once gpu temp go over 60 my gpu crashed twice in bf1 today withn new drivers now i am using msi ab fan profile wich is keeping it under 60c no crash yet im on vega lc
> using driver 17.11.3


Do you have the stock Vega water cooler or custom?
Yeah, BF1 crashes all the time; I had it with my Fury X and now with my Vega! Check if that happens in other games.
If you search for BF1 crashes on Google, it's a common fact.


----------



## Naeem

Quote:


> Originally Posted by *MAMOLII*
> 
> u have stock vega water or custom?
> yea bf1 crahses all the time i had it with my furyx now with my vega! check if that happens in other games
> if you search for bf1 crashes on google its a common fact


I have the Vega 64 Liquid edition. It just crashed again on me; my GPU was at 55C and HBM2 was at 60C.


----------



## MAMOLII

Yes, your error is game related! I had the same error with the Fury X, and now with my Vega, with different drivers and a fresh Win 10 install!! I ended up not playing BF1 anymore.


----------



## geriatricpollywog

Ghost Recon Wildlands


----------



## Naeem

I also get crashes in PUBG, and had it in Watch Dogs 2 and Wildlands as well, but it is very random; sometimes it does not crash with temps.


----------



## HGooper

Does anyone know why I am only allowed to adjust state 6 and state 7 in WattMan? Does it have anything to do with Afterburner?


----------



## Grummpy

Goods Ordered (prices in GBP)
£499.99 x 1 - Gigabyte Radeon RX VEGA 64 XTX 8GB HBM2 PCI-Express Liquid Cooled Graphics Card - Aqua Pack

Sub-Total: 499.99
Shipping: 8.75
VAT: 101.75
Total: 610.49
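For anyone checking the numbers, the receipt is consistent with 20% UK VAT charged on goods plus shipping; a quick sanity check (the 20% rate is an assumption about how the retailer computed it):

```python
# Verify the order totals, assuming 20% UK VAT on goods + shipping.
subtotal = 499.99
shipping = 8.75
vat = round((subtotal + shipping) * 0.20, 2)   # matches the 101.75 on the receipt
total = round(subtotal + shipping + vat, 2)

print(vat, total)
```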

I won't be modding this one.
The best way to forget my loss is to just replace it.

Still crying.


http://imgur.com/vrb7Y


----------



## LionS7

Quote:


> Originally Posted by *Naeem*
> 
> i have vega 64 liquid edition it just crashed again on me my GPU was 55c and HBM2 was at 60c


This is an unstable card. My R9 Fury X was like that at 1.23V; the game wanted 1.26V+ to stop showing that error. The card needs more juice. ...and yes, the drivers from the last two months are very bad. The last good ones are 17.9.3.


----------



## SpecChum

My card is still going strong on 915mV HBM, P6 and P7.

I get more FPS than I need, usually, and it stays reasonably quiet (for a blower).

I'm happy.


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> Ugh, even with much higher volts than I've always used (I'm running p6 at 1060 mV and p7 at 1125 mV with stock clocks), COD WW2 keeps crashing. Yeah, I know WW2 is hard on Vega, that's why I'm using it for stability testing, but IDK, maybe my voltage floor (HBM mV) isn't enough? Running 965 mV now.
> 
> EDIT:
> 
> Even with stock volt COD WW2 still crashes lul


Stock volt + stock hbm clocks ?


----------



## fursko

Quote:


> Originally Posted by *Naeem*
> 
> i also get crash in pubg as well also had it in watchdogs 2 and wildlands but it is very random some time it does not crash with temps


Bad experiences with a high-end, expensive product. Poor AMD. Nvidia offers consistently high FPS, very low power consumption, a problem-free experience, and good out-of-the-box settings. Radeon needs radical changes; they are way behind the competition.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> Goods Ordered (prices in GBP)
> £499.99 x 1 - Gigabyte Radeon RX VEGA 64 XTX 8GB HBM2 PCI-Express Liquid Cooled Graphics Card - Aqua Pack
> 
> Sub-Total: 499.99
> Shipping: 8.75
> VAT: 101.75
> Total: 610.
> 
> i wont be modding this one.
> best way to forget my loss is to just replace it
> 
> still crying.
> 
> 
> http://imgur.com/vrb7Y


This is a very good price, and you will get games too.


----------



## Aenra

610 quid is a very good price? You sure about that? He's not talking $, read again


----------



## HGooper

Quote:


> Originally Posted by *SavantStrike*
> 
> Which block are you using and how much of each size did you order?
> 
> I might want to have pads on hand when I do my three V64 cards this weekend. I bought Bykski blocks so who knows what they come with.


I'm using the Bykski Vega full-cover waterblock now; this thing is such a steal for the price I paid (got a good discount on 11.11 @ Taobao). I flashed my Vega 56 to the 64 LC BIOS and it's currently running at 1750MHz/1100MHz at default voltage; temp is 37~41C (HBM2 is a bit higher).

It does come with thermal pads, though I applied them to the VRMs only.


----------



## fursko

Quote:


> Originally Posted by *Aenra*
> 
> 610 quid is a very good price? You sure about that? He's not talking $, read again


Wow, you're right. This is more than 1080 Ti money. The Vega 64 LC deserves $600-650 maximum. Prices are insane.


----------



## Grummpy

I'm happy with the price.
I know Vega 64 will be faster than the 1080 Ti when game devs leverage its architecture.
I have little doubt about that, but time will tell if I'm right or wrong.
I'm still hurting that I wasted over 500 pounds.
Never voiding my 3-year cover again.

And no, the 1080 Ti is much more expensive:
https://www.overclockers.co.uk/pc-components/graphics-cards/nvidia/geforce-gtx-1080-ti

The extra 50 to 80 watts don't bother me.
It isn't a laptop, so MEH.
An extra 50 to 100 pounds in electricity if I use it a lot over 3 years.
Not even enough to recoup my losses if I had purchased a 1080 Ti.
So what people see as a selling point is actually a deficit
(unless it's a laptop).
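The "50 to 100 pounds over 3 years" estimate is easy to sanity-check; a back-of-envelope sketch, where the daily gaming hours and the electricity tariff are assumed figures, not the poster's actual usage:

```python
# Back-of-envelope extra electricity cost for 50-80 W of additional draw.
# Hours/day and price/kWh are assumptions, not the poster's actual figures.
extra_watts = 80
hours_per_day = 4
days = 3 * 365
price_per_kwh_gbp = 0.15  # assumed UK tariff circa 2017

extra_kwh = extra_watts * hours_per_day * days / 1000
cost_gbp = extra_kwh * price_per_kwh_gbp

print(f"~{extra_kwh:.0f} kWh extra -> ~GBP {cost_gbp:.0f} over 3 years")
```

Under these assumptions the worst case (80 W, 4 hours a day) lands just above £50 over three years, so the quoted £50-£100 range is plausible for heavier use or pricier tariffs.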


----------



## Grummpy

I'm feeling kind of numb.


----------



## Aenra

Grumps?

- alternate.de and aquatuning.de (where applicable) are good alternatives for you. The odd deal in amazon.de can also apply, never hurts to check.

- if you don't mind the "risk", Eastern EU online retailers can be even cheaper (and frankly, contrary to popular belief, just as reputable; but it depends on whom you go to).

Don't buy from overclockers man. Or anywhere UK-based come to that, waste of money 

P.S. Just read your last post. Dedicated:


----------



## SavantStrike

Quote:


> Originally Posted by *diabetes*
> 
> After seeing this pic, I cannot sleep. Too much gore.


Yeah, carnage for sure!
Quote:


> Originally Posted by *Grummpy*
> 
> RIP MY VEGA
> 
> 
> http://imgur.com/vrb7Y
> 
> 
> i feel like a pet has just died on me.
> 2 mths old 500 pounds.
> not happy
> i wont be modding again.


If the PCB hadn't snapped you might have been able to get the die re-balled by someone with a rework station and some patience.

What was the mounting mechanism on the back of the card? I think your idea was close to working, just not quite there.

Quote:


> Originally Posted by *HGooper*
> 
> I'm using Bykski Vega full cover waterblock now, this thing is such a steal for the price I paid(got good discount on 11.11 @ taobao). I flashed my Vega56 to 64LC bios and running at 1750mhz/1100mhz currently at default voltage, temp is 37~41C(HBM2 is a bit higher temp).
> 
> It does come with thermal pads for and I applied at VRM only though.


Were the pads decent? I'm considering just using them and calling it a day. I also bought on 11.11 and I paid about 149 for two blocks with dhl shipping included.

On 11.13 I found another Vega for sale and bought it. Then I bought a third block for 90 with dhl shipping. A week later I started building and figured out I needed a flow bridge that I bought with ups express shipping. Even across three orders I paid less than just three blocks from anywhere else.

I plugged my third Vega into a test chassis last night and it causes hard restarts of the system after 5 minutes. That may have to do with the fact that the test system has an Nvidia card in it and MSI Afterburner was freaking out over that. I really hope so, as Vega cards are now unobtainable again.


----------



## cplifj

Quote:


> Originally Posted by *TrixX*
> 
> That's a pretty harsh leap considering the .3's are a hotfix driver, not a main release...
> 
> Seems like kneejerk reactions are still alive and well then...


Yeah, that's what inconsistency from AMD gets.

The 17.11.2 full driver does not offer a clean-install option; this 17.11.3 hotfix does. And that after AMD has taught everyone to use third-party software like DDU to make sure drivers install correctly...

Quality control and standards in how to do things are not AMD's strongest specialties, are they?

Some of you want to sweet-talk away needing an engineering degree to run plain simple stuff nowadays, because manufacturers act like the wind, changing whatever whenever they like and leaving the consumer to figure everything out every time these little undocumented changes are made.

A real consumer does not sound like all the marketing manipulators on the internet; live with it and learn from it perhaps, instead of showing arrogance and shouting user error.

For companies doing only one thing, things seem obvious; they forget the consumer has to know every other company's little details. That's why they invented standards in the first place.

But I get it, demanding seriousness and quality makes you a Hitler, and they have no shame in calling you that openly in public either.


----------



## AlphaC

https://support.amd.com/en-us/download/desktop?os=Windows%2010%20-%2064

lists Crimson ReLive Edition 17.11.2 Optional

I don't see 17.11.3 listed for people with R9 290X or RX 500 series.


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> Stock volt + stock hbm clocks ?


Yup, I left everything as it comes in Custom, just changed the fan speed, and it still crashes in COD WW2.


----------



## cplifj

Quote:


> Originally Posted by *AlphaC*
> 
> 
> https://support.amd.com/en-us/download/desktop?os=Windows%2010%20-%2064
> 
> lists Crimson ReLive Edition 17.11.2 Optional
> 
> I don't see 17.11.3 listed for people with R9 290X or RX 500 series.


You only get to 17.11.3 by clicking the direct link to the Vega drivers just below that screen.

The 17.11.2 driver itself does not give a warning about 17.11.3 being available via the Crimson update function either. Maybe that only works in a single-GPU system.


----------



## TrixX

Quote:


> Originally Posted by *cplifj*
> 
> yeah, that's what incosistency from amd gets.
> 
> 17.11.2 full driver does not offer a clean-install option, this 17.11.3 hotfix does . and when doing that after amd has tought everyone to use DDU 3 party software to make sure drivers install correctly.......
> 
> Quality control assurance and standards in how to do things are not amd's strongest specialties, are they?
> 
> Some of you want to sweet talk needing an engineers degree to run plain simple stuff nowadays because manufacturers act like the wind, changing whatever whenever they like it and leaving the consumer to figure everything out every time these little undocumented changes are done.
> 
> A real consumer does not sound like all the marketing manipulators on the internet, live with it and learn from it perhaps. Instead of showing arrogance and shouting user error.
> 
> For companies doing only one thing , things seem obvious, they forget the consumer has to know every other companies little details. That's why they invented standards in the first place.
> 
> But i get it, demanding seriousness and quality makes you a Hitler and they have no shame in calling you that openly in public either.


So I can add overreaction to your list too shall I? Geez jump off the damn cliff for a hotfix driver.

I didn't update to the .2 driver so didn't know about the clean install removal. Not a massive hardship though and a relatively new feature to Crimson drivers.

Also, you can take your hyperbole and shove it. I never said any of the crap you implied in this reply, just said you were perhaps being a bit harsh.


----------



## cplifj

You think it's fun trying to correct every other detail after every patch, of which there are several a week, almost every day?

For the last few weeks I have been troubleshooting every other thing I wanted to do on my computer, and I'm getting really tired of it.

Every time I have some spare time to game, there is this or that patch and everything needs to be reset. Goodbye spare time.

Yeah, I'm cranky. Why do I pay premium prices for everything when I have to do extra work myself to be able to enjoy it in MY TIME?????

You sweet-talkers are not convincing me I'm wrong at all.
Quote:


> Originally Posted by *TrixX*
> 
> So I can add overreaction to your list too shall I? Geez jump off the damn cliff for a hotfix driver.
> 
> I didn't update to the .2 driver so didn't know about the clean install removal. Not a massive hardship though and a relatively new feature to Crimson drivers.
> 
> Also you can take your hyperbole and shove it. I never said any of the crap you inferred in this reply, just said you were perhaps being a bit harsh.


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> you only get to 17.11.3 by clicking the direct link to vega drivers just below that screen.
> 
> the driver 17.11.2 itself does not give a warning about 17.11.3 being available either via crimson update function. maybe this only works in a single gfx card system.


Dude, you wouldn't and shouldn't navigate to the RX Vega driver page for 290X drivers. If you had selected the GPU you actually have, it would not offer 17.11.3 as an option for the 290X.



This is what you have when you select 290x for drivers.

The RX Vega hotfix 17.11.3 has ONLY RX Vega series graphics cards in the supported-GPU section.



See? The mistake is yours, not AMD's.


----------



## cplifj

Quote:


> Originally Posted by *Reikoji*
> 
> Dude you wouldn't and shouldn't navigate to drivers with RX Vega selections for 290x drivers. If you were to have selected the GPU you actually have, they would not have 17.11.3 as an option for 290x.
> 
> 
> 
> This is what you have when you select 290x for drivers.


DUDE, if you'd look at my sig, you'd notice I run a VEGA 64 LC (the Gigabyte one) as primary in the PCIe 3.0 x16 slot.

There is also my previous R9 290X as secondary in a PCIe 2.0 x4 slot.


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> DUDE, if you'd look at my sig, you'd notice i run a VEGA 64 LC (gigabyte one) as primary on the pcie 3.0 X 16.
> 
> There is also my previous R9290X as secondary on a pcie 2.0 X 4.


Then what are you complaining about? You want AMD to support your odd mixed-GPU CrossFire in their drivers? That's like one dude throwing a ***** fit about AMD or Nvidia not having driver optimization for his X2 Vega FE, 1080 Ti, and Titan XP rig.


----------



## cplifj

What I want is for AMD to not have the clean-install driver option in a hotfix driver unless it can install drivers for ALL the AMD cards possibly in the system.

As if I am the odd one out here running two cards in one computer, LOL. Go troll someone else, dude.

It's better to have this config than to enable the Intel iGPU, because that causes its own problems when running alongside AMD in certain situations and games.


----------



## Reikoji

Quote:


> Originally Posted by *cplifj*
> 
> What I want is for AMD to not offer the clean-install option in a hotfix driver unless it can install drivers for ALL AMD cards possibly in the system.
> 
> As if I am the odd one on here running two cards in one computer, LOL. Go troll someone else, dude.
> 
> It's better to have this config than to enable the Intel iGPU, because that causes its own problems when running alongside AMD in certain situations and games.


That wouldn't have made a difference. A normal install wouldn't leave old drivers there just for your 290x to play with.

You've been trolling here for the longest. AM(ature) D(evices) is a dead give-away of a troll. Take that crap to WCCF and get yourself a 1080 Ti or something. You clearly don't like AMD or their GPUs, as all you do is throw out complaints about something extremely silly.

If you are not just being a complain troll, do you REALLY think AMD is bothering to account for the likely only 1 person in the whole world having a RX Vega and a 290x in their PC simultaneously? No, they are not, and they shouldn't.


----------



## SavantStrike

Quote:


> Originally Posted by *Reikoji*
> 
> That wouldn't have made a difference. normal install wouldn't leave old drivers there just for your 290x to play with.
> 
> you've been trolling here for the longest. AM(ature) D(evices) is a dead give-away of a troll. take that crap to WCCF and get yourself a 1080 ti or something. You clearly dont like AMD or their GPU as all you do is throw some kind of complaint about something extremely silly.


Doesn't the normal install provide drivers for both the Vega and the 290X? Radeon Crimson ReLive usually finds drivers for every AMD product in the system.

To be fair, the clean install is also frustrating on Ryzen platforms. I went from the normal driver to the blockchain driver, and the clean install removed my X370 chipset drivers in the process. I'm still working on adding just the chipset drivers back, as the default package tries to install the latest gaming drivers in the process.

The blockchain drivers are in beta status and I'm happy AMD wrote them at all, so no complaints here.









As for crossfire Vega cards alongside nvidia, I've tried it! It didn't work well, but I suspect the problem was with MSI Afterburner. I hope it was the weird setup and not the card, as it's a new card I just tried to install. Don't mix Vega with nvidia cards though; I'm sure nvidia still disables PhysX (why that's not antitrust, IDK).


----------



## ontariotl

Quote:


> Originally Posted by *SavantStrike*
> 
> Doesn't the normal install provide drivers for both the Vega and the 290X? The radeon crimson relive usually finds drivers for every AMD product in the system.
> 
> To be fair the clean install also is frustrating for Ryzen platforms. I went from the normal driver to the block chain driver and the clean install removed my x370 chipset drivers in the process. I'm still working on adding just the chipset drivers back as the default package tries to install the latest gaming drivers in the process.
> 
> The block chain drivers are in beta status and I'm happy AMD wrote them at all, so no complaints here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As for crossfire Vega cards alongside nvidia, I've tried it! It didn't work well, but I suspect the problem was with MSI afterburner. I hope it was the weird setup and not the card, as it's a new card I just tried to install. Don't mix Vega with nvidia cards though, I'm sure nvidia still disables physX (why that's not anti trust IDK).


It appears the hotfix driver is for Vega cards only, which is why the link only works when you select a Vega product and not the R9 290X, which is still on 17.11.2.

cplifj simply disregarded the list of compatible cards on the 17.11.3 hotfix download page, which by the way is provided for every driver release. He would then have known that this driver is specific to Vega and could have waited until the hotfix is rolled into the next full driver release.


----------



## Trender07

COD WW2 keeps crashing even on balanced mode and with custom and stock volts...


----------



## ducegt

Quote:


> Originally Posted by *Grummpy*
> 
> still crying.
> 
> 
> http://imgur.com/vrb7Y


RIP. Your secret is safe with us. She died from natural causes. Better to die standing up than live on your knees.


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> COD WW2 keeps crashing even on balanced mode and with custom and stock volts...
> 
> I'm going to try flashing the liquid BIOS on my air Vega and then undervolt it, because I'm out of ideas


Sorry I don't know if this was already mentioned or suggested but have you tried older drivers?


----------



## Trender07

Quote:


> Originally Posted by *ontariotl*
> 
> Sorry I don't know if this was already mentioned or suggested but have you tried older drivers?


Well so far I've played it with the latest whql .2 drivers and now the latest .3 drivers


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> Well so far I've played it with the latest whql .2 drivers and now the latest .3 drivers


Go further back, I think these latest drivers might be very finicky.

Like LionS7 mentioned, try 17.9.3 as they were pretty good.


----------



## Trender07

Quote:


> Originally Posted by *ontariotl*
> 
> Go further back, I think these latest drivers might be very finicky.
> 
> Like LionS7 mentioned, try 17.9.3 as they were pretty good.


Well, the oldest drivers that support COD WW2 are the 17.7.11; maybe I'll try those


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> Well oldest drivers that support cod ww2 are the 17.7.11 maybe I try those


Sure, wouldn't hurt


----------



## Trender07

Quote:


> Originally Posted by *ontariotl*
> 
> Sure, wouldn't hurt



Nah, tried them and it still crashes, with different DirectX DEVICE_REMOVED errors each time, and it looks like ReLive recording makes it crash earlier

cod ww2


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> Nah, tried them and it still crashes, with different DirectX DEVICE_REMOVED errors each time, and it looks like ReLive recording makes it crash earlier


Honestly, I don't think it's the card or drivers then. Either it's a bad install of the game or Windows is buggered.

What I would do, which you may not be able to, is take another hard drive, do a clean install of Windows (whatever version you are using), install just the necessary drivers and the game, and see what it does.


----------



## LicSqualo

Hi all,

I write as a new and happy owner of a Vega 64 LC, just mounted in my PC (making room for the awesome R9 295X2).
I thank everyone, because I do not remember where or who pointed me to this page, but I have followed the indications.
I set Wattman to 950 mV for the HBM with an 1100 MHz clock, and set the P6 and P7 values to 1000 and 1050.
This is the result.



During the test I recorded a maximum board power draw of 250 W (SIV and GPU-Z indicate the same value) with a hotspot temperature of 65°C. I tried Sky Diver and my result is amazing....

Thank you all once again!

Time to play my games!


----------



## The EX1

Quote:


> Originally Posted by *Grummpy*
> 
> I have to remove this from my mind so I can move on.
> Not in the habit of wasting money I don't have.
> My whole reason for fitting my own cooler was to save money.
> I wish I had just purchased the WC one now.
> Sucks being me


Next time just buy a proper waterblock instead. No more Frankenstein cooling setups.









Good luck with the LC version. I wish they had detachable tubing and fittings so you could just plug them into a custom loop.


----------



## Grummpy

Quote:


> Originally Posted by *The EX1*
> 
> Next time just buy a proper waterblock instead. No more frankenstein cooling setups
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Good luck with the LC version. I wish they had detachable tubing and fittings so you could just plug them into a custom loop.


You can buy them.
My last 2 I did are still going strong.
But I'm not doing it again; it just isn't worth the risk.


I applied too much pressure using that spring and it damaged the PCB.
I'm still hurting; I don't like losing 500 pounds much.


----------



## Trender07

Quote:


> Originally Posted by *ontariotl*
> 
> Honestly I don't think it's the card or drivers then. Either its a bad install of the game or windows is buggered.
> 
> What I would do, which you may not be able to is take another hard drive, do a clean install of windows (whatever you are using) and just install the necessary drivers and the game and see what it does.


Well, IDK; it isn't like only the game crashes, it also hangs the Vega with all the P7 LEDs turned on, and I have to restart the PC


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> Well idk it isn't like only the game crashes it also crashes the vega with all the p7 leds turned on and have to restart pc


Which is why you need to do a process of elimination. Do you have an older GPU you can throw in there and test? And I'm going to ask you what most people here ask when someone has issues: what is your power supply and how old is it?


----------



## Trender07

Quote:


> Originally Posted by *ontariotl*
> 
> Which is why you need to do a process of elimination. Do you have an older GPU you can throw in there and test? And I'm going to ask you like most have asked in here so far when someone has issues is what is your power supply and how old is it?


Maybe my card is broken, because the HBM gets stuck locked at 800 MHz when the HBM voltage is 950 mV (it doesn't happen with 951 mV, for example), and it looks like I'm the only one with this problem (I tried flashing the liquid BIOS only to test this and it doesn't happen there, but of course I'm not keeping the LC BIOS on my air-cooled card).
Sadly I don't have any other GPU to test :/
About the PSU, it's new, an EVGA G3 650W, and playing COD with HWiNFO showing wattage it never goes past 250 W max, so I don't think my system is lacking power


----------



## ontariotl

Quote:


> Originally Posted by *Trender07*
> 
> Maybe my card is broken because it gets stuck on 800 mhz hbm locked when hbm volt is 950 mv(it doesnt happen with 951 mv i.e) and looks like im the only one with this problem (tried flashing the liquid bios only to test this and it doesn't happen but ofc im not keeping the lc bios with my air cooled card)
> Sadly I don't have any other gpu to test :/
> About the psu its new, its the evga g3 650w, and playing cod hwinfo showing W usage it never go past 250W max. so i dont think my system is lacking energy


You might be seeing 250 W through HWiNFO, but your real power would be measured from the wall, e.g. with a Kill-A-Watt meter. You are not factoring in any of the other components in your system, like your Ryzen. Is that overclocked as well?

Now, it being a new power supply, you may be OK, but I wouldn't overclock much.

Do you have a friend whose system you could try your card in? I'm only trying to suggest alternate testing methods, as I'm sure it will be hard to get another card in the meantime with the scarcity of product once again.
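To put rough numbers on the gap between software-reported GPU power and wall draw (a sketch only; the wattages and efficiency below are assumptions for illustration, not measurements from this thread):

```python
def wall_draw_estimate(gpu_w, cpu_w, other_w, psu_efficiency):
    """Estimate AC wall draw from DC component loads.

    GPU telemetry in HWiNFO reports only the card's own draw; a wall
    meter sees every component plus the PSU's conversion loss.
    """
    dc_load = gpu_w + cpu_w + other_w
    return dc_load / psu_efficiency

# Assumed example: 250 W GPU + 90 W CPU + 60 W board/drives/fans
# on a PSU running at 90% efficiency.
print(round(wall_draw_estimate(250, 90, 60, 0.90)))  # 444 W at the wall
```

So a card that "never goes past 250 W" in software can still correspond to well over 400 W at the wall once the rest of the system and PSU losses are counted.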


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> Yup, I left everythiing as it comes in Custom, just changed the fan speed and it still crashes cod ww2


Return it if you can. Get a replacement. How much power did you add?


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> Return it if you can. Get a replacement. How much power you add ?


Could it maybe be the Fall Creators Update? (Damn, those updates always give problems.) About power, I tried 0% and 50%


----------



## wellkevi01

Trender, have you tried other games?


----------



## Trender07

Quote:


> Originally Posted by *wellkevi01*
> 
> Trender, have you tried other games?


Yeah, I've played PUBG without problems


----------



## diggiddi

Quote:


> Originally Posted by *HGooper*
> 
> I'm using a Bykski Vega full-cover waterblock now; this thing is such a steal for the price I paid (got a good discount on 11.11 @ Taobao). I flashed my Vega 56 to the 64 LC BIOS and am running at 1750 MHz/1100 MHz currently at default voltage; temps are 37~41C (HBM2 is a bit higher).
> 
> It does come with thermal pads, though I applied them at the VRM only.


How big is your Rad and is the cpu in the loop?


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> Maybe could it be the Fall CU? (damn those updates always giving problems). About power I tried 0% and 50%


Try a clean install of everything: Windows, drivers... and do not touch Wattman. Try everything at default. Don't launch GPU-Z while your game is active. Some LC cards are not stable at stock, but dunno about air ones. If the game crashes again, try overvolting (not undervolting).


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> Try clean install everything windows, drivers... and do not touch wattman. Try everything default. Dont launch gpuz when your game active. Some LC cards not stable at stock but dunno air ones. If game crash again try overvolt (not undervolt).


Okay, will do, but I kinda have to touch Wattman to up the fan.
BTW, crashes aren't consistent; I mean I just played a full game (10 mins), then started another and it crashed in 2 mins


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> Okay will do so but I kinda have to touch wattman to up the fan .
> btw crashes aren't consistent I mean I just played a full game (10 mins) then started another and it crashed in 2 mins


Is it a driver crash?


----------



## SpecChum

Can someone just confirm an anomaly I'm seeing?

When I set DSR to 4k it completely ignores any voltage settings in OverdriveNTool and runs at 330W (+50% Power).


----------



## Trender07

Okay guys, I disabled HBCC and now played 3 games of COD WW2 without any crashing


----------



## SpecChum

This page seems to exist: http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.4-Release-Notes.aspx

17.11.4?

Interesting.

Can't access yet tho, get

Code:


401 UNAUTHORIZED


----------



## hotrodkungfury

Is this the annual update known as "redux" this year?


----------



## Naeem

I set the max speed of my Vega LC to 1740 MHz in MSI AB and it hasn't crashed yet


----------



## ducegt

17.11.3 crashed in CEMU and COD WWII. 17.11.1 is stable.


----------



## wellkevi01

Quote:


> Originally Posted by *ducegt*
> 
> 17.11.3 crashed in CEMU and COD WWII. 17.11.1 is stable.


Did you try 17.11.3 with HBCC disabled?


----------



## diabetes

Quote:


> Originally Posted by *SpecChum*
> 
> This page seems to exist: http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.4-Release-Notes.aspx
> 
> 17.11.4?
> 
> Interesting.
> 
> Can't access yet tho, get
> 
> Code:
> 
> 
> 401 UNAUTHORIZED


Computerbase.de has a prerelease version of 17.11.4 for their review of the PowerColor Vega 64 Red Devil.


----------



## wellkevi01

Quote:


> Originally Posted by *diabetes*
> 
> Computerbase.de has a prerelease version of 17.11.4 for their review of the PowerColor Vega 64 Red Devil.


I, along with probably every other Vega owner, am curious if the DSBR will be enabled with 17.11.4, or in the Redux update, or even at all.


----------



## Naeem

Quote:


> Originally Posted by *wellkevi01*
> 
> I, along with probably every other Vega owner, am curious if the DSBR will be enabled with 11.4.. or in the Redux update... or even at all.


what is DSBR ?


----------



## tarot

Quote:


> Originally Posted by *Naeem*
> 
> what is DSBR ?


well duh its this...
Double-strand break repair model of meiotic recombination

nah kidding, no idea

And I just installed the new ones, ya bastards... they seem to work well. I disabled HBCC a while ago as it really does nothing for me.

aha
Quote:


> The Draw-Stream Binning Rasterizer (DSBR) is an important innovation to highlight. It has been designed to reduce unnecessary processing and data transfer on the GPU, which helps both to boost performance and to reduce power consumption. The idea was to combine the benefits of a technique already widely used in handheld graphics products (tiled rendering) with the benefits of the immediate-mode rendering used in high-performance PC graphics.
> 
> Standard immediate-mode rendering works by rasterizing each polygon as it is submitted until the whole scene is complete, whereas tiled rendering works by dividing the screen into a grid of tiles and then rendering each tile independently. The DSBR works by first dividing the image to be rendered into a grid of bins or tiles in screen space and then collecting a batch of primitives to be rasterized in the scan converter. The bin and batch sizes can be adjusted dynamically to optimize for the content being rendered.
> 
> The DSBR then traverses the batched primitives one bin at a time, determining which ones are fully or partially covered by the bin. Geometry is processed once, requiring one clock cycle per primitive in the pipeline. There are no restrictions on when binning can be enabled, and it is fully compatible with tessellation and geometry shading. ("Vega" 10 has four front-ends in all, each with its own rasterizer.)
> 
> This design economizes off-chip memory bandwidth by keeping all the data necessary to rasterize geometry for a bin in fast on-chip memory (i.e., the L2 cache). The data in off-chip memory only needs to be accessed once and can then be re-used before moving on to the next bin. "Vega" uses a relatively small number of tiles, and it operates on primitive batches of limited size compared with those used in previous tile-based rendering architectures. This setup keeps the costs associated with clipping and sorting manageable for complex scenes while delivering most of the performance and efficiency benefits.
> 
> Pixel shading can also be deferred until an entire batch has been processed, so that only visible foreground pixels need to be shaded. This deferred step can be disabled selectively for batches that contain polygons with transparency. Deferred shading reduces unnecessary work by reducing overdraw (i.e., cases where pixel shaders are executed multiple times when different polygons overlap a single screen pixel).
> 
> Deferred pixel processing works by using a scoreboard for color samples prior to executing pixel shaders on them. If a later sample occludes or overwrites an earlier sample, the earlier sample can be discarded before any pixel shading is done on it. The scoreboard has limited depth, so it is most powerful when used in conjunction with binning.
> 
> These optimizations can significantly reduce off-chip memory traffic, boosting performance in memory-bound scenarios and reducing total graphics power consumption. In the case of "Vega" 10, we observed up to 10% higher frame rates and memory bandwidth reductions of up to 33% when the DSBR is enabled for existing game applications, with no increase in power consumption.
> 
> [Figure 8: HBCC vs. Standard Memory Allocation]
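The binning and deferred-shading flow described in that excerpt can be illustrated with a toy software sketch (purely hypothetical code: the real DSBR is fixed-function hardware, and the names, tile size, and simple depth test here are made up for illustration):

```python
from collections import defaultdict

TILE = 4  # bin size in pixels; the hardware tunes bin/batch sizes dynamically

def edge(a, b, p):
    # Signed-area edge function used for point-in-triangle coverage
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    d = [edge(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(x >= 0 for x in d) or all(x <= 0 for x in d)

def bins_touched(tri, width, height):
    # Bins overlapped by the triangle's bounding box
    xs = [v[0] for v in tri]; ys = [v[1] for v in tri]
    for ty in range(max(0, int(min(ys))) // TILE, min(height - 1, int(max(ys))) // TILE + 1):
        for tx in range(max(0, int(min(xs))) // TILE, min(width - 1, int(max(xs))) // TILE + 1):
            yield tx, ty

def pixels_in_bin(tri, tx, ty, width, height):
    for y in range(ty * TILE, min((ty + 1) * TILE, height)):
        for x in range(tx * TILE, min((tx + 1) * TILE, width)):
            if inside(tri, (x + 0.5, y + 0.5)):
                yield x, y

def render_batch(tris, width, height):
    """tris: list of (triangle, depth, color). Returns the shaded frame."""
    # 1. Binning pass: each primitive in the batch is sorted into the
    #    screen-space bins it touches (geometry is processed once).
    bins = defaultdict(list)
    for i, (tri, _, _) in enumerate(tris):
        for b in bins_touched(tri, width, height):
            bins[b].append(i)
    # 2. Walk one bin at a time. A per-bin "scoreboard" keeps only the
    #    front-most sample per pixel, so occluded samples are discarded
    #    before any (deferred) pixel shading happens.
    frame = {}
    for (tx, ty), ids in bins.items():
        scoreboard = {}
        for i in ids:
            tri, depth, color = tris[i]
            for px in pixels_in_bin(tri, tx, ty, width, height):
                if px not in scoreboard or depth < scoreboard[px][0]:
                    scoreboard[px] = (depth, color)
        # Deferred shading: only surviving (visible) samples get shaded
        for px, (_, color) in scoreboard.items():
            frame[px] = color
    return frame

# Two overlapping triangles; the nearer one (smaller depth) wins the overlap
far_tri = ([(0, 0), (8, 0), (0, 8)], 1.0, "red")
near_tri = ([(0, 0), (4, 0), (0, 4)], 0.5, "blue")
frame = render_batch([far_tri, near_tri], 8, 8)
print(frame[(0, 0)], frame[(6, 0)])  # blue red
```

The point of the sketch is the data-movement win the excerpt describes: each primitive is binned once, and shading work is spent only on samples that survive the per-bin scoreboard, which is what cuts overdraw and off-chip traffic.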


----------



## ITAngel

Quote:


> Originally Posted by *Trender07*
> 
> Okay will do so but I kinda have to touch wattman to up the fan .
> btw crashes aren't consistent I mean I just played a full game (10 mins) then started another and it crashed in 2 mins


I had a driver issue crashing my screen, plus high temps. I used DDU while not connected to the internet, then installed the downloaded Radeon driver. After that I reset the Radeon settings to balanced mode, then ran IO booster to install updates. Finally all my issues are gone and the Vega 64 is working great. Hopefully this gives you some ideas to troubleshoot your card issues. G'luck.


----------



## Grummpy

Is this Draw-Stream Binning Rasterizer being turned on any time soon?


----------



## TrixX

Quote:


> Originally Posted by *Grummpy*
> 
> is this Draw-Stream Binning Rasterizer being turned on any time soon ?


No word as yet though if it were to be I'd expect it with the Redux drivers. Not gonna get my hopes up though.


----------



## wellkevi01

No one outside of AMD knows. Before Vega launched, RTG was pretty much beating DSBR to death in their tech talks and architectural-overview slides, but just before launch, and now after, they pretty much gloss over it. I honestly won't be surprised if they've just completely ditched it by now.


----------



## steadly2004

Quote:


> Originally Posted by *wellkevi01*
> 
> No one outside of AMD knows.. Before Vega launched, RTG was pretty much beating DSBR to death in their tech talks and architectural overview slides, but just before launch, and now after, they pretty much gloss over it. I honestly won't be surprised if they just completely ditch it by now.


I think I remember a few posts showing it enabled, but not giving any FPS improvement. Maybe I'm remembering wrong. Now it's just off. Or maybe it was a flag in the code showing it on, but the test didn't show it on the monitor....


----------



## Grummpy

My card died, as you know.
But by magic, and after using a plastic card, I got myself another.


It's running 1700 core / 1100 HBM2, pulling 550 watts at the wall.
Why it says to use a 1000 watt PSU is beyond me.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> my card died as you know.
> but by magic and after using a plastic card i got myself another.
> 
> 
> running 1700 core 1100 hbm2 pulling 550 watt at wall.
> why it says to use a 1000 watt psu is beyond me.


This is the best-looking GPU. No competition.


----------



## Grummpy

Quote:


> Originally Posted by *fursko*
> 
> This is best looking gpu. No competition.




I pull 550 watts at the wall.
Why they print 1000 watts makes very little sense.

650 if I set turbo.
But with a manual undervolt I get it down to 550 with a memory overclock.
It's kind of insane how much power these can pull for very little gain.
I gained like 4 fps going from 420 watts to 650.


----------



## Grummpy

I have no idea how to claim my Aqua Pack games.
Just no instructions.


----------



## wellkevi01

Quote:


> Originally Posted by *Grummpy*
> 
> my card died as you know.
> but by magic and after using a plastic card i got myself another.
> 
> 
> running 1700 core 1100 hbm2 pulling 550 watt at wall.
> why it says to use a 1000 watt psu is beyond me.


Nice. I picked up the Gigabyte V64 LC a couple weeks ago from Newegg's eBay store for $625. One thing you can't tell from just looking at pictures is the great build quality. When I first held the card I was like, "dammnnn... this feels good!" The aluminum shroud and backplate just feel really nice and sturdy.


----------



## Grummpy

Got my Ryzen 1700 on order with a ROG board.
No memory as yet, but that's going to have to wait.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> 
> 
> i pull 550 watt at wall.
> why they print 1000 watt makes very little sense.
> 
> 650 if i set turbo.
> but manual under volt i get it down to 550 with memory over clock.
> its kind of insane how much power these can pull but for very little gain.
> i gained like 4 fps going from 420 watts to 650


Yeah, same. Those numbers are probably peak watts; average consumption should be lower. Some apps or games benefit by more than 4 fps, but overall it's not worth it. Best approach: 0% power limit, undervolt the core, and overclock the HBM.


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> got my ryzen 1700 on order with a rog board.
> no memory as yet but thats going to have to wait.


Here's a tip from someone who's built a dozen and runs one as a daily: get Samsung B-die RAM no matter what.


----------



## wellkevi01

17.11.4 is up.

https://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Crimson-ReLive-Edition-17.11.4-Release-Notes.aspx


----------



## ducegt

Quote:


> Originally Posted by *Grummpy*










I like the upside-down style as well. Since I can't OC my Vega LC much either without crazy power increases, I actually lowered my 7700K CPU OC from 5.2/5.1 at 1.424/1.328v to 5.0 at 1.264v because, in cases like ours, we're pulling air next to the motherboard VRM. Also, don't be worried if anyone advises you that the radiator should be above the pump/card; Vega's LC is designed to be mounted every which way.


----------



## barbz127

Would anyone know what load the onboard fan controller can handle?

I have a Morpheus cooler on way and looking at fans to pair with it.

Thank-you


----------



## owntecx

Hi, are these VRM temps fine?


----------



## Reikoji

Quote:


> Originally Posted by *Grummpy*
> 
> 
> 
> i pull 550 watt at wall.
> why they print 1000 watt makes very little sense.
> 
> 650 if i set turbo.
> but manual under volt i get it down to 550 with memory over clock.
> its kind of insane how much power these can pull but for very little gain.
> i gained like 4 fps going from 420 watts to 650


A 1000w PSU is recommended because you don't want to get too close to full PSU load, and it also accounts for the other power draws your system can go through besides the GPU.

1600w is still the best choice.
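The headroom argument can be put into rough numbers (an illustrative rule-of-thumb sketch; the 60% target and the example load are assumptions, not an AMD spec):

```python
def psu_rating_for(peak_dc_load_w, target_utilization=0.6):
    """Size a PSU so a transient peak sits at roughly 60% of its rating,
    leaving headroom for spikes and keeping the unit near the middle of
    a typical efficiency curve."""
    return peak_dc_load_w / target_utilization

# Assumed example: a system that can spike to ~600 W DC
print(round(psu_rating_for(600)))  # 1000
```

By that rule of thumb, a Vega system that can spike toward 600 W lands right on the 1000 W recommendation, even though its steady-state draw is far lower.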


----------



## diabetes

Quote:


> Originally Posted by *owntecx*
> 
> Hi, are these VRM temps fine?


Yes, but you only have very little headroom left. While the VRMs are specified for 115°C, I wouldn't let them exceed 95°C, for longevity reasons.


----------



## Naeem

Quote:


> Originally Posted by *fursko*
> 
> This is best looking gpu. No competition.


i got same card from gigabyte


----------



## jmoonb

Quote:


> Originally Posted by *owntecx*
> 
> Hi, are these VRM temps fine?


Pretty high, but within limits. It does seem awfully high for the voltage you're giving it. Are you passively cooling the VRMs with heatsinks, or using the original Vega plate with the fan? I'm assuming VRM 1 is core and VRM 2 is HBM just by the temp difference at idle. If that's true, it might give some legs to my theory that hotspot temps are closely related to HBM VRM temps under load.


----------



## owntecx

Quote:


> Originally Posted by *jmoonb*
> 
> Pretty high but within the limits. Does seem awfully high for the voltage you are giving it. Are you passively cooling the VRMs with heatsinks or using the original vega plate with the fan? I'm assuming vrm 1 is core and vrm 2 is HBM just by the temp diff at idle. If that is true it might give some legs to my theory of hotspot temps being closely related to HBM VRM temps on load.


Yes, using the Morpheus II with heatsinks on everything, so they are passively cooled. Anyone with a reference cooler to compare to? Or another Morpheus?


----------



## madmanmarz

Ooh! It seems with 17.11.4 I no longer have to set my P7 state twice for it to lock in; it sets straight away instead of defaulting to 1200 mV the first time.


----------



## SpecChum

Quote:


> Originally Posted by *madmanmarz*
> 
> Ooh! It seems with 11.4 I no longer have to set my p7 state twice for it to lock in, it sets it straight away instead of defaulting to 1200mv the first time.


Oh nice, I've just installed it. Will check when I set it again.

Did anyone get the chance to test whether it still completely ignores the voltage setting when VSR is used?


----------



## tarot

The new version of HWiNFO reads the VRMs a little differently, I think.
http://www.bluetarot.com/hardware/hdwrerev/vcrev/xfx-vega-64-air-review/

Scroll to the bottom. I just ran that test; I screwed up and put fs over it though.








It reads:
core
hbm
vr vdc
vr mvdd
hotspot

I'm OK with my temps, but they could be better, and the VRMs (assuming that's what I'm looking at) never get very high with the block on.

As for the hotspot theory in regard to HBM temps: maybe, BUT in the shot my HBM is 41 degrees and my hotspot is 66, so to me it's more related to core temps.
Until someone gets a good thermal reader on every section of the card and relates those temps to the reported temps, we will never know.


----------



## Grummpy

One was 450 watts (510 watts at the wall); the other was 610 watts at the wall.
And yet the same performance.
Just messed up.

I slashed 160 watts by overclocking the memory to 1100 MHz,
removing 100 MHz from the core, and dropping the voltage by 150 mV.
I lost 1 frame.
It's just messed up.
Vega's power usage is just unnecessary:
for that extra 5% in performance they are adding 25% to the power usage, maybe even more.
It's just stupid if you ask me.

default

Doing this saved me over 160 watts and I lost 1 frame in this test.
Everyone thinks overclock, overclock, faster faster faster.
How about AMD getting some reputation back and losing 5 to 8% in performance?
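The trade-off Grummpy describes is easy to quantify as performance per watt (the fps/wattage figures below are made-up round numbers in the spirit of the post, not his exact results):

```python
def perf_per_watt(fps, watts):
    # Simple efficiency metric: frames per second delivered per watt drawn
    return fps / watts

# Assumed example: 100 fps at 650 W (maxed out) vs 96 fps at 420 W (tuned)
maxed = perf_per_watt(100, 650)   # ~0.154 fps/W
tuned = perf_per_watt(96, 420)    # ~0.229 fps/W
gain = (tuned / maxed - 1) * 100
print(f"{gain:.0f}% better perf/W for a 4% fps loss")  # 49% better
```

With numbers in that ballpark, trading a few percent of frame rate for a couple hundred watts is a large efficiency win, which is the point being made.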


----------



## TrixX

Quote:


> Originally Posted by *Grummpy*
> 
> doing this saved me over 160 watts and i lost 1 frame in this test.
> Everyone thinks over lock overclock faster faster faster.
> How about AMD get some reputation back hey and loose 5 to 8 % in performance.


Welcome to the weird world of Vega tuning. I'm not sure "overclocking" is viable terminology any more, as the performance increases come from balancing different components rather than just upping the core MHz and voltage until crashing and then dialing back by one or two clicks...

Basically, it's finding the balance between HBM MHz, core MHz, power target, voltage, and (if on air or the stock LC) fan speeds.


----------



## Grummpy

Quote:


> Originally Posted by *TrixX*
> 
> Welcome to the weird world of Vega Tuning. I'm not sure Overclocking is the viable terminology any more as the performance increases are a balance of different components rather than just upping the Core MHz and Voltage until crashing and then dial back by one or two clicks...
> 
> Basically finding the balance between HBM MHz, Core MHz, Power Target, Voltage and if on air/stock LC Fan Speeds.


Yeah, it's crazy.
I lost 1 frame and saved 150 watts.
It might be down to the application, but still worth doing.

-100 MHz on the core, -150 mV, and HBM2 overclocked to 1100.
Losing just 1 frame is madness.


----------



## Grummpy

How do you claim your Aqua Pack games after you purchase the LC Vega 64?
I'm looking but not finding any info.


----------



## SpecChum

Yeah, OC'ing this thing is weird. Very weird. Even just dropping HBM from 1050 MHz to 950 MHz sends the wattage and core clock rocketing; I still don't really know why.

I had intended my quiet profile (915mV across the board) to only be used when I needed the fans to be quiet, but I actually use it all the time now.

It's quiet (obvs) and stable, and I don't notice the performance loss; it only works out to about 50 MHz on the core anyway, not really worth several dB of noise.

Overall tho, I'm very happy with my Vega.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> One was 450 watts 510 watts at wall the other was 610 watt at wall.
> and yet same performance.
> Just messed up.
> 
> i slashes 160 watts by overclocking the memory to 1100 mh
> removing 100 mhz of the core and dropping volt by 150 mv.
> I lost 1 frame
> its just messed up.
> Vega power usage is just unnecessary,
> for that extra 5 % in performance they are adding 25% to their power usage maybe even more.
> its just stupid if you ask me.
> 
> doing this saved me over 160 watts and i lost 1 frame in this test.
> Everyone thinks over lock overclock faster faster faster.
> How about AMD get some reputation back hey and loose 5 to 8 % in performance.


There is no Vega overclocking, only Vega tuning ^^ Don't use Afterburner; Wattman works well. Just overclock the HBM and undervolt your P6 and P7. Try COD WW2 as a stability test.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> How do you claim your aqua packs after you purchase the LC vega 64 ?
> im looking but not finding any info.


Where did you buy it? Talk to customer service.


----------



## diabetes

Get abord the hype train! Vega56 will deliver 1080Ti SLI performance after this









----------



## Newbie2009

Quote:


> Originally Posted by *diabetes*
> 
> Get abord the hype train! Vega56 will deliver 1080Ti SLI performance after this
> 
> 
> 
> 
> 
> 
> 
> :


Soon, you just have to wait a little longer


----------



## diabetes

Quote:


> Originally Posted by *Newbie2009*
> 
> Soon, you just have to wait a little longer


Yes, soon(TM)


----------



## Grummpy

Question, people, please.
This card here:
https://www.overclockers.co.uk/gigabyte-radeon-rx-vega-64-xtx-8gb-hbm2-pci-express-liquid-cooled-graphics-card-aqua-pack-gx-19k-gi.html
My question is, do you get free games with it or not?
I didn't see anything during my purchase saying I do; am I wrong to think I would?

https://www.amdrewards.com/amdrewards/files/AMD-Rewards-Radeon-RX-Vega.pdf


----------



## wellkevi01

Quote:


> Originally Posted by *Grummpy*
> 
> Question people pls.
> This card here
> https://www.overclockers.co.uk/gigabyte-radeon-rx-vega-64-xtx-8gb-hbm2-pci-express-liquid-cooled-graphics-card-aqua-pack-gx-19k-gi.html
> My question is do you get free games with it or not ?
> I dont see anything to say i do during my purchase am i wrong to think i was ?
> 
> https://www.amdrewards.com/amdrewards/files/AMD-Rewards-Radeon-RX-Vega.pdf


You'll have to contact Overclocker's customer support to see if they're participating in the promotion.


----------



## jmoonb

Quote:


> Originally Posted by *Grummpy*
> 
> Question people pls.
> This card here
> https://www.overclockers.co.uk/gigabyte-radeon-rx-vega-64-xtx-8gb-hbm2-pci-express-liquid-cooled-graphics-card-aqua-pack-gx-19k-gi.html
> My question is do you get free games with it or not ?
> I dont see anything to say i do during my purchase am i wrong to think i was ?
> 
> https://www.amdrewards.com/amdrewards/files/AMD-Rewards-Radeon-RX-Vega.pdf


I think you should have gotten a piece of paper with a code with your card.


----------



## hyp36rmax

Quote:


> Originally Posted by *Reikoji*
> 
> 1000w PSU is recommended because you don't want to get too close to full PSU load, and it also accounts for the other power draws your system has besides the GPU.
> 
> 1600w is still the best choice.


You better believe this! Lol! I had to upgrade my Seasonic PRIME 1000 Watt Platinum as soon as I got my second VEGA 64. It looks like I was triggering the OCP, with an instant shutdown. I now have an EVGA 1600 Watt T2 Titanium PSU. Runs like a champ.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> Yeah, OC'in this thing is weird. Very weird. Even just dropping HBM from 1050Mhz to 950Mhz send the wattage and core clock rocketing - I still don't really know why.
> 
> I had intended my quiet profile (915mV across the board) to only be used when I needed the fans to be quiet, but I actually use it all the time now.
> 
> It's quiet (obvs), stable and I don't notice the performance loss - it only works out about 50Mhz on the core anyway; not really worth it for several dB of noise.
> 
> Overall tho, I'm very happy with my Vega.


I gave up on my UQP (ultra quiet profile) at 905mV - 910mV. Currently at 950mV - 970mV and pretty stable with 1020MHz on HBM.


----------



## jbravo14

Not sure if it was already posted here, but the Vega EK blocks were at $70 - $76. I was able to grab 2.

You should grab some before the price goes back up.


----------



## ontariotl

Quote:


> Originally Posted by *Newbie2009*
> 
> Soon, you just have to wait a little longer


I'd rather they stop spending money on promoting it with fancy videos (money that could probably have hired another driver developer) and just release the damn thing.


----------



## ontariotl

Quote:


> Originally Posted by *jbravo14*
> 
> not sure if it was already posted here, but the vega EK block were at $70 - $76. Was able to grab 2.
> 
> you should grab some before the price goes back up.


It won't matter to me as I already have one, but it might be helpful to others where you got this deal from.


----------



## SpecChum

Quote:


> Originally Posted by *jbravo14*
> 
> I gave-up on my UQP (ultra quiete profile) - 905mv - 910mv. Currently at 950mv - 970mv and pretty stable with 1020mhz on HBM.


My HBM is at 1020MHz too. It's the highest frequency that seemed stable at the 95°C max temp. Not that it gets near that.

I like to keep it under 85°C to avoid the timing drop, which knocks off a couple of fps.

I probably need to retest now that they seem to have fixed the Enhanced Sync issue though; I had it on before, and it just might have been what was causing some of the driver restarts.


----------



## jbravo14

Quote:


> Originally Posted by *SpecChum*
> 
> My hbm is at 1020mhz too. It's the highest frequency that seemed stable at the 95c max temp. Not that it gets near that.
> 
> I like to keep it under 85c to avoid from timing drop which knocks off a couple of fps.
> 
> I probably need to retest now they seem to have fixed the enhanced sync issue tho, I had it on before and it maybe, just maybe, might have been that causing some of the driver restarts


My UV/OC was also stable on driver 17.11.1; since I upgraded to 17.11.2, I get games crashing after prolonged sessions.


----------



## jbravo14

Quote:


> Originally Posted by *ontariotl*
> 
> It won't matter to me as I already have one, but it might be helpful to others where you got this deal from.


On the ekwb.com website last night. I was just toying with the idea of water cooling my Vega and ordered 2, without any kits.

I've never tried a custom loop, but I probably will now.


----------



## ontariotl

Quote:


> Originally Posted by *jbravo14*
> 
> ekwb.com website last night, I was just playing with the idea of watercooling my vega, ordered 2, without any kits.
> 
> Have not tried a custom loop ever, but probably will try now.


Yeah it looks like that deal is done now.


----------



## ducegt

Quote:


> Originally Posted by *ontariotl*
> 
> I rather them stop spending money on promoting it with fancy videos which could have probably hired another driver developer and just release the damn thing.


I don't think that's how it would work. I kind of like the excitement that comes with the new overhauls.


----------



## ontariotl

Quote:


> Originally Posted by *ducegt*
> 
> I don't think that's how it would work. I kind of like the excitement that comes with the new overhauls.


I disagree, judging from all the comments I've read in this thread about issues or fixes with new driver updates, but at least those come out with little fanfare. And yet they still get discussed.

We are all excited, myself included, to get new overhauls, but for god's sake just release it, don't tease it! We had enough of that during the launch of Vega.


----------



## Grummpy

Is there a BIOS switch on the Vega 64 LC?



Nvm, I found it.
It's just under the pipes.


----------



## kundica

Quote:


> Originally Posted by *jbravo14*
> 
> My UV/OC was also stable on driver v17.11.1 when i upgraded to 17.11.2, i get games crashing after prolonged session.


I've found 17.11.2 and 17.11.3 to be very unstable on my system. Currently using 17.11.1 but I might try the .4 version just to see.


----------



## TrixX

I'm liking 17.11.4 currently. Running max undervolt and it's running sweet!

BTW, back on air for some testing and troubleshooting. I was having insane Hot Spot temps and a few other issues. It seems the pads I was using on my card for the VRMs weren't making good contact with the block, which does explain a bit. I had good paste coverage with the Hydronaut though, which doesn't explain why, after a couple of weeks, it went from never going over 44°C to never going under 40°C...

Ah well Conductonaut train here I come









Image below is Hydronaut paste with stock blower and undervolt as per ONT in the image


----------



## SpecChum

Quote:


> Originally Posted by *TrixX*
> 
> I'm liking 17.11.4 currently. Running max undervolt and it's running sweet!
> 
> BTW back on air for some testing and troubleshooting. Was having insane Hot Spot temps and a few other issues. Seems that the pads I was using on my card for the VRM's weren't making good contact with the block. Which does explain a bit. Had good CPU paste coverage though with the Hydronaut which doesn't explain why after a couple of weeks it went from never going over 44C to never going under 40C...
> 
> Ah well Conductonaut train here I come
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Image below is Hydronaut paste with stock blower and undervolt as per ONT in the image


What advantage does increasing P6 give?

I've never actually touched that one, only P7.


----------



## TrixX

Quote:


> Originally Posted by *SpecChum*
> 
> What advantage does increasing P6 give?
> 
> I've never actually touched that one, only P7.


Just using it to have the ability to downclock a bit and save some power. Not that under 100W in-game is power hungry anyway.









I guess I just got into the habit of it with games. If only we had easy changing of P0-P5 it would make things so much better for testing.


----------



## ITAngel

My Vega card looks so nice now with the EK block.


----------



## geriatricpollywog

Quote:


> Originally Posted by *ITAngel*
> 
> My Vega card looks so nice now with the EK block.


I'd be interested to see how it looks with the LEDs.


----------



## ITAngel

Quote:


> Originally Posted by *0451*
> 
> I'd be interested to ser how it looks with the LEDs.


I am not sure which LED you are talking about but here is a picture of the card on my current setup.


----------



## Naeem

Quote:


> Originally Posted by *Grummpy*
> 
> Is their a bios switch on the vega 64 LC ?
> 
> 
> 
> nvm i found it.
> its just under the pipes.


It needs to be on the left side for the turbo BIOS, unless your GPU is mounted upside down, in which case it needs to be on the right side.


----------



## geriatricpollywog

Quote:


> Originally Posted by *ITAngel*
> 
> I am not sure which LED you are talking about but here is a picture of the card on my current setup.


I thought the plexi EK-FC had LEDs. I have the acetal version which doesn't.


----------



## Grummpy

It's set to the position closest to the backplate.


----------



## Grummpy

Quote:


> Originally Posted by *ITAngel*
> 
> I am not sure which LED you are talking about but here is a picture of the card on my current setup.


I love how you did this; the layout is perfection.
Thanks for sharing. Most setups look garish with their obsession with clean bends and pretty liquid.


----------



## ITAngel

Quote:


> Originally Posted by *Grummpy*
> 
> I love how you did this; the layout is perfection.
> Thanks for sharing. Most setups look garish with their obsession with clean bends and pretty liquid.


Thank you, Grummpy! It's pretty hard to get this soft tubing going in this case. I have a second new 240 rad to put in the front, but I can't, because it made the setup very hard and messy. So I may be upgrading cases in the near future.


----------



## ITAngel

Quote:


> Originally Posted by *0451*
> 
> I thought the plexi EK-FC had LEDs. I have the acetal version which doesn't.


Oh, no, this one doesn't, but it looks like you can replace it with one that does. I'm not sure where you can find an LED version yet, but it would be cool to put one on.


----------



## geriatricpollywog

Quote:


> Originally Posted by *ITAngel*
> 
> Oh no this one doesn't but it looks like you can replace it with one that does. I am not sure where you can find a LED version yet but would be cool to put an LED version on it.


Nevermind they are pre-drilled.
Quote:


> The top also features two pre-drilled slots for 3mm LED diodes.


https://www.ekwb.com/shop/ek-fc-radeon-vega


----------



## ITAngel

Quote:


> Originally Posted by *0451*
> 
> Nevermind they are pre-drilled.
> https://www.ekwb.com/shop/ek-fc-radeon-vega


They give you a key to remove it, so technically you can add an LED or something along those lines later on. The little booklet the block comes with talks about it.


----------



## Grummpy

The box says to use a 1000 watt PSU, but on turbo it doesn't even hit 650 watts during stress testing.
I'm seeing 550 during gaming.
Why recommend an overkill 1000 watts if it isn't needed?
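One caveat worth keeping in mind with wall readings like these: a PSU's rating is its DC output capacity, while a wall meter reads AC input, so the two aren't directly comparable. A minimal sketch of the conversion, assuming a hypothetical 90% efficiency (the real figure depends on the unit and its load point):

```python
# Wall-socket watts are AC input; a PSU's rating is DC output. To compare a
# wall reading against the PSU's capacity, scale by the efficiency at that
# load. The 650 W wall figure is from the post; the 90% efficiency is an
# assumed round number, not a measured value for any particular unit.

def dc_load(wall_watts: float, efficiency: float) -> float:
    """Estimate the DC load on the PSU from an AC wall-meter reading."""
    return wall_watts * efficiency

load = dc_load(650.0, 0.90)   # watts actually drawn from the PSU
headroom = 850.0 - load       # margin left on, say, an 850 W unit
print(f"DC load: {load:.0f} W, headroom on an 850 W PSU: {headroom:.0f} W")
```

So a 650 W wall peak is closer to a ~585 W DC load under that assumption, which is consistent with posters below running these cards fine on quality 650-850 W units.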


----------



## wellkevi01

They specify a 1000 watt unit to accommodate some really low-quality power supplies.


----------



## Reikoji

Regardless... you definitely want to have at least a 1000w PSU for the LC vega or vega using LC bios.


----------



## Grummpy

Well, I'm using an 850 watt SilverStone Titanium (93 to 95% efficient) PSU and not going anywhere close to maxing it out.
I pulled more power when I ran 2 x 290 cards.
Quality counts.
I've got 200 to 300 watts to play with.

I'm just shy of 30000:
http://www.luxmark.info/


----------



## porschedrifter

Quote:


> Originally Posted by *Naeem*
> 
> I have the Vega 64 Liquid edition. It just crashed again on me; my GPU was at 55°C and HBM2 was at 60°C.


Disable the "Enhanced Sync" vsync option; it does not work. Use any other option.


----------



## Grummpy

I think he installed that driver booster program.
I had this exact same problem after running that.


----------



## VicsPC

I had some serious crashing in Rainbow Six Siege this past week. I thought it was related to my Vega (a bit odd since it only just started happening). I reinstalled all the C++ redistributables and disabled cloud sync and the Uplay overlay. So far it's been fine all day, no freezing/crashing. On another note, where are all my racing fans at? If anyone is up for some multiplayer F1 or DiRT, let me know; I love racing games.

https://www.humblebundle.com/games/codemasters-racing-2017


----------



## Grummpy

Stock clocks, stock voltages, no undervolt.
HBM2 to 1100 at +075 mV.
The memory on this card isn't as good as my last one;
I was able to hit 1200 with only a .025 mV increase on that card.

Power usage hit 650 at the wall at one point.


----------



## Soggysilicon

Quote:


> Originally Posted by *Reikoji*
> 
> Regardless... you definitely want to have at least a 1000w PSU for the LC vega or vega using LC bios.


No.

I'm running a 7+ year old 775 Thermaltake just fine... which is the same as the Corsair and Seagate rebrands... because they were all made in the same factory and rebadged...

If someone wanted to mGPU or CrossFire, sure... but not for a single card... no way...


----------



## Grummpy

Quote:


> Originally Posted by *Soggysilicon*
> 
> No.
> 
> Running a 7+ year old 775 Thermal Take just fine... which is the same as the corsair and seagate rebrands... cause they were all made in the same factory and rebadged...
> 
> If someone wanted to mGPU or crossfire sure... but not for a single... no way...


Yeah, I ran my last Vega 64 on a 650 watt power supply.
It didn't even hit 550 watts, ever...
I just think AMD is covering their backs, that's all.
But it looks bad for the rep, I think.


----------



## geriatricpollywog

I have a 1200 watt power supply and I am confident I could add a second watercooled card and overclock it. I have never seen more than 600 from the wall.


----------



## Rexer

Vega 64 in CrossFire. Still, I'd suspect 1000W is more than adequate. It sure gets hot. For sure... water cooling's in the future.


----------



## diggiddi

Noice. Must be only Crysis 3 and Deus Ex that can utilize the power of that CrossFire, eh?


----------



## Naeem

Quote:


> Originally Posted by *Grummpy*
> 
> Box says to use a 1000 watt psu but on turbo it dont even hit 650 watts during stress testing.
> Im seeing 550 during gaming.
> why put a overkill 1000 watt if it isnt needed.


Quote:


> Originally Posted by *Grummpy*
> 
> yeah i ran my last vega 64 on a 650 watt power supply.
> it didnt even hit 550 watts.ever.....
> I just think amd is covering their backs thats all.
> but it looks bad for the rep i think.


The 1000W recommendation has to do with the 18-core CPUs available on the market.


----------



## Mumak

Don't know if this has already been noticed here, but it seems that AMD has finally unblocked I2C access to the VRMs on Vega.



Not sure if it's since Crimson 17.11.3, but with 17.11.4 it's surely there.
So now you should be able to see all GPU VRM details in HWiNFO, including temperatures for both the GPU core and memory VRMs.
Unfortunately some users have started to experience crashes, and it happens when HWiNFO tries to access those VRMs via I2C. Some GPUs seem to be affected, some not. Don't know yet why this is...
If you're experiencing such a hard crash during HWiNFO startup, then disable the "GPU I2C Support" option.


----------



## jmoonb

Quote:


> Originally Posted by *Mumak*
> 
> Don't know if this has been already noticed here, but it seems that AMD has finally unblocked I2C access to VRMs on Vega.
> 
> 
> 
> Not sure if it's since Crimson 17.11.3, but with 17.11.4 it's surely there.
> So now you should be able to see all GPU VRM details in HWiNFO including temperatures for both GPU Core and Memory VRMs.
> Unfortunately some users started to experience crashes here and it happens when HWiNFO tries to access those VRMs via I2C. Some GPUs seems to be affected, some not. Don't know yet why is this..
> If you're experiencing such hard crash during HWiNFO startup, then disable the "GPU I2C Support" option.


I've seen it in 11.2 and 11.4. Generally it's been completely random when it comes to crashes. Sometimes completely stable; at other times, usually during idle, the screen would go blank. The usual symptom when crashing is some VRM-related values disappearing and reappearing constantly.


----------



## Mumak

Well, initially access to those VRMs was blocked, apparently due to a bug. I discussed this with AMD and they told me it would be fixed as of version 17.40 (which was the Crimson 17.10.2 series).
But the first time I saw these VRMs accessible on my reference RX Vega 64 was with 17.11.4.
Don't know yet why those crashes happen, nor how to fix them. Monitoring GPU VRMs has often been problematic and can lead to stuttering or performance loss, but such hard crashes were rather rare (well, except on Fiji, where a crash was guaranteed).


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> Well, initially access to those VRMs was blocked, apparently due to a bug. I discussed this with AMD and they told me it will be fixed since version 17.40 (which was Crimson 17.10.2 series).
> But the first time I saw these VRMs accessible on my reference RX Vega 64 was with 17.11.4.
> Don't know yet why those crashes happen, nor how to fix that. But monitoring GPU VRMs was often problematic and usually can lead to stuttering or performance loss. But such hard crashes were rather rare (well, except Fiji, where a crash was guaranteed).


If we disable GPU I2C, can we still monitor VRM temps, or is it all included in one package? I'm on water, so I'm not worried about VRM temps, but I would love to see what temps they're running at maxed out.


----------



## Spacebug

Great that VRM monitoring works, especially temperature; I will try the new driver later.

I did a hardmod for more HBM voltage. It seems like frequency scales with voltage.








Guess I'll have to see if decreased lifespan/degradation also scales with voltage :/


----------



## Mumak

Quote:


> Originally Posted by *VicsPC*
> 
> If we disable gpu irc can we still monitor VRM temps or it's all included in one package? I'm on water so not worried about VRM temps but would love to see what temps they're running at maxed out.


In that case you should still see the "GPU VR VDDC Temperature" and "GPU VR MVDD Temperature" values, as these are provided by the AMD drivers regardless of the "GPU I2C Support" setting in HWiNFO. But those values are available on some Liquid series cards only; I haven't seen them on other models, and even for the Liquid series they might not be available in some cases.


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> In that case you should still see the "GPU VR VDDC Temperature" and "GPU VR MVDD Temperature" values, as these are provided by AMD drivers regardless of the "GPU I2C Support" setting in HWiNFO. But those values are available on some Liquid series only, I haven't seen them on other models and even for Liquid series, they might might not be available in some cases.


So I just installed 17.11.4 and still have no VRM temps at all. I even had to use beta 5.61 because in 5.60 my HBM temps were reading 0. Not sure if I need to enable something, but for me under 17.11.4, with my air card on an EK block, it does not work.


----------



## Mumak

Quote:


> Originally Posted by *VicsPC*
> 
> So i just installed 17.11.4 and still have no vrm temps at all. I even had to use beta 5.61 because in 5.60 my hbm temps were reading 0. Not sure if i need to enable something but for me under 17.11.4 and my air card on an ek block it does not work.


Try clicking the "Reset GPU I2C Cache" option in HWiNFO.
If that doesn't help, try completely removing the existing AMD drivers (using the AMD driver cleanup utility) and then reinstalling them.


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> Try to click the "Reset GPU I2C Cache" option in HWiNFO.
> If that won't help, try to completely remove existing AMD drivers (using the AMD driver cleanup utility) and then install them.


Unfortunately I've done both of those, haha. I always install my AMD drivers this way. One issue I now have that I didn't have before: resizing the Radeon Settings window causes it to crash. A shame; I didn't have it on 17.10.1. Still no VRM info at all. I even restored my VRM voltage monitoring, but that made no difference.


----------



## Mumak

Quote:


> Originally Posted by *VicsPC*
> 
> Unfortunately done both of those haha. I always install my AMD drivers this way, an issue i now have that i didn't have before, resizing the radeon settings window causes it to crash
> 
> 
> 
> 
> 
> 
> 
> . Shame, didn't have it on 17.10.1. Still no VRM info at all, i even restored my VRM voltage monitoring but that made no difference.


Well, not a big surprise to me as there are so many oddities with the Vega... I have spent a significant portion of my life trying to get things to work there...
If you attach the HWiNFO Debug File with sensor data, I can have a look at the details, but no guarantee that I will solve this.


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> Well, not a big surprise to me as there are so many oddities with the Vega... I have spent a significant portion of my life trying to get things to work there...
> If you attach the HWiNFO Debug File with sensor data, I can have a look at the details, but no guarantee that I will solve this.


I sent you one ages ago when I first got my card, and you told me mine had no VRM monitoring, so it could still be that. It's possible that Sapphire didn't include the VRM sensors, or that they're enabled by flashing to the LC BIOS; I have no idea, haha. All I know is I have none, but it's honestly not a big deal. Pretty sure when my core is 36°C and HBM is at 40°C, my VRMs aren't going to be 100°C.


----------



## Naeem

Here is how it shows on my Liquid Vega card.


----------



## jmoonb

Quote:


> Originally Posted by *Mumak*
> 
> In that case you should still see the "GPU VR VDDC Temperature" and "GPU VR MVDD Temperature" values, as these are provided by AMD drivers regardless of the "GPU I2C Support" setting in HWiNFO. But those values are available on some Liquid series only, I haven't seen them on other models and even for Liquid series, they might might not be available in some cases.


Just played around with the settings, and it does seem I can see the VDDC and MVDD temps with I2C disabled. I'm on 5.60 and had to set the values to 1000, like the hotspot. I needed to use restart64 a few times to see it, but it's there.


----------



## VicsPC

Quote:


> Originally Posted by *jmoonb*
> 
> Just played around with settings and it does seem I can see VDDC and MVDD temps with I2c Disabled. On 5.60 and had to set the values to 1000 like the hotspot. I needed to use restart64 a few times to see it but its there.


So weird. This is all I see; I even deleted the HWiNFO regkey and started fresh.


----------



## jmoonb

Quote:


> Originally Posted by *VicsPC*
> 
> So weird, this is all i see, i even deleted the hwinfo regkey and started fresh.


Yours is normal. I literally had to go out of my way to break the drivers to get it to show up. Once I restart, it's gone again.


----------



## VicsPC

Quote:


> Originally Posted by *jmoonb*
> 
> Yours is normal. I literally had to go out of my way to break the drivers to get it to show up. Once I restart, its gone again.


Yeah, no worries; it gave me a reason to update to 17.11.4 and give it a try. Not sure how Martin got his to show up consistently, but I'm guessing some cards have the sensors and some don't? It would make no sense, but then again it's AMD, who knows, haha.


----------



## Ipak

For me the temps show up after resetting the I2C cache. I'm on 17.11.4.


----------



## cg4200

My 64 LC card while gaming in GTA: 1130 HBM, 1710-1735 core. Max temps after a fresh TIM install: 46°C core, 47°C HBM, 70°C hotspot,
which is good, but I want more. I have a water block on its way. Does anyone run the turbo BIOS?
If I flip the switch away from the HDMI plug side, will it give me more power? Or is it best to run the regular BIOS and undervolt like I have been?
Any way to give the HBM a little more volts? Thanks


----------



## Grummpy

I found that using HWiNFO causes my GPU's power to pulsate:
up down, up down, up down.
I can hear the slight coil whine going up and down.
I found it causes the GPU to go into safe mode.
I won't use it again;
the GPU is seeing it as an intrusion.


----------



## Mumak

Quote:


> Originally Posted by *Naeem*
> 
> 
> 
> here is how it showes in my liquid vega card


That's how the VRM temperatures ("GPU VR VDDC Temperature" and "GPU VR MVDD Temperature") should look when provided by the AMD drivers. But this seems to be present only on some LC models.
My air reference Vega 64 was showing the additional uPI VRM sensors as well, but after the last upgrade to 17.11.4 I can get the new VRM ones as posted before.
AFAIK all Vegas should have the same IR35217 VRMs for VDDCR_GPU and VDDHBM; the problem was that these were unintentionally disabled on RX models and intentionally (via BIOS) disabled on the FE.
I'm still not sure why some users can see the new VRMs now while others cannot, or get system crashes. It looks like AMD did fix something in recent drivers, but there's probably still something broken...


----------



## Mumak

Quote:


> Originally Posted by *Ipak*
> 
> For me temps show up after reseting I2C cache, im on 17.11.4


Do they show up as "GPU VR VDDC Temperature" and "GPU VR MVDD Temperature" values under the main sensor, or additional sensors ?
Quote:


> Originally Posted by *Grummpy*
> 
> I found using this hwinfo causes my gpu to pulsate in its power.
> up down up down up down.
> i can here the slight coil wine going up and down up and down.
> causes the gpu to go into safe mode i found.
> i wont use it again.
> gpu is seeing an intrusion


Try disabling monitoring of the additional VRM sensors by hitting the Del key over their heading. I believe that will fix this problem.

As I already said, monitoring via GPU I2C is sometimes problematic...

Also guys, if you think new sensors should be showing up and they don't, be sure to check whether you have any sensors hidden in HWiNFO - Settings - Layout.


----------



## BeetleatWar1977

Getting way more info here:


----------



## Mumak

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Getting way more info here:


That's it!

Did this happen after the driver update?


----------



## BeetleatWar1977

Quote:


> Originally Posted by *Mumak*
> 
> That's it !
> 
> 
> 
> 
> 
> 
> 
> 
> Did this happen after driver update ?


Yes, straight after going to 17.11.4.

Do you need anything?


----------



## Mumak

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> yes - straight after going to 17.11.4
> 
> do you need anything?


Thanks. Perhaps you could watch it for some time to see whether any performance or stability issues appear while monitoring the new sensors. If anything like that happens, you might try disabling monitoring of them (hit Del over the sensor heading).


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> Thanks. Perhaps if you could watch it for some time if you will see some performance or stability issues while monitoring the new sensors. If anything such happens, you might try to disable monitoring of them (hit Del over the sensor heading).


Quite odd that I can't get mine to work. For people that have it working: what manufacturer and card do you have? For me it's a Sapphire 64 air, factory BIOS.


----------



## SpecChum

Quote:


> Originally Posted by *VicsPC*
> 
> Quite odd that i can''t get mine to work. For people that have it working, what manufacturer and card do you have. For me its Sapphire 64 air, factory BIOS.


I thought the cards were identical, all being built by AMD?

All the vendors do is stick a sticker on the fan and put it in a nice box.

Oh, and warranties are different.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *VicsPC*
> 
> Quite odd that i can''t get mine to work. For people that have it working, what manufacturer and card do you have. For me its Sapphire 64 air, factory BIOS.


Quote:


> Originally Posted by *SpecChum*
> 
> I thought the cards were identical, all being built by AMD?
> 
> All the vendors do is stick a sticker on the fan and put it in a nice box.
> 
> Oh, and warranties are different.


There are some different BIOSes out there; four for the Vega 56 alone that I know of.


----------



## ducegt

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> There are some different Bios out there, 4 for Vega 56 only that i know of


Yup, and some different ones for the LC version. It's hard to replicate crashing when monitoring, but I think the newest LC version from Sapphire is more stable than my stock PowerColor one.


----------



## Mumak

Mine is the reference Vega64 and its BIOS is 100% identical to this Sapphire one: https://www.techpowerup.com/vgabios/194680/sapphire-rxvega64-8176-170728
so VBIOS Version: 016.001.001.000.008730


----------



## BeetleatWar1977

Played around a bit:
Sometimes the driver crashes at HWiNFO startup, I think on I2C: I see the driver values, the VRMs get a red X, and then no screen, and the fan goes to full speed.
If it runs, it runs; no problems. The values seem to need some corrections:

Current IN, mostly.
Forget it - IN is on 12V...... more coffee......


----------



## owntecx

Any Morpheus II owners not using the stock plate who can share their VRM temps?


----------



## ITAngel

Quote:


> Originally Posted by *owntecx*
> 
> Any morpheus II owner not using the stockplane, to share his vrm temps?


The Morpheus II is a pretty sick air cooler. I was almost tempted to go that route, but with my Threadripper already in a loop, it was easier for me to grab an EK block and add my GPU to the loop.


----------



## BeetleatWar1977

Temps from a custom LC would be nice as well.

Edit: don't want to make a double post...

OverdriveNTool 0.2.2

Code:


0.2.2 (23.11.2017)
-added Friendly name and Registry key to gpu additional info
[B]-SoftPowerPlayTable editor can now automatically restart GPU when click "Save" or "Delete"[/B]


----------



## Exposal

Anyone know if 016.001.001.000.008774 is the latest LC BIOS? Has anyone seen anything newer?

Thanks!


----------



## JackCY

Quote:


> Originally Posted by *SpecChum*
> 
> I thought the cards were identical, all being built by AMD?
> 
> All the vendors do is stick a sticker on the fan and put it in a nice box.
> 
> Oh, and warranties are different.


AMD doesn't make anything; they only design. It's all made by someone else: GPUs, CPUs, custom chips, ...


----------



## Ipak

Quote:


> Originally Posted by *owntecx*
> 
> Any Morpheus II owners not using the stock plate care to share their VRM temps?


After 15 min of Superposition at 280 W of reported power draw, my VRMs max out at 90°C and 83°C, with the GPU at 62°C and the HBM at 69°C.


----------



## Rexer

AMD sent out a monthly update about a driver called Adrenalin coming this December. Lol. I just now installed Afterburner. But I thought this was sort of interesting and I'd pass it along in case no one's seen it. I hope it's something worthwhile.

http://links.em.experience.amd.com/servlet/MailView?ms=MzE5NTg0NjAS1&r=Mzg3ODUzODMxOTQ1S0&j=MTE2NDA3NzM1NQS2&mt=1&rt=0


----------



## owntecx

Quote:


> Originally Posted by *Ipak*
> 
> After 15 min of Superposition at 280 W of reported power draw, my VRMs max out at 90°C and 83°C, with the GPU at 62°C and the HBM at 69°C.


Any photo of the layout you used? I'm getting close to 99°C with 250 W power draw.


----------



## Ipak

Quote:


> Originally Posted by *owntecx*
> 
> Any photo of the layout you used? I'm getting close to 99°C with 250 W power draw.


I used thin, long radiators on the VRMs above the GPU.


Spoiler: Warning: Spoiler!







On the right side of the GPU I borrowed some leftovers from an Arctic Accelero Xtreme 3.


Spoiler: Warning: Spoiler!







I also use two Corsair ML120s with high static pressure. Those readings were with the fans at around 2200-2400 RPM (controlled by the card itself, temp target @ 60°C, 3000 RPM max).


----------



## Reikoji

Quote:


> Originally Posted by *Mumak*
> 
> That's it !
> 
> 
> 
> 
> 
> 
> 
> 
> Did this happen after driver update ?


I was getting that much info with 17.11.3; however, I'm also one of those where that much info causes the card to crash.

I plan on reflashing my LC card back to its original BIOS to see if that eliminates the problem. I don't think it will though, as my card never even liked having the VRM voltage monitored.


----------



## Trender07

Hey guys, I can't confirm for sure, but it looks like the coil whine depends on the voltage?


----------



## 113802

Quote:


> Originally Posted by *Grummpy*
> 
> I love how you did this the layout is perfection.
> thx for sharing most setups look all colourful and ******ed with their obsession with clean bends and pretty liquid.


After de-lidding I no longer have a need for custom water loops but I kinda want to because I miss it. I also enjoy the smaller computer case compared to the Corsair 650D and Phanteks Enthoo Primo. The RX Vega 64 kinda makes me want to build a loop for it.

My first loop:



My second loop:


----------



## diggiddi

Quote:


> Originally Posted by *WannaBeOCer*
> 
> My first loop:
> 
> 
> Spoiler: Warning: Spoiler!


What fittings are those?


----------



## Papa Emeritus

Looks like Monsoon


----------



## 113802

Quote:


> Originally Posted by *diggiddi*
> 
> What fittings are those?


Quote:


> Originally Posted by *Papa Emeritus*
> 
> Looks like Monsoon


That is correct, they are Monsoon fittings.


----------



## ducegt

@Mumak

Hi Martin. I'm using a PowerColor 64 Liquid Edition, and with the stock higher-TDP ROM 8734 there is no VRM monitoring. When I flash the latest 8774, everything shows up







My system crashes on both though! I just crashed while only coming to post this on 8774.


----------



## tarot

OK, I have a weird one. It only happened after going to the 17.11.3 drivers, and it has only happened twice:
once when I was testing mining and once loading Diablo 3.
The card turned off... went from one GPU light to none and the screen went off. The computer didn't restart, nothing weird there, it just turned the card off.

Also, since summer is here it was 32 degrees in the room, and temps are not sexy: up to 70 degrees after a couple of hours and 90 on the hotspot. No throttling, but the air coming out of the radiator and tubing was very hot.
My question is: I have a smaller 2K-RPM pump, but I have a 4K pump on the CPU and that seems fine... could a lower-speed pump be causing the extra heat? I don't think it's the waterblock mount, or the radiator and tubes/res wouldn't be getting that hot...
Oh, it's a 280 EK rad and two Vardar 1600 RPM fans running around 1200-1400 RPM.


----------



## Roboyto

Haven't been around much lately, just wondering how people are faring in SP4K with driver updates etc?

Just installed 17.11.3 the other night and finally cracked the 7K points barrier







Feeling pretty swell about this since I'm still on air cooled BIOS

Settings as follows:

P6: 1597 @ 950mV

P7: 1722 @ 1125mV

HBM: 1150 @ 1025mV

HBM stayed pegged and core cruising nicely at 1670-1680.



May still be some juice left in the HBM tank, as I've only run a few benches. Randomly chose 1150 after hearing about the BIOS update that unlocked some more HBM speed.

Steady 1700 core still elusive...probably need to flash LC BIOS to get there.


----------



## Reikoji

@Mumak

I flashed back to my LC card's original 8709 BIOS and it didn't shut down immediately after opening HWinfo with GPU I2C enabled. It showed the VRM info and everything, but promptly disabled monitoring







for now.

However, it only shows VRM info from one GPU, the primary one. My air card has no VRM info displayed.


----------



## Grummpy

Quote:


> Originally Posted by *Roboyto*
> 
> Haven't been around much lately, just wondering how people are faring in SP4K with driver updates etc?
> 
> Just installed 17.11.3 the other night and finally cracked the 7K points barrier
> 
> 
> 
> 
> 
> 
> 
> Feeling pretty swell about this since I'm still on air cooled BIOS
> 
> Settings as follows:
> 
> P6: 1597 @ 950mV
> P7: 1722 @ 1125mV
> HBM: 1150 @ 1025mV
> 
> HBM stayed pegged and core cruising nicely at 1670-1680.
> 
> 
> 
> 
> May still be some juice left in the HBM tank as I've only run a few benches. Randomly chose 1150 hearing about the BIOS update that unlocked some more HBM speed.
> 
> Steady 1700 core still elusive...probably need to flash LC BIOS to get there.


exact same settings.
7164

550 to 600 watt at wall.


----------



## Roboyto

Quote:


> Originally Posted by *Grummpy*
> 
> exact same settings.
> 7164


Cool, cool. Only ~1.5% variance there.

My R7 1700 @ 3.6GHz w/ 3200 RAM. What's the heart of your rig?


----------



## Grummpy

Quote:


> Originally Posted by *Roboyto*
> 
> Cool, cool. Only ~1.5% variance there.
> 
> My R7 1700 @ 3.6GHz w/ 3200 RAM. What's the heart of your rig?


An old ASUS Maximus Z77 with a 2700K @ 4.9 GHz, 16 GB of 2133 memory at 11-11-27 T2, and a Vega 64 LC, Gigabyte model.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Roboyto*
> 
> Haven't been around much lately, just wondering how people are faring in SP4K with driver updates etc?
> 
> Just installed 17.11.3 the other night and finally cracked the 7K points barrier
> 
> 
> 
> 
> 
> 
> 
> Feeling pretty swell about this since I'm still on air cooled BIOS
> 
> Settings as follows:
> 
> P6: 1597 @ 950mV
> P7: 1722 @ 1125mV
> HBM: 1150 @ 1025mV
> 
> HBM stayed pegged and core cruising nicely at 1670-1680.
> 
> 
> 
> 
> May still be some juice left in the HBM tank as I've only run a few benches. Randomly chose 1150 hearing about the BIOS update that unlocked some more HBM speed.
> 
> Steady 1700 core still elusive...probably need to flash LC BIOS to get there.


My scores and HBM speeds haven't changed since installing the .4 driver. Max HBM speed I can bench is still 1180mhz and my Superposition 4k scores are still around 7300. 1080p Extreme score is 5400.


----------



## Roboyto

Quote:


> Originally Posted by *Grummpy*
> 
> An old ASUS Maximus Z77 with a 2700K @ 4.9 GHz, 16 GB of 2133 memory at 11-11-27 T2, and a Vega 64 LC, Gigabyte model.


Damn, Z77 still trudging along
















My buddy's ASRock Z77 ITX just took a dump today after ~6 years of abuse

Quote:


> Originally Posted by *0451*
> 
> My scores and HBM speeds haven't changed since installing the .4 driver. Max HBM speed I can bench is still 1180mhz and my Superposition 4k scores are still around 7300. 1080p Extreme score is 5400.


Didn't even know there was a .4 driver out haha. I was running 17.10.2 prior to the 17.11.3.

Thanks for the feedback. Is that LC card or one with LC BIOS?


----------



## TrixX

17.11.3 was bugged according to GamersNexus, so I'd grab the .4 version.


----------



## Roboyto

Quote:



> Originally Posted by *TrixX*
> 
> 17.11.3 was bugged according to GamersNexus, so I'd grab the .4 version.


It acted real strange at first... I uninstalled/reinstalled and it's been OK for the last few days.

I'll scope it out though


----------



## diggiddi

Quote:


> Originally Posted by *WannaBeOCer*
> 
> That is correct, they are Monsoon fittings.


Thanks









So what's the card to get in this series: a Vega 56 flashed to 64, or a 64 with the LC BIOS?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Roboyto*
> 
> Damn, Z77 still trudging along
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My buddy's ASRock Z77 ITX just took a dump today after ~6 years of abuse
> 
> Didn't even know there was a .4 driver out haha. I was running 17.10.2 prior to the 17.11.3.
> 
> Thanks for the feedback. Is that LC card or one with LC BIOS?


Air card with LC bios and EK waterblock, 150% power limit, 500 amp limit, and P7 set to 1772.


----------



## Roboyto

Quote:


> Originally Posted by *0451*
> 
> Air card with LC bios and EK waterblock, 150% power limit, 500 amp limit, and P7 set to 1772.












I'm not doing so bad within the confines of stock air BIOS then


----------



## geriatricpollywog

Quote:


> Originally Posted by *Roboyto*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm not doing so bad within the confines of stock air BIOS then


I was doing much worse than you with the air bios.


----------



## Mumak

Thanks for the feedback @ducegt and @Reikoji.
I'm sorry, but I don't have any other suggestion for now. Will need to discuss this with AMD in more detail if it's possible to improve both availability of those sensors and stability. It might require further cooperation in drivers or SMU firmware.


----------



## HGooper

Quote:


> Originally Posted by *0451*
> 
> Air card with LC bios and EK waterblock, 150% power limit, 500 amp limit, and P7 set to 1772.


How can you modify the HBM2 voltage? Via WattMan, BIOS, or regedit?


----------



## Spacebug

Quote:


> Originally Posted by *HGooper*
> 
> How can you modify the HBM2 voltage? Via WattMan, BIOS, or regedit?


Regedit doesn't seem to work and BIOS modding is locked; for now it seems like a hard mod is the only way, I think...


----------



## RyanRazer

Hey guys, quick newbie question.
Why is there a difference in the memory voltage? WattMan shows 1 V, HWinfo reports 1.356 V.

Is WattMan wrong, is HWinfo reporting wrong, or am I missing something fundamental here? I'd bet on the last one...


----------



## Roboyto

Quote:


> Originally Posted by *RyanRazer*
> 
> Hey guys, quick newbie question.
> Why is there a difference in the memory voltage? WattMan shows 1 V, HWinfo reports 1.356 V.
> 
> Is WattMan wrong, is HWinfo reporting wrong, or am I missing something fundamental here? I'd bet on the last one...


I was confused by this at first as well when doing some extensive benching several weeks back. I believe I'm grasping how this works now, but anyone else can jump in and correct me if the following is incorrect information.

The voltage adjustment that is located next to HBM speed is not adjusting HBM voltage. It is what everyone is referring to as 'floor voltage', meaning the core won't be receiving any less than that amount under load.

For example if you wanted to undervolt the core by setting P6 to 1000mV and P7 to 1050mV, but left the HBM voltage at 1100mV....your adjustments to P6/P7 would mean nothing because the floor voltage is set higher.
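The clamping behaviour described above can be sketched in a few lines (a purely illustrative Python sketch, not any real WattMan API; the function name and values are made up):

```python
# Hypothetical sketch of the "floor voltage" behaviour described above.
# The effective voltage for a P-state is never below the floor.

def effective_voltage(pstate_mv, floor_mv):
    """The core never runs below the floor, so the floor clamps each P-state."""
    return max(pstate_mv, floor_mv)

floor = 1100  # the slider WattMan shows next to the HBM speed
print(effective_voltage(1000, floor))  # P6 request 1000 mV -> 1100 mV applied
print(effective_voltage(1050, floor))  # P7 request 1050 mV -> 1100 mV applied
print(effective_voltage(1150, floor))  # a request above the floor passes through
```

So with the floor left at 1100 mV, the P6/P7 undervolts in the example simply never take effect.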


----------



## RyanRazer

Quote:


> Originally Posted by *Roboyto*
> 
> I was confused by this at first as well when doing some extensive benching several weeks back. I believe I'm grasping how this works now, but anyone else can jump in and correct me if the following is incorrect information.
> 
> The voltage adjustment that is located next to HBM speed is not adjusting HBM voltage. It is what everyone is referring to as 'floor voltage', meaning the core won't be receiving any less than that amount under load.
> 
> For example if you wanted to undervolt the core by setting P6 to 1000mV and P7 to 1050mV, but left the HBM voltage at 1100mV....your adjustments to P6/P7 would mean nothing because the floor voltage is set higher.


I heard Buildzoid saying that. Didn't really get it though. So you're saying that floor voltage also applies to and affects the GPU core voltage, not only the HBM? This WattMan is confusing, man.
I'll play around with it later and pay attention to both voltages. Thanks


----------



## TrixX

Quote:


> Originally Posted by *RyanRazer*
> 
> I heard Buildzoid saying that. Didn't really get it though. So you're saying that floor voltage also applies to and affects the GPU core voltage, not only the HBM? This WattMan is confusing, man.
> I'll play around with it later and pay attention to both voltages. Thanks


It shouldn't be called HBM voltage, as it doesn't affect the HBM voltage directly at all.

It is the base core voltage, from what others have said, and it sets the minimum voltage the core will get. Setting P6 and P7 below that voltage is pointless, for instance.


----------



## Roboyto

Quote:


> Originally Posted by *TrixX*
> 
> It shouldn't be called HBM voltage, as it doesn't affect the HBM voltage directly at all.
> 
> It is the base core voltage, from what others have said, and it sets the minimum voltage the core will get. Setting P6 and P7 below that voltage is pointless, for instance.


^ This ^

Quote:



> Originally Posted by *RyanRazer*
> 
> I heard Buildzoid saying that. Didn't really get it though. So you're saying that floor voltage also applies to and affects the GPU core voltage, not only the HBM? This WattMan is confusing, man.
> I'll play around with it later and pay attention to both voltages. Thanks


HBM voltage is locked and not adjustable if I'm not mistaken. That extra voltage adjustment being with HBM is just downright misleading.

These cards are not particularly easy to tune for maximum performance. Tiny adjustments to any of the settings can give, or take, a decent chunk of performance if not cause system hangs and all sorts of crashes....at least in benchmarks from what I'm seeing.

To clarify my definition of tiny in this particular instance...I mean 5Mhz and/or 5mV can be a game changer for stability and then pushing forward for more performance.

I am still inching forward with my EK blocked AC card with the AC BIOS







7100+ SP4K scores

Floor @ 1090mV

P6: 1627 @ 1090mV

P7: 1722 @ 1140mV

HBM: 1175

These settings net a small fluctuation in core clock of 1685-1695...I'll just call it 1690 average.



Make sure every time you want to adjust settings that you get into the habit of resetting WattMan back to Balanced. I know this is a PITA, but it's necessary to make sure the new desired settings take effect.

When you get a driver hang/crash, or software borks because of a problem, you should just reboot your machine; otherwise your next benchmark attempt will (likely) be fooey.


----------



## RyanRazer

Quote:


> Originally Posted by *Roboyto*
> 
> HBM voltage is locked and not adjustable if I'm not mistaken. That extra voltage adjustment being with HBM is just downright misleading.
> 
> These cards are not particularly easy to tune for maximum performance. Tiny adjustments to any of the settings can give, or take, a decent chunk of performance if not cause system hangs and all sorts of crashes....at least in benchmarks from what I'm seeing.
> 
> To clarify my definition of tiny in this particular instance...I mean 5Mhz and/or 5mV can be a game changer for stability and then pushing forward for more performance.
> 
> I am still inching forward with my EK blocked AC card with the AC BIOS
> 
> 
> 
> 
> 
> 
> 
> 7100+ SP4K scores
> 
> Floor @ 1090mV
> P6: 1627 @ 1090mV
> P7: 1722 @ 1140mV
> HBM: 1175
> 
> These settings net a small fluctuation in core clock of 1685-1695...I'll just call it 1690 average.
> 
> 
> 
> 
> Make sure every time you want to adjust settings that you get into the habit of resetting WattMan back to balance. I know this is a PITA, but it's necessary to make sure the new desired settings take place.
> 
> When you get a driver hang/crash, or software borks because of a problem, you should just reboot your machine otherwise your next benchmark attempt will (likely) be fooey.


Jeez, hopefully AMD gets the drivers right with the Adrenalin version. I can't even get 1000 MHz on the HBM. Will try raising the voltage a bit, hope that helps, but temps are starting to block me. I am going to get a Morpheus II cooler in January; this should help...
Thanks for the info guys


----------



## Roboyto

Quote:


> Originally Posted by *RyanRazer*
> 
> Jeez, hopefully AMD gets the drivers right with the Adrenalin version. I can't even get 1000 MHz on the HBM. Will try raising the voltage a bit, hope that helps, but temps are starting to block me. I am going to get a Morpheus II cooler in January; this should help...
> Thanks for the info guys


You're quite welcome.

If you're still on stock blower then you are being held back for sure. I have a Morpheus sitting on my desk right now waiting to be installed on my other 64.

Here's a couple links for other items you will need to make the Morpheus functional:

PWM Fan adapter cable:

https://www.amazon.com/gp/product/B005ZKZEQA/ref=oh_aui_detailpage_o06_s02?ie=UTF8&psc=1

PWM 2 fan splitter:

https://www.amazon.com/gp/product/B01EF9OI0O/ref=oh_aui_detailpage_o06_s02?ie=UTF8&psc=1

You can use any fan you would like obviously, but I'm going to see how these Arctics fare in this application, since the Morpheus is freaking ginormous lol:

https://www.amazon.com/gp/product/B007YLUC0Q/ref=oh_aui_detailpage_o06_s02?ie=UTF8&psc=1

I don't know if these couple things are making a difference in my stability and successful OCing so far, but I figured I would throw it out there.

I took note of AMD factory settings and have kept the 95MHz gap between P6/P7 clock speeds.

I also kept the same factory gap of 50mV between P6/P7.

(And most recently) The floor voltage I've been matching to P6.

When benching, keep an eye on GPU-Z, HWinfo and/or WattMan. If you get a crash and see an atypical spike in core clock, ~50 MHz-ish, you have overboosted. Knocking 5 MHz off of P7 can remedy this. For example, I can't set P7 beyond 1722... as soon as I go to 1727+ it will overboost and crash.

When HBM is set too high, it generally crashes quite quickly if not instantly.
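The rules of thumb above (keep the factory 95 MHz and 50 mV gaps between P6/P7, match the floor voltage to P6) can be sketched as a small helper. This is illustrative only, with a made-up function name, not a real tuning tool, and the numbers are examples rather than guaranteed-stable settings:

```python
# Rough sketch of the rule of thumb: derive P6 and the floor from a P7 target
# using the factory gaps noted above (95 MHz and 50 mV between P6 and P7).

def derive_states(p7_mhz, p7_mv, clock_gap=95, volt_gap=50):
    p6_mhz = p7_mhz - clock_gap   # P6 clock sits 95 MHz below P7
    p6_mv = p7_mv - volt_gap      # P6 voltage sits 50 mV below P7
    floor_mv = p6_mv              # floor voltage matched to P6
    return {"P6": (p6_mhz, p6_mv), "P7": (p7_mhz, p7_mv), "floor": floor_mv}

print(derive_states(1722, 1140))
# P7 = 1722 @ 1140 mV gives P6 = 1627 @ 1090 mV with a 1090 mV floor,
# which matches the settings quoted earlier in the thread
```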


----------



## Roboyto

Spoiler: Warning: Not Enough Power















SP4K topped out at 7170... She needs more power. More voltage and crash... more clock speed and crash. An 18% performance boost over the factory Balanced settings, which scored 6065









Floor Voltage: 1095mV

P6: 1627 @ 1095mV

P7: 1722 @ 1145mV

HBM @ 1185


----------



## cplifj

Here's another thought for AMD: how are people going to use their spanking new card without included drivers and without internet?

There are still people who don't want internet in the first place, and I can't blame them if you know what it's really used for.

Same with games: why require internet for games that don't even have online play options...

So many things are wrong these days, but plenty of internet toddlers (aka marketing nazis) around who want to shout that it's all OK.

WELL, it's not OK, it's FAR FROM OK.


----------



## Roboyto

Quote:


> Originally Posted by *cplifj*
> 
> Here's another thought for AMD, how are people gonna use their spanking new card without included drivers and without internet ?
> 
> There are still people who don't want internet in the first place, and i can't blame them if you know what it's really used for.
> 
> Same with games, why need internet for games that don't even have online play options.....
> 
> So many things wrong these days but plenty of internet-todlers (aka marketing nazi's) around who want to shout it's all ok.
> 
> WELL, it's not ok, it's FAR FROM OK.


I would assume most anyone purchasing a $400-$700 graphics card would have internet access to get updated drivers.


----------



## geriatricpollywog

Quote:


> Originally Posted by *cplifj*
> 
> Here's another thought for AMD, how are people gonna use their spanking new card without included drivers and without internet ?
> 
> There are still people who don't want internet in the first place, and i can't blame them if you know what it's really used for.
> 
> Same with games, why need internet for games that don't even have online play options.....
> 
> So many things wrong these days but plenty of internet-todlers (aka marketing nazi's) around who want to shout it's all ok.
> 
> WELL, it's not ok, it's FAR FROM OK.


I think there is a Linux build for crazy people who are afraid of the World Wide Web.


----------



## VicsPC

Quote:


> Originally Posted by *cplifj*
> 
> Here's another thought for AMD, how are people gonna use their spanking new card without included drivers and without internet ?
> 
> There are still people who don't want internet in the first place, and i can't blame them if you know what it's really used for.
> 
> Same with games, why need internet for games that don't even have online play options.....
> 
> So many things wrong these days but plenty of internet-todlers (aka marketing nazi's) around who want to shout it's all ok.
> 
> WELL, it's not ok, it's FAR FROM OK.


Ok seriously can we ban trolls like this? No company will ship you updated drivers on a disk or a USB.


----------



## Trender07

Quote:


> Originally Posted by *RyanRazer*
> 
> Jeez, hopefully AMD gets the drivers right with the Adrenalin version. I can't even get 1000 MHz on the HBM. Will try raising the voltage a bit, hope that helps, but temps are starting to block me. I am going to get a Morpheus II cooler in January; this should help...
> Thanks for the info guys


I'm also on the stock air cooler, and with my high temps (80°C) I can set up to 1080 MHz on the HBM; any more and I see artifacts, but it doesn't crash


----------



## Chaoz

Played some NFS Payback for hours on end and I finally remembered to monitor my hotspot temps. It seems my hotspot doesn't go over 50°C, while the core stays at 35-40°C after playing around 4-5 hrs.

Pretty good, imho. So the Triple X method on the 3 stacks works great in my case.


----------



## RyanRazer

Quote:


> Originally Posted by *Trender07*
> 
> I'm also on the stock air cooler, and with my high temps (80°C) I can set up to 1080 MHz on the HBM; any more and I see artifacts, but it doesn't crash


Hm, I get a crash without artifacts... maybe I should raise the voltage? 1000 MHz HBM crashes on 0.950 V. What do you set for the floor Vcore?


----------



## fewness

Can anyone get Vega FE CrossFire to work under 17.11.2/3/4? It seems I lost the CrossFire checkbox after the recent driver update... a clean reinstall of Windows didn't bring CrossFire back...


----------



## Trender07

Quote:


> Originally Posted by *RyanRazer*
> 
> Hm, I get a crash without artifacts... maybe I should raise the voltage? 1000 MHz HBM crashes on 0.950 V. What do you set for the floor Vcore?


Hmm... I just can't set 950 mV. Idk if my card is bugged or what (maybe you should also check), but if I set 950 mV my HBM clock gets locked to 800 MHz no matter what, so I have to set 951 mV lol (air BIOS). It's been like this since the October or September drivers.


----------



## madmanmarz

Quote:


> Originally Posted by *RyanRazer*
> 
> Hm, I get a crash without artifacts... maybe I should raise the voltage? 1000 MHz HBM crashes on 0.950 V. What do you set for the floor Vcore?


The HBM voltage is locked in the BIOS: the 56 BIOS is 1250 mV and the 64 BIOS is 1350 mV. Bumping the floor voltage can help, especially when pushing core clock and HBM. Do you have Samsung HBM?


----------



## Rexer

Quote:


> Originally Posted by *Trender07*
> 
> I also am with stock air cooler and with my high temps (80ºc) I can set up to 1080 HBM MHz, more and I see artifacts but doesn't crash


I just added two 140mm fans to the front of my case. More noise. It helps a little bit. For sure Vega is better off with LC.
I heard about taping a 1/4 rubber ball as a scoop over the stock fan. Basket fans are rumored to move air out faster than they take it in. I'm going to see if there's any truth to that. My logic is, old-school tech still seems to work on cars and planes. What does it hurt to try some old analog tech? Anything to help cool Vega's furnace. If it works, I'll post back.


----------



## SavantStrike

Man the prices on Vega are sad









Vega can't sell at 1080 Ti prices, which is where they're sitting right now.

So it looks like reference RX Vega cards are going to be EOL soon with only AIB cards on offer. Really sucks for anyone still hoping to water cool a Vega as there probably won't be many full cover options available for the AIB cards.

From videocardz
Quote:


> It has been confirmed that RX Vega stocks will be increased shortly. This will allow retailers, such as OverclockersUK, to adjust the price accordingly. Our sources have confirmed that AMD is finally supplying partners with Vega chips, which will allow them to introduce custom SKUs in satisfactory number, while reference designs will no longer be produced.


Quote:


> Originally Posted by *Rexer*
> 
> I just added two 140mm fans to the front of my case. More noise. Helps a little bit. For sure Vega is better off having LC.
> I heard about taping a 1/4 rubber ball as a scoop over the stock fan. Basket fans are rumored to move air out faster than it takes in. I'm going to see if there's any truth to that. My logic is, old school tech seems to still work on cars and planes. What does it hurt to try some less than analog tech? Anything to help cool Vega's furnace. If it works, I'll post back.


What are basket fans?


----------



## AlphaC

https://videocardz.com/74260/amds-james-prior-talks-ryzen-2-and-vega-11
Quote:


> AMD Vega 11 is integrated into Raven Ridge APU
> 
> The mysterious Vega 11 is not a GPU by itself. It's a solution for AMD Raven Ridge APUs with 11 Compute Units enabled. James Prior confirmed that Ryzen APUs offer up to 11 Compute Units. So far AMD only released two mobile APU variants, which feature either 8 or 10 CUs (Vega 8/10 Graphics). That said, the chip with 11 Vega Compute Units would be the top tier Raven Ridge APU. No details about desktop APUs have been shared.


Quote:


> Originally Posted by *https://wccftech.com/amd-james-prior-interview-vega-11-ryzen-2-am4-vega-supply/*
> AMD's Raven Ridge APUs have the Vega 11 GPU and the "11" stands for the 11 compute units in the chip. Similarly, there are other SKUs of the Vega chip featuring 8, 10, 5, etc CUs that are labeled as Vega 8, Vega 10, Vega 5. I guess it was misinterpreted as the smallest of the two Vega chips as we thought of Polaris (10/11) but that was never the case. That would imply that we may never see a Vega GPU aside from the high-end Vega 64 and Vega 56 offerings but if Vega can scale from 2 CUs to all the way up to 64 CUs on RX Vega 64, then there's no doubt that AMD cannot offer a discrete board with lower CU count in the budget segment.


I guess Vega 11 was misinterpreted?

AMD needs something that can scale down to 75W TDP yet produce over 7 or 8K Firestrike. Right now both the Radeon Pro WX 5100 and Radeon Pro WX 4100 (~5K Firestrike) cannot compete with Pascal Quadro P2000 (~8-9K Firestrike) even if those two cards were close to the Quadro M2000 (~5K Firestrike).

The Radeon Pro WX 7100 (RX 480 / RX 580 equivalent , ~ 10K firestrike) at 150W TDP also is not cutting it against the Quadro P4000 (1792 CUDA similar to mobile GTX 1070) whatsoever.

Essentially Polaris was able to compete with Maxwell to an extent but not Pascal.


----------



## duole

Vega 64 FE with a water cooler mod,
but I can't get it under 65°C.


----------



## Rexer

Quote:


> Originally Posted by *SavantStrike*
> 
> Man the prices on Vega are sad
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Vega can't sell at 1080 Ti prices, which is where they're sitting right now.
> 
> So it looks like reference RX Vega cards are going to be EOL soon with only AIB cards on offer. Really sucks for anyone still hoping to water cool a Vega as there probably won't be many full cover options available for the AIB cards.
> 
> From videocardz
> 
> What are basket fans?


Sorry. Basket fan = radial fan, the stock fan that reference GPU cards have. It's a name the fan picks up from a particular device it's designed into. Larger ones were called squirrel cage or wheel fans. It's a common design in all sorts of electronic stuff like laptops, milling machines and air conditioners. Used in wood shops to vacuum sawdust, or to expel paint fumes in auto body booths. If you live somewhere like Phoenix, they're in swamp coolers. Sorry I blew that one by.


----------



## By-Tor

Was able to manage this today..


----------



## geriatricpollywog

Quote:


> Originally Posted by *By-Tor*
> 
> Was able to manage this today..


WOW nice


----------



## Grummpy

Stock clocks, stock core voltage.
HBM2 at 1100 MHz, 1000 mV.


----------



## LicSqualo

me too!

Stock clock (without Power Limit):



Undervolt ([email protected][email protected]) same stock clock + HBM overclock ([email protected]) + PL 50%


----------



## jbravo14

Any other CrossFire users here? Running 2x Vega 56 on the 64 BIOS. I'm getting terrible performance in CrossFire compared to a single card.

Currently on 17.11.4 drivers


----------



## geriatricpollywog




----------



## LicSqualo

Quote:


> Originally Posted by *0451*










Settings? HBM over 1100 MHz? Or simply an "Intel" rig?

This is my result after raising the voltage and the P6-P7 clocks to hold 1750 MHz constantly.


----------



## RyanRazer

Quote:


> Originally Posted by *Trender07*
> 
> Hmm.. I just can't set 950 mv. Idk if my card is bugged or what (maybe u should also check in) but If I set 950 mV my hbm clocks gets locked to 800 mhz no matter what, so I have to set 951 mv lol(air bios) since october or september drivers


My HBM works fine at 970 MHz with 0.950 V. *You shouldn't set the P6 and P7 states to the same voltage, though.* That will cause WattMan to bug out. This might be the reason.
Quote:


> Originally Posted by *madmanmarz*
> 
> the hbm voltage is locked into the bios. 56 bios is 1250mv and 64 bios is 1350mv. bumping the floor voltage can help especially when pushing core clock and hbm. do you have samsung hbm?


Yes, I have Samsung. What is the other option? Hynix? Which is better?


----------



## geriatricpollywog

Quote:


> Originally Posted by *LicSqualo*
> 
> 
> 
> 
> 
> 
> 
> 
> settings? HBM over 1100Mhz? Or simply "Intel" rig?
> 
> This my result raising the voltage and p6-p7 clocks to manage 1750Mhz constantly.


With those settings, core was set to 1772 mhz P7 and HBM was set to crashy.

Here are my stable gaming settings: 1762 mhz P7, 1110 mhz HBM, stock voltages with liquid bios, 150% PL. Actual core speed was 1730 mhz constantly.


----------



## LionS7

Quote:


> Originally Posted by *0451*
> 
> Here are my stable gaming settings: 1762 mhz P7, 1110 mhz HBM, stock voltages with liquid bios, 150% PL. Actual core speed was 1730 mhz constantly.


So with a 150% power limit, the frequency doesn't move at all?


----------



## Trender07

Quote:


> Originally Posted by *RyanRazer*
> 
> My HBM works fine at 970 MHz with 0.950 V. *You shouldn't set the P6 and P7 states to the same voltage, though.* That will cause WattMan to bug out. This might be the reason.
> Yes, I have Samsung. What is the other option? Hynix? Which is better?


I think all of the Vegas for now are Samsung.
Yeah, I have different voltages for P6, P7 and the HBM; here are my settings:

Maybe you don't have that bug like me because your HBM speed is under 1000 MHz.
It's hard to explain:

* At less than 1000 MHz HBM I don't have the bug that locks the HBM speed to 800 MHz, and I can set my HBM floor voltage to whatever I want, 950 mV or otherwise.
* At more than 1000 MHz HBM, if I set 950 mV the speed gets locked to 800 MHz, so I have to set 951 mV, which won't get locked. This has happened since around the October drivers; I didn't have this bug in September.

My card: Sapphire V64 Limited Edition. My BIOS: https://www.techpowerup.com/vgabios/194774/194774
I was testing, and with the liquid BIOS I don't have this 800 MHz lock bug below 950 mV. But of course on air I won't use the LC BIOS.


----------



## Grummpy




----------



## SavantStrike

Quote:


> Originally Posted by *Rexer*
> 
> Sorry. Basket fan = radial fan. The stock fan which reference GPU cards have. It's a name the fan picks up from the particular device it's designed into. Larger ones were called squirrel-cage or wheel fans. It's a common design in all sorts of electronic stuff like laptops, milling machines and air conditioners. Used in wood shops to vacuum sawdust or to expel paint fumes in auto body booths. If you live in places like Phoenix, they're in swamp coolers. Sorry I blew that one by.


Oh yes I know what you're referring to now.

Centrifugal fans are great from a static pressure perspective. I've kicked around the idea of using a squirrel cage on a chassis for years but never done it.


----------



## By-Tor

Actual readings:

Core 1737 on 1.17v
Memory 1190 on 1.35v

Temps:
Core 30c
VRM 48c
Hot Spot 52c

Pulling 394 watts


----------



## geriatricpollywog

Quote:


> Originally Posted by *LionS7*
> 
> So with 150% Power Limit, the frequency don't move at all ?


It starts the benchmark at 1725 and slowly walks upward to 1730. Frequency is a little higher in games.


----------



## fursko

Quote:


> Originally Posted by *By-Tor*
> 
> Actual readings:
> 
> Core 1737 on 1.17v
> Memory 1190 on 1.35v
> 
> Temps:
> Core 30c
> VRM 48c
> Hot Spot 52c
> 
> Pulling 394 watts


394 watts? Power limit? It must be a wrong reading. Probably 600+ watts total consumption at least.


----------



## Spacebug

What I've got so far; not fully stable though, but maybe with some more volts:

core clock around 1783
HBM 1205, bench-stable but not fully stable; got 1190 stable though
done with 1.38 V core and 1.44 V HBM, on water


----------



## By-Tor

Quote:


> Originally Posted by *fursko*
> 
> 394 watts ? Power limit ? It must be wrong reading. Probably 600+ Watt total consumption at least.


Tower is plugged into a watt meter at the wall and it was pulling 630 watts out of the wall..
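For what it's worth, the two numbers aren't necessarily in conflict: 394 W is the GPU's own telemetry, while the wall meter also sees PSU losses and the rest of the system. A rough reconciliation (the PSU efficiency and rest-of-system split are assumptions, not measurements from these posts):

```python
# Rough reconciliation of a ~394 W GPU reading with ~630 W at the wall.
# PSU efficiency is an assumed figure, not taken from the posts.
wall_watts = 630
psu_efficiency = 0.90                  # assumed efficiency at this load
dc_watts = wall_watts * psu_efficiency # power actually delivered to components
rest_of_system = dc_watts - 394        # CPU, board, drives, fans, pump...
print(round(dc_watts), round(rest_of_system))  # 567 173
```

~170 W for an overclocked CPU plus the rest of the platform is well within reason, so a GPU-only reading of 394 W is plausible alongside 630 W at the wall.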


----------



## ontariotl

Hmmm, this is kinda concerning. I just read this thread on HardOCP from a forum member who seems to know someone on the inside at RTG.

*"AMD plans to release Vega refresh in 2018.

Vega refresh will come enabled with features such as primitive shaders and tile-based rasterization that AMD wasn't able to get working with the initial Vega's release.
There will also be other features that AMD will tout as new, but are features that AMD wasn't able to get working with Vega's initial release.
In other words, Vega refresh will be what Vega was supposed to be.
Now, if you are a current Vega owner, don't get too excited because these features will require new hardware.
I hate to tell you, but you bought a alpha product and no magical drivers can fix that."*

Later in the thread, after the usual stupid back-and-forth debates, the member then states...

*"So, one thing I want to clarify here.
At least one feature (specifically, tile-based rasterization) actually works on the current hardware.
By that, I mean, that the feature can be turned on and it can be verified that the feature actually works.
The problem is that performance gain has been minimal, and in some cases, performance actually regresses.
So, I guess, it's not technically "broken and unfixable" in the way you've defined it."*

While I take this with a grain of salt, a little part of me has lowered its excitement for the upcoming Adrenalin driver set. If these two features are in fact still not enabled, it may just prove this member's statements true.


----------



## fursko

Quote:


> Originally Posted by *ontariotl*
> 
> Hmmm, this is kinda concerning. I just read this thread on Hardocp from a forum member that seems to know someone on the inside of RTG.
> 
> *"AMD plans to release Vega refresh in 2018.
> 
> Vega refresh will come enabled with features such as primitive shaders and tile-based rasterization that AMD wasn't able to get working with the initial Vega's release.
> There will also be other features that AMD will tout as new, but are features that AMD wasn't able to get working with Vega's initial release.
> In other words, Vega refresh will be what Vega was supposed to be.
> Now, if you are a current Vega owner, don't get too excited because these features will require new hardware.
> I hate to tell you, but you bought a alpha product and no magical drivers can fix that."*
> 
> Later in the thread, after the usual stupid back-and-forth debates, the member then states...
> 
> *"So, one thing I want to clarify here.
> At least one feature (specifically, tile-based rasterization) actually works on the current hardware.
> By that, I mean, that the feature can be turned on and it can be verified that the feature actually works.
> The problem is that performance gain has been minimal, and in some cases, performance actually regresses.
> So, I guess, it's not technically "broken and unfixable" in the way you've defined it."*
> 
> While I take this with a grain of salt, a little part of me has lowered its excitement for the upcoming Adrenalin driver set. If these two features are in fact still not enabled, it may just prove this member's statements true.


I returned my Vega and I'm waiting for Adrenalin. If this is true I'll buy a 1080 Ti.


----------



## diabetes

Quote:


> Originally Posted by *ontariotl*
> 
> Hmmm, this is kinda concerning. I just read this thread on Hardocp from a forum member that seems to know someone on the inside of RTG.
> 
> *"AMD plans to release Vega refresh in 2018.
> 
> Vega refresh will come enabled with features such as primitive shaders and tile-based rasterization that AMD wasn't able to get working with the initial Vega's release.
> There will also be other features that AMD will tout as new, but are features that AMD wasn't able to get working with Vega's initial release.
> In other words, Vega refresh will be what Vega was supposed to be.
> Now, if you are a current Vega owner, don't get too excited because these features will require new hardware.
> I hate to tell you, but you bought a alpha product and no magical drivers can fix that."*
> 
> Later in the thread, after the usual stupid back-and-forth debates, the member then states...
> 
> *"So, one thing I want to clarify here.
> At least one feature (specifically, tile-based rasterization) actually works on the current hardware.
> By that, I mean, that the feature can be turned on and it can be verified that the feature actually works.
> The problem is that performance gain has been minimal, and in some cases, performance actually regresses.
> So, I guess, it's not technically "broken and unfixable" in the way you've defined it."*
> 
> While I take this with a grain of salt, a little part of me has lowered its excitement for the upcoming Adrenalin driver set. If these two features are in fact still not enabled, it may just prove this member's statements true.


If this is true, then there is a class-action lawsuit incoming, as the original Vega was advertised with the programmable geometry pipeline. As of now these are just unverified claims.

Found the discussion:
https://hardforum.com/threads/amd-plans-to-release-vega-refresh-in-2018.1949215/

The same guy was ranting about Vega's pricing on Reddit:

https://www.reddit.com/r/7h948l/amd_will_stop_production_of_reference_rx_vega_56/
https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next&qt=grep&q=gfx9

Except for some "scissor bug" Mesa looks fine too:
https://cgit.freedesktop.org/mesa/mesa/log/?qt=grep&q=vega
https://cgit.freedesktop.org/mesa/mesa/log/?qt=grep&q=gfx9

This bug seems to have something to do with shader utilization, so it is unrelated to non-functional hardware features.
https://cgit.freedesktop.org/mesa/mesa/commit/?id=0fe0320dc074023489e2852771edc487c0142927
https://cgit.freedesktop.org/mesa/mesa/commit/?id=d1285a710329dca907ebab0154b6c16b89b945ef

With Vega AIB sales starting in December, the PR for AMD would be extremely bad, as they would be selling disappointment for Christmas. I don't see them being this stupid.

Shops also still list the "new programmable geometry pipeline" as a feature. If AMD knew that this wasn't coming, they were legally bound to tell the shops to (silently) remove that statement from their product advertisements.


----------



## Reikoji

That would be a bummer, but oh well at the same time.

Was probably just part of the Raja Sabotage anyway. Now that he is out of the picture, they can recreate Vega in beastness.


----------



## Grummpy

Not buying it.
Just pure nonsense before the driver release.


----------



## jmoonb

I recently saw a post on Reddit about yet another Morpheus install. I usually check any Vega posts out in hopes of seeing a decent screenshot with score and HWiNFO readings so I can compare numbers, as I had a theory about differing power draw between cards at the same settings. Most though are crazy overclocks I have no hope of matching with my non-overclockable card, but I lucked out and saw this.



I followed the settings to a T, aside from lowering my P7 to 1622 as it would crash at 1632. Here are my results...



Now let's assume the scores are similar just due to my better cooling. Unless the readings are faulty, what was most interesting was the power draw: 247 to 217. A 30 watt difference! I need 1070 mV to even hit 240 W in Superposition.

I'm no expert at undervolting, as this card is the first I've tried it on, but this is quite jarring. You'd think the mechanism controlling the current would be the same on every card/bios/driver. Apparently it isn't.
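As a sanity check on whether a voltage offset alone could explain a gap like that: dynamic power scales roughly with V²·f, so at the same clock a small real-voltage difference only buys a few watts. The numbers below are purely illustrative, not taken from either screenshot:

```python
# Back-of-envelope: dynamic power scales roughly with C * V^2 * f.
# All numbers here are illustrative, not measurements.

def power_ratio(v_a: float, v_b: float, f_a: float, f_b: float) -> float:
    """Dynamic-power ratio between operating points (v_a, f_a) and (v_b, f_b)."""
    return (v_a ** 2 * f_a) / (v_b ** 2 * f_b)

# Two cards at the same 1622 MHz clock, one actually delivering 25 mV more:
r = power_ratio(1.025, 1.000, 1622, 1622)
print(f"{r:.3f}x -> ~{217 * (r - 1):.0f} W extra on a 217 W baseline")
```

A 25 mV delivery difference only accounts for ~11 W of a 30 W gap, which supports the idea that per-chip leakage (static power) and telemetry calibration differ between cards, not just the set voltage.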


----------



## barbz127

Quote:


> Originally Posted by *jmoonb*
> 
> I recently saw a post on Reddit about yet another Morpheus install. I usually check any Vega posts out in hopes of seeing a decent screenshot with score and HWiNFO readings so I can compare numbers, as I had a theory about differing power draw between cards at the same settings. Most though are crazy overclocks I have no hope of matching with my non-overclockable card, but I lucked out and saw this.
> 
> 
> 
> I followed the settings to a T, aside from lowering my P7 to 1622 as it would crash at 1632. Here are my results...
> 
> 
> 
> Now let's assume the scores are similar just due to my better cooling. Unless the readings are faulty, what was most interesting was the power draw: 247 to 217. A 30 watt difference! I need 1070 mV to even hit 240 W in Superposition.
> 
> I'm no expert at undervolting, as this card is the first I've tried it on, but this is quite jarring. You'd think the mechanism controlling the current would be the same on every card/bios/driver. Apparently it isn't.


I can't comment on the differences although you would expect there be none, unless maybe a dud PSU.

Your temps inc hotspot look great with the Morpheus, would you mind sharing any photos of your heat sink layout or anything else that would help someone meet those temps?

I have one sitting here in the box but almost every second day I see a post on /r/AMD about average hotspot temps after doing the mod

Thank you


----------



## majestynl

Quote:


> Originally Posted by *Spacebug*
> 
> What i got so far not fully stable though, but with some more volts perhaps
> 
> coreclock around 1783
> hbm 1205, benchstable but not fully stable, got 1190 stable though
> done with 1.38V core and 1.44V hbm on water


Best score I've seen so far








And that with an old FX! Temps are also very low.
2 cards???

Can you share more screens eg Wattman / Wattool / HwInfo or any other screens you have?


----------



## SpecChum

Anyone got an opinion on the new EKWB Phoenix modular WC stuff?

It's tempting for me as my H110i is more than adequate for my Ryzen, so I mainly only want to WC my Vega, and adding the CPU block later would be trivial if/when I decide to. Doing this as a full custom loop from the start would be far more hassle, I assume?

Morpheus isn't really an option; I don't like the look of it and I've seen a couple of dead cards from doing it wrong.


----------



## Spacebug

Quote:


> Originally Posted by *majestynl*
> 
> Best score ever seen so far for me
> 
> 
> 
> 
> 
> 
> 
> 
> And that with a old FX ! Temps are also very low.
> 2 cards ???
> 
> Can you share more screens eg Wattman / Wattool / HwInfo or any other screens you have?


EDIT: that SP4k run was done with 1205MHz HBM, not the 1190 that was fully stable..

Yeah, fairly good for an old FX I guess, but I think in the 4K bench I'm mostly GPU-bound; in the 1080p presets I guess I'm a bit bottlenecked by the old FX platform.
But I'm focusing on tuning the GPU, so keeping the test at high res means it should be more GPU-bound, with the CPU not bottlenecking as much.

The cooling system seems to handle it fairly well despite the, I think, ridiculous power draw/heat dump.
Sadly my wall meter has died; it would be fun to know how much it pulled








Cooling is a full-cover EKWB block with Liquid Ultra on core and HBM; rads are a total of 11 120mm fan/rad spaces.

No, just one Vega 64.
Not sure the Wattman screens are much use, since the most useful changes are hardmods.
Basically I got fed up with the way the card reacts to voltage changes and decides clocks on its own.
There's basically negative performance scaling going over 1.2 V core, probably because ACG decides it's drawing too much power and lowers the sustained clocks under 3D load.
The solution as of now seems to be hardmods; I had been watching a fair few of Buildzoid's videos and went from there.

The current-sense circuit is modded to read about 40% of the actual GPU core current draw, and that seems to bypass all power-based throttling and downclocking from AVFS/ACG.
It actually boosts over the set P7 clock now, I guess since the AVFS algorithm thinks it has plenty of power headroom left and auto-overclocks some.
A side benefit of the current-sense mod is a bit lower Vdroop as well.
HBM voltage is also a hardmod that for now allows up to 1.44 V.
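For anyone wondering why the current-sense mod has that effect: the power manager throttles on *reported* power, not true power, so scaling what the sense circuit reports moves the effective limit. A toy illustration (the ~40% scale factor is from the post above; the limit and load numbers are made up):

```python
# Toy model of why under-reporting current defeats power-based throttling.
# The controller only ever sees reported power; it never measures true draw.

SENSE_SCALE = 0.40       # mod makes telemetry read ~40% of true current (per the post)
POWER_LIMIT_W = 300      # hypothetical board power limit

def reported_power_w(true_amps: float, volts: float) -> float:
    """Power as the controller sees it, after the scaled current sense."""
    return true_amps * SENSE_SCALE * volts

# 250 A at 1.2 V is a true 300 W draw, right at the hypothetical limit...
print(reported_power_w(250, 1.2))                  # 120.0
# ...but telemetry reports only 120 W, so no throttle kicks in:
print(reported_power_w(250, 1.2) < POWER_LIMIT_W)  # True
```

The flip side is that every power-based safety margin moves with it, so nothing stops the card from drawing far past its rated power; that's why this is soldering-iron territory.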

As for software mods: LC bios, using the registry soft PowerPlay editor to set the P7 core clock to 1757 and core voltage to 1380 mV.
Under 3D load, clocks stabilize to around 1783 MHz.
Setting voltages higher than 1.25 V core in Wattman on the LC bios doesn't work, so it has to go through the registry soft PowerPlay table. The downside is that you can't change the core clock in Wattman either, since it tries to apply the higher voltage at the same time, so both voltage and core clock have to be set in the registry; that means a lot of edits to fine-tune clocks and voltages.
Wattman is used to set the 1190 MHz HBM clock (and 1250 mV "HBM" voltage, if that does anything).
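For context on the "registry soft powerplay" method: on Windows the driver reads an optional REG_BINARY value (PP_PhmSoftPowerPlayTable) under the display adapter's class key, so tools dump the PowerPlay table, patch a few fields, and write it back. A minimal sketch of the byte-patching step; the offset used here is hypothetical, since real offsets depend on the table version and must come from a dump of your own card's table:

```python
import struct

# Sketch of patching a value inside a dumped PowerPlay table blob.
# P7_CLOCK_OFFSET is HYPOTHETICAL; find real offsets from your own table dump.
# Clocks in these tables are commonly stored in 10 kHz units (1757 MHz -> 175700).
P7_CLOCK_OFFSET = 0x1DE

def patch_u32(table: bytes, offset: int, value: int) -> bytes:
    """Return a copy of the table with one little-endian u32 field replaced."""
    return table[:offset] + struct.pack("<I", value) + table[offset + 4:]

blob = bytes(0x300)                                  # stand-in for a dumped table
patched = patch_u32(blob, P7_CLOCK_OFFSET, 175700)   # set P7 to 1757 MHz
print(struct.unpack_from("<I", patched, P7_CLOCK_OFFSET)[0])  # 175700
```

The registry write-back (and the driver restart to pick it up) is left out here; that's the step the soft-powerplay editor tools mentioned in the thread automate.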

That is about it, I think; perhaps not useful for guys who don't want to take a soldering iron to their card though









No idea what kind of voltages the core and HBM can sustain long-term, though; the cooling seems to handle it fine, but perhaps the voltages are on the dangerous side when it comes to degradation.
At 1.4 V Vcore I had the display output randomly dropping out; decided that was perhaps a bit too high and backed off


----------



## ducegt

Good stuff @Spacebug. That's ~21% more than stock balanced profile V64LC


----------



## jmoonb

Quote:


> Originally Posted by *barbz127*
> 
> I can't comment on the differences although you would expect there be none, unless maybe a dud PSU.
> 
> Your temps inc hotspot look great with the Morpheus, would you mind sharing any photos of your heat sink layout or anything else that would help someone meet those temps?
> 
> I have one sitting here in the box but almost every second day I see a post on /r/AMD about average hotspot temps after doing the mod
> 
> Thank you


Unfortunately, I'm on water. I did think about a Morpheus as well but decided against it due to the temps and shorting risk. If you read my earlier post, it does seem the hotspot can be lowered quite a bit by keeping the VRM cool. I just don't think the small heatsinks that come with it are enough, hence why even a perfect install still seems to have relatively high hotspot and VRM temps. The ones that did better were the Morpheus installs with modded stock plates, or ones with extra-big heatsinks and good airflow over them.


----------



## RyanRazer

Quote:


> Originally Posted by *jmoonb*
> 
> Now let's assume the scores are similar just due to my better cooling. Unless the readings are faulty, what was most interesting was the power draw: 247 to 217. A 30 watt difference! I need 1070 mV to even hit 240 W in Superposition.
> 
> I'm no expert at undervolting, as this card is the first I've tried it on, but this is quite jarring. You'd think the mechanism controlling the current would be the same on every card/bios/driver. Apparently it isn't.


I think it's safe to assume that HWiNFO's software voltage reading isn't that accurate. A high-quality voltmeter would be a safer bet


----------



## MAMOLII

Quote:


> Originally Posted by *SpecChum*
> 
> Anyone got an opinion on the new EKWB Phoenix modular WC stuff?
> 
> It's tempting for me as my H110i is more than adequate for my Ryzen, so I mainly only want to WC my Vega, and adding the CPU block would be trivial if/when I decide to. Doing this on a full custom loop would be far more hassle; I assume anyway?
> 
> Morpheus not really an option, I really don't like the look of it and I've seen a couple of dead cards from doing it wrong.


If you want a cheap/quick solution... I added a copper plate, 48mm x 48mm x 4mm, on the GPU and put a CPU AIO on it! I just drilled holes in the
Intel bracket! I used the plate because I wanted to keep the stock AMD plate for the VRMs and needed 3mm of clearance to mount it properly!
So nothing else was opened, just the front cover, and I unmounted the AMD heatsink! Paste - copper plate - paste - AIO water block!
I used an Arctic Liquid Freezer 120!

My max temps in a 10-minute Firestrike Ultra stress test...


----------



## SpecChum

Quote:


> Originally Posted by *MAMOLII*
> 
> If you want a cheap/quick solution... I added a copper plate, 48mm x 48mm x 4mm, on the GPU and put a CPU AIO on it! I just drilled holes in the
> Intel bracket! I used the plate because I wanted to keep the stock AMD plate for the VRMs and needed 3mm of clearance to mount it properly!
> So nothing else was opened, just the front cover, and I unmounted the AMD heatsink! Paste - copper plate - paste - AIO water block!
> I used an Arctic Liquid Freezer 120!
> 
> My max temps in a 10-minute Firestrike Ultra stress test...


Very nice!

However, I would like to go full loop at some point so cooling the vega with a proper pump and rad seemed logical. I could even sell my H110i I guess. Not sure what the market is like for second hand aio's though.


----------



## SpecChum

OK, so after a quick bit of research, here's the plan; hopefully this is on topic as it's mainly about cooling the Vega.

I'm thinking of at first getting an EK Vega Block, obviously, and a 280mm Black Ice rad for the front of my case and then if/when I decide to WC the Ryzen too I'll add the CPU block and a slim 360mm radiator for the top. Technically, the case can take PE and even EX radiators but it'd cover the QLED on my Crosshair 6, and the XE would actually go as low as the RAM but the radiator is offset so it'd be in front of it.

This way I can just cool the Vega on the custom loop for now and still use my H110i for the Ryzen.

For the pump I'm thinking of the EK-XRES 100 DDC 3.1 PWM MX, which actually seems quite cheap and cheerful and looks like it wouldn't have too much trouble even when I add the CPU block and slim 360 into the loop.

Seem reasonable?


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> Seem reasonable?


You've been around longer than me so you're probably already aware, but..

To take advantage of the small lead Black Ice has over, say, an EK rad, you'd pretty much need to be running your fans at 2500+ RPM. And mind you, that's assuming a push-pull config; otherwise you'd actually be behind in terms of dissipation. If that's OK for you, by all means. I've got no idea what clearance you have / what chassis you use, so no comments on alternatives

Regarding the pump, more a question than an opinion: why a DDC? Obviously you're thinking future-proofing (getting rid of the 110i), I get that, so why not a D5? Less noise, lower thermals, and definitely no need to actively cool it.


----------



## TrixX

Quote:


> Originally Posted by *Aenra*
> 
> You've been around longer than me so you're probably already aware, but..
> To take advantage of the tiny advantage Black Ice has over say an EK rad, you'd pretty much need to be running your fans at 2500+ RPM. If that's O.K. by you, by all means. got no idea of what clearance you have /chassis you use, so no comments on alternatives
> 
> 
> 
> 
> 
> 
> 
> 
> Regarding the pump, more question than opinion; why a DDC? Obviously you're thinknig future-proofind (getting rid of the 110i), i get that, so why not a D5? Less noise, lower thermals.


Was going to mention that, the EK XRES D5 I have is really good. Not so happy with the EK rads (taken a month to get them cleaned out properly) but they work. Think my TR4 needs a separate loop to my Vega though.


----------



## Aenra

A month as in..? Mounting them without having flushed in advance?

(also, you quoted me before i edited for syntax/grammar.. apologies, lol)


----------



## SpecChum

Quote:


> Originally Posted by *Aenra*
> 
> You've been around longer than me so you're probably already aware, but..
> To take advantage of the small lead Black Ice has over, say, an EK rad, you'd pretty much need to be running your fans at 2500+ RPM. And mind you, that's assuming a push-pull config; otherwise you'd actually be behind in terms of dissipation. If that's OK for you, by all means. I've got no idea what clearance you have / what chassis you use, so no comments on alternatives
> 
> 
> 
> 
> 
> 
> 
> 
> Regarding the pump, more a question than an opinion: why a DDC? Obviously you're thinking future-proofing (getting rid of the 110i), I get that, so why not a D5? Less noise, lower thermals, and definitely no need to actively cool it.


Been many years since I've planned a full loop, 10 or so.

Black Ice was just the first one that came to mind really; I have no real preference. Same with the pump: it had decent head pressure and flow, but a D5 is certainly doable.

My case is a Thermaltake X31, so it can house a 360mm top and front, just not at the same time. The max for both together would be a 360mm top and a 280mm front, which is my plan


----------



## gedoze

Hello all,
so I'm new here. I'm not the owner of any Vega yet, but I have been eyeing them for a long time now, waiting to see what kind of cooling solutions the AIB cards will bring to the table.
So for now I've figured out one air-cooled mod:
Raijintek Morpheus II + Alphacool upgrade kit for NexXxoS GPX - ATI RX Vega M01 - black + 2x Noctua NF-F12 industrialPPC-3000 PWM fans, along with liquid metal thermal compound for the GPU and high-grade thermal pad replacements.

I know the only real problem will be mounting the Morpheus; some screw cutting will be needed to fit that Alphacool plate. A second problem might be the too-tall fins of Alphacool's plate, but they could be "trimmed".

So I just wanted to share my thoughts with you and maybe get some valuable insight from the Vega vet club.


----------



## SpecChum

Also, since my wc query is borderline off topic I've posted it here http://www.overclock.net/t/1643361/parts-help-not-done-this-for-years-so-im-back-at-noob-status#post_26481654


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> 
> I have no real preference.


If you check recent-ish reviews, you'll find that later radiators match or exceed the Black Ice, unless we're talking about overly high RPMs, where it is still king. Again, talking push-pull here; push or pull only, it actually falls behind. Just something to keep in mind before picking one; I don't necessarily have an alternative to suggest, the one mentioned above was just an example.

In terms of pumps, however, current D5s are fine for almost anything, need no active cooling or excellent case airflow, and are a lot quieter (even at full speed). Don't get a DDC unless you absolutely have to. And even then, I'd get a dual D5 to be honest..


----------



## SpecChum

Quote:


> Originally Posted by *Aenra*
> 
> If you check recent-ish reviews, you'll find that later radiators match or exceed the Black Ice, unless we're talking about overly high RPMs, where it is still king. Again, talking push-pull here; push or pull only, it actually falls behind. Just something to keep in mind before picking one; I don't necessarily have an alternative to suggest, the one mentioned above was just an example.
> 
> In terms of pumps, however, current D5s are fine for almost anything, need no active cooling or excellent case airflow, and are a lot quieter (even at full speed). Don't get a DDC unless you absolutely have to. And even then, I'd get a dual D5 to be honest..


Ah, right. Black Ice was all the rage back when I last gave this much thought lol.

My pump back then was a Hydor L30 pond pump lol. It didn't even turn on with the PC; it had a separate mains plug lol

And no, I'm thinking push fans at reasonable RPM, not Delta Tornados in push/pull.

Thanks for all your help, I'll get there!


----------



## TrixX

Quote:


> Originally Posted by *Aenra*
> 
> A month as in..? Mounting them without having flushed in advance?
> 
> (also, you quoted me before i edited for syntax/grammar.. apologies, lol)


Flushed for hours and took time to make sure they were clean; after the first test run of the loop (2 weeks) they still had matter coming out when drained. The second time I drained them, more flux came out (a very small amount this time, but still present only two weeks later).


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> 
> Thanks for all your help, I'll get there!


No worries, 'bout time i helped someone for a change ^^

See these:





As you can see, at 750 RPM the Black Ice comes last of all, while at 1300 it barely makes the middle.

As far as pumps go, any D5 will be fine. So basically, your choice of pump is really a choice of top + reservoir (compatibility reasons, aka better safe than sorry).


----------



## SpecChum

Quote:


> Originally Posted by *Aenra*
> 
> No worries, 'bout time i helped someone for a change ^^
> 
> See these:
> 
> 
> 
> 
> 
> 
> 
> As you can see, at 750 RPM the Black Ice comes last of all, while at 1300 it barely makes the middle.
> 
> As far as pumps go, any D5 will be fine. So basically, your choice of pump is really a choice of top + reservoir (compatibility reasons, aka better safe than sorry).


I'm just going to get a pump/res combo


----------



## SpecChum

Oh, nice graph. Interesting.

The PE360 does quite well there, might check out the CE280.

280mm is great as I've got 2 x ML140 fans for my H110i, so I'd just use those.


----------



## gedoze

this just makes me sad:
new vega card from MSI


----------



## Razkin

That seems like an utterly pointless variant of the stock reference blower-style cooler.


----------



## The EX1

Quote:


> Originally Posted by *Razkin*
> 
> That seems like an utterly pointless variant of the stock reference blower-style cooler.


Well, there will always be a market for blower-style coolers, as they have certain use cases.

With AMD supposedly halting production of the reference models (LINK), it may make sense for MSI to produce their own.


----------



## Razkin

I didn't realize that the reference version will no longer be manufactured. It indeed has use cases, but I find it hard to believe it makes business sense. Uninformed gamers buy Nvidia; informed buyers would rather not buy the blower variant and mostly buy Nvidia anyway, which leaves a very small niche of buyers.


----------



## SavantStrike

Quote:


> Originally Posted by *Razkin*
> 
> I didn't realize that the reference version will no longer be manufactured. It indeed has use cases, but I find it hard to believe it makes business sense. Uninformed gamers buy Nvidia; informed buyers would rather not buy the blower variant and mostly buy Nvidia anyway, which leaves a very small niche of buyers.


I pointed it out a few days ago.

With Vega, the reference version is very compelling - it's beneficial to put Vega under water, and since Vega isn't very popular, a full-cover block is a reference-model-only kind of thing.


----------



## Razkin

But doesn't it make more sense to have a ref board with a proper cooler plus a higher-class custom-PCB variant, and omit the ref board + blower cooler? Seems like you'd touch all the bases and cut down on unnecessary products/stock/"support".


----------



## NI6HTHAWK

Quote:


> Originally Posted by *cplifj*
> 
> does anyone happen to be running corsair's link software and also happen to get clocks stuck at max after gaming or compute loads ?
> 
> it doesn't seem to happen when corsair's link software isn't running, this also monitors temps and fanspeeds of everything in the system.
> 
> more monitoring just seems to conflict with each other and then some.


I have been having the clocks get stuck at P7 after a driver crash while gaming, even when running at the stock balanced profile on my RX 64 LC, and it happens almost instantly at stock settings. Usually the driver crash happens as soon as the GPU begins rendering; then I get a black screen (well, 3 actually), then the screens start to come back one by one, then the card gets stuck at like 1750-1800 MHz but nothing is being rendered, just a black screen where the game should have been.

I do run the Corsair Link software for my AX860i PSU. I've been having to lower the 1750 P7 clock setting to about 1722 or so to keep it from crashing in game, or just manually lock the P7 clock at a slightly lower-than-stock setting. I recently switched back to my old 5.0 GHz 2600K/Fury X machine just for some stability, but I will be putting a 1TB SSD into the 3.9 GHz 1700/Vega LC machine tonight, so I will see if it still crashes, then try without the Corsair Link software running to see if it's any different.


----------



## Aenra

Quote:


> Originally Posted by *SavantStrike*
> 
> With Vega, the reference version is very compelling.


The difference being that (contrary to Nvidia's) ATI/Radeon ref models used to be of much higher component quality than the "equivalent" AIB ref models.. I've got no clue about current AIB Vegas, so if anyone has any info that contradicts this, aka that the AIBs are not of lower quality (which really would be a first), do let us know.

Otherwise, this could be a point of concern.

*edit: also, come to think of it, I remember some Gigabyte ""ref"" models that had their BIOS locked.. that's eons ago, but it had happened. Another thing to watch out for


----------



## SavantStrike

Quote:


> Originally Posted by *Razkin*
> 
> But doesn't it make more sense to have a ref board with a proper cooler plus a higher-class custom-PCB variant, and omit the ref board + blower cooler? Seems like you'd touch all the bases and cut down on unnecessary products/stock/"support".


A better stock blower would be nice. The stock design is always going to be a blower though, because that's what's needed for cramped OEM cases and for workstation/server use.


----------



## owntecx

Well, out of nowhere, HWiNFO started showing new sensors on the GPU side, and it keeps crashing the GPU so hard I have to hard-reset the PC. Anyone with the same problem?


----------



## gedoze

Quote:


> Originally Posted by *Aenra*
> 
> Difference being that (contrary to Nvidia's) ATI/Radeon ref models used to be of a much higher component quality than the "equivalent" AIB ref models.. got no clue about current AIB Vegas, so if anyone has any info that contradicts this, aka that AIBs are _not_ of a lower quality (*which really would be a first*), do let us know.
> 
> Otherwise, this could be a point of concern.
> 
> *edit: also come to think of it, i remember some Gigabyte ""ref"" models that'd had their BIOS locked.. that's eons ago, but it had happened. Another thing to watch out for


Did you just say most AIB cards are almost always worse than reference? Please explain!

Regarding AIB Vegas (my ranking, by coolers): the #1 and #2 deathmatch will be between XFX (2x6mm and 4x8mm heatpipes, small PCB = the 2nd fin stack is open for breathing)


and Sapphire; #3 will be Asus; #4 and #5 will be a battle between PowerColor and Gigabyte - the latter finally with a VRM backside heatpipe on the backplate, but with the poor decision of 5x8mm direct-contact heatpipes on a die with no IHS = bad die contact...

P.S. I tried asking Sapphire via support ticket when their "bling" cards will be available, and I got a reply! Though I don't want to spoil the mood. Still, you can politely ask them and get a reply


----------



## geriatricpollywog

Quote:


> Originally Posted by *SpecChum*
> 
> OK, so after a quick bit of research, here's the plan; hopefully this is on topic as it's mainly about cooling the Vega.
> 
> I'm thinking of at first getting an EK Vega Block, obviously, and a 280mm Black Ice rad for the front of my case and then if/when I decide to WC the Ryzen too I'll add the CPU block and a slim 360mm radiator for the top. Technically, the case can take PE and even EX radiators but it'd cover the QLED on my Crosshair 6, and the XE would actually go as low as the RAM but the radiator is offset so it'd be in front of it.
> 
> This way I can just cool the Vega on the custom loop for now and still use my H110i for the Ryzen.
> 
> For the pump of thinking of the EK-XRES 100 DDC 3.1 PWM MX which actually seems quite cheap and cheerful and looks like it wouldn't have too much trouble even when I add the CPU block and slim 360 into the loop.
> 
> Seem reasonable?


The DDC 3.1 is a weaker pump than the D5 and DDC 3.2.


----------



## SpecChum

Quote:


> Originally Posted by *0451*
> 
> The DDC 3.1 is a weaker pump than the D5 and DDC 3.2.


I meant 3.2, I know it's got double the head pressure and flow over the 3.1









£25 cheaper than a D5 too.

I like it actually.


----------



## JasonMZW20

So, I decided to add another Vega64. They both achieve serious undervolts (exactly the same), and run at around [email protected] I won't let them draw 300W+ (each), as there's not much observable gain. I do have a 1200W PSU to swap in, but meh, I'm more interested in making Vega efficient.


----------



## Spacebug

Undervolting is probably the more sensible thing to do with these cards.

I'm going another route though...
Curiosity got the better of me so I got myself a wall power meter.
Wanted to know what the rig pulled.

Holy crap, in Superposition 4K the rig averaged a kilowatt from the wall









Sure, an old FX platform isn't the most power-efficient thing out there, but...
Overvolted Vegas, at least on ambient cooling, pull A LOT of power


----------



## By-Tor

Every time HWINFO64 v5.60 opens my windows black screens.

Anyone have this problem and know of a fix?

Thank you


----------



## SpecChum

How's this looking?



I've actually changed the tubing to Mayhems since then; it saved a few quid, it's got excellent reviews, and I've already got 2x ML140 fans.

Don't forget this is just for the Vega for the time being, so don't worry about it "only" being a 280mm rad; not that I can see a 360mm making a huge difference.


----------



## geriatricpollywog

Quote:


> Originally Posted by *SpecChum*
> 
> How's this looking?
> 
> 
> 
> I've actually changed the tubing to Mayhem since then, saved a few quid and it's got excellent reviews and I've already got 2 x ML140 fans.
> 
> Don't forget this is just for the Vega for the time being, so don't worry about it "only" being a 280mm rad; not that I can see a 360mm making a huge difference.


280mm rads have 91% of the surface area of 360mm rads.

Just make sure you don't buy anything that you will end up replacing next time you upgrade.
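Where that 91% comes from: it's just frontal fin-stack area (fan size × rad length), ignoring thickness and fin density, so treat it as a rough proxy rather than a cooling-capacity guarantee:

```python
# Rough frontal-area comparison of a 280mm vs 360mm radiator.
# Frontal area ~ fan width x radiator length; ignores thickness/FPI.
area_280 = 140 * 280  # mm^2, two 140mm fans
area_360 = 120 * 360  # mm^2, three 120mm fans
ratio = area_280 / area_360
print(f"280mm is {ratio:.0%} of a 360mm's frontal area")  # -> 91%
```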


----------



## TrixX

Hence recommendations to get the D5 pump/res combo even if it's a little more expensive.


----------



## Arizonian

@JasonMZW20 Running two 64s on an RM 850, any shutdowns during gaming? I'm impressed with the undervolting results. Thanks for sharing.


----------



## hyp36rmax

Quote:


> Originally Posted by *Arizonian*
> 
> @JasonMZW20 Running Two 64's on a RM 850, any shut downs during gaming? I'm impressed with the under volting results. Thanks for sharing.


Yea really, at least he has a 1200 watt on standby. I would get instant OCP shutdown with a 1000 watt Seasonic Prime Platinum.


----------



## Razkin

I tested Vega 56 and 64 in CrossFire, and even though separately the cards were clocked to consume around 300W (at the wall), together they tripped the OCP of my Seasonic Focus+ 850W. The rest of my system only goes above 150W at the wall when running Prime95 and such.


----------



## Aenra

Quote:


> Originally Posted by *gedoze*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Aenra*
> 
> higher component quality
> 
> 
> 
> Please explain!
Click to expand...

I don't see the difficulty here.

But anyway, to rephrase: they tend to go overkill, and often. AIBs will typically spec only as much (in terms of component quality/specifications) as they -think- you need for the settings they -think- you're gonna run; being subject to external hardware, cooling, etc., which, being subjective, is not their problem.

And since you missed it, I would add that reference cards also tend to cost less and are guaranteed to have their BIOS unlocked. While very probable, that's not a guarantee with AIBs; or at the least, one should never view it as one.


----------



## owntecx

Quote:


> Originally Posted by *By-Tor*
> 
> Every time HWINFO64 v5.60 opens my windows black screens.
> 
> Anyone have this problem and know of a fix?
> 
> Thank you


Happened the same to me yesterday. I was monitoring GPU VRM temps for a week without a problem... I went into the BIOS to increase the case fan speeds, and for some reason I got new sensors on the GPU tab about VRM temps, something like VDDC VRM, one for the core, the other for HBM, and it all hard-crashed the GPU: no lights at all, needed to hard-reset the PC. I noticed that if HWiNFO showed the new sensors, GPU-Z showed them too, and that made the PC crash as well. So it's something with the sensors. Somehow it's fixed now; dunno if it was something I did in HWiNFO, but I have the same sensors I had before, and it stopped crashing


----------



## jmoonb

Quote:


> Originally Posted by *By-Tor*
> 
> Every time HWINFO64 v5.60 opens my windows black screens.
> 
> Anyone have this problem and know of a fix?
> 
> Thank you


In HWiNFO you can just disable GPU I2C Support. It won't show the VRM info, but it will stop the black screen.


----------



## jmoonb

Quote:


> Originally Posted by *SpecChum*
> 
> I meant 3.2, I know it's got double the head pressure and flow over the 3.1
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> £25 cheaper than a D5 too.
> 
> I like it actually.


You may also want to think about a way to drain the loop.


----------



## By-Tor

Quote:


> Originally Posted by *jmoonb*
> 
> In HWiNFO you can just disable GPU I2C Support. It won't show the VRM info, but it will stop the black screen.


I just rolled back to v5.38 and all seems fine. VRM temps don't show, but that's ok.

Thanks


----------



## jmoonb

Quote:


> Originally Posted by *JasonMZW20*
> 
> So, I decided to add another Vega64. They both achieve serious undervolts (exactly the same), and run at around [email protected] I won't allow them to draw 300w+ (each) as there's not much observable gain. I do have a 1200w PSU to swap in, but meh, more interested in making Vega efficient.


That's a nice clock. I run mine similar to you, but I get about 1535MHz.

What is the rough draw on your cards anyway? Worst case for me is about 200W on the card, and about a 370W draw from the wall when playing something like BF1.


----------



## gedoze

Quote:


> Originally Posted by *Aenra*
> 
> I don't see the difficulty here.
> 
> But anyway, to rephrase, they tend to go overkill, and often. AIBs will typically feature just as much (in terms of component quality/specifications) as they -think- you need for the settings they -think- you're gonna have; subject as they are to external hardware, cooling, etc. which being subjective, _is not their problem_.
> And since you missed it, i would add that they also tend to charge less for them and are guaranteed to have their BIOS unlocked. While very probable, that's not a guarantee for AIBs; or at the least, one should never view it as one.


Pardon me, I misunderstood you.
Yes, AIB card features are overkill, especially when all of them sport stuff to help with LN2 cooling, but I view it as a good thing. All these extreme measures and extreme overclocking help the original tech authors, their partners, and us alike. Extreme overclocking shows what's bad with a certain card's components/solutions, thus leading to improvements.

worth watching (nice solution comparison):




P.S. can you guys do Vega FE vs. Vega 64 (air / custom air / LC / custom LC) gaming/bench comparisons with the latest drivers? A new driver pack is coming... let's find out the difference.


----------



## fursko

Asus ROG Strix Vega 64 at €700 MSRP?? The Adrenalin drivers must be magic drivers for Vega.


----------



## ITAngel

Quote:


> Originally Posted by *gedoze*
> 
> Pardon me, I misunderstood you.
> Yes, AIB card features are overkill, especially when all of them sport stuff to help with LN2 cooling, but I view it as a good thing. All these extreme measures and extreme overclocking help the original tech authors, their partners, and us alike. Extreme overclocking shows what's bad with a certain card's components/solutions, thus leading to improvements.
> 
> worth watching (nice solution comparison):
> 
> 
> 
> 
> P.S. can you guys do Vega FE vs. Vega 64 (air / custom air / LC / custom LC) gaming/bench comparisons with the latest drivers? A new driver pack is coming... let's find out the difference.


Nice video, thanks for sharing! Very useful information. Since my loop is finished, now is the time to test a few things on that card.









+1 Rep


----------



## JasonMZW20

Quote:


> Originally Posted by *jmoonb*
> 
> That's a nice clock. I run it similar to you but I get about 1535Mhz.
> 
> What is roughly the exact draw on your cards anyways? Worst case for me is about 200W and about a 370W draw from the wall when playing something like bf1.


I've been running through some benches, like Ashes and 3DMark, and yeah, it's about the same. I've seen about 212W in certain scenes (quick spikes, mostly) in Ashes, but mostly runs at around 175-202W per card. My secondary card (newest one) seems to be clocking a tiny bit better than my primary card, by about 25-30MHz at the same voltages, clocks, and WM settings. Running resolution is 4K.

I'll need to get a wall meter to get full system draw, but I haven't tripped the OCP in my PSU yet. I'm using 0% power target to keep the Vega64s contained. Scored 12772 in Time Spy drawing about 200W each (consistently above 1565MHz on both). Top CF Vega64 is 15176 at 1845MHz under water. I keep my temperature target at 65C to control leakage. Makes the cooler less efficient, but the GPU can use about 10-15W less.

There are certain games or benches that can push Vega's consumption through the roof. Deus Ex: Mankind Divided was pulling about 242W on just one (can't get multi-GPU to work with Vega), and Superposition using 4K optimized setting does the same.

Tested Ether mining too, and it's at 145W each at 43.5MH/s each (Claymore's; HBCC on and set at 12GB for both cards, Crossfire disabled) using 1246MHz (card 1)/1276MHz(card 2) at 0.875v and 1100MHz HBM (985mv). Card 1's clocks increase only if I bump the voltage a bit, but it also bumps consumption to 150W for no gain in hashing. P6 is set at 1277MHz and P7 is set to 1352MHz. Have to use Afterburner to force it to undervolt more.

So, I think Vega can be efficient in the right circumstances. It just needs to be more consistently efficient, and hopefully the new Adrenalin drivers later this month help Vega achieve even more performance along with more efficiency gains.
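Those undervolt savings track the usual first-order dynamic-power rule of thumb, P ∝ f·V², which is handy for sanity-checking Wattman changes before running them. A quick sketch with made-up stock/undervolt numbers (illustrative only, not measurements from this thread):

```python
def dynamic_power_scale(p_ref, f_ref, v_ref, f_new, v_new):
    """First-order CMOS dynamic power estimate: P ~ f * V^2.
    Ignores static/leakage power, so real savings will be a bit smaller."""
    return p_ref * (f_new / f_ref) * (v_new / v_ref) ** 2

# Illustrative: ~290W stock at 1630MHz/1.20V vs an undervolt to 1590MHz/1.02V
est = dynamic_power_scale(290.0, 1630, 1.20, 1590, 1.02)
print(f"estimated draw after undervolt: {est:.0f} W")  # -> 204 W
```

The point is that voltage enters squared, so a ~15% undervolt buys far more than a ~15% power cut, which matches why small P7 voltage drops tame these cards so effectively.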


----------



## Trender07

I'm not into mining, but I'm just trying NiceHash. Anyone know why my Vega can barely mine? NiceHash says it should be mining €5.64 and mine barely does $2.80 :s and yes, I did the precise benchmark (before, it was doing just over $1, lol).
Also it looks like it's barely working: the blower barely spins at 1700rpm and the temp is cool, even though it's on P7 and at its correct high frequencies.


----------



## SavantStrike

Quote:


> Originally Posted by *Trender07*
> 
> I'm not into mining, but I'm just trying NiceHash. Anyone know why my Vega can barely mine? NiceHash says it should be mining €5.64 and mine barely does $2.80 :s and yes, I did the precise benchmark (before, it was doing just over $1, lol).
> Also it looks like it's barely working: the blower barely spins at 1700rpm and the temp is cool, even though it's on P7 and at its correct high frequencies.


You need to configure xmr-stak manually to get maximum mining performance. Not sure if NiceHash exposes those settings or not. Ideally you should also be on the beta blockchain driver, but it really sucks for everything but mining and is somewhat unstable.
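For reference, the manual tuning mentioned above lives in xmr-stak's `amd.txt`. A sketch of the GPU-thread section is below; the `intensity`/`worksize` values are illustrative guesses for a Vega (people commonly ran two threads per card on CryptoNight), not tuned settings, so start lower and work up:

```
/* Illustrative amd.txt fragment for xmr-stak.
 * "index" is the OpenCL GPU index; intensity/worksize need per-card tuning. */
"gpu_threads_conf" : [
    { "index" : 0, "intensity" : 1932, "worksize" : 8, "affine_to_cpu" : false },
    { "index" : 0, "intensity" : 1932, "worksize" : 8, "affine_to_cpu" : false },
],
"platform_index" : 0,
```

Two entries with the same `index` run two mining threads on one GPU, which is what tends to help Vega's CryptoNight rate versus a single thread.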


----------



## Grummpy

Power usage doesn't matter unless it's a laptop.
That's how I see it.


----------



## Grummpy

I found it: if I drop my clock speed from 1752 down to 1717 @ 1150mV, I can run my memory at 1200MHz @ 1035mV and use 550 watts at the wall during Superposition.
I get better performance pushing my memory speed rather than pushing my core speed.
How crazy is that?


----------



## ducegt

Quote:


> Originally Posted by *Grummpy*
> 
> I found it: if I drop my clock speed from 1752 down to 1717 @ 1150mV, I can run my memory at 1200MHz @ 1035mV and use 550 watts at the wall during Superposition.
> I get better performance pushing my memory speed rather than pushing my core speed.
> How crazy is that?


Reminds me of the 9600 PRO. You were better off pushing the core or memory to their respective limit.


With Tonga, R9 285/380, a few people reported higher performance when disconnecting the fans from the card as that too impacts the power tuning.


----------



## Mumak

Quote:


> Originally Posted by *By-Tor*
> 
> Every time HWINFO64 v5.60 opens my windows black screens.
> 
> Anyone have this problem and know of a fix?
> 
> Thank you


see https://www.hwinfo.com/forum/Thread-IMPORTANT-HWinfo-Vega-64-intermittent-crash


----------



## jmoonb

Quote:


> Originally Posted by *JasonMZW20*
> 
> I've been running through some benches, like Ashes and 3DMark, and yeah, it's about the same. I've seen about 212W in certain scenes (quick spikes, mostly) in Ashes, but mostly runs at around 175-202W per card. My secondary card (newest one) seems to be clocking a tiny bit better than my primary card, by about 25-30MHz at the same voltages, clocks, and WM settings. Running resolution is 4K.
> 
> I'll need to get a wall meter to get full system draw, but I haven't tripped the OCP in my PSU yet. I'm using 0% power target to keep the Vega64s contained. Scored 12772 in Time Spy drawing about 200W each (consistently above 1565MHz on both). Top CF Vega64 is 15176 at 1845MHz under water. I keep my temperature target at 65C to control leakage. Makes the cooler less efficient, but the GPU can use about 10-15W less.
> 
> There are certain games or benches that can push Vega's consumption through the roof. Deus Ex: Mankind Divided was pulling about 242W on just one (can't get multi-GPU to work with Vega), and Superposition using 4K optimized setting does the same.
> 
> Tested Ether mining too, and it's at 145W each at 43.5MH/s each (Claymore's; HBCC on and set at 12GB for both cards, Crossfire disabled) using 1246MHz (card 1)/1276MHz(card 2) at 0.875v and 1100MHz HBM (985mv). Card 1's clocks increase only if I bump the voltage a bit, but it also bumps consumption to 150W for no gain in hashing. P6 is set at 1277MHz and P7 is set to 1352MHz. Have to use Afterburner to force it to undervolt more.
> 
> So, I think Vega can be efficient in the right circumstances. It just needs to be more consistently efficient, and hopefully the new Adrenalin drivers later this month help Vega achieve even more performance along with more efficiency gains.


Hey, thanks for the details. I can't seem to break 210W in Superposition 4K on these settings, but I can get to 240W with 1070mV. Then again, I can't even seem to break 490W from the wall when maxing everything out, which would explain why I can't push it any higher than 1650MHz.


----------



## jmoonb

Quote:


> Originally Posted by *Grummpy*
> 
> I found it i drop my clock speed from 1752 down to 1717 @1150 mv i can run my memory at 1200 mhz 1035 mv and use 550 watt at wall during superposition.
> I get better performance pushing my memory speed rather than pushing my core speed.
> How crazy is that


At first I thought it was a typo when you said HBM at 1200MHz. Then, for science, I upped mine from 1100 to 1150. It was completely stable... 1200MHz crashes for me though, but how is this possible now, and when did this change happen?!


----------



## Rexer

Quote:


> Originally Posted by *fursko*
> 
> Asus rog strix vega64 700 euro MSRP ?? Adrenalin drivers must be magic drivers for vega.


Makes yah feel hopeful. Only thing I know that didn't come with teething problems is a chair I bought. Lol.


----------



## barbz127

Quote:


> Originally Posted by *jmoonb*
> 
> Hey thanks for the details. I can't seem to break 210W in superposition 4k on these settings but can get to 240 with 1070mV. Then again, I can't even seem to break 490W from the wall when maxing out everything which would explain why I can't seem to push it any higher then 1650Mhz.


Would you mind posting your full settings?
Thank you


----------



## geriatricpollywog

Quote:


> Originally Posted by *Trender07*
> 
> Im not into mining but just trying "nicehash" anyone knows why my vega can barely mine ? Nicehash says it should be minin 5.64 € and mine barely does 2.80 USD :s and yes I did precise benchmark(before was doing 1.+ $ lol)
> Also it looks like its barely working, the blower barely spins at 1700 rpm and the temp is cool, even tho its on P7 and with its right high frequencys


I am getting $4 a day using the CryptoNight miner on nicehash. If I run DaggerHashimoto, I get $3 a day and it only puts load on the HBM.


----------



## cmogle4

Anyone with a Frontier Edition (air) having issues getting HBM above 1040? If I set mine any higher than that, it can pass Fire Strike, but games or Time Spy crash. Also, I cannot undervolt this thing at all or it crashes. I think I lost the lottery.


----------



## diggiddi

Quote:


> Originally Posted by *JasonMZW20*
> 
> I've been running through some benches, like Ashes and 3DMark, and yeah, it's about the same. I've seen about 212W in certain scenes (quick spikes, mostly) in Ashes, but mostly runs at around 175-202W per card. My secondary card (newest one) seems to be clocking a tiny bit better than my primary card, by about 25-30MHz at the same voltages, clocks, and WM settings. Running resolution is 4K.
> 
> I'll need to get a wall meter to get full system draw, but I haven't tripped the OCP in my PSU yet. I'm using 0% power target to keep the Vega64s contained. Scored 12772 in Time Spy drawing about 200W each (consistently above 1565MHz on both). Top CF Vega64 is 15176 at 1845MHz under water. I keep my temperature target at 65C to control leakage. Makes the cooler less efficient, but the GPU can use about 10-15W less.
> 
> There are certain games or benches that can push Vega's consumption through the roof. Deus Ex: Mankind Divided was pulling about 242W on just one (can't get multi-GPU to work with Vega), and Superposition using 4K optimized setting does the same.
> 
> Tested Ether mining too, and it's at 145W each at 43.5MH/s each (Claymore's; HBCC on and set at 12GB for both cards, Crossfire disabled) using 1246MHz (card 1)/1276MHz(card 2) at 0.875v and 1100MHz HBM (985mv). Card 1's clocks increase only if I bump the voltage a bit, but it also bumps consumption to 150W for no gain in hashing. P6 is set at 1277MHz and P7 is set to 1352MHz. Have to use Afterburner to force it to undervolt more.
> 
> So, I think Vega can be efficient in the right circumstances. It just needs to be more consistently efficient, and hopefully the new Adrenalin drivers later this month help Vega achieve even more performance along with more efficiency gains.


Repped up for that BTW Which Vega 64 model are u rocking?
Quote:


> Originally Posted by *Trender07*
> 
> Im not into mining but just trying "nicehash" anyone knows why my vega can barely mine ? Nicehash says it should be minin 5.64 € and mine barely does 2.80 USD :s and yes I did precise benchmark(before was doing 1.+ $ lol)
> Also it looks like its barely working, the blower barely spins at 1700 rpm and the temp is cool, even tho its on P7 and with its right high frequencys


Which driver are you using? Uninstall with DDU and reinstall either the blockchain beta or the latest driver


----------



## Mandarb

My Vega 64 AC is now still air cooled, although by two Noctua NF-F12 fans through a Raijintek Morpheus II:


http://imgur.com/rTLha


Currently running at 1652MHz/1100mV, which usually results in about 1605MHz clocks, with HBM at 1100MHz.

Haven't really tried upping the HBM further yet, but these are the results so far:

https://www.3dmark.com/spy/2834678

I tried slowly edging the upper clock up to get the card to run at a slightly higher frequency, but results and clocks usually stay about the same until I reach a threshold around 1672MHz. At that point it boosts up to about 1630MHz (it crashed at one point during testing because the frequency spiked above 1650MHz), but I'm getting quite a bit higher results:

https://www.3dmark.com/spy/2843142
Also, with these settings I get 60°C GPU, 65°C HBM and 85°C hotspot temps; the VRMs are at 90°C. This is with the fans at 1450rpm max and a temp target of 60°C.

Once I can find the correct nuts I will remount the backplate and put some thermal pads on some components on the back.


----------



## ITAngel

Is this good here for a Vega 64 Air flashed to LC?



Got this with a GPU core clock of 1670MHz and 1140MHz on the memory. Max VDDC of 1.1438V and a GPU-only max power draw of 378W. Max temp of 45°C on the GPU and a GPU hotspot max of 64°C.

In one of the tests it showed the GPU hitting 1788MHz. lol, not sure how, hahaha. Odd, huh?


----------



## TrixX

Quote:


> Originally Posted by *ITAngel*
> 
> Is this good here for a Vega 64 Air flashed to LC?
> 
> 
> 
> Got this with a GPU core clock of 1670MHz and 1140MHz on the memory. Max VDDC of 1.1438V and a GPU-only max power draw of 378W. Max temp of 45°C on the GPU and a GPU hotspot max of 64°C.
> 
> In one of the tests it showed the GPU hitting 1788MHz. lol, not sure how, hahaha. Odd, huh?


It's a bit below what I'd expect for that power draw, though your core clock is set lower than mine.

I got 7092 with a 1752MHz core clock at 1150mV on P7, and HBM @ 1096MHz and 950mV. Balancing memory performance vs core clock to optimise results may differ from test to test, as some tests prefer core speed and others HBM.


----------



## Roboyto

Quote:


> Originally Posted by *ITAngel*
> 
> Is this good here for a Vega 64 Air flashed to LC?
> 
> 
> 
> Got this with a GPU core clock of 1670MHz and 1140MHz on the memory. Max VDDC of 1.1438V and a GPU-only max power draw of 378W. Max temp of 45°C on the GPU and a GPU hotspot max of 64°C.
> 
> In one of the tests it showed the GPU hitting 1788MHz. lol, not sure how, hahaha. Odd, huh?


Pretty good, but you can probably pull some more out of it with extra tweaking.

Still on stock AC BIOS with my card:

Floor Volts: 1095mV

P6: 1627 @ 1095mV

P7: 1722 @ 1145mV

HBM: 1185Mhz



Be mindful of that 1788 spike...that can become problematic and cause overboost crashes. These cards are super sensitive to small changes in clocks/voltages...5mV/5MHz can be the difference between glorious stability and hair-pulling aggravation.


----------



## jmoonb

Quote:


> Originally Posted by *barbz127*
> 
> Would you mind posting your full settings?
> Thankyou


I used:
P7: 1592MHz @ 1020mV
P6: 1537MHz @ 970mV
HBM: 1100MHz @ 960mV
Power Limit: 50%


----------



## 1337bigmac

Having some major crashing issues with my Vega 64. Seems to only happen when I have 3 monitors plugged in. Any advice?


----------



## Mumak

@Those who had issues monitoring VRM sensors in HWiNFO, please try the new Beta build 3297.
I made some improvements in I2C access, but not sure if that will help.


----------



## asder00

Quote:


> Originally Posted by *Mumak*
> 
> @Those who had issues monitoring VRM sensors in HWiNFO, please try the new Beta build 3297.
> I made some improvements in I2C access, but not sure if that will help.


Same black screen crash, i uploaded the report in your forum.


----------



## Grummpy

While playing Assassin's Creed Origins


----------



## VicsPC

Quote:


> Originally Posted by *Mumak*
> 
> @Those who had issues monitoring VRM sensors in HWiNFO, please try the new Beta build 3297.
> I made some improvements in I2C access, but not sure if that will help.


Still nothing for me. I did disable I2C support, as I had a huge crash on my Ryzen build when enabling monitoring of the VRM IC voltage, I believe; it caused a d5 error and I had to reset the CMOS for the PC to even boot back up.

I did, however, have a Radeon crash yesterday, and then the 2 AMD VRM temp sensors showed up in HWiNFO, but they haven't since.


----------



## JasonMZW20

Quote:


> Originally Posted by *jmoonb*
> 
> At first I thought it was a typo when you said HBM at 1200mhz. Then, for science, I upped mine from 1100 to 1150. It was completely stable... 1200mhz crashes for me though but how is this now possible and when did this change happen?!?!?


I've been playing around with HBM clocks and found that above 1115MHz, Vega's SoC clock jumps from 1107MHz to 1199MHz (1200 for simplicity). This increases power consumption overall, but also increases the bandwidth of the Infinity Fabric. Mine is seriously unstable when the SoC clock bumps up to 1200MHz, but I can get it to work using auto voltage and much lower core clocks. This was on my 1st card (unmolded Vega64, Korean made). Haven't tested the 2nd yet.
Quote:


> Originally Posted by *diggiddi*
> 
> Repped up for that BTW Which Vega 64 model are u rocking?


Ah thanks man. XFX Air for both.

I've also observed that if I keep the HBM voltage (floor) at 950mV, it quickly downclocks/upclocks from 800MHz to the set range (so 800MHz one second, then 1100MHz, then 800MHz, and so on). Putting it at 975mV keeps the memory clock stable. As always with silicon, yours might vary.
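As a tiny sketch, the SoC-clock step described above can be written down directly (thresholds taken from this one card's observed behavior; other samples and driver versions may well differ):

```python
def soc_clock_mhz(hbm_mhz: int) -> int:
    """SoC clock step observed on one Vega64: stays at ~1107MHz until
    HBM exceeds 1115MHz, then jumps to ~1199MHz (Infinity Fabric
    bandwidth goes up, along with power draw and instability risk)."""
    return 1199 if hbm_mhz > 1115 else 1107

print(soc_clock_mhz(1100))  # 1107
print(soc_clock_mhz(1150))  # 1199
```

Which is why an HBM bump from 1100 to 1150 is not the small change it looks like: it drags the SoC domain up with it.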


----------



## hyp36rmax

Quote:


> Originally Posted by *Mumak*
> 
> @Those who had issues monitoring VRM sensors in HWiNFO, please try the new Beta build 3297.
> I made some improvements in I2C access, but not sure if that will help.


Thanks! I'll check this when I get home.


----------



## Trender07

Quote:


> Originally Posted by *0451*
> 
> I am getting $4 a day using the CryptoNight miner on nicehash. If I run DaggerHashimoto, I get $3 a day and it only puts load on the HBM.


Well, I have every algorithm enabled, but yeah, mine mines with CryptoNight too (I have claymore_old disabled)
Quote:


> Originally Posted by *diggiddi*
> 
> Repped up for that BTW Which Vega 64 model are u rocking?
> Which driver are you using? Uninstall with DDU and reinstall either block chain beta or latest driver


Yeah, I'm using the latest driver (11.4), installed with DDU


----------



## Reikoji

Quote:


> Originally Posted by *Mumak*
> 
> @Those who had issues monitoring VRM sensors in HWiNFO, please try the new Beta build 3297.
> I made some improvements in I2C access, but not sure if that will help.


Yea, no dice. As soon as it gets to detecting ATI devices, the GPU shuts off.


----------



## geriatricpollywog

Quote:


> Originally Posted by *Grummpy*
> 
> Wile playing assassins creed origins


Is that a constant solid 1749? Nice! What quality settings and resolution are you using?


----------



## diggiddi

Quote:


> Originally Posted by *Trender07*
> 
> Well, I have every algorithm enabled, but yeah, mine mines with CryptoNight too (I have claymore_old disabled)
> Yeah, I'm using the latest driver (11.4), installed with DDU


I think you need to select only the most profitable algos to get the best rate; at least I do


----------



## elox

Anyone already flashed the v64 red devil bios?


----------



## ducegt

Quote:


> Originally Posted by *elox*
> 
> Anyone already flashed the v64 red devil bios?


I'm going to try it on 64 LC this evening.


----------



## Grummpy

Quote:


> Originally Posted by *0451*
> 
> Is that a constant solid 1749? Nice! What quality settings and resolution are you using?


2560x1440, all maxed.
Just running stock with 1100MHz HBM2 @ 1000mV for stability.
+50% power.
Auto voltages


----------



## Mumak

Oh well.. I don't think there's more I can do about the GPU VRM access via I2C.
It looks like the problem is in the drivers, so now it's AMD's turn. I told them about this; let's see what they will do...


----------



## Grummpy

When I run HWiNFO I get a BSOD and reboots.
It just dislikes reading the Vega 64


----------



## Mumak

Quote:


> Originally Posted by *Grummpy*
> 
> When i run HWinfo i get bsod and reboots.
> just dislikes reading the vega 64


Disabling the "GPU I2C Support" option in HWiNFO should fix it.


----------



## Stige

So...

I'm looking to upgrade. How bad are the problems with Vega 56, and are they fixable with future driver updates?

The 1070 Ti seems pretty hot right now, as it comes pretty close to Vega 56 in most games and just completely beats it in the rest. So I'm wondering if it's something AMD can remedy in the future with driver updates, like they did in the case of the 390 vs 970, or is it just hopeless?


----------



## porschedrifter

Quote:


> Originally Posted by *fursko*
> 
> There is no Vega overclock. It's Vega tuning ^^ Don't use Afterburner. Wattman works well. Just overclock the HBM and undervolt your P6 and P7. Try CoD WW2 for a stability test.


I went from swearing by Afterburner for years to now just using it for the on-screen display in games.
Wattman is surprisingly good, and all I need.


----------



## fursko

Quote:


> Originally Posted by *Stige*
> 
> So...
> 
> I'm looking to upgrade. How bad are the problems with Vega 56, and are they fixable with future driver updates?
> 
> The 1070 Ti seems pretty hot right now, as it comes pretty close to Vega 56 in most games and just completely beats it in the rest. So I'm wondering if it's something AMD can remedy in the future with driver updates, like they did in the case of the 390 vs 970, or is it just hopeless?


It looks like the Vega series has the best FineWine potential of any GPU in history. A lot of features are still disabled, and next-gen games will favor Vega. But that's just how it looks; dunno what's gonna happen. If you don't have a FreeSync monitor, I think Nvidia's offerings are better: low power, high performance, cheap, problem free. Wait for the Adrenalin edition drivers.


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> It looks like the Vega series has the best FineWine potential of any GPU in history. A lot of features are still disabled, and next-gen games will favor Vega. But it just looks that way; dunno what's gonna happen. If you don't have a FreeSync monitor, I think Nvidia's offerings are better: low power, high performance, cheap, problem-free. Wait for the Adrenalin edition drivers.


Right, but if they release a successor to the popular midrange Polaris cards, support for Vega in future may be limited.


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> Right, but if they release a successor to the popular midrange Polaris cards, support for Vega in future may be limited.


I hope they work on just one arch like Nvidia. A midrange Vega would be nice. But yes, the Fury cards are suffering right now; maybe they will forget the Vega cards too.


----------



## JasonMZW20

Quote:


> Originally Posted by *Mumak*
> 
> Disabling the "GPU I2C Support" option in HWiNFO should fix it.


Weird. I had the black screen (and green and red!) crash with 5.59, but I moved to the 5.60 beta, enabled AMD ADL, and set it to prefer ADL. It hasn't crashed my system and has been running non-stop for over 28 hours. It detects both Vega 64s without issue now, aside from the VRM temps dropping in and out and the PLX temp popping in after a while. Wonder how much stuff the Adrenalin drivers will break.


----------



## Mumak

Quote:


> Originally Posted by *JasonMZW20*
> 
> Weird. I had the issue with 5.59, but moved to 5.60 beta and enabled AMD ADL and set it to prefer ADL. Hasn't crashed my system and has been running non-stop for over 28 hours. Detects both Vega64s without issue now, well aside from the VRM temps dropping in and out and PLX temp popping in after a while. Wonder how much stuff the Adrenalin drivers will break.


That's because when HWiNFO uses ADL for I2C ("GPU I2C via ADL"), it will not access any I2C devices at all, including the VRMs (which are causing the recent crashes). ADL is AMD's own library which claims to offer I2C access as well, but in fact it doesn't work at all.








So in the end, enabling "GPU I2C via ADL" has a similar effect to disabling the entire "GPU I2C Support" option.


----------



## JasonMZW20

Quote:


> Originally Posted by *Mumak*
> 
> That's because when HWiNFO uses ADL for I2C ("GPU I2C via ADL") then in fact it will not access any I2C devices including the VRMs (which are causing the recent crashes). ADL is AMD's own library which claims to offer I2C access as well, but in fact it doesn't work at all
> 
> 
> 
> 
> 
> 
> 
> 
> So in the end - enabling "GPU I2C via ADL" has a similar effect as disabling the entire "GPU I2C Support" option


I was about to edit my post and say something along the lines of "different route to roughly the same solution". But good to know the technical details on why.


----------



## Mumak

Well, due to so many issues with the new drivers and VRM access on Vega, I will most probably disable GPU I2C Support on Vega by default in the next version. There will be a new switch to force-enable it.


----------



## diggiddi

Quote:


> Originally Posted by *0451*
> 
> Right, but if they release a successor to the popular midrange Polaris cards, support for Vega in future may be limited.


I don't think that's gonna happen. Fury was a stepping stone to Vega/Navi by first introducing HBM. Navi should be similar to Vega, with one of the main differences being that its GPU will be more like Threadripper: instead of one large monolithic core, several smaller cores working together.
HBCC and all the other features should be carried over and improved, plus some new stuff thrown in, if my understanding is right.


----------



## Roboyto

Quote:


> Originally Posted by *1337bigmac*
> 
> Having some major crashing issues with my Vega 64. Seems to only happen when I have 3 monitors plugged in. Any advice?


Are you running at stock settings? If not, revert to stock and check.

What drivers? I haven't had many crash issues, unless they were OC related, with my 3x1080 Eyefinity.


----------



## fursko

Quote:


> Originally Posted by *diggiddi*
> 
> I don't think that's gonna happen, Fury was a stepping stone to Vega/Navi by first introducing HBM, Navi should be similar to Vega with one of the main differences being its gpu will be more like treadripper ie instead of large monolithic core its several smaller cores together
> The HBCC and all the other features should be carried over and improved + some new stuff thrown in, if my understanding is right


Dunno what's gonna happen, but DSBR, NGG and PS are still disabled. Something must be wrong. Maybe the hardware is broken and they cannot enable them through drivers.


----------



## 1337bigmac

Hey, thanks for the follow up. It's a WEIRD issue. Let me summarize best as I can.

The Environment:
+ Stock settings on the GPU. Pulled it outta the box, plugged it in. No fiddling around at all.
+ Currently "stable" on the Nov 10th drivers (17.11.1)
+ Full clean install of drivers was performed, reboots in between each step.
+ Win 7 64-bit
+ 6-core Intel 5820k OC'd to 4.4
+ 16 GB DDR4 2400mhz

The issue:
+ When I have only TWO monitors plugged in, I am stable with no issues (no issue on ANY driver version.) High load, this baby is rock solid. As expected.
+ The moment I plug in a third monitor - to any slot on the GPU (any of the three display port AND the HDMI), and seemingly any monitor (have tested with 3 so far) the whole PC shuts down. Monitors lose detection. Black screen, like a semi-hard boot. Except I have to manually start it up again to get the machine to post. So it goes into some sort of limbo.
+ If I boot up with all 3 plugged in I will generally get Windows to load, but the moment I open a program (Chrome, Steam, whatever) it crashes.
+ I clean installed to 17.11.1 and was stable for one evening. The next day when I booted up it almost immediately crashed after the desktop loaded and I opened Chrome.
+ The crash can occur under load or under no load. Occasionally it will be stable enough to open Steam and start to launch a game. Then, the crash.
+ Temps are never even remotely high.
+ 1000 W high-quality PSU; power shouldn't be an issue.
+ Using two separate power leads from the PSU to the GPU (not using two daisy-chained leads).

WhoCrashed says it's a driver issue, from looking at the dmp files. Windows Event Viewer has nothing useful to say.

(shrug) Stable for the moment? Curious whether, when I get home from work and boot it up, I'll see a repeat of yesterday's degradation.


----------



## Reikoji

Quote:


> Originally Posted by *Mumak*
> 
> Oh, well.. I don't think there's more I can do about the GPU VRM access via I2C.
> It looks like the problem is in drivers, so now it's AMD's turn. I told them about this, let's see what will they do...


I kinda figured it would be either the drivers or the BIOS causing the catastrophic failure with sensor access.


----------



## diggiddi

Quote:


> Originally Posted by *fursko*
> 
> Dunno what is gonna happen but DSBR, NGG and PS still disabled. Something must be wrong. Maybe hardware broken and they can not enable with drivers.


Someone posted on here about a source on HardOCP who was in touch with someone at AMD, who said there will be a Vega refresh next year, because Vega was rushed to market and there was a disconnect between the hardware and software teams; that's why some of these features are disabled. Only one feature was able to be enabled, and it sometimes gave negative effects, or very small increases if any.
Now this begins to make sense given the news that AMD will stop producing reference cards and give chips to AIBs for aftermarket designs. It was also intimated that this is why Raja is no longer with AMD.
But I believe the proof of the pudding will be in the Adrenalin drivers: if they do not enable all the functions, then this rumor is most likely true.


----------



## kundica

Quote:


> Originally Posted by *diggiddi*
> 
> Someone posted on here about a source on HardOCP who was in touch with someone @AMD who said there will be a vega refresh next year, because Vega was rushed to market and there was a disconnect between Hardware and software team that's why some of these features are disabled. Only one feature has the ability to be enabled and it sometimes gave -ve effects or very small increases if at all.
> Now this begins to make sense when there is news that AMD will stop producing reference cards and give chips to AIB for aftermarket designs. Also intimated that this is why Raja is no longer with AMD
> But I believe the proof of the pudding will be in the Adrenalin drivers if they do not enable all the functions then this rumor is most likely true


That guy who posted on hardocp is just an Nvidia guy who regularly ****s on AMD on Reddit. Take anything he says with a grain of salt.


----------



## diggiddi

Quote:


> Originally Posted by *kundica*
> 
> That guy who posted on hardocp is just an Nvidia guy who regularly ****s on AMD on Reddit. Take anything he says with a grain of salt.


Ahh I see


----------



## Grummpy

My old air cooled vega 64 that got destroyed.


all maxed on nightmare textures and shadows as reference.
2560x1440


----------



## Grummpy

It seems to me, from testing my Vega 64 LC, that reducing performance by 5% slashes power usage in half.
Why AMD pushed the Vega cards into the red by default makes little sense.
I can hit 700 watts at the wall if I want to, for a 1% gain in performance,
or I can hit 380 watts at the wall for a 5% drop in performance.
You could easily run this Vega 64, which claims to need a 1000 watt PSU, on a 500 watt PSU.
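Those wall readings line up with basic DVFS math: dynamic power scales roughly with f·V², and the last few percent of frequency need a steep voltage bump. A rough back-of-envelope sketch, with illustrative clocks and voltages rather than measured Vega data:

```python
# Rough sketch of why a small clock drop slashes power on Vega.
# Assumes dynamic power ~ f * V^2, and that the top of the V/f curve
# needs a steep voltage bump. Clocks/voltages are illustrative only.

def dynamic_power(freq_mhz, volts, k=1.0):
    """Dynamic switching power, proportional to f * V^2 (arbitrary units)."""
    return k * freq_mhz * volts ** 2

stock = dynamic_power(1630, 1.20)   # pushed "into the red" by default
tuned = dynamic_power(1550, 0.95)   # ~5% lower clock, undervolted

print(f"clock drop:  {1 - 1550 / 1630:.1%}")
print(f"power saved: {1 - tuned / stock:.1%}")
```

With these made-up numbers, a ~5% clock drop plus an undervolt cuts dynamic power by roughly 40%, the same ballpark as the wall measurements above.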


----------



## Grummpy

The one on the left is the LC, the one on the right is the air-cooled.
Better-binned silicon, as you can clearly see by the 36 watt difference.


Thought it would be cool to make the comparison since I can.


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> One on left is the LC one on right is the air cooled.
> better binned silicon as you can clearly see by 36 watts
> 
> 
> thought it be cool to make comparison as i can.


My air-cooled card hits 222 W on an EKWB block, stock BIOS and all.


----------



## Grummpy

Got over 30k.
Nvidia 1080 Tis are way down in the low 23000s, poor things.
http://www.luxmark.info/
http://www.luxmark.info/node/5222


----------



## Grummpy

Look at this from Apple, a Prototype Compute Engine:
http://www.luxmark.info/node/5221

I'm in 9th.








I should push some overclock, because it only pulls about 400 watts at the wall during this test.
http://www.luxmark.info/top_results/LuxBall%20HDR/OpenCL/GPU/1


----------



## Naeem

Quote:


> Originally Posted by *Grummpy*
> 
> Look at this from apple a Prototype Compute Engine
> http://www.luxmark.info/node/5221
> 
> Im in 9th
> 
> 
> 
> 
> 
> 
> 
> 
> i should push some overclock because it only pulls like 400 watts at wall during this test.
> http://www.luxmark.info/top_results/LuxBall%20HDR/OpenCL/GPU/1


I am number 4.


----------



## jehovah3003

Ah, I'm using 3 monitors too, but I'm not getting those issues. Multi-monitor is getting more and more buggy :/


----------



## 1337bigmac

Quote:


> Ah, i'm using 3 monitors too but i'm not getting those issues, multi monitor is getting more and more buggy :/


Currently stable after yet another DDU clean install of the Nov 11th drivers. *shrug* We shall see! I remain optimistic. Sad to see multi-monitor support get worse.


----------



## gedoze

Quote:


> Originally Posted by *Grummpy*
> 
> One on left is the LC one on right is the air cooled.
> better binned silicon as you can clearly see by 36 watts
> 
> 
> thought it be cool to make comparison as i can.


Quote:


> Originally Posted by *VicsPC*
> 
> My air card hits 222w on an ekwb stock bios and all.


Is it just me, or does that 36 watt difference show what proper cooling does for VRM efficiency?
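Back-of-envelope, cooling should help the VRMs some, since MOSFET Rds(on) typically rises around 0.4%/°C above 25 °C. A quick sketch, with assumed (not datasheet) values for phase count, Rds(on) and temperatures:

```python
# Back-of-envelope VRM conduction loss vs. temperature. MOSFET Rds(on)
# typically rises ~0.4%/degC above 25C, so cooler VRMs waste less as
# heat. Phase count, Rds(on) and temps here are assumptions, not
# datasheet values for the actual Vega 64 board.

def vrm_conduction_loss(total_amps, phases, rdson_25c_mohm, temp_c, tc=0.004):
    """Total I^2 * R loss across all phases at a given VRM temperature."""
    r = rdson_25c_mohm * 1e-3 * (1 + tc * (temp_c - 25))
    per_phase_amps = total_amps / phases
    return phases * per_phase_amps ** 2 * r

hot = vrm_conduction_loss(250, 12, 1.5, 110)   # toasty air-cooled VRM
cool = vrm_conduction_loss(250, 12, 1.5, 50)   # water-cooled VRM

print(f"hot:   {hot:.1f} W")
print(f"cool:  {cool:.1f} W")
print(f"saved: {hot - cool:.1f} W")
```

With these guesses the cooler VRM only saves a couple of watts, so most of a 36 W gap is more likely GPU die leakage dropping at lower temperature, plus binning.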


----------



## pengs

Some minimal driver improvements so far. Not bad, nothing unexpected.

I'm wondering if the user who made this video altered his target temperature between drivers. I've noticed reduced temperatures overall with 17.11.4, with the fan revving up more aggressively than before (55°C vs. sitting on the target temp, 65°C for example). I wonder if it's bugged or if this was a deliberate change.

With the minimum fan state set to 400 rpm, it never runs below 560 rpm. I seem to remember the fan idling at 400 rpm on previous drivers.


----------



## Grummpy

Wish this guy worked for a graphics card manufacturer



It would save a lot of messing about.



I want to see a case with this design, to allow huge, light coolers on the GPU,
or a GPU that comes with a case.


----------



## Grummpy

https://i.imgur.com/8PElGbl.jpg
me and my vega 64 lol


----------



## jbravo14

Has anyone tried flashing a Vega 64 LC BIOS onto a water-cooled Vega 56?

I'm planning on building a custom loop for my PC.

Also, does anyone have experience with CrossFire on Vega? I've been having problems with my games running at low FPS (Ghost Recon Wildlands; AC Syndicate does not start; AC Origins does not start).

Only Rise of the Tomb Raider seems optimized for CrossFire.


----------



## poisson21

Some experience with CrossFire here: you have to try the different settings with each game you want to play; it's not "launch and play" every time. Some 3D engines don't accept CrossFire at all (Unity and Unreal Engine come to mind), and others are really finicky about which graphics options you choose besides CrossFire. With Ashes of the Singularity, for example, you have to disable AA and some post-processing effects to avoid flickering.

You can also choose an official profile if your game has one, or pick the profile of a game with the same 3D engine; that helps a lot sometimes.

But all in all, most games are not programmed to run on CrossFire/SLI.


----------



## ITAngel

Quote:


> Originally Posted by *Grummpy*
> 
> https://i.imgur.com/8PElGbl.jpg
> me and my vega 64 lol


Nice! I still want to hit that 1788 MHz I got before. lol I will continue searching for the Force and Unlimited Power!


----------



## NI6HTHAWK

Quote:


> Originally Posted by *1337bigmac*
> 
> Having some major crashing issues with my Vega 64. Seems to only happen when I have 3 monitors plugged in. Any advice?


I've experienced the same issue with the Vega 64 LC XTX: I was getting a hard crash followed by a reboot while just sitting idle at the desktop or watching a YT video when running the Balanced profile in Wattman.

Honestly, I'm not sure which setting cured my hard crash when running my 3 displays (two 27" ASUS MG278Q 144 Hz 1440p FreeSync and a 43" Sony XBR43X800E 60 Hz 4K). Basically, what I did was lower my P7 GPU clock from 1750 to 1702, raise the P3 memory voltage from 950 to 1000 mV, and raise the power target to 50%; of course, I was also running the HBM at 1150 MHz, so your results may vary.

What I didn't realize was that running the 3rd monitor was what caused the hard crash (the 4K TV was my 3rd display). I have the 4K TV running on HDMI while the two ASUS are on DP. This 3-display issue may be why I was unable to run stock settings out of the box.

To test this, I am going to run the GPU on the Balanced profile with the system idle and just the two ASUS monitors enabled, to see if it reboots itself with the output to the 4K disabled. I'm interested in whether this cures my inability to run games at stock Balanced-profile settings without having to use custom settings.

Either way, I will say this card seems to run my 3 displays much better than my 1080 Ti FE, which would clock at 1400 MHz on the desktop any time the 4K TV was enabled, even when it was the only thing enabled. Not sure if it doesn't like running HDMI or what, but it was consuming an extra 40 watts at idle!


----------



## NI6HTHAWK

Okay, I believe the crashing with the 3x multi-display setup is simply due to the default power limit on the Balanced profile being too low, plus possible PSU OCP trips caused by GPU spikes.

I was not hard-crashing and self-rebooting while idle or watching YT videos with only two displays on the Balanced profile, but as soon as I loaded World of Warships into its 3D engine it crashed the driver (and that's not the only title I've seen that issue with). This got me thinking that the driver crashes and stability issues were power related.

So I switched to the Turbo profile, which looks like it only raises the power limit to +25% while leaving all other settings the same as Balanced. Now it's running World of Warships and the two other displays without crashing the driver. This is actually the first time I haven't crashed while using anything but custom settings, although I may not have tried Turbo mode since the early 17.8.2 Vega drivers. I just figured that if it didn't work on Balanced then maybe there was something wrong with the card or driver, and tuned the card with a lower P7 clock. Now it's running the default 1752 clock, holding 1700 MHz core and 945 MHz HBM in World of Warships @ 1440p with MSAA x8 and the highest detail settings, at 66C on the Turbo fan settings.

Next, I wanted to find the breaking point, so I moved World of Warships onto the 4K TV at 3840x2160 with MSAA turned off, and it crashed the driver as well as the application. I switched to a custom profile, turned the power limit to 50% after a restart, and tried again; this time the 4K display output shut off, then so did the other two displays, followed by the mobo resetting. Now I suspect it's a power-spike issue, with the GPU tripping OCP on my AX-860i, as I saw the core spike to a last-recorded 1777 MHz. I turned off the 1440p monitors and ran just the 4K TV, and the game runs at 2160p just fine without crashing. The default Balanced profile was achieving 1563 MHz, Turbo was running at 1640 MHz, and Power Save was at 1442 MHz, all above the advertised base clock of 1406 MHz.

So if you have a 3x multi-display setup, I suggest raising the power limit if you are having stability issues with the default out-of-the-box settings; and if that doesn't work, it could be that the PSU isn't providing the current demanded by the GPU's spikes.

I'm just thankful that my card isn't defective. I will be swapping in the AX-1200i from my Sandy Bridge Fury X CF rig in place of the AX-860i, and just selling one of the Furys. It seems the ridiculous 1000 W minimum PSU requirement for the Vega 64 LC is because of these potential spikes, as the card is seemingly very power efficient most of the time.
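The OCP theory holds up on paper: overcurrent protection reacts to instantaneous current, not average draw, and Vega transients can briefly sit well above board power. A rough sketch of the headroom math, where the spike ratio and the ~72 A combined 12 V limit (860 W / 12 V) are illustrative assumptions, not AX-860i specs:

```python
# Why a PSU with comfortable average headroom can still trip OCP:
# protection reacts to instantaneous current, and GPU transients can
# briefly reach well over 2x the average board power. The spike ratio
# and the ~72 A limit (860 W / 12 V) are illustrative assumptions.

def trips_ocp(avg_system_w, gpu_avg_w, gpu_spike_ratio, rail_limit_a, rail_v=12.0):
    """True if a GPU transient pushes the 12 V rail past its OCP limit."""
    spike_w = (avg_system_w - gpu_avg_w) + gpu_avg_w * gpu_spike_ratio
    return spike_w / rail_v > rail_limit_a

print(trips_ocp(500, 300, 2.5, 72))   # raw spike: can trip an 860 W unit
print(trips_ocp(500, 300, 1.3, 72))   # tamed spike: fine
```

So a rig averaging ~500 W can still momentarily demand ~950 W from the 12 V rail, which is why the PSU recommendation looks ridiculous next to the average wall draw.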


----------



## hyp36rmax

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> I've experienced the same issue with the Vega 64 LC XTX where I was getting a hard crash followed by a reboot while just sitting idle at desktop or watching a YT video when running the Balanced Profile in Wattman.
> 
> Honestly I'm not sure what setting cured my hard crash when running my 3 displays (two 27" ASUS MG278Q 144 Hz 1440p Freesync and a 43" Sony XBR43X800E 60 Hz 4k). Basically what i did was lower my P7 GPU clock from 1750 to 1702, raised P3 memory mV from 950 to 1000, raised power target to 50%, of course I was also running the HBM at 1150 MHz so your results may vary.
> 
> What I didn't realize was that it was running the 3rd monitor which was causing the hard crash (4k TV was my 3rd display). I have the 4k TV running on the HDMI while the two ASUS are on the DP. This 3 display issue may have been why i was unable to run stock settings out of the box.
> 
> To test this I am going to run GPU on balanced profile with the system idle and just the two ASUS monitors enabled to see if reboots itself with the output to the 4k disabled. I'm interested if this cures my inability to run games at stock balanced profile settings without having to run custom settings.
> 
> Either way I will say this card seems to run my 3 displays much better than my 1080ti FE which would clock at 1400MHz on the desktop anytime the 4k TV was enabled, even when it was the only thing enabled. Not sure if it doesn't like running HDMI or what but it was consuming an extra 40 watts at idle!


Quote:


> Originally Posted by *NI6HTHAWK*
> 
> Okay, I believe the crashing with 3x multi display issue is simply due to the default Power Limit on Balanced Profile being too low and possible PSU OCP due to GPU Spikes.
> 
> I was not hard crashing and self rebooting while idle or watching YT videos with only two displays using the Balanced Profile, but as soon as I loaded World of Warships into its 3D engine it caused the driver to crash (which is not the only title I've seen that issue with). This got me thinking that the driver crashing and stability issues were power related.
> 
> So i switched to Turbo Profile which looks like it only raised Power Limit to +25% while leaving all other settings same as Balanced. Now its running World of Warships and the two other displays without crashing the driver. This is actually the first time i haven't crashed while using anything but custom settings, although i may have not tried Turbo mode since the early 17.8.2 Vega drivers. I just figured if it didn't work on Balanced then maybe there was something wrong with the card or driver and tuned the card with a lower clock for P7. Now its running the default 1752 clock and running at 1700 MHz GPU core and 945 MHz HBM running WoW @ 1440p MSAA x8 and highest detail setttings and at 66C on the Turbo fan settings.
> 
> Next I wanted to see if I could find the breaking point so I switched the World of Warships onto the 4k TV running 3840x2160 with MSAA turned off and it crashed the driver as well as the application. I switched to custom profile and turned the Power Limit to 50% after a restart and tried again and this time the 4k Display output shut off and then so did the other two displays followed by the Mobo resetting. Now I suspect its a power spike issue from the GPU tripping OCP on my AX-860i as I saw the Core spike last recorded 1777 MHz. I turn off the 1440p monitors and run just the 4k TV and the game runs at 2160p just fine without crashing. The default Balanced Profile was achieving 1563 MHz, Turbo was running at 1640 MHz and Power Save was at 1442 MHz which are all above the advertised base clock of 1406 MHz.
> 
> So if you have 3x multi display setups i suggest raising Power Limit if you are having stability issues running the default out of the box settings and if that isn't working it could be related to the PSU not providing the needed current caused by the GPU spikes.
> 
> I'm just thankful that my card isn't defective and will be trading my Sandy Bridge Fury X CF rig's AX-1200i PSU with the AX-860i and just sell one of the Fury's. It seems the ridiculous 1000 W minimum PSU requirement for Vega 64 LC is because of these potential spikes as the card is seemingly very power efficient most of the time.


I also have two Vega 64s on EK blocks and experienced shutdowns on a single Seasonic 1000 W PRIME Platinum, with triple ASUS MG279Q 1440p 144 Hz monitors in Eyefinity (7680x1440) and an ASUS PB287Q 4K monitor. Those power spikes are no joke, and they happen only when running a graphically intensive app or game. The issue was alleviated when I upgraded to an EVGA 1600 W T2 Titanium PSU. I had the same issue with my GTX 1080 Ti FTW3s in SLI.


----------



## 1337bigmac

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> I've experienced the same issue with the Vega 64 LC XTX where I was getting a hard crash followed by a reboot while just sitting idle at desktop or watching a YT video when running the Balanced Profile in Wattman.
> 
> Honestly I'm not sure what setting cured my hard crash when running my 3 displays (two 27" ASUS MG278Q 144 Hz 1440p Freesync and a 43" Sony XBR43X800E 60 Hz 4k). Basically what i did was lower my P7 GPU clock from 1750 to 1702, raised P3 memory mV from 950 to 1000, raised power target to 50%, of course I was also running the HBM at 1150 MHz so your results may vary.
> 
> What I didn't realize was that it was running the 3rd monitor which was causing the hard crash (4k TV was my 3rd display). I have the 4k TV running on the HDMI while the two ASUS are on the DP. This 3 display issue may have been why i was unable to run stock settings out of the box.
> 
> To test this I am going to run GPU on balanced profile with the system idle and just the two ASUS monitors enabled to see if reboots itself with the output to the 4k disabled. I'm interested if this cures my inability to run games at stock balanced profile settings without having to run custom settings.
> 
> Either way I will say this card seems to run my 3 displays much better than my 1080ti FE which would clock at 1400MHz on the desktop anytime the 4k TV was enabled, even when it was the only thing enabled. Not sure if it doesn't like running HDMI or what but it was consuming an extra 40 watts at idle!


What are you using to adjust power profiles? Are you just referring to Windows power profiles? Or is this some setting in an AMD software profile?


----------



## NI6HTHAWK

Quote:


> Originally Posted by *hyp36rmax*
> 
> I also have two VEGA 64's on EK blocks and experienced shutdowns on a single Seasonic 1000 Watt PRIME Platinum, with triple ASUS MG279Q 1440P 144hz monitors in Eyefinity (7680x1440P) and an ASUS PB287Q 4K monitor. Those power spikes are no joke and happen only when running a graphic intensive app or game. The issue was alleviated when i upgraded to an EVGA 1600 Watt T2 Titanium PSU. I had the same issue with my GTX 1080Ti FTW3's in SLI.


Yeah, I can't believe it; all this time I thought I was just on the short end of the silicon lottery! I think I am finally beginning to enjoy my Vega triple-display setup. It's been a frustrating road after the 1080 Ti FE didn't pan out because of the high idle power consumption issue.

Vega is running everything perfectly now that I swapped the AX-1200i in. A 50% power limit is getting a stable 1691 MHz @ 945 MHz HBM in World of Warships @ 3840x2160 with MSAA x8, and it peaked around 334 W! GPU idle power consumption is like 7 watts, compared to 60 watts with the 1080 Ti FE, which was peaking around 348 W with an OC'd 2025 MHz core clock! FreeSync on the 144 Hz monitors is much better than the frame-tearing experience on the 1080 Ti.


----------



## NI6HTHAWK

Quote:


> Originally Posted by *1337bigmac*
> 
> What are you using to adjust power profiles? Are you just referring to Windows power profiles? Or is this some setting in an AMD software profile?


The default Wattman power profiles. I've got the CPU power running the High Performance profile with a 5% minimum CPU usage setting, since I'm running a P-state OC @ 3914 MHz.


----------



## ITAngel

I got my card running a lot better using stock settings under Turbo, but I'm still experimenting with custom values. Here is a screenshot.



Note: I don't want to overdo it until I replace the PSU at the end of the month







With a Seasonic Focus Series 850W


----------



## SavantStrike

If anyone is looking for full-cover blocks for Vega, I have two Bykski blocks I cannot use due to an order screw-up on AliExpress. I'm still trying to figure out if I can even send them back.

Sad day. I won't have the parts I need by the last week of the year, when I have time off to do a build. Two months apparently isn't long enough to get parts from China.


----------



## Aenra

Quote:


> Originally Posted by *SavantStrike*
> 
> Bykski blocks


I hope you get this sorted, but in case you don't, could you dissect one for our viewing pleasure? Can't say i've ever seen the innards of one; Bykski that is.


----------



## SavantStrike

Quote:


> Originally Posted by *Aenra*
> 
> I hope you get this sorted, but in case you don't, could you dissect one for our viewing pleasure? Can't say i've ever seen the innards of one; Bykski that is.


I'm currently debating whether I should double down on my idiocy and buy the third block I need. They are pretty blocks, and I'm pretty sure they'll go back together after a dissection.

Do any of you have the EK block? If so, could you give me the dimension in mm from the bracket to the first port? Maybe I can cram an EK in there and make things work with fittings.

This process has been like pulling teeth. I've been jerked around by the suppliers, and AliExpress support doesn't write good English.


----------



## hyp36rmax

Quote:


> Originally Posted by *SavantStrike*
> 
> I'm currently debating if I should double down on my idiocy and buy the third block I need. They are pretty blocks, and I'm pretty sure they'll go back together after a dissection.
> 
> Do any of you have the EK block? If so could you give me a dimension in mm from the bracket to the first port? Maybe I can cram an EK in there and make things work with fittings.
> 
> This process has been like pulling teeth. I've been jerked around by the suppliers and support for Ali doesn't type good English.


I've got the EK blocks and can measure later this evening.


----------



## SavantStrike

Quote:


> Originally Posted by *hyp36rmax*
> 
> I've got the EK blocks and can measure later this evening.


That would be super helpful! As long as the ports line up, I can jump from card to card using rigid connectors.

I'm going to make a phone call to aliexpress tonight, maybe a human can better understand my plight.


----------



## Chaoz

Quote:


> Originally Posted by *SavantStrike*
> 
> That would be super helpful! As long as the ports line up, I can jump from card to card using rigid connectors.
> 
> I'm going to make a phone call to aliexpress tonight, maybe a human can better understand my plight.


I never really had any issues with sending back stuff that arrived broken or that I didn't want anymore.

Just open a dispute asking for a refund and send the item back.

You would have to pay the shipping cost out of your own pocket to send it back.


----------



## webhito

So, what are your thoughts on the Titan V? Do you think 1180 Tis will be priced where the Titans now sit?


----------



## pmc25

Quote:


> Originally Posted by *webhito*
> 
> So, what are your thoughts on the titan v? Do you think 1180ti's will be priced where the titans now sit at?


There won't be any, and it's not a gaming card.


----------



## webhito

Quote:


> Originally Posted by *pmc25*
> 
> There won't be any, and it's not a gaming card.


Oh, I am quite aware it's not a gaming card, but the naming scheme leads me to think they are probably gonna cut down the chip, raise the clock a bit, put in less memory, and call it a "Ti". I guess we shall see.


----------



## ducegt

After 1 month of owning this card, I experienced severe coil whine for the first time. It's absolutely terrible. I've played a few hours of Wolfenstein and never heard it before... Puzzling.

It's doing it when I use the render test in GPU-Z now as well, and it never did that before.


----------



## geriatricpollywog

Quote:


> Originally Posted by *ducegt*
> 
> After 1 month of owning this card, I experienced severe coil whine for the first time. It's absolutely terrible. I've played a few hours of Wolfenstein and never heard it before... Puzzling.
> 
> It's doing it when I use the Render Test in GPU-Z now as well; And it never did before.


Try setting a frame rate target.
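Coil whine usually tracks frame rate: uncapped menus or the GPU-Z render test can push hundreds or thousands of FPS. A frame rate target (Radeon FRTC, or RivaTuner's limiter) caps that; the core idea is just a per-frame time budget, sketched minimally here:

```python
import time

# Minimal sketch of a sleep-based frame limiter, the same idea behind
# Radeon's Frame Rate Target Control or RivaTuner's cap: give each
# frame a fixed time budget and sleep out whatever is left, so the GPU
# never spins at uncapped speed (a common coil-whine trigger).

def run_capped(render_frame, target_fps, frames):
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)

# Demo: a trivially cheap "frame" capped at 100 FPS takes ~10 ms each.
t0 = time.perf_counter()
run_capped(lambda: None, target_fps=100, frames=10)
print(f"10 frames took {time.perf_counter() - t0:.2f} s")
```

Real limiters hook the present call in the driver rather than sleeping in a script, but the budget-and-wait logic is the same, and it's why a cap quiets the coils.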


----------



## fursko

Quote:


> Originally Posted by *webhito*
> 
> So, what are your thoughts on the titan v? Do you think 1180ti's will be priced where the titans now sit at?


No way. The Titan V has tensor cores and HBM2; that's why the price is so high, I guess. They will release the non-Ti cards first as always: 1160, 1170, 1180 in a spring release. I expect really high prices. They will aim for 4K 144 Hz this time; G-Sync HDR 4K 144 Hz gaming monitors are coming. Vega really needs those features, but none of them are working. There's literally no competition for Nvidia. Nvidia made a revolution with Maxwell; Radeon is still the same old Radeon.


----------



## Naeem

Quote:


> Originally Posted by *fursko*
> 
> No way. Titan V has tensor cores and HBM2. Thats why price is too much i guess. They will release non ti cards first as always. 1160, 1170, 1180 spring release. I expect really high prices. They will aim 4k 144hz this time. Gsync HDR 4k 144hz gaming monitors coming. Vega really needs that features. But non of them working. Literally no competition for Nvidia. Nvidia did revolution with maxwell. Radeon still same old Radeon.


Nvidia will most probably release a new chip for next-gen gaming cards without tensor cores, and maybe even skip HBM2 as well.


----------



## fursko

Quote:


> Originally Posted by *Naeem*
> 
> Nvidia will most probably release a new chip for next-gen gaming cards without tensor cores, and maybe even skip HBM2 as well.


Definitely. They'll probably skip HBM2 too.


----------



## Aenra

Quote:


> Originally Posted by *fursko*
> 
> Probably skip hbm2 too.


Definitely skipping it. They didn't waste millions on R&D on the new GDDR just for fun.


----------



## Delijohn

Hello to all,

I'm a new Vega 64 user, but an oooold hand with ATI GPUs.
I bought this card 2 weeks ago, and since I started "playing" with Wattman, I have some problems. The main problem is low clocks (and performance) in games. Should I ask for help here or in a new thread?

P.S.: I already bought a custom watercooling solution, but I wanna be sure that my card is OK before I void my warranty and put it under water.
Many, many thanks


----------



## ducegt

Quote:


> Originally Posted by *0451*
> 
> Try setting a frame rate target.


Thanks, that's what it was. Forgot I stopped using Afterburner which had RivaTuner limiting the framerate.


----------



## Trender07

Quote:


> Originally Posted by *Delijohn*
> 
> Hello to all,
> 
> i'm a new Vega 64 user but oooold with ati gpus.
> I bought this card 2 weeks ago and since I started "playing" with Wattman, I have some problems. The main problem is low clocks (and performance) in games. Should I ask for help here or in a new thread?
> 
> p.s.: I already bought a custom watercooling solution but i wanna be sure that my card is "ok" before i ruin my warranty and apply water.
> Many many thanks


Sure, first try with those safe settings:

P6: stock clock / 985 mV
P7: stock clock / 1125 mV
HBM: stock clock / 955 mV (you can try setting HBM to at least 1060 MHz after your tests; I run 1080 MHz HBM on air, but temps cause artifacts at higher speeds unless you watercool)
PL +39%, and ramp the fan speed up to 3200-3400 RPM just to make sure temps aren't a problem
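For record-keeping, the suggested "safe" profile above can be jotted down as a little table. This is a throwaway sketch: the dict layout is mine (not any WattMan file format); only the numeric values come from the post, with "stock" clocks left as None.

```python
# Scratch-pad representation of the suggested "safe" Vega profile.
# Structure is illustrative only; values are from the post above.
profile = {
    "p6": {"clock_mhz": None, "mv": 985},   # stock clock
    "p7": {"clock_mhz": None, "mv": 1125},  # stock clock
    "hbm": {"clock_mhz": None, "mv": 955},  # stock clock to start
    "power_limit_pct": 39,
    "fan_rpm": (3200, 3400),
}

# A common rule of thumb in this thread: the top state (P7) should not
# be set to a lower voltage than the state below it (P6).
assert profile["p7"]["mv"] >= profile["p6"]["mv"]
print("profile OK")
```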


----------



## Delijohn

Quote:


> Originally Posted by *Trender07*
> 
> Sure, first try with those safe settings:


Thanks for your reply!
With these settings I got easily this one:

The best number I've seen in Superposition was 4867, with some artifacts.
Temps didn't go above 70°C, as you can see.
I'll check now again in Witcher 3 and GTA V.


----------



## freeleacher

Witcher 3 and GTA V both have GameWorks ("GimpWorks") running, crippling performance.


----------



## Grummpy

Reset Windows and used the drivers Windows provided: got 44 fps.
Installed the latest drivers (non-whql-win10-64bit-radeon-software-crimson-relive-17.11.4-nov27) and ran the benchmark: gained 10 fps, up to 55 fps.
Uninstalled the drivers using the AMD uninstall utility and Display Driver Uninstaller in safe mode, then installed the drivers again and gained an extra 5 frames.
Just found it interesting, gaining performance using those tools.


Default speed.
It seems that with this benchmark tool, no matter how much you overclock, the results stay the same: no extra draw calls, and power usage at the wall stays the same.
But in-game it is different.
This tool is useless for comparing cards, but fine for comparing new drivers.


----------



## Gregix

Should I take a picture with my name, or is a CPU-Z validation enough for you guys?
https://valid.x86.fr/b2l2ea

It's a Vega 64, on air, with a Morpheus II.

Oh, found the GPU-Z validation:
https://www.techpowerup.com/gpuz/details/f6fws


----------



## pmc25

I was told that the new drivers are cautiously aimed at the second half of next week.


----------



## Trender07

Quote:


> Originally Posted by *Delijohn*
> 
> Thanks for your reply!
> With these settings I got easily this one:
> 
> The best number i've seen in Superposition was 4867, with some artifacts.
> Temps didn't go above 70C as you see.
> I'll check now again in Witcher 3 and GTA V.


Artifacts at 1060 HBM, right? At default HBM clocks (945 MHz IIRC) you shouldn't see artifacts.
Quote:


> Originally Posted by *pmc25*
> 
> Was told that new drivers are cautiously aimed at second half of next week.


December 12th, ain't it?


----------



## geriatricpollywog

Quote:


> Originally Posted by *freeleacher*
> 
> Witcher 3 and GTA V both have gimp works running crippling performance


How? I am getting 60 FPS max settings at 3440x1440 in Witcher 3 with Hairworks at max. With Hairworks off, 80 fps.


----------



## Grummpy

@0451
AMD worked hard circumventing that poison, man.


----------



## gupsterg

Quote:


> Originally Posted by *Mumak*
> 
> Well, due to so many issues with new drivers and VRM access on Vega, I will most probably disable GPU I2C Support on Vega by default in the next version. There will be a new switch to force it to enable..


VRM temp is working well for me IMO.

3DM FS


Spoiler: Warning: Spoiler!








Did do a 3rd run but forgot to grab a screenie; here are 2x runs without monitoring. IMO, negligible difference with/without monitoring.

SP 4K


Spoiler: Warning: Spoiler!








Again negligible difference with/without monitoring.


Spoiler: Warning: Spoiler!







Did also ~20min Wolfenstein 2, didn't note any performance issues from having monitoring.


Spoiler: Warning: Spoiler!







I'm using:-

VBIOS P/N: 113-D0500100-104
VBIOS Version: 016.001.001.000.008769

(TPU DB link)

With PP reg I mod only:-

i) GPU DPM 5 1062mV, DPM 6: 1557MHz 1075mV, DPM 7: 1642MHz 1125mV
ii) SOC DPM 5 to 7 1107MHz
iii) HBM DPM 3 1100MHz

Set PowerLimit: 38% in WattMan.
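For anyone copying similar DPM tweaks, one quick sanity check is that clock and voltage should rise from one DPM state to the next. A minimal sketch, with the caveat that the `is_monotonic` helper is mine and the DPM 5 clock is a placeholder (the post above only gives its voltage); the DPM 6/7 values are from the post:

```python
# Illustrative sketch only: sanity-checking that GPU clock and voltage
# rise monotonically across DPM states.
gpu_dpm = [
    (5, 1472, 1062),  # (state, clock MHz, voltage mV) -- clock assumed
    (6, 1557, 1075),  # from the post above
    (7, 1642, 1125),  # from the post above
]

def is_monotonic(states):
    """True if both clock and voltage never decrease with the state index."""
    return all(
        lo_clk < hi_clk and lo_mv <= hi_mv
        for (_, lo_clk, lo_mv), (_, hi_clk, hi_mv) in zip(states, states[1:])
    )

print(is_monotonic(gpu_dpm))  # True for the values above
```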

I hope you keep a switch for users who have no issue with the additional Vega monitoring. As always, cheers Martin for your support.


----------



## Delijohn

Quote:


> Originally Posted by *Trender07*
> 
> Artifacts at 1060 hbm right? At default hbm clocks(945 mhz iirc) you shouldnt see artifacts


Yes, at 1060 MHz I can see red dots in-game or in Superposition.
If I put it at 1045, for example, it's OK.
With your settings it's much more stable. Many, many thanks!! I feel like a fool, because as far as I know I wasn't doing anything wrong... even in GTA it's much better, even with higher settings. I had everything on LOW and couldn't get higher than 40 fps. So weird...
Check now:










http://imgur.com/3kZ1S


----------



## Grummpy

Why Ross Brawn, the designer, as your picture? Just wondering.
That guy made Schumacher world champ 5x at Ferrari, and Button too,
as well as many others.


----------



## Mumak

Quote:


> Originally Posted by *gupsterg*
> 
> I hope you keep a switch for users who have no issue with additional VEGA monitoring
> 
> 
> 
> 
> 
> 
> 
> , as always cheers Martin for your support
> 
> 
> 
> 
> 
> 
> 
> .


Tomorrow is a new release planned that by default will disable I2C access on Vega.
But a new switch called "GPU I2C Support Force" can be used to force access.


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> VRM temp is working well for me IMO.
> 
> 3DM FS
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Did do a 3rd run but forgot to grab a screenie; here are 2x runs without monitoring. IMO, negligible difference with/without monitoring.
> 
> SP 4K
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Again negligible difference with/without monitoring.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Did also ~20min Wolfenstein 2, didn't note any performance issues from having monitoring.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> I'm using:-
> 
> VBIOS P/N: 113-D0500100-104
> VBIOS Version: 016.001.001.000.008769
> 
> (TPU DB link)
> 
> With PP reg I mod only:-
> 
> i) GPU DPM 5 1062mV, DPM 6: 1557MHz 1075mV, DPM 7: 1642MHz 1125mV
> ii) SOC DPM 5 to 7 1107MHz
> iii) HBM DPM 3 1100MHz
> 
> Set PowerLimit: 38% in WattMan.
> 
> I hope you keep a switch for users who have no issue with the additional Vega monitoring. As always, cheers Martin for your support.


Can you post a GPU-Z monitoring log of your in-game sensor readings?


----------



## Grummpy

What features can we expect with the new drivers that are soon to be released?
Will we be seeing major performance increases?


----------



## Trender07

Quote:


> Originally Posted by *Grummpy*
> 
> what features can we expect to have with the new drivers that are soon to be released ?
> will we be seeing major performance increases ?


A leak basically confirmed there won't be any performance increases or Vega feature unlocks; Adrenalin will be tons of new tools (like that OSD already shown) and a clean UI, supposedly to be released on December 12th.


----------



## LocoDiceGR

Quote:


> Originally Posted by *Trender07*
> 
> A leak basically confirmed there won't be any performance increased nor vega features unlocks, Adrenalin will be tons of new tools (like that one OSD already shown) and clean UI, and supposedly to be released on 12th december


The ''leak'' said... don't expect HUGE PERFORMANCE improvements...


----------



## fursko

Yeah, looks like no features. Another disappointment. I was hoping for a consistent 5-10% performance increase for Vega, but it looks like that's not gonna happen. All I know of is the OSD feature and an Overwatch bug fix. The UI was good, actually. I hope they don't break anything.


----------



## Gregix

http://imgur.com/akpU2

My best for now: balanced, but somehow mem is at 1050, probably stuck after some earlier OC tries.
Great card. Too bad I only have a 550 W PSU, so I am rather careful with OC; I've seen 330+ W peaks in GPU-Z a few times...
Got a second PC with a 4.8 GHz 8700K, so I'll maybe post results later.


----------



## pengs

Quote:


> Originally Posted by *Trender07*
> 
> A leak basically confirmed there won't be any performance increased nor vega features unlocks, Adrenalin will be tons of new tools (like that one OSD already shown) and clean UI, and supposedly to be released on 12th december


I think we'll get some regular performance improvements within the initial releases of Adrenalin (at the least), as always with a new line of cards, and with Vega being in the spotlight.

On the other hand, that leaker is not an engineer. I guess it depends on how intricately tied some of these features need to be at the engine level.


----------



## diabetes

AMD saying nothing about the missing Vega features is not a good sign. Real information is locked down like Fort Knox. There are claims and rumors here and there, but basically nothing solid. Is it really so hard for AMD to just say what's going on? Slowly I really want to know if I spent a lot of money on a chip that is "broken by design". The limited supply of Vega chips could also be AMD trying not to f*ck that many people over (oh, what twisted logic that is) while keeping the investors happy. I dunno. The features not being included in Adrenalin is the final straw for me. Maybe it's time for flooding their social media?

It is like selling a car with air conditioning that doesn't work and then just ignoring all customer complaints about it. As the very few cards that are currently being sold still carry the "programmable geometry engine" tag in their marketing material, this is basically fraud and therefore lawsuit material.

I don't even care whether the missing features will bring perf improvements or not. I just want what I paid for.


----------



## Grummpy

My box says rasterizer tech in an optimised pixel engine.
Is that already in use today?

I've invested 1200 pounds in Vega; I just have to wait and see if they deliver.
I hope so.


----------



## diabetes

The new pixel engine is already in use. It uses depth passes in order to determine whether a pixel shader has to be executed or not for a given set of vertices. The gains are rather small, though, as there is enough raw power for shading.
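The depth-pass idea can be sketched in a few lines of toy code. This is a software illustration of depth-test-before-shade, not AMD's actual hardware path:

```python
# Toy illustration of depth-based shader rejection: the (expensive)
# pixel shader only runs for fragments that survive the depth test.
def shade(fragments, pixel_shader):
    """fragments: iterable of (x, y, depth). Returns shader invocation count."""
    depth_buffer = {}
    invocations = 0
    for x, y, depth in fragments:
        if depth < depth_buffer.get((x, y), float("inf")):
            depth_buffer[(x, y)] = depth
            pixel_shader(x, y)  # skipped entirely for occluded fragments
            invocations += 1
    return invocations

# Two fragments cover the same pixel; the farther one (drawn second)
# fails the depth test, so the shader runs only once.
count = shade([(0, 0, 0.2), (0, 0, 0.5)], lambda x, y: None)
print(count)  # 1
```

Drawn back-to-front, the same two fragments would shade twice, which is exactly the overdraw the depth pass avoids.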


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Mumak*
> 
> Tomorrow is a new release planned that by default will disable I2C access on Vega.
> But a new switch called "GPU I2C Support Force" can be used to force access.






I was going to ask a question, but maybe this is it:
anyone else have the Vega turn off (as in no power, no GPU tach) but the computer still going, when they open HWiNFO (3927 beta at the moment)?
It doesn't always do it, just now and then, and it is a recent thing.
It also doesn't matter if it is stock, balanced, or custom.


----------



## pmc25

Rasteriser stuff must be implemented engine side as well.

The update coming this week (well, probably this week) should offer good performance gains. As with most new AMD architectures, Vega will continue to get faster and faster.

People were trying to pretend that a Vega64 was slower than a 1070 at 2x the power draw, at release (which was never true).

Now, at ~200W draw it's getting very close to a 1080Ti in quite a few games. Even in engines that have traditionally hugely favoured NVIDIA (UE4).


----------



## diabetes

Quote:


> Originally Posted by *pmc25*
> 
> The update coming this week (probably this week) should offer good performance gains.


No.


__ https://twitter.com/i/web/status/939203218172731393


----------



## fursko

Quote:


> Originally Posted by *pmc25*
> 
> Rasteriser stuff must be implemented engine side as well.
> 
> The update coming this week (probably this week) should offer good performance gains. As with most new AMD architectures, Vega will continue to get faster and faster.
> 
> People were trying to pretend that a Vega64 was slower than a 1070 at 2x the power draw, at release (which was never true).
> 
> Now, at ~200W draw it's getting very close to a 1080Ti in quite a few games. Even in engines that have traditionally hugely favoured NVIDIA (UE4).


Which UE4 game uses 200 W with performance close to a 1080 Ti?


----------



## Trender07

Quote:


> Originally Posted by *diabetes*
> 
> AMD saying nothing about the missing Vega features is not a good sign. Real information is locked down like "Fort Knox". There are claims and rumors here and there but basically nothing solid. Is it really so hard for AMD to just say what's going on? Slowly I really want to know if I spent a lot of money on a chip that is "broken by design". The limited supply of Vega chips could also be AMD trying not to f*ck that many ppl over (oh what twisted logic that is) while keeping the investors happy. I dunno. The features not being included in Adrenalin is the final straw for me. Maybe its time for a flooding their social media?
> 
> It is like selling a car with air conditioning that doesnt work and then just ignoring all customer complaints about that. As the very few cards that are currently being sold do still have the "programmable geometry engine" tag in their marketing material, this is basically fraud and therefore lawsuit material.
> 
> I dont even care whether the missing features will bring perf improvements or not. I just want what I paid for.


It is what it is, man; the features must be used by game developers. Just look at the really nice Vega optimization in Wolfenstein 2, beating the 1080 Ti. Gotta wait and see what we get with Far Cry 5.


----------



## pmc25

Quote:


> Originally Posted by *diabetes*
> 
> No.
> 
> 
> __ https://twitter.com/i/web/status/939203218172731393


I'm not sure what you imagine that Twitter string says.


----------



## HardwareTom

Hi guys,
I have built a self-made custom card.

Hardware-Tom Radeon RX Vega 64

Morpheus Core II
2x Corsair AF120 QE

Open Case:
Idle: 23°C
Load: ~50°C

Closed Case:
Idle: 25°C
Load: ~65°C

Before:
Idle: 30°C
Load: ~80°C

It's very quiet and looks nice


----------



## gupsterg

Quote:


> Originally Posted by *Mumak*
> 
> Tomorrow is a new release planned that by default will disable I2C access on Vega.
> But a new switch called "GPU I2C Support Force" can be used to force access.
> Quote:
> 
> 
> 
> Originally Posted by *tarot*
> 
> I was going to ask a question but maybe this is it.
> anyone else have the vega turn off as in no power as in no gpu tach but the computer still going when they open hwinfo(3927beta at the moment)
> it doesn't always do it just now and then and it is a recent thing.
> also doesn't matter if it is stock balanced or custom
Click to expand...

@Mumak

Thanks.

@tarot

I have experienced this 2-3 times since yesterday. I believe it's the driver, and down to access to these "sensors", as it happens for me on HWiNFO v5.61-3297 and prior.
Quote:


> Originally Posted by *0451*
> 
> Can you post a GPU-Z monitoring log of your in-game sensor readings?


Will do ASAP.


----------



## Mumak

HWiNFO v5.70 is out.
Reminder: I2C access on Vega is disabled by default here. Use the "GPU I2C Support Force" option to enable it, but remember you're risking a system crash in this case.


----------



## Grummpy

Just testing before Adrenalin driver @ default.


----------



## cplifj

Let's just hope they are not calling it Adrenalin because we'll get angry with all the new quirks and hiccups introduced with this "new" Adrenalin driver. Or angry because of lack of support for certain "features"...


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> Just testing before Adrenalin driver @ default.


Not too shabby. I'm about to buy it and give it a try to see how it does on my Vega 64 and 1700X; I hear it's VERY CPU intensive, so we shall see. I'm playing at 2560x1080 though, and would love to see an average of 70 fps, but we'll give it a go.


----------



## Grummpy

I will run a 2560x1080 test; I'm sure I will be able to make that res as a custom setting. I will post results.


----------



## Grummpy

Well, the AMD custom res builder isn't fit for purpose.
Completely useless, because it doesn't work.


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> well the amd custom res building isnt fit for purpose.
> completely useless because it dont work


Haha, I don't usually mess with custom resolutions; they never work right. I think I'm getting something like a 60 fps average on very high (ish), but it says low performance, as the last 2-3 secs of the benchmark drop to around 20 fps for whatever reason. It does use up the system quite a bit, lol.


----------



## os2wiz

Quote:


> Originally Posted by *Trender07*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Delijohn*
> 
> Thanks for your reply!
> With these settings I got easily this one:
> 
> The best number i've seen in Superposition was 4867, with some artifacts.
> Temps didn't go above 70C as you see.
> I'll check now again in Witcher 3 and GTA V.
> 
> 
> 
> Artifacts at 1060 hbm right? At default hbm clocks(945 mhz iirc) you shouldnt see artifacts
> Quote:
> 
> 
> 
> Originally Posted by *pmc25*
> 
> Was told that new drivers are cautiously aimed at second half of next week.
> 
> Click to expand...
> 
> 12th december aint it
Click to expand...

I have my Vega 56 under water with an Alphacool Eiswolf GPX Pro. My GPU is not optimized for overclocking yet; I am using the Wattman balanced power plan right now until I can optimize my voltage and frequency settings. Here is my Superposition benchmark for 4K. I have flashed my PowerColor RX Vega 56 with an RX Vega 64 BIOS.

CaptureSuperposition4koptimized.PNG 3525k .PNG file


----------



## BeetleatWar1977

Quote:


> Originally Posted by *os2wiz*
> 
> I have my Vega56 underwater with an Alphacool Eiswolf GPX Pro. My gpu is not optimized for overclocking yet I am using Wattman balanced power plan right now until I can optimize my voltage and frequency settings. Here is my Superposition benchmark for 4k. I have flashed my PowerColor RX Vega 56 with a RX Vega64 bios.
> 
> CaptureSuperposition4koptimized.PNG 3525k .PNG file


A bit low for under water ^^

My best score with a 56 & 64 BIOS, under AIR:


----------



## serave

Just got my Raijintek Morpheus today. Anybody know how to remove this one damned screw in the rear?

Is it safe if I just drill through it?

I can't open it with any of my screwdrivers, which annoys me the most ATM.

The screw I'm talking about:


----------



## Chaoz

Quote:


> Originally Posted by *serave*
> 
> Just got my Raijintek Morpheus today, anybody knows how to remove this 1 damned screw in the rear ?
> 
> Is it safe if i just drill through it?
> 
> i cant open it with all of my screwdriver which annoys me the most atm
> 
> Screw im talking about


You don't need to unscrew that screw on the side. Its only purpose is to hold the reference cooler cover on, not the PCB.


----------



## Grummpy

I'm sitting here checking for updates every 12 hours or so, looking for the new drivers.
I do hope they deliver; I'm sick to death of people slagging Vega off.


----------



## Grummpy

Try not to void the warranty


----------



## Grummpy

Wish I still had a working multimeter; I could check the resistance to find out how much this needs to run.
What voltage does this require, I wonder? It would look nice on the case.


----------



## Grummpy

Remember to support the back or you will crack the PCB,
and have a bad day as well as weeks of regret.


----------



## dagget3450

Quote:


> Originally Posted by *Grummpy*
> 
> Try not to void the warranty


LoL @ destruction

I've got 357 posts to read; sorry if you're waiting on me to add you to the club! I'll try to get this done by end of week!!


----------



## Grummpy

How long did my Vega 64 card last me?
Well...
Answer: this many days...


----------



## Grummpy




----------



## os2wiz

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Quote:
> 
> 
> 
> Originally Posted by *os2wiz*
> 
> I have my Vega56 underwater with an Alphacool Eiswolf GPX Pro. My gpu is not optimized for overclocking yet I am using Wattman balanced power plan right now until I can optimize my voltage and frequency settings. Here is my Superposition benchmark for 4k. I have flashed my PowerColor RX Vega 56 with a RX Vega64 bios.
> 
> CaptureSuperposition4koptimized.PNG 3525k .PNG file
> 
> 
> 
> 
> A bit low under water^^
> 
> my best Score with an 56 & 64er Bios - under AIR:
Click to expand...

I did state that I had not optimized the overclock and am using the Wattman balanced power profile. So excuse me! I have not been able to figure out how to overclock with an undervolt. I have tried dozens of different settings and the results are dismal.


----------



## os2wiz

Quote:


> Originally Posted by *Grummpy*
> 
> How long did my vega 64 card last me.
> well
> Answer this many days....


Sorry for your loss. I remember that war well. My neighbors were coming home in body bags. I researched why Johnson started that war and I saw it was a crock of bs. So at the university I joined the protest movement, hoping to end that war. But Wall Street had a different idea. I spent 271 days in jail for leading some violent protests on the Stony Brook campus in 1970. Never regretted it. Spent some time also leafleting Fort Dix in New Jersey and Aberdeen Proving Ground in Maryland. I met a lot of solid GIs who opposed that war for Wall Street. That time changed my life forever. I decided to put my life in service to the working class. I was involved in many labor struggles and anti-racist actions here in Brooklyn, where I live. I know this is off-topic, but so was your video. I will never forget. Check out www.plp.org


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *os2wiz*
> 
> I did state that I had not optimized the overclock and am using Wattman balanced power profile. So excuse me! I have not been able to figure out how to overclock with undervolt. I have tried dozen of different settings and the results are dismal.






Different bench: that's the 4K vs the 1080p Extreme, which has a lower score.
Run the 4K test for a direct comparison, or run the 1080p, etc.








it's like my 1/4 mile time is so much lower than your 1 mile time.

Here's mine with a Threadripper, which as we all know is not the fastest horse in the Superposition race (due to the bench sucking).





Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Mumak*
> 
> HWiNFO v5.70 is out.
> Reminder: I2C access on Vega is disabled by default here. Use the "GPU I2C Support Force" option to enable it, but remember you're risking a system crash in this case.





Yep, got that, thanks... it seems to have stopped the issue so far, running the 17.11.4 drivers.

Also, the extra radiator on Frankenstein seems to have settled down and seems to be working better.

I did a few runs with a wall meter and topped out around 540 watts in Firestrike tests 1 and 2, and around 500 watts in the CPU mixed test (with an overclocked card and a Threadripper running at 4050).


----------



## cg4200

Quote:


> Originally Posted by *os2wiz*
> 
> I did state that I had not optimized the overclock and am using Wattman balanced power profile. So excuse me! I have not been able to figure out how to overclock with undervolt. I have tried dozen of different settings and the results are dismal.


Hey bud, I have a 56 flashed to 64, just put under water; they all act different. I would first try no OC on the core at all: just OC your HBM till you find its max (mine is 1160). Then put the HBM back to 945 and try P7 at 1782, P6 at 1682; see if you can pass some benches and find your max core, then work on the undervolt.


----------



## Efilnikuf

Hi all, just recently got a great price on a Vega 64 at Microcenter, so I thought I would join the club. It's under an XSPC waterblock in a custom loop, on the 8774 BIOS. Enjoying it so far. Just wish it OC'd like everything else. Such a finicky beast.

1732 @ 1.23v gpu
1140 @ 1.1v hbm
17.11.4


----------



## Grummpy

@os2wiz
interesting.


----------



## fursko

@Grummpy

What is your out of the box 1080p extreme score with vega 64 lc ?


----------



## gedoze

Woot, it's confirmed: Sapphire's "bling" Vegas will come with a vapor chamber! Woot, Tri-X + Vapor-X = epic!

Source (read the comments).

I have a dream... merge Sapphire's Tri-X + Vapor-X with XFX's small PCB and breathing heatsink design, and add some backplate heatpipe love from Gigabyte... that would be just perfect for Vega!


----------



## cephelix

Finally put my Vega 56 under water! Been waiting for years for a worthy successor to my R9 290. Temps at load have gone down by 30°C+ easily! Gaming the other day, I did not see temps past 41°C on core or HBM. Will have to redo the hardline tubing though, as I am unhappy with the layout. Also flashed the V64 air BIOS and it works like a charm. Now to tweak my card. So, first up: UV or OC the HBM? Then UV or OC the core?


----------



## geriatricpollywog

Quote:


> Originally Posted by *cephelix*
> 
> Finally put my vega 56 under water!been waiting for years for a worthy successor to my R9 290. Temps at load have gone down by 30C+ easily! Gaming the other day did did not see temps past 41C on core or hbm. Will have to redo the hardline tubing though as I am unhappy with the layout. Also flashed the v64 air bios and it works like a charm. Now to to tweak my card. So first up is uv or oc hbm? Then uv or oc core?


Download the Vega 64 Liquid bios, then the Hellm powerplay table for 142% power limit and 400 amps.


----------



## cephelix

Quote:


> Originally Posted by *0451*
> 
> Download the Vega 64 Liquid bios, then the Hellm powerplay table for 142% power limit and 400 amps.


Thanks! I don't think my 750 W could handle the increased power limit. Though I suppose it's just to remove power limitations. Any link/guide as to where to grab the powerplay tables?


----------



## geriatricpollywog

Quote:


> Originally Posted by *cephelix*
> 
> Thanks! Don't think my 750W could handle the increased power limit. Though I suppose it's just to remove power limitations. Any link/guide as to where to grab the powerplay tables?


I'm on my phone so I won't be able to find the post, but the user is Hellm and it was on this thread.

750 watts is fine. I have never seen more than 600 watts at the wall under both heavy CPU and GPU load on a quad core. If you have an 8-core, maybe 650 watts.
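A quick sanity check on those numbers: the wall meter reads AC input, which includes the PSU's conversion losses, so the DC load the PSU actually delivers is lower than the wall figure. A minimal sketch, assuming ~90% efficiency (a made-up but typical figure for a quality unit at load):

```python
# Rough PSU headroom estimate. The wall meter reads AC input; the PSU
# delivers roughly wall_watts * efficiency as DC to the components.
# The 0.90 efficiency is an assumption, not a measured value.
def dc_load_watts(wall_watts, efficiency=0.90):
    return wall_watts * efficiency

print(dc_load_watts(600))  # 540.0 -> comfortable on a 750 W PSU
print(dc_load_watts(650))  # 585.0 -> still under the 750 W rating
```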


----------



## diggiddi

Quote:


> Originally Posted by *0451*
> 
> Download the Vega 64 Liquid bios, then the Hellm powerplay table for 142% power limit and 400 amps.


What kind of performance boost will that give the 56? How close to the 64 Liquid?


----------



## geriatricpollywog

Quote:


> Originally Posted by *diggiddi*
> 
> What kind of performance boost will that give the 56? how close to the 64 liquimetal?


I don't know since I have a 64 air with an EK waterblock. The extra compute units on the 64 are only worth 0-3% extra performance. With a 64, your GPU may be binned for higher clockspeed but that's it.


----------



## cephelix

Quote:


> Originally Posted by *0451*
> 
> I'm on my phone so I won't be able to find the post, but the user is Hellm and it was on this thread.
> 
> 750 watts is fine. I have never seen more than 600 watts at the wall under both heavy CPU and GPU load on a quad core. If you have an 8 core, maybe 650 watts.


+rep. On my phone now too; will take a look through later when I get home. Also just purchased a watt-meter plug, just to see how much my system is drawing at the wall. I'm pretty stoked to start tinkering with my system again! Can't wait to see what I can get with my whole system liquid cooled.


----------



## cephelix

Quote:


> Originally Posted by *diggiddi*
> 
> What kind of performance boost will that give the 56? how close to the 64 liquimetal?


Well, the general consensus here is that it increases the voltage supplied to the HBM: 1.25 V on the 56 vs 1.35 V on the 64, allowing for higher HBM clocks. It also increases the max power the card is able to draw, with the LC 64 allowing the most, at the cost of a 10°C lower temperature limit. The powerplay tables increase that limit even more.


----------



## diggiddi

Quote:


> Originally Posted by *cephelix*
> 
> Well, the general consensus here is that it increases the voltage supplied to the HBM. 1.25v for the 56 vs 1.35v for the 64 allowing for higher hbm clocks. Also increases max power the card is able to draw, the LC64 allowing the most at the cost of a 10C lower temperature limit. The powerplay tables increases that limit even more.


Cool, but do you have any hard numbers?


----------



## cephelix

Quote:


> Originally Posted by *diggiddi*
> 
> Cool, but do you have any hard numbers?


I personally don't since I haven't played around with mine enough. But there are plenty on this very thread if you take a look.


----------



## Ne01 OnnA

Still, VEGA is the best.

1st: 499 USD

2nd: 2999 USD


----------



## Grummpy

Quote:


> Originally Posted by *fursko*
> 
> @Grummpy
> 
> What is your out of the box 1080p extreme score with vega 64 lc ?


balanced or turbo ?


----------



## Trender07

WHERE'S MUH DRIVER?? I'm already hyped by AMD Link alone.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> balanced or turbo ?


Stock, out of the box, without touching anything. 5273 must be UV/OC.


----------



## Grummpy

Reset to defaults and ran the benchmark, Wattman not even running.
Clocks stayed around 1600 core / 945 mem; power at the wall was 500 to 530 W.
If I turned off my hard drives and other stuff, power would drop 40 to 50 W,
so 460 to 490 W.


----------



## VicsPC

For anyone interested it seems to be up already, anyone given the jump yet?

http://support.amd.com/en-us/download/desktop?os=Windows+10+-+64

http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Adrenalin-Edition-17.12.1-Release-Notes.aspx


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> Reset defaults ran benchmark.
> wattman not even running.
> clocks stayed around 1600 core 945 mem power at wall 500 to 530 watt
> if i turn off my hard drives and other stuff power would drop 40 to 50 watts
> so 460 to 490 watts.


Thanks for the info. My old Vega 64 LC scored stock: 4862, tweaked: 5334. I returned it and bought a new one. Let's see the new silicon lottery.

The old one was a Gigabyte, the new one a Sapphire. I will install it tomorrow.

Edit: my settings were +50% power, P6: 1130 mV, P7: 1180 mV, and HBM2 at 1150 MHz or 1190 MHz, I can't remember exactly.


----------



## Grummpy

Quote:


> Originally Posted by *fursko*
> 
> Thanks for info my old vega 64 lc scores stock: 4862, tweaked: 5334. I returned it and i buy new one. Lets see new silicon lottery
> 
> 
> 
> 
> 
> 
> 
> Old one was gigabyte, new one sapphire. I will install it tomorrow.
> 
> edit: my settings was +%50 power p6: 1130 p7: 1180mV and HBM2 1150mhz or 1190mhz i cant remember exactly.


I chose the Gigabyte for the 3-year warranty, where the Sapphire only has 2 years at the same price of 600 pounds.
I look forward to seeing your results.


----------



## Trender07

Quote:


> Originally Posted by *VicsPC*
> 
> For anyone interested it seems to be up already, anyone given the jump yet?
> 
> http://support.amd.com/en-us/download/desktop?os=Windows+10+-+64
> 
> http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Adrenalin-Edition-17.12.1-Release-Notes.aspx


Yeah, everything's running fine. Love the new features; the OSD and the in-menu UI settings are so comfy.
But aargh, I was hoping these drivers would fix my HBM volt bug from about 2 months ago: whenever I set HBM mV to 950, my HBM clock gets locked to 800 MHz no matter what, and I have to use 951 mV or 955 mV, whatever. With the LC BIOS this doesn't happen, but I won't use that BIOS since I'm running on AIR.


----------



## VicsPC

Quote:


> Originally Posted by *Trender07*
> 
> Yeah everything running fine, love the new features, OSD and the in-menu UI settings are so comfy
> but aaagh I was hoping this drivers to fix my HBM Volt bug since like 2 months ago whenever I set hbm mv to 950 mv my HBM Core speed gets locked to 800 mhz no matter what and I have to use 951 mv or 055 mv whatever. With LC bios this doesn't happen but I won't use that bios as im running on AIR


Possibly 950 mV is an air-cooling limitation. But then again, I get it at stock voltages just by changing HBM speed, so who knows.


----------



## Grummpy

new drivers.
http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-Adrenalin-Edition-17.12.1-Release-Notes.aspx


----------



## pmc25

FINALLY.

Saveable profiles in Wattman.


----------



## Chaoz

Jesus, this Adrenalin update is taking forever to install. Did a DDU and I've let it run for 20 minutes now and it's still not done.
The process hasn't crashed; it just seems to take a while to install.

Edit: finally, it installed.


----------



## owntecx

Installed the new update; lost about 500 points in Firestrike and Superposition. Need to retest later.


----------



## Kyozon

Is the Adrenalin Compatible with VEGA Frontier Cards?


----------



## Grummpy

Went in OK here.
I ran the AMD uninstaller in the C:\AMD folder, then ran DDU in safe mode, then deleted the AMD folder, then installed.


----------



## pmc25

It's generally better to just use the amd uninstaller, and not ddu at all, for reasons I've now forgotten.


----------



## hyp36rmax

Quote:


> Originally Posted by *owntecx*
> 
> Installed the new update, less 500 points in firestrike and superposition, need to retest later


AMD also released the Pro Drivers

https://www.techpowerup.com/239626/amd-radeon-pro-adrenalin-edition-17-12-1-drivers-detailed


----------



## Chaoz

Quote:


> Originally Posted by *pmc25*
> 
> It's generally better to just use the amd uninstaller, and not ddu at all, for reasons I've now forgotten.


DDU works fine, tbh.


----------



## owntecx

Well, Superposition 1080p Extreme went from 4600 to 4050, no difference with HBCC on or off. Strange. 4K Optimized about the same, ~6200. RX Vega 56, stock BIOS, with [email protected]/930 HBM, 45C max.


----------



## Grummpy

I gained 4% in Time Spy.
I did notice the turbo voltage has gone from 1200 to 1250, up 50 mV.


----------



## LordDain

Superposition 1080P extreme score just went from 4800+ to 4300ish points with the 'adrenaline' driver. Yuck.


----------



## owntecx

Quote:


> Originally Posted by *LordDain*
> 
> Superposition 1080P extreme score just went from 4800+ to 4300ish points with the 'adrenaline' driver. Yuck.


Well, seems like it wasn't only me after all.


----------



## Grummpy

Yeah, same here, but it's just a benchmark tool, who cares.
I find it's clearly designed with Nvidia in mind.


----------



## Grummpy

https://play.google.com/store/apps/details?id=com.amd.link&rdid=com.amd.link


----------



## Grummpy

Funny reading other forums:
users are complaining about bugs and they don't even own AMD hardware.
Such desperation to try to sabotage;
envy is an ugly thing.


----------



## flipmatthew

Quote:


> Originally Posted by *owntecx*
> 
> Well, superpossition 1080extreme, from 4600 to 4050, no diference with hbcc on or off, Strange. 4k optimized, about the same, 6200k, rx vega 56 stock bios, with [email protected]/930 hbm 45c max


Quote:


> Originally Posted by *LordDain*
> 
> Superposition 1080P extreme score just went from 4800+ to 4300ish points with the 'adrenaline' driver. Yuck.


I dropped from 4900+ (1080p Extreme) on 17.11.4 to ~4400 with the Adrenalin drivers! Glad to see I'm not alone. I also had higher scores with earlier drivers (when I first purchased Vega in early November); at lower clocks I'd break 5000.


----------



## porschedrifter

17.12.1 NEW DRIVERS ARE OUT!

Yay, enhanced sync finally works


----------



## VicsPC

I will NEVER understand why people are so hell-bent on synthetic benchmarks; hell, I don't get why people are bent on gaming benchmarks either. No matter what, Nvidia will ALWAYS run better in a title it sponsors, PERIOD. It's honestly very sad, but it's the PC world in a nutshell. Unless you're running synthetic benchmarks all day, who gives a f**k. If games run as they should, and as smooth as they should, then you're fine. Stop comparing synthetic benchmarks between drivers; try gaming benchmarks and compare that, it's a lot more USEFUL.


----------



## Grummpy

Doom records with Vulkan and it's outstanding: 90+ FPS at 4K while recording.


----------



## LordDain

Quote:


> Originally Posted by *Grummpy*
> 
> Yeah same here but its just a benchmark tool
> who cares, i find its clearly designed with nvidia in mind.


A synthetic benchmark is the quickest comparison we can get. Also, if it favors the AMD card it's a good bench, but if it doesn't, it isn't a good bench and it's NV-favoring?

Numbers don't lie. Let's wait and see what the next weeks bring in game benchmarks, and the how and why, before drawing conclusions.


----------



## Efilnikuf

Quote:


> Originally Posted by *VicsPC*
> 
> I will NEVER understand why people are so hell bent on synthetic benchmarks, hell i dont get why people are bent on gaming benchmarks either. No matter what, nvidia will ALWAYS run better if its sponsored by em, PERIOD. It's honestly very sad but it's the PC world in a nutshell. Unless your running synthetic benchmarks all day who gives a f**k. If games run as they should and as smooth as they should then your fine, stop comparing synthetic benchmarks between drivers, try gaming benchmarks and compare that, its a lot more USEFUL.


They are just used as a baseline to monitor adjustments. Chill.


----------



## PontiacGTX

Have you compared the new driver on Vega?


----------



## raysheri

It was the Win 10 Fall Creators Update that gave the extra 500 points in benchmarks, and this new driver just took them away.
Also, maybe I'm missing something, but I couldn't get the performance overlay to work in Superposition or 3DMark.
Nothing in these drivers I need; rolled back to 17.11.4.


----------



## LordDain

They're using a blacklist.. maybe those benchmarks are on it.


----------



## raysheri

Got the overlay working, looks pretty crappy with a black box background, can't see any reason to choose it over Afterburner.


----------



## By-Tor

Superposition scored 778 points less after this new driver install. Going back..


----------



## drufause

5554 today after new patch on Fire strike Ultra 1.1
https://www.3dmark.com/3dm/23923114?


----------



## 113802

Quote:


> Originally Posted by *VicsPC*
> 
> I will NEVER understand why people are so hell bent on synthetic benchmarks, hell i dont get why people are bent on gaming benchmarks either. No matter what, nvidia will ALWAYS run better if its sponsored by em, PERIOD. It's honestly very sad but it's the PC world in a nutshell. Unless your running synthetic benchmarks all day who gives a f**k. If games run as they should and as smooth as they should then your fine, stop comparing synthetic benchmarks between drivers, try gaming benchmarks and compare that, its a lot more USEFUL.


You're correct, nVidia will always run better because they have the superior product. My RX Vega plays 1440p games just fine though


----------



## VicsPC

Quote:


> Originally Posted by *WannaBeOCer*
> 
> You're correct, nVidia will always run better because they have the superior product. My RX Vega plays 1440p games just fine though


Yea, not sure about that, but whatever helps you sleep at night.

Synthetic benchmarks are only good for one thing: stability. Posting your scores around the web to see who has the bigger "stick" doesn't translate to gaming performance. Example? A Vega 64 beating a 1080 Ti in Forza 7, yet it gets DESTROYED in synthetics.


----------



## Grummpy

radion chill is one of the best features i have seen there really inst any need to waste power rendering the same frame over and over and over.
Did a vid wile i tested it.





The amount of power you can save without it effecting your game play experience is insane.
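The saving works because power tracks frames rendered: a toy linear model (numbers invented for illustration, not AMD's actual Chill algorithm) shows why capping a scene that would otherwise spin at hundreds of FPS cuts so much power:

```python
# Toy model of a frame cap (the idea behind Chill's savings): if GPU power
# below the power limit is roughly proportional to frames rendered, capping
# an uncapped scene saves power in proportion to the frames not drawn.
def capped_power(uncapped_fps, cap_fps, uncapped_watts):
    """Estimate board power with a frame cap applied (linear toy model)."""
    rendered = min(uncapped_fps, cap_fps)
    return uncapped_watts * rendered / uncapped_fps

# e.g. a menu screen spinning at 300 fps, capped to 75 fps:
print(capped_power(300, 75, 280))  # prints 70.0
```

In a demanding scene that only reaches the cap anyway, the model predicts no change, which matches the "without affecting gameplay" experience.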


----------



## 113802

Quote:


> Originally Posted by *VicsPC*
> 
> Yea not sure about that but whatever helps you sleep at night
> 
> Synthetic benchmarks are only good for one thing, stability. Posting your scores around on the web to see who has the bigger "stick" doesn't translate to gaming performance. Example? A vega 64 beating a 1080ti in forza 3, yet gets DESTROYED in synthetics.


Please don't compare a RX Vega 64 to a GTX 1080 Ti. Vega 64 gets destroyed in synthetics because it is actually getting destroyed in many games by a GTX 1080 Ti.


----------



## VicsPC

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Please don't compare a RX Vega 64 to a GTX 1080 Ti. Vega 64 gets destroyed in synthetics because it is actually getting destroyed in many games by a GTX 1080 Ti.


forza motorsport 7


----------



## 113802

Quote:


> Originally Posted by *VicsPC*
> 
> Please learn to read, forza motorsport 7.


I know how to read; I even quoted you saying Forza 3. nVidia released a driver, 387.92, to fix the performance of Forza 7. You get the performance you pay for; that's why the GTX 1080 Ti is priced in a league of its own.

Shows your time stamp when you edited your post.
Quote:


> Edited by VicsPC - Today at 2:49 pm


https://www.pcper.com/reviews/Graphics-Cards/Forza-Motorsport-7-Performance-Preview-Vega-vs-Pascal


----------



## VicsPC

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I know how to read, I even quoted you saying Forza 3. nVidia released a driver to fix the performance of Forza 7 with driver 387.92. You get the performance you pay for. That's why the GTX 1080 Ti is priced in a league of it's own.
> 
> Shows your time stamp when you edited your post.
> https://www.pcper.com/reviews/Graphics-Cards/Forza-Motorsport-7-Performance-Preview-Vega-vs-Pascal


Yea, I changed that, I meant to write 7, and btw, even with the update the 64 still beats the 1080 Ti in Forza 7. I love how u picked the most biased website to quote from, lol.


----------



## Grummpy

No point arguing with Nvidia users; they embrace old tech and that's their choice.
Me, I got HBM2 memory for 600 pounds; to get the same on the Nvidia side you've got to spend 3000.
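For what it's worth, the raw bandwidth arithmetic is closer than the price gap suggests. Peak bandwidth is just effective transfer rate times bus width in bytes; stock figures assumed here (945 MHz double-data-rate HBM2 on Vega 64's 2048-bit bus vs 11 Gbps GDDR5X on the 1080 Ti's 352-bit bus):

```python
# Peak memory bandwidth = effective transfers/sec * bus width in bytes.
def peak_bandwidth_gbs(transfers_per_sec, bus_bits):
    return transfers_per_sec * (bus_bits / 8) / 1e9

vega64_hbm2 = peak_bandwidth_gbs(945e6 * 2, 2048)  # 945 MHz, double data rate
gtx1080ti = peak_bandwidth_gbs(11e9, 352)          # 11 Gbps effective GDDR5X
print(f"Vega 64 HBM2: {vega64_hbm2:.1f} GB/s")     # 483.8 GB/s
print(f"1080 Ti GDDR5X: {gtx1080ti:.1f} GB/s")     # 484.0 GB/s
```

So the headline throughput numbers are nearly identical; HBM2's advantages are in bus width, board area, and power per bit rather than raw peak bandwidth.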


----------



## Trender07

Quote:


> Originally Posted by *VicsPC*
> 
> Yea i changed that meant to write 7 and btw even with the update the 64 still beats the 1080ti in forza 7. I love how u picked the most biased website to quote from lol.


Yeah, speaking to Nvidia fans is like speaking to walls: a guy talking about the 1080 Ti as if it were the next coming of Jesus while dismissing Vega, when Vega destroys the much more expensive 1080 Ti in about every new game, like Wolfenstein 2, Dirt 4, Forza 7, etc.


----------



## Grummpy

Well, they have the fastest gaming card, but I destroy the 1080 Ti in compute tasks.
Just a matter of time until Vega is unlocked, but it won't happen overnight.

http://www.luxmark.info/top_results/LuxBall%20HDR/OpenCL/GPU/1

I'm going to aim for 3500 and take that second place.


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> No point arguing with nvidia users they embrace old tech and that's their choice.
> Me i got hbm2 memory for 600 pounds to do the same on nvidia side you got to spend 3000


Quote:


> Originally Posted by *Trender07*
> 
> Yeah speaking to nvidia fans is like speaking to walls, guy saying about 1080ti as if were the next coming of Jesus while deleting Vega when Vega destroys 1080 ti much expensive card in about every new game like Wolfenstein 2, Dirt 4, Forza 7 etc


Oh, I know; it's why I don't bother with them. Even when I have Nvidia card issues on customer builds I don't even bother with their forums.


----------



## 113802

Quote:


> Originally Posted by *Grummpy*
> 
> No point arguing with nvidia users they embrace old tech and that's their choice.
> Me i got hbm2 memory for 600 pounds to do the same on nvidia side you got to spend 3000


Quote:


> Originally Posted by *Trender07*
> 
> Yeah speaking to nvidia fans is like speaking to walls, guy saying about 1080ti as if were the next coming of Jesus while deleting Vega when Vega destroys 1080 ti much expensive card in about every new game like Wolfenstein 2, Dirt 4, Forza 7 etc


I am a RX Vega 64 LC user, bought it first day at release for $700 because it's the best looking card on the market. I am not biased and compare actual gaming performance benchmarks. All the games you named the GTX 1080 Ti is faster. I just wanted a card that could play 1440p perfectly which the RX Vega 64 does. nVidia has the better product, performance shows it. Their stock even reflects it.

Wolfenstein 2: http://www.guru3d.com/articles_pages/wolfenstein_ii_the_new_colossus_pc_graphics_analysis_benchmark_review,5.html
Forza 7: https://www.pcper.com/reviews/Graphics-Cards/Forza-Motorsport-7-Performance-Preview-Vega-vs-Pascal
Quote:


> Originally Posted by *Grummpy*
> 
> well they have the fastest gaming card but i destroy the 1080ti in compute tasks.
> just a matter of time untill vega is unlocked but it wont happen over night
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.luxmark.info/top_results/LuxBall%20HDR/OpenCL/GPU/1
> 
> im going to aim for 3500 take that second place.


Everyone uses CUDA


----------



## VicsPC

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I am a RX Vega 64 LC user, bought it first day at release for $700 *because it's the best looking card on the market.* I am not biased and compare actual gaming performance benchmarks. All the games you named the GTX 1080 Ti is faster. I just wanted a card that could play 1440p perfectly which the RX Vega 64 does. nVidia has the better product, performance shows it. Their stock even reflects it.
> 
> Wolfenstein 2: http://www.guru3d.com/articles_pages/wolfenstein_ii_the_new_colossus_pc_graphics_analysis_benchmark_review,5.html
> Forza 7: https://www.pcper.com/reviews/Graphics-Cards/Forza-Motorsport-7-Performance-Preview-Vega-vs-Pascal
> Everyone uses CUDA


That's pretty much all you need to know.


----------



## Grummpy

before


after


happy larry

No performance boost from turning on HBCC in Doom.


----------



## By-Tor

Love how the nvidia fan boys always come to AMD threads and start shat... Don't care about nvidia....


----------



## Grummpy

sʇןnsǝɹ ɯǝɥʇ ǝǝs sʇǝן ǝuop sʞɹɐɯɥɔuǝq ɯǝɥʇ ʇǝƃ


----------



## Grummpy

Quote:


> Originally Posted by *By-Tor*
> 
> Love how the nvidia fan boys always come to AMD threads and start shat... Don't care about nvidia....


They all game on laptops;
that's why they obsess over power usage so much.

See, they are jealous because their user interface is non-existent; it looks like a 1990s UI, complete garbage, and Nvidia sees no reason to fix it because they don't need to.
Cracks me up.


----------



## geriatricpollywog

My watercooled Vega outperforms the 1080ti in every game I currently play and there is a good chance this trend will continue with future games. Never settle! I don't care about power consumption because I live in a developed nation. For a 1080ti to match my performance, it would need LN2 and consequently need to use more power.


----------



## 113802

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Someone please overclock the core and give us an example! I'm stuck at work
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here are some signature badges
> 
> *Radeon Vega Frontier Edition Owner*
> *Radeon RX Vega 64 XTX Owner*
> *Radeon RX Vega 64 Owner*
> *Radeon RX Vega 56 Owner*
> 
> Code:
> 
> 
> 
> Code:
> 
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon Vega Frontier Edition Owner[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 XTX Owner[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 64 Owner[/color][/B][/URL][/CENTER]
> 
> [CENTER][URL=http://www.overclock.net/t/1634018/vega-frontier-rx-vega-owners-info-thread][B][color=red]Radeon RX Vega 56 Owner[/color][/B][/URL][/CENTER]


Sure, I'm a nVidia fan boy just because I look at data. Keep rocking the nVidia fan boy created Signatures.


----------



## 113802

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Sure, I'm a nVidia fan boy just because I look at data. Keep rocking the nVidia fan boy created Signatures.


Quote:


> Originally Posted by *0451*
> 
> My watercooled Vega outperforms the 1080ti in every game I currently play and there is a good chance this trend will continue with future games. Never settle! I don't care about power consumption because I live in a developed nation. For a 1080ti to match my performance, it would need LN2 and consequently need to use more power.


Actual proof, please? Never seen a Vega card outmatch a GTX 1080 Ti. Mine runs at 1720/1140 and I still don't touch the GTX 1080 Ti's performance.

Edit: Meant to click Edit but quoted myself.


----------



## Grummpy

Nvidia set a hard line on their power usage years ago, and people were complaining about that.
The advantage is that their cards run cooler and more efficient, and with AMD setting such a high limit it kills their reputation.
Vega can be very efficient if they just lower the clock speeds 5%.
AMD is way up on the red line, and they leave no room left to overclock at all.
Madness from a reputation standpoint.


----------



## Grummpy

Look at this: it can be very efficient indeed, and it's rather amazing just how well it can perform at low wattage.


----------



## pengs

Quote:


> Originally Posted by *WannaBeOCer*
> 
> Their stock even reflects it.


I know, right?
That's why I only eat McDonalds happy meals. Only high stock fast food restaurants in stomach


----------



## Grummpy

I think I will do a graph of power usage at 5 FPS intervals to see the curve; I think it gets very steep after you go past a certain point.
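A quick way to preview that knee before measuring is a toy cube-law model (power ~ f·V², and the voltage needed rises with frequency, so power climbs roughly with f³). The constants below are invented for illustration, not measured Vega data:

```python
# Toy model: board power ~ idle + k * fps^3, because P ~ f * V^2 and the
# voltage required rises with frequency. Constants are made up for illustration.
IDLE_W, K = 40.0, 1.2e-4

def power_at(fps):
    return IDLE_W + K * fps ** 3

prev = power_at(30)
for fps in range(35, 101, 5):
    cur = power_at(fps)
    print(f"{fps:3d} fps: {cur:6.1f} W  (+{cur - prev:4.1f} W for the last 5 fps)")
    prev = cur
```

Each extra 5 FPS costs more watts than the last, which is the steepening you'd expect the real plot to show past a point.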


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I am a RX Vega 64 LC user, bought it first day at release for $700 because it's the best looking card on the market. I am not biased and compare actual gaming performance benchmarks. All the games you named the GTX 1080 Ti is faster. I just wanted a card that could play 1440p perfectly which the RX Vega 64 does. nVidia has the better product, performance shows it. Their stock even reflects it.
> 
> Wolfenstein 2: http://www.guru3d.com/articles_pages/wolfenstein_ii_the_new_colossus_pc_graphics_analysis_benchmark_review,5.html
> Forza 7: https://www.pcper.com/reviews/Graphics-Cards/Forza-Motorsport-7-Performance-Preview-Vega-vs-Pascal
> Everyone uses CUDA


Vega destroys the 1080 Ti in Wolfenstein 2; you're referring to outdated benchmarks. But it's just an outlier: Wolfenstein 2 is designed for the Vega GPU. Probably Far Cry 5 will follow, and generally new releases favor Vega. Vega competes with the GTX 1080. I guess AC Origins is a negative outlier, since Vega performs worse than the GTX 1080 there.

So I can say Vega 64 is just a little bit better than the GTX 1080, especially in new games. It rarely beats the 1080 Ti (promising) and suffers in unoptimized GimpWorks titles.

But... the missing Vega features are a big fiasco. They advertised DSBR, NGG and primitive shaders, and none of them are working. Rumor says the hardware is broken, so drivers cannot solve the problem. If the rumors are true, they are selling a broken early-access product. I wonder how it will affect Vega's future. Fury cards are really bad right now.


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> My watercooled Vega outperforms the 1080ti in every game I currently play and there is a good chance this trend will continue with future games. Never settle! I don't care about power consumption because I live in a developed nation. For a 1080ti to match my performance, it would need LN2 and consequently need to use more power.


Which games ?


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> Which games ?


Right now I am playing Wolfenstein II and Forza 7. In other games it performs between a 1080 and a 1080 Ti, but closer to a 1080. In Firestrike, my score is closer to a 1080 Ti than a 1080.


----------



## Grummpy

This new app is great; it works fine on a Kindle Fire that cost 30 pounds.
It lets me record my average GPU clocks, average GPU temps, and average frame rate as well as the current values, GPU power, all the usual things.
It lets me monitor CPU, RAM, and fan.
It lets me take screenshots that show up within the app instantly and share them in seconds; stream start/stop, record start/stop.
The FPS counter with its dedicated screen is very nice with the history graph.
Would be nice to be able to reset the average and duration info without having to reload the app.
I'm impressed with this program; it will only get better as they tweak it here and there.

PS: I like how it won't capture desktop screenshots for security reasons; it takes that worry away.


----------



## Naeem

Anyone else getting random crashes in GTA V? I crashed on the new 17.12.1 and 17.11.4 drivers after 10-15 min into the game on my Vega LC. It does not crash in other games like BF1 and PUBG.


----------



## VicsPC

Quote:


> Originally Posted by *Naeem*
> 
> Anyone else is getting random crash in GTA V ? i crashed on new 17.12.1 and 17.11.4 drivers after 10 15 min into game on my Vega LC does not crash in other games like BF1 and PUBG


Is this before yesterday's update or after? I've played online for a couple of hours and didn't have any issues; I'm on 17.11.4.


----------



## 113802

Quote:


> Originally Posted by *fursko*
> 
> Vega destroys 1080 ti in wolfenstein 2 you referring outdated benchmarks. But its just outlier. Wolfenstein 2 designed for vega gpu. Probably Far Cry 5 will follow this and generally new releases favors vega. Vega compete with gtx 1080. I guess AC origins negative outlier because vega performs worse than gtx 1080.
> 
> So i can say vega 64 just a little bit better than gtx 1080 specially for new games. Rarely beats 1080 ti (promising) and suffers with unoptimized gimpworks titles.
> 
> But... missing vega features big fiasco. They advertised DSBR, NGG, PS and none of them working. Rumors says hardware broken so drivers can not solve this problem. This is big fiasco. They are selling broken early access product if rumors true. I wonder how it will affect vega future. Fury cards really bad right now.


It destroys the 1080 in Wolfenstein 2. Vega 64 is faster than a 1080 and that's what it competes with. The 1080 Ti is in its own class. If you mention an overclocked Vega card, don't forget how easy it is to overclock a GTX 1080 Ti. I have no complaints about the RX Vega 64. It delivers performance above the 1080 and looks nice. The new driver overlay is fantastic.


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It destroys the 1080 in Wolfenstein 2. Vega 64 is faster than a 1080 and that's what it competes with. The 1080 Ti is in its own class. If you mention an overclocked Vega card, don't forget how easy it is to overclock a GTX 1080 Ti. I have no complaints about the RX Vega 64. It delivers performance above the 1080 and looks nice. The new driver overlay is fantastic.


Outdated comparison. Vega gained a lot after that video; check other videos or review sites.


----------



## NI6HTHAWK

Interesting thing happened just before I updated drivers. I ran a Superposition benchmark to get a baseline before installing the 17.12.1 Adrenalin drivers, and I had GPU-Z open along with Wattman. Running the Balanced profile, I watched as the splash screen began loading the scene: my GPU core climbed endlessly to 2895 MHz before it crashed! It never loaded the scene, but I think that has to be a record for runaway clock speed. I wish I'd had my phone camera handy, because I couldn't get a screen grab. After that I turned off GPU-Z and reset the Wattman settings, including shader cache, and it worked fine afterwards.

It seems I am still having issues with the 3-screen setup even after the AX1200i PSU swap to meet the recommended 1000 W: random crashes, driver hangs, too strange to pin down. Haven't tried the 4K TV yet with the new drivers, just the two 144 Hz FreeSync displays, but so far it's stable on the Balanced profile. I will say I noticed a huge drop in Superposition score going to the new drivers, but haven't run a game with a high frame-rate cap to see if I notice a big difference.

I will say that Chill is awesome, though. I never really tried it before, but wow does it make gaming more efficient, and the ability to set it individually for each game is a huge benefit to power consumption. I thought I would see a jump in FPS, but if you set it properly it's pretty seamless, especially with FreeSync enabled!
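If a runaway like that happens again, a log beats a phone camera. Here is a sketch of a sampler that flags implausible readings; `read_core_clock_mhz` is a hypothetical stand-in for whatever sensor source you actually have (a GPU-Z log file, an ADL query, etc.):

```python
import random

# Hypothetical sensor read; replace with your real source (GPU-Z log, etc.).
def read_core_clock_mhz():
    return random.choice([1550, 1580, 1600, 2895])  # simulated samples

def watch_clocks(samples, limit_mhz=1750):
    """Return (sample_index, clock) for every reading above a sane ceiling."""
    return [(i, mhz) for i, mhz in enumerate(samples) if mhz > limit_mhz]

log = [read_core_clock_mhz() for _ in range(100)]
for i, mhz in watch_clocks(log):
    print(f"sample {i}: runaway clock {mhz} MHz")
```

Polling once or twice a second is plenty; the point is just to catch spikes with timestamps you can report, instead of hoping to screenshot them.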


----------



## 113802

Quote:


> Originally Posted by *fursko*
> 
> Outdated comparison. Vega gains a lot after that video. Check other videos or review sites.


My RX Vega 64 performs exactly the same on 17.12.1 with the latest Wolfenstein 2 update. Instead of claiming one-month-old data is outdated, how about you provide proof?


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> My RX Vega 64 performs exactly the same with 17.12.1 using the latest Wolfenstein 2 update. Instead of claiming 1 month old data is outdated how about you provide proof?







I personally didn't even see it go below 120 FPS. Just one place was buggy: it dropped to 70 FPS for a few seconds. 1440p, maxed settings.

edit: http://digiworthy.com/2017/11/12/wolfenstein-2-patch-rx-vega-64/

https://www.reddit.com/r/7c13ox/wolfenstein_2_latest_patch_accelerates_rx_vega_by/


----------



## 113802

Quote:


> Originally Posted by *fursko*
> 
> 
> 
> 
> 
> 
> Im personally didnt even see below 120 fps. Just one place was buggy. Drop down to 70 fps for a few seconds. 1440p maxed settings.
> 
> edit: http://digiworthy.com/2017/11/12/wolfenstein-2-patch-rx-vega-64/ -
> 
> __
> https://www.reddit.com/r/7c13ox/wolfenstein_2_latest_patch_accelerates_rx_vega_by/


The video you provided is from the same exact date as I posted. The only difference is that the RX Vega 64 is overclocked in the video you provided.


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> The video you provided is from the same exact date as I posted. The only difference is that the RX Vega 64 is overclocked in the video you provided.


It's just the LC version. 1030 MHz is not much OC, and it's from before the 22% performance patch. Why don't you want to understand? Weird... The 1080 Ti maybe beats a heavily throttled 1300 MHz Vega.


----------



## cplifj

The new Adrenalin driver does nothing at all for me; there isn't even the compute toggle for either my Vega or my 290X.

A lot of gimmicks for the asocial socializers who love being connected to whatever they don't know the first thing about...

Oh well, another missed opportunity for AMD. Even benchmark performance went down.

Ah, and some more monitoring stuff that nobody was waiting on, since there are plenty of those around already.

But the old bugs still persist: clocks not coming down after gaming when ReLive is enabled... Still have to test the results in Forza 7; it worked flawlessly last time, but with a new driver anything can regress.

YEP, GOOD JOB AMD. *sigh* & *facepalm*


----------



## 113802

Quote:


> Originally Posted by *fursko*
> 
> Its just lc version. 1030 mhz not much oc and its before %22 performance drivers. Why you dont want understand ? Weird... 1080 ti maybe beats highly throttled 1300mhz vega.


I don't get why you don't understand it's comparing a stock GTX 1080 Ti vs a RX Vega 64. An overclocked GTX 1080 Ti can hit 2 GHz on the core and 6000 MHz on the memory. That patch increased performance "up to 22%" at 4K; lower resolutions barely got a boost. I'll post a video of my results with my heavily overclocked RX Vega 64 LC when I get home today.


----------



## Rootax

Quote:


> Originally Posted by *cplifj*
> 
> the new driver adrenalin does nothing at all for me, there ain't even the compute button for neither my vega or 290X,
> 
> alot of gimmicks for the asocial socializers who love being connected to whatever they don't know the first thing about....
> 
> oh well. another missed opportunity for amd. Even benchmark performance went down.
> 
> Ah, and some more monitorting crap that nobody was waiting on since there are plenty of those around allready.
> 
> But the old bugs still persist, Clocks not comming down after gaming when RELIVE is enabled...still have to test the results on forza 7, it worked flawless last time but with new driver anything can regress.
> 
> YEP, GOOD JOB AMD. *sigh* & *facepalm".


Agree 100%.

It's a joke for Vega users...


----------



## IvantheDugtrio

So it may have been coincidence but my Vega 56 is dead after the update. I was running the Vega 64 LC bios at the time and stressing the GPU with a bit of monero mining. During mining there was heavy artifacting on the screen and I stopped mining after a few seconds. The system appeared stable after that though I figured I should have rebooted after updating the drivers. I rebooted and got no led lights from the GPU on post. The motherboard reports that there's no GPU installed.

Seems to me the card was killed by a voltage spike though I don't know from what since at the time I was running stock settings on the Vega 64 LC bios.

Also now seems to be the worst possible time to find a replacement Vega card with all inventory of reference cards gone.


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I don't get why you don't understand it's comparing a stock GTX 1080 Ti vs a RX Vega 64. An overclocked GTX 1080 Ti can hit 2ghz on the core clock and 6000Mhz on the memory. That patch increased performance "up to 22%" on 4k resolution Lower resolutions barely got a boost. I'll post a video of my results with my heavily overclocked RX Vega 64 LC when I get home today.


Ok my friend


----------



## 113802

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> So it may have been coincidence but my Vega 56 is dead after the update. I was running the Vega 64 LC bios at the time and stressing the GPU with a bit of monero mining. During mining there was heavy artifacting on the screen and I stopped mining after a few seconds. The system appeared stable after that though I figured I should have rebooted after updating the drivers. I rebooted and got no led lights from the GPU on post. The motherboard reports that there's no GPU installed.
> 
> Seems to me the card was killed by a voltage spike though I don't know from what since at the time I was running stock settings on the Vega 64 LC bios.
> 
> Also now seems to be the worst possible time to find a replacement Vega card with all inventory of reference cards gone.


Flick the bios switch on the back of the card and use the Vega 56 bios. Cross your fingers.


----------



## IvantheDugtrio

I already tried the backup BIOS and it's the same thing. No LED lights on the card, and the same "no GPU installed" POST message.


----------



## Naeem

Quote:


> Originally Posted by *VicsPC*
> 
> Is this before yesterdays update or after? Ive played online for a couple hours and didn't have any issues, I'm on 17.11.4.


After the latest update. I didn't play GTA V on Vega before, so I don't know. I tested with 17.11.4 and 17.12.1; 17.12.1 is all messed up. FreeSync is causing lots of stutter in games and apps, and my Superposition score dropped from 5200 to 4450.


----------



## SavantStrike

Is anyone here mining with the new drivers? Are they better than the block chain drivers?

It would be nice to get better gaming performance without sacrificing mining performance.


----------



## LeadbyFaith21

Does anyone have the new driver and have the Link app set up? I can't get the connection to work; it keeps timing out. I was hoping I could get the settings that others are using for it (if anyone here is).
Quote:


> Originally Posted by *Naeem*
> 
> After the latest update. I didn't play GTA V on Vega before, so I don't know. I tested with 17.11.4 and 17.12.1; 17.12.1 is all messed up. FreeSync is causing lots of stutter in games and apps, and my Superposition score dropped from 5200 to 4450.


I'm noticing the stutter on 12.1 as well, I just hadn't thought it would be freesync. Guess I'll test that out and see if that relieves my suffering!


----------



## Grummpy

Point the camera at the screen at the white box; it's fast and simple.


----------



## dagget3450

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> I already tried the backup bios and it's the same thing. No led lights on the card. No GPU installed post message.


Did you verify it's not the PSU?


----------



## dagget3450

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It destroys the 1080 in Wolfenstein 2. Vega 64 is faster than a 1080 and that's what it competes with. The 1080 Ti is in its own class. If you mention an overclocked Vega card, don't forget how easy it is to overclock a GTX 1080 Ti. I have no complaints about the RX Vega 64. It delivers performance above the 1080 and looks nice. The new driver overlay is fantastic.


After your posts in the driver thread I am surprised you are happy with the new driver features... Not a dig at you; just glad to see a somewhat positive reception, as I thought for sure you weren't going to like it.


----------



## pengs

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It destroys the 1080 in Wolfenstein 2. Vega 64 is faster than a 1080 and that's what it competes with. The 1080 Ti is in its own class. If you mention an overclocked Vega card, don't forget how easy it is to overclock a GTX 1080 Ti. I have no complaints about the RX Vega 64. It delivers performance above the 1080 and looks nice. The new driver overlay is fantastic.


And then there is the LCE which is running about 200MHz higher than the reference shown in the video you linked.





It's a win over the Ti for the LC.


----------



## 113802

Quote:


> Originally Posted by *dagget3450*
> 
> After your posts in the driver thread I am surprised you are happy with the new driver features... Not a dig at you; just glad to see a somewhat positive reception, as I thought for sure you weren't going to like it.


I enjoy the overlay since it allows me to avoid entering AMD Radeon Settings. The overlay is fantastic since I no longer have to alt/tab or close the game to change settings.

My RX Vega 64 LC @ 50% Power Limit HBM @ 1105Mhz, completely max settings with HBCC enabled and Freesync.





Quote:


> Originally Posted by *pengs*
> 
> And then there is the LCE which is running about 200MHz higher than the reference shown in the video you linked.
> 
> 
> 
> 
> 
> It's a win over the Ti for the LC.


Like I mentioned before, that is an overclocked RX Vega 64 with the HBM set to 1035MHz and a power target set to 50%.

The GTX 1080 Ti is also stock in the video I linked. A GTX 1080 Ti can easily hit 2GHz on the core and 6000MHz on the memory.

Edit: Nvidia released the game ready driver 388.13, which wasn't used in any of the videos.


----------



## pengs

Quote:


> Originally Posted by *WannaBeOCer*
> 
> I enjoy the overlay since it allows me to avoid entering AMD Radeon Settings. The overlay is fantastic since I no longer have to alt/tab or close the game to change settings.
> 
> My RX Vega 64 LC @ 50% Power Limit HBM @ 1105Mhz, completely max settings with HBCC enabled and Freesync.
> 
> 
> 
> 
> 
> Like I mentioned before that is an overclocked RX Vega 64 with the HBM set to 1035Mhz with a power target set to 50%
> 
> The GTX 1080 Ti is also stock in the video I linked. A Gtx 1080 Ti can easily hit 2Ghz on the core and 6000Mhz on the memory.
> 
> Edit: NVidia released the game ready driver 388.13 - which wasn't used in any of the videos.


Yeah, they'd probably end up dead even. Not a bad showing from Vega which hopefully continues.

The overlay is good.


----------



## Efilnikuf

OK, with the new driver I'm getting better 4K Superposition scores, worse 1080p Extreme scores. Same clocks with lower voltages (1.23 V vs 1.19 V), but it still doesn't like over 1732 on the core. Higher clocks on the HBM though: 1190 without any of the artifacting I was getting before.





Was hitting a consistent 7150-7240 in 4K Optimized, and 5100-5240 or so in 1080p Extreme.

PS: Enough with the red vs green fighting ITT. This thread is about Vega.

Edit: HBCC on or off doesn't seem to matter.
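
For reference on why an undervolt like the one above (1.23 V down to 1.19 V at the same clocks) saves power: to a first approximation, dynamic power scales with frequency times voltage squared. A minimal sketch; the 280 W load figure is an assumed example, not a measurement from this post:

```python
# First-order estimate of power savings from an undervolt at fixed clocks:
# P_dynamic ~ f * V^2, so at constant frequency the ratio is (V_new / V_old)^2.
v_old, v_new = 1.23, 1.19   # core voltages from the post above (volts)
ratio = (v_new / v_old) ** 2
print(f"~{(1 - ratio) * 100:.1f}% less dynamic power")

# Applied to a hypothetical 280 W core load:
watts_saved = 280 * (1 - ratio)
print(f"about {watts_saved:.0f} W saved on an assumed 280 W load")
```

Real savings also depend on leakage and on how the boost algorithm responds, so treat this as a ballpark only.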


----------



## Grummpy




----------



## geriatricpollywog

Quote:


> Originally Posted by *Efilnikuf*
> 
> OK, with the new driver I'm getting better 4K Superposition scores, worse 1080p Extreme scores. Same clocks with lower voltages (1.23 V vs 1.19 V), but it still doesn't like over 1732 on the core. Higher clocks on the HBM though: 1190 without any of the artifacting I was getting before.
> 
> 
> 
> 
> 
> Was hitting a consistent 7150-7240 in 4K Optimized, and 5100-5240 or so in 1080p Extreme.
> 
> PS: Enough with the red vs green fighting ITT. This thread is about Vega.
> 
> Edit: HBCC on or off doesn't seem to matter.


This. I am getting the exact same results +/- 10.


----------



## Grummpy

http://www.bbc.co.uk/news/av/42341736/what-is-net-neutrality-and-how-could-it-affect-you


----------



## Grummpy

£200 is kind of an insane price for what you get.
Three of them for £600 gives 7680x1440; insane value.
https://www.overclockers.co.uk/aoc-q3279vwf-32-2560x1440-va-75hz-freesync-widescreen-led-monitor-mo-04g-ao.html


----------



## webhito

Had some weird stuttering/flickering since updating to the latest driver. Improvements seemed to be non-existent, since my benchmarks were spitting out the same results. WattMan voltages looked higher for some reason. Saving profiles is nice, but the flickering is a deal breaker for me, so I had to go back to 17.11.4.


----------



## Reikoji

Seems 17.14.1 at least increased CrossFire stability in the one game I've played since installing it. It has a lot of much-needed enhancements. Honestly, I don't think the Crimson edition really liked Vega, and this is the first step toward some real improvements. I can also resize the settings window without it crashing now.

The overlay needs some customizability though. While the black box wasn't in my way, I can see it being in the way in other games, and the measurements have too much space in between them. Also, it only shows the primary GPU, not both. Naturally no VRM info, as it probably causes crashes even for them.


----------



## fursko

My new Vega 64 LC arrived and it behaves very differently.

Old: undervolts well, HBM OCs without artifacts, low clocks.
New: overclocks well, HBM OC artifacts, high clocks.

The old one was around 1730MHz in Superposition. HBM was 1190MHz without artifacts; 1200MHz was an instant crash. Above 1150MHz HBM there was almost zero gain in Superposition 1080p Extreme.
The new one almost reaches 1800MHz (needs more tweaking). HBM artifacts above 1150MHz. Undervolting doesn't help; core overclocking works.

I was probably using the 17.10.2 driver with the old GPU and 17.11.4 with the new GPU.

New GPU (out of the box) = old GPU (UV/OC) in some game benchmarks (Shadow of War, Rise of the Tomb Raider, etc.).

Looks like the new GPU has better silicon. Needs more testing. The HBM looks worse than the old GPU's, but I didn't test games yet. In Superposition the old GPU's HBM was definitely better.


----------



## os2wiz

CaptureSuperposition4k.PNG 154k .PNG file

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> Quote:
> 
> 
> 
> Originally Posted by *os2wiz*
> 
> I have my Vega56 underwater with an Alphacool Eiswolf GPX Pro. My gpu is not optimized for overclocking yet I am using Wattman balanced power plan right now until I can optimize my voltage and frequency settings. Here is my Superposition benchmark for 4k. I have flashed my PowerColor RX Vega 56 with a RX Vega64 bios.
> 
> CaptureSuperposition4koptimized.PNG 3525k .PNG file
> 
> 
> 
> 
> A bit low under water^^
> 
> my best Score with an 56 & 64er Bios - under AIR:
Click to expand...

As I told you, I had not even begun to adjust performance post-water-block, so your comments were neither helpful nor appreciated. I am uploading scores today, some of which are quite good. I will add Superposition to this post as well once I run it in a few minutes. These are still not fully optimized. Unigine should be ashamed of not giving Superposition a DX12 option.

CaptureAOTSEscalation4kleaderboard.PNG 2324k .PNG file


CaptureFurmark1.19.14k.PNG 60k .PNG file


CaptureRiseofTombRaiderBega56with64bios.PNG 367k .PNG file


----------



## ducegt

Never mind, my memory failed me.







Who has the hard-modded card?


----------



## punchmonster

While other GPUs have gotten the "compute" toggle for their cards, Adrenalin does not give this to us Vega owners. AMD claims this is because all Vega mining fixes are already in the driver anyway, but this is blatantly untrue, as the beta blockchain driver mines 35% faster.

Where are the promised fixes?

tl;dr: Vegas on Adrenalin mine significantly slower than the beta blockchain driver, which is 3 months old, meaning I have to choose between $3.50 more a day or bug-free games.
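
To put the driver choice in perspective, the daily dollar figure follows directly from the hashrate gap. A back-of-the-envelope sketch; the hashrates and revenue rate below are illustrative assumptions, not measured values:

```python
# Rough sketch of the mining trade-off described above.
# All numbers are illustrative assumptions, not measurements.
adrenalin_hs = 1480                   # Monero hashrate on Adrenalin (H/s, assumed)
blockchain_hs = adrenalin_hs * 1.35   # beta blockchain driver, ~35% faster
usd_per_hs_day = 0.005                # assumed revenue per H/s per day
                                      # (varies with coin price and difficulty)

daily_loss = (blockchain_hs - adrenalin_hs) * usd_per_hs_day
print(f"~${daily_loss:.2f}/day left on the table")
```

With these assumed numbers the gap lands in the same ballpark as the figure quoted above; plug in your own hashrates and current network difficulty for a real estimate.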


----------



## SavantStrike

Quote:


> Originally Posted by *punchmonster*
> 
> While other GPUs have gotten the "compute" toggle for their cards, Adrenalin does not give this to us Vega owners. AMD claims this is because all Vega mining fixes are already in the driver anyway, but this is blatantly untrue, as the beta blockchain driver mines 35% faster.
> 
> Where are the promised fixes?
> 
> tl;dr: Vegas on Adrenalin mine significantly slower than the beta blockchain driver, which is 3 months old, meaning I have to choose between $3.50 more a day or bug-free games.


This is an answer to a question I asked a while ago. Thanks!

I guess I'll stay with the beta drivers for the time being. Gotta pay the bills first and foremost.


----------



## Efilnikuf

Quote:


> Originally Posted by *webhito*
> 
> Had some weird stuttering/flickering since updating to the latest driver. Improvements seemed to be non existant since my benchmarks were spitting out the same results. Wattman voltages looked higher for some reason. Saving profiles is nice but the flickering is a deal breaker for me so had to go back to 17.11.4.


Having a similar problem trying to run Timespy. Flickering then it just crashes. Can't even seem to run it at stock settings, maybe a dx12 thing? What are you running that you are getting flickering in?


----------



## diabetes

Complaints about the missing µArch features on Reddit and the LTT Video about Frontier Edition were now picked up by the German tech press. If you want to draw more attention to this topic (and in the best case provoke an official statement from AMD), write emails to the editorial staff of tech and games magazines!

www.pcgameshardware.de/Vega-Codename-265481/News/AMD-Radeon-RX-Vega-Primitive-Shader-DSBR-Beschwerden-1245870/


https://www.reddit.com/r/7js2wj/lets_make_some_noise_until_amd_will_answer/


----------



## webhito

Quote:


> Originally Posted by *Efilnikuf*
> 
> Having a similar problem trying to run Timespy. Flickering then it just crashes. Can't even seem to run it at stock settings, maybe a dx12 thing? What are you running that you are getting flickering in?


Stock speeds with a stable undervolt, same as before the new drivers. Any game I throw at it will randomly flicker, and the stuttering even happened on the desktop just browsing around. They aren't just DX12 titles either; Warband was one of them.


----------



## Efilnikuf

Quote:


> Originally Posted by *webhito*
> 
> Stock speeds with a stable undervolt as to before the new drivers, any game I throw at it will randomly flicker and the stuttering even happened on the desktop just browsing around. They aren't just dx12 titles though, warband was one of them.


Well, I just seem to be getting it in DX12. I can see a little here and there in DX11; it's a momentary thing and causes no driver crashing, but only in DX11 when I am really pushing the card.


----------



## punchmonster

So apparently compute mode on Vega IS enabled, but only active with certain algos? Can anyone confirm? And if so, is there maybe a way for me to force that mode?


----------



## wellkevi01

Quote:


> Originally Posted by *punchmonster*
> 
> So apparently compute mode in Vega IS enabled but only active with certain algo's? Can anyone confirm? And if so is there maybe a way for me to force that mode?


Well, it's certainly not enabled with XMR-Stak. I went from ~1800 H/s down to ~1200 H/s.


----------



## geriatricpollywog

Does anybody see the option to enable compute mode in the Radeon settings? It is not where it is supposed to be.


----------



## diabetes

There is no compute mode toggle for Vega. According to AMD, the current driver automatically includes all of the compute optimizations. Some people disagree, though, as the blockchain beta driver can still be faster for some workloads.

Something unrelated:

Apparently the "new geometry fast path" and "primitive shaders" are coming to Macs: http://creators.radeon.com/Radeon-pro-vega/#section--7
Still no Windows announcement though.
Dafuq.


----------



## SpecChum

Quote:


> Originally Posted by *diabetes*
> 
> Apparently the "new geometry fast path" and "primitive shaders" are coming to Macs: http://creators.radeon.com/Radeon-pro-vega/#section--7
> Still no Windows announcement though.
> Dafuq.


Ah, I just came on here to say that! I've just seen the Mac spiel myself.

Promising if it's enabled on Mac though, with the rumours going around that it's fundamentally broken in hardware...


----------



## surfinchina

Quote:


> Originally Posted by *SpecChum*
> 
> Ah, I just came on here to say that! I've just seen the Mac spiel myself.
> 
> Promising if it's enabled on Mac tho, with the rumours that it's fundamentally broke in hardware going around...


I've been running Vega on my hackintosh for a couple of months. The drivers are definitely getting better; they're now about on par with the Windows benches, which means it all seems faster because of the slightly better efficiency of the Mac software. Plus there's Metal which, for the (bugger all) software that uses it, is in a different league to pretty well any other GPU on any platform.
So not really broken; more like having issues, but evolving just a bit faster than the Windows drivers.

ps: Atari was way better than Amiga


----------



## cplifj

Quote:


> Originally Posted by *surfinchina*
> 
> ps: Atari was way better than Amiga


Never !

On the other topic: if Vega had broken hardware, that would mean a global recall/replacement is in order. Go ask VW.

Doing crap like that will cost them their head in the end, so would anyone be dumb enough to do it? Again, ask VW.

What was that word "trust" all about again?


----------



## SpecChum

Quote:


> Originally Posted by *surfinchina*
> 
> ps: Atari was way better than Amiga


How very dare you!

(actually, the later STe models were







)


----------



## kundica

Quote:


> Originally Posted by *diabetes*
> 
> There is no compute mode toggle for Vega. According to AMD, the current driver automatically includes all of the compute optimizations. Some people disagree though, as the blockchain beta driver can still be faster for some workloads


As I understand it, Vega auto-toggles itself, but it doesn't work with all crypto yet. For example, the ETH hashrate will be the same on the newest driver and the blockchain driver, but there's a significant difference in Monero. I tested this myself last night with my single-Vega setup. I get 45 MH/s for ETH on both drivers, but with Monero I get just under 1400 H/s with the current driver and 2k with the blockchain driver.


----------



## PontiacGTX

Quote:


> Originally Posted by *0451*
> 
> Right now I am playing Wolfenstein II and Forza 7. In other games, it performs between a 1080 and 1080ti, but closer to a 1080. In Firestrime, my score is closer to a 1080ti than a 1080.


I wonder why, then, most sites showed Vega 64 being faster than the 1080 Ti in Wolfenstein?


----------



## Spacebug

Quote:


> Originally Posted by *ducegt*
> 
> Nevermind,my memory failed me
> 
> 
> 
> 
> 
> 
> 
> Who has the hard-modded card ?


I have one, if it was me you thought of.
A hard mod for lower sensed current draw, and a voltage mod for the HBM.
I also tried soldering more capacitors to the HBM phase/VRM, but that didn't seem to help the OC at all, at least at stock HBM voltage.

I haven't tried the new Adrenalin driver yet, so I don't know if there is any improvement over the 17.11.4 driver. I'll try that at a later stage, as well as more voltage on the core and HBM.
As it is now, basically right after I did the HBM voltage and cap mod I got pretty hooked on a game that really isn't that demanding graphically.

So the card is now downclocked to around 1700MHz core at 1.25V and 1110MHz HBM at the stock 1.35V.
The main reason is that I don't want to stress the HBM too much, in case the ~1.44V that was the max I used is too much for daily use.
That, and I managed to trip OCP or some other protection circuit of my AX1200i PSU after about 30 minutes of gameplay in ROTTR. The HBM doesn't pull much, so it was probably the 1.38V Vcore that was too much for the PSU.
It managed benchmarks fine, but not longer gaming sessions in ROTTR; I'm thinking OCP (if there are nasty current spikes) or thermal protection. I'll try to figure that out later as well, and if need be, lower Vcore a bit to dodge the PSU shutdown. I don't feel like buying a new PSU right now...


----------



## LionS7

Is there any memory mV control in MSI Afterburner for RX Vega ?


----------



## webhito

So, no Vega stock at Newegg for less than $700, and prices on eBay have gone up quite a bit. Is this due to the lack of availability and miners once again?

Also, anyone else seen the Nitro+ Vega 64 review yet?


----------



## Trender07

Quote:


> Originally Posted by *webhito*
> 
> So, no vega stock at newegg for less than 700 and prices on ebay have gone up quite a bit. Is this due to the lack of availability and miners once again?
> 
> Also, anyone else seen the nitro + vega 64 review yet?


Yeah, they're so good, and cool too, like really low temps.


----------



## os2wiz

Quote:


> Originally Posted by *webhito*
> 
> So, no vega stock at newegg for less than 700 and prices on ebay have gone up quite a bit. Is this due to the lack of availability and miners once again?
> 
> Also, anyone else seen the nitro + vega 64 review yet?


It has nothing to do with miners. If you keep up with tech news, AMD has temporarily stopped producing reference cards, and all GPU chips are now being shipped to AIB partners so they can make their own Vega 56 and 64 card designs.


----------



## GroupB

I tried the new driver and there is something wrong with it. I did a couple of Superposition 4K runs, and my previous boost into the 1700MHz range is gone, down to 1660 with the same settings, so I investigated...

I log the VDDC, boost clock, watts, etc. during the benchmark, and if I refer to my 17.11.2 results, the new driver is doing something with the voltage. With the same usual OC I always use, it boosts to lower clocks and draws more watts, even with the same mV set.

What I see is that the GPU VDDC reading is not stable anymore. It jumps from 1070 anywhere up to 1115mV, where it used to sit close to 1070mV and vary within a 10mV range. It looks to me like they changed the LLC setting (if the GPU has one) to a more extreme one. The wattage is also higher for the set mV in WattMan than it used to be, probably because of that higher GPU VDDC, and the result is lower clocks because I hit the power throttle at 142% (which would be close to 325-335W in HWiNFO64).

I can also go lower in mV for the same set MHz; on the previous driver the game/benchmark would crash at the same settings. That's what a more extreme LLC does.

That may also explain the V56 user who had a dead GPU while mining: if the LLC is more extreme, the VDDC is more likely to spike into the danger zone if you start with a stock 1200mV and a raised power limit.

Anyone seeing the same thing: VDDC spikes and more watts?
If there is a setting for LLC in the registry, can someone who knows what to look for compare the Adrenalin and 17.11.2 values to see if there's a change?
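
One low-effort way to attempt that comparison, sketched here under assumptions: export the display adapter's registry key once per driver (e.g. via regedit on the class key commonly cited for Radeon soft PowerPlay mods, `HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000`) and diff the dumps. The value names in the toy example are made up for illustration:

```python
# Sketch: diff two exported registry dumps to spot driver-to-driver changes.
# The key path above and the value names below are assumptions for illustration.

def parse_dump(text):
    """Parse simple "Name"=value lines from a .reg export into a dict."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('"') and "=" in line:
            name, _, val = line.partition("=")
            values[name.strip('"')] = val
    return values

def diff_dumps(old_text, new_text):
    """Return {name: (old_value, new_value)} for values that changed."""
    old, new = parse_dump(old_text), parse_dump(new_text)
    return {k: (old.get(k), new.get(k))
            for k in old.keys() | new.keys()
            if old.get(k) != new.get(k)}

# Toy example with hypothetical value names:
before = '"PP_FeatureMask"=dword:ffffffff\n"DriverVersion"="17.11.2"'
after  = '"PP_FeatureMask"=dword:fffffffe\n"DriverVersion"="17.12.1"'
print(diff_dumps(before, after))
```

This won't decode packed binary blobs like a PowerPlay table, but it will at least show you which values moved between driver installs.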


----------



## Razkin

Quote:


> Originally Posted by *GroupB*
> 
> I tried the new driver and there something wrong about it, did a couple of superposition 4k and my previous boost into the 1700 mh is gone and down to 1660 with the same setting so I investigate...
> 
> I used to log the vddc, boostclock, watt, etc during the benchmark and if I refer to my 17.11.2 result the new driver is doing something with the voltage I set the same usual OC I always use and its boosting less clock and giving more watt even if I set the same mv.
> 
> What I see is the GPU VDDC reading is not stable anymore its jumping from 1070 anywhere to 1115 where it used to sit closer to say 1070mv and vary in the 10mv range, its look to me like they change the LLC setting if gpu have one to a more extreme one, the watt also is higher for the set MV in wattman that it used to be probably because of those higher GPU VDDC and the result is less clock cause I hit the watt throttle at 142% ( that will be close to 325-335 watt in hwinfo64).
> 
> I also can go lower in MV for the same set mhz, previous driver the game/benchmark will crash at the same setting,its what a more extreme LLC do.
> 
> That may explain also the v56 user that had a dead gpu mining if the LLC is more extreme meaning the vddc is more likely to spike in dangerous zone if you have a stock 1200 MV and power tune to start with.
> 
> Anyone see the same thing VDDC spike and more watt ?
> If there a setting for LLC in the registery, can someone that know what to look for can compare adrenaline and 17.2 value see if there a change?


I haven't logged and compared between drivers, but I did notice that in Superposition 1080p Extreme, using the same clock settings, I went from 5200 to 4500 with about 15MHz less actual core clock, and it uses about 40W more power.


----------



## Grummpy

https://radeon.com/_downloads/vega-whitepaper-11.6.17.pdf


----------



## tarot

I ran it using the AMD Link app... loving it... and noticed nothing out of the ordinary, except that the benchmark sucks: it doesn't use any of the CPU and is just a waste.

2 runs are HBCC on and 2 off.


----------



## Trender07

Anyone else using CoD WW2 for stability testing? IMO it's better than benches.


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> Anyone else using CoD WW2 for stability? IMO better than benchs


Also Overwatch. I'm using both; they're the best stability tests. I returned my Vega 64 LC (it was faulty) and bought a new one. Now I'm testing my tweaks. The old GPU's HBM was better, but the new GPU reaches better core clocks. Overwatch is better for HBM testing; CoD is better for core clock testing, I guess. Unstable HBM clocks result in black painting in Overwatch: the whole screen turns black but targets stay red, lol. It's like a hack.





The radiator fan is faulty again. Terrible quality control. No coil whine this time. I'm not sure if it's pump noise or coil whine, but I can hear a little bit of noise if I open my case.

The HBM is worse than the old V64's, but I'm getting better fps even at stock settings. Don't know why. I tried older drivers; I'm sure it's not driver optimizations. Same Windows, same environment.


----------



## gupsterg

Quote:


> Originally Posted by *gupsterg*
> 
> VRM temp is working well for me IMO.
> 
> 3DM FS
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Did do a 3rd run but forgot to grab screenie
> 
> 
> 
> 
> 
> 
> 
> , here is 2x runs without monitoring, IMO negligible difference with/without monitoring.
> 
> SP 4K
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Again negligible difference with/without monitoring.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Did also ~20min Wolfenstein 2, didn't note any performance issues from having monitoring.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> I'm using:-
> 
> VBIOS P/N: 113-D0500100-104
> VBIOS Version: 016.001.001.000.008769
> 
> (TPU DB link)
> 
> With PP reg I mod only:-
> 
> i) GPU DPM 5 1062mV, DPM 6: 1557MHz 1075mV, DPM 7: 1642MHz 1125mV
> ii) SOC DPM 5 to 7 1107MHz
> iii) HBM DPM 3 1100MHz
> 
> Set PowerLimit: 38% in WattMan.
> 
> I hope you keep a switch for users who have no issue with additional VEGA monitoring
> 
> 
> 
> 
> 
> 
> 
> , as always cheers Martin for your support
> 
> 
> 
> 
> 
> 
> 
> .
> Quote:
> 
> 
> 
> Originally Posted by *0451*
> 
> Can you post a GPU-Z monitoring of yoir in-game sensor readings?
Click to expand...

Wolfenstein 2 GPU-Z
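
For readers unfamiliar with the soft PowerPlay-table registry mod mentioned in the quote, the tweak boils down to overriding a handful of DPM state entries (frequency/voltage pairs). A purely illustrative model of the values quoted above; the dict layout is mine, not the real packed binary table format:

```python
# Illustrative model of the PP-table overrides quoted above.
# The structure is invented for readability; the actual registry value
# is a packed binary blob, not a dict.
dpm_overrides = {
    "gfx": {  # core clock/voltage states
        5: {"mV": 1062},
        6: {"MHz": 1557, "mV": 1075},
        7: {"MHz": 1642, "mV": 1125},
    },
    "soc": {dpm: {"MHz": 1107} for dpm in (5, 6, 7)},  # SOC clock states
    "hbm": {3: {"MHz": 1100}},                          # top HBM state
}

# The top gfx state (P7) is what the card boosts toward under load:
top = dpm_overrides["gfx"][7]
print(f'P7: {top["MHz"]} MHz @ {top["mV"]} mV')
```

Tools that apply this kind of mod write the equivalent values into the soft PowerPlay table; the point here is just that only a few states are being touched.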


----------



## barbz127

Anyone in the UK or AU want to sell a stock Vega cooler, including the shroud, backplate, etc.?

Looking to the few of you who have busted cards








Thankyou


----------



## cg4200

I was reading the review of the new Red Devil 56, and if you scroll to the bottom where it says "tweaking"...
It says you can only grab an extra 4% because AMD locked the HBM clocks??
That must be to avoid robbing Vega 64 sales. I wonder if you can still flash, or if they block that as well?
I love my 56s; one of them I game at 1100/1750, water cooled with the LC BIOS, so close to my LC 64 I almost can't tell the difference.
http://www.guru3d.com/articles_pages/powercolor_red_devil_vega_56_8gb_review,30.html


----------



## Naeem

What GPU is this on number 4?


----------



## Grummpy

I'm in 10th now.
I can beat that 3200 if I try.


----------



## VicsPC

Weird bug for me in BF1: playing Conquest, if I go near water my screen flickers and then just goes black, but if I look at the sky while running across the water, no issues. How odd.


----------



## kundica

Quote:


> Originally Posted by *os2wiz*
> 
> It has nothing to do with miners. If you keep up with tech news AMD has temporarily stopped producing reference cards and now all gpu chips are being shipped to AIB partners so they can make their own designed Vega 56 and 64 gpu cards.


Not tech news; it's all speculation and rumor. Visit any mining forum or the Monero mining subreddit and you'll see that Vega is very much being scooped up by miners. Average people are running 6 Vegas, which offers insane Monero hashrate, not to mention being fairly profitable right now. The cards can be tweaked to consume very little power too. Mining on my single Vega 64 gives me about a 2k hashrate and consumes 140W.
Quote:


> Originally Posted by *cg4200*
> 
> I was reading the review for the new red devil 56 and if you scroll to the bottom where it says tweaking ..
> It says you can only grab extra 4 % because amd locked hbm clocks ??
> That must be to not rob 64 sales wonder if you can still flash or if they block that as well ?
> I love my 56's one of them I game 1100 /1750 water cooled lc bios so close to my lc 64 almost can't tell difference ..
> hbnhttp://www.guru3d.com/articles_pages/powercolor_red_devil_vega_56_8gb_review,30.html


They might not know what they're talking about, or they meant to say voltage. As far as I know, all these cards will let you OC the memory.


----------



## punchmonster

Is there any risk in boosting SoC frequency?


----------



## Trender07

I still eventually get crashes in CoD WW2: "device gets removed".


----------



## Grummpy

Quote:


> Originally Posted by *punchmonster*
> 
> Is there any risk in boosting SoC frequency?


Stick to 100, but if you have to, don't go any higher than 103.
If you need better memory speed, lower the timings.


----------



## gupsterg

Quote:


> Originally Posted by *punchmonster*
> 
> Is there any risk in boosting SoC frequency?


I used 1199MHz with 1100MHz HBM for a fair few days; certain lengthy use cases had issues. Nuke33 had similar issues too. I now use 1107MHz with 1100MHz HBM.


----------



## fursko

Quote:


> Originally Posted by *Grummpy*
> 
> im in 10th now.
> i can beat that 3200 if i try


How did you get a 30407 score? I ran the benchmark with 1900/1250 clocks on a Vega 64 LC and I get 25k. Is it because of Adrenalin?


----------



## geriatricpollywog

Quote:


> Originally Posted by *fursko*
> 
> How did u get 30407 score. I run benchmark with 1900/1250 clocks vega 64 lc. I get 25k. Its because of adrenalin ?


1900/1250? LN2?


----------



## GroupB

I lost plenty of boost and got higher wattage with Adrenalin, so I went back to 17.11.2 to get my boost back, and it did not work... I'm still stuck with the Adrenalin boost behavior. With 17.11.2 I was able to do 1700 at a 1165mV setting, with P7 at 1722; now, on the same driver after downgrading from Adrenalin, I can't reach more than 1660 with the same settings. It's the same low boost and high watts I was seeing with Adrenalin. I don't get what the hell happened; I used DDU in safe mode as usual.

WTH has this %$#^ Adrenalin driver done to my Vega?

Anyone seeing the same? You downgrade and don't see the previous boost you had before Adrenalin?


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> 1900/1250? LN2?


Nope. It works without crashing in the LuxMark bench.


----------



## Naeem

Quote:


> Originally Posted by *fursko*
> 
> Nope. Works without crash in luxmark bench.


----------



## fursko

Quote:


> Originally Posted by *Naeem*


What is the trick ?


----------



## By-Tor

Tried Luxmark this morning at: Core 1751, memory 1220


----------



## fursko

Quote:


> Originally Posted by *By-Tor*
> 
> Tried Luxmark this morning at: Core 1751, memory 1220


Which driver ?


----------



## By-Tor

Quote:


> Originally Posted by *fursko*
> 
> Which driver ?


17.11.4


----------



## Naeem

I got 54000+ in one run; now I am number 2 on LuxMark :v


----------



## Trender07

CoD WW2 crashes -> DXGI_DEVICE_RESET. I have to restart the PC, and the GPU comes back with all the P7 LEDs on. I think it's because of HBCC.
(BTW, this is while recording with ReLive HEVC.)


----------



## fursko

Quote:


> Originally Posted by *Trender07*
> 
> CoD WW2 crashes -> DXGI_DEVICE_RESET. I have to restart the PC, and the GPU comes back with all the P7 LEDs on. I think it's because of HBCC.
> (BTW, this is while recording with ReLive HEVC.)


I heard that HBCC is causing crashes in CoD WW2.


----------



## Trender07

Quote:


> Originally Posted by *fursko*
> 
> I heard that hbcc causing crash in cod ww2.


I've just played with HBCC off and it still crashes. Now I'm going to try with FreeSync off, because I've tried everything else already.

I still have to do long sessions without recording with ReLive, in case that's the issue, because sometimes it doesn't crash in 2 hours and other times it crashes in 7 minutes.


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *By-Tor*
> 
> Tried Luxmark this morning at: Core 1751, memory 1220






Yeah, I don't get it; the best I'm pulling is 24k.
Is this Lux 3.1 from the site, with no changes, just run?
Could be my old Windows setup, I guess... in the RealBench 2.54 version I score pretty well compared to others, and as far as I know that is (apart from the image) the same test.

Oh, PS: regarding Adrenalin, I am really liking it so far, and using Chill works very nicely, especially in Diablo 3.


----------



## Naeem

Quote:


> Originally Posted by *tarot*
> 
> 
> Yeah, I don't get it; the best I'm pulling is 24k.
> Is this Lux 3.1 from the site, with no changes, just run?
> Could be my old Windows setup, I guess... in the RealBench 2.54 version I score pretty well compared to others, and as far as I know that is (apart from the image) the same test.
> 
> Oh, PS: regarding Adrenalin, I am really liking it so far, and using Chill works very nicely, especially in Diablo 3.


17.12.1 drops scores in many benchmarks, so I am using 17.11.4. Also, I have a Vega 64 Liquid.


----------



## 99belle99

What's happening with stock of these cards? I'm looking to get one, but they're out of stock everywhere. It's looking like there won't be any available till the new year.


----------



## diabetes

Does anyone else have microstuttering issues with Adrenalin on a single-GPU configuration? This seems to happen independent of the game, but does not occur in 2D applications or videos. Clean installation of Windows 10 FCU, AOC G2460PF 144Hz Freesync monitor.


----------



## geriatricpollywog

Quote:


> Originally Posted by *diabetes*
> 
> Does anyone else have microstuttering issues with Adrenalin on a single-GPU configuration? This seems to happen independent of the game, but does not occur in 2D applications or videos. Clean installation of Windows 10 FCU, AOC G2460PF 144Hz Freesync monitor.


Post a pic of GPUz sensor graphs.


----------



## pmc25

Toggling the performance overlay on in-game results in an immediate hard lock that is very difficult (nigh impossible) to recover the PC from (I've managed it 1 time in 10).

This happens regardless of the game, and regardless of my usual custom settings, or stock clocks and voltages.

Anyone else?

It's instant.


----------



## pengs

Quote:


> Originally Posted by *pmc25*
> 
> Toggling the performance overlay on in-game results in immediate hard lock up which is very difficult (nigh impossible) to regain control of the PC from (managed it 1 in 10 times).
> 
> This happens regardless of the game, and regardless of my usual custom settings, or stock clocks and voltages.
> 
> Anyone else?
> 
> It's instant.


I've had it stop the game (it looked like a crash), but I recovered by toggling it again. No hard locks or anything on these drivers.


----------



## webhito

Quote:


> Originally Posted by *99belle99*
> 
> What's happening with stock of these cards? I'm looking to get one, but they're out of stock everywhere. It's looking like there won't be any available till the new year.


Same thing that happened with Ethereum: miners are picking them up, and the fact that AMD is only shipping chips and not producing any cards has made getting stock very difficult. eBay has some, but I have seen prices of around $800 for a Vega 64 and $650 for a Vega 56.


----------



## rancor

Quote:


> Originally Posted by *pmc25*
> 
> Toggling the performance overlay on in-game results in immediate hard lock up which is very difficult (nigh impossible) to regain control of the PC from (managed it 1 in 10 times).
> 
> This happens regardless of the game, and regardless of my usual custom settings, or stock clocks and voltages.
> 
> Anyone else?
> 
> It's instant.


I'm getting hard locks when I try to adjust the memory clock, and also sometimes when launching GPU-Z, but that's not new. The overlay hasn't given me issues.


----------



## gupsterg

@0451

This HML file is better, IMO, at showing how my OC profile behaves for clocks, etc., in Wolfenstein 2: 1440p, Mein Leben!, 144 FPS cap.

Wolf2_V64_OC.zip 160k .zip file


@pmc25

So far I've not had an issue with the overlay from Adrenalin. On the latest HWiNFO, where GPU I2C is disabled, I still seem to get VRM temps.



Sometimes not though.


----------



## fursko

My new Vega 64 LC is probably giving false core clock readings. When I undervolt or overclock the core, I get lower FPS and lower scores but higher reported core clocks. Using auto voltage settings gives better results. This is weird. My default voltages are P6 = 1150, P7 = 1250; is this because of Adrenalin? My old Vega 64 LC on older drivers was P6 = 1150, P7 = 1200. The weird thing is everything looks worse than on my older Vega 64 LC, but I'm getting better FPS. This is crazy, but overclocking gives me fewer FPS than underclocking (yes, underclocking, not undervolting). Either Adrenalin or my Vega is broken, lol. I can't tweak the new Vega 64 LC, but I'm getting better or the same results as with my old tweaked one.

So I tried everything. My optimum settings: 0% power, auto voltages, and overclocked HBM; or +50% power, overclocked HBM, and P7 undervolted from 1250 to 1190. I watch scores, not clocks; the reported core clocks are wrong.


----------



## fursko

The stock LC fan is really terrible. What a noisy fan. How can I change it? Any advice on which screwdriver to use? I don't want to damage the screws or the GPU. Any experienced users?


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> @0451
> 
> This HML is better IMO on showing how my OC profile is for clocks, etc in Wolfenstein 2, 1440P Mein Leben! 144FPS cap.
> 
> Wolf2_V64_OC.zip 160k .zip file
> 
> 
> @pmc25
> 
> So far I've not had an issue with the overlay from Adrenalin. On the latest HWiNFO, where GPU I2C is disabled, I still seem to get VRM temps.
> 
> 
> 
> Sometimes not though.


How do you open that file?
Quote:


> Originally Posted by *fursko*
> 
> The stock LC fan is really terrible. What a noisy fan. How can I change it? Any advice on which screwdriver to use? I don't want to damage the screws or the GPU. Any experienced users?


It comes with a Gentle Typhoon, one of the quietest fans you can get.


----------



## geriatricpollywog

Duplicate


----------



## geriatricpollywog

Duplicate


----------



## gupsterg

@0451

MSI AB must be installed on the system to view an HML file. Once you unzip and double-click the HML file, a viewing window will open; click the text at the top, and below it you will see the monitoring data.



As you move the pointer over the graph, a line appears showing values; pressing Alt + left mouse button will place a marker, as in the image.


----------



## fursko

Quote:


> Originally Posted by *0451*
> 
> It comes with a Gentle Typhoon, one of the quietest fans you can get.


I bought 2x RX Vega 64 LC and both of them are noisy as hell, even at 500 RPM. Actually, 2000 RPM is better for noise: at 2000 RPM I'm just hearing air noise, not motor noise. I have Corsair ML fans that are dead silent.


----------



## diabetes

Quote:


> Originally Posted by *0451*
> 
> Post a pic of GPUz sensor graphs.


Never mind, I managed to diagnose the problem: AMD broke Freesync with this driver upgrade. I verified this with the "Windmill Demo". The "instant FPS" keeps bouncing from well below the target FPS to well above it, instead of being pinned at the set framerate like it should be. When Freesync is turned off, the FPS behaves as it should. It seems AMD was using Chill/FRTC for frametime smoothing with Freesync, and the changes made in Adrenalin broke that mechanism.

All of my games work perfectly fine without Freesync, but as soon as I turn it on again, the microstutters are back.


----------



## Naeem

Quote:


> Originally Posted by *diabetes*
> 
> Never mind, I managed to diagnose the problem: AMD broke Freesync with this driver upgrade. I verified this with the "Windmill Demo". The "instant FPS" keeps bouncing from well below the target FPS to well above it, instead of being pinned at the set framerate like it should be. When Freesync is turned off, the FPS behaves as it should. It seems AMD was using Chill/FRTC for frametime smoothing with Freesync, and the changes made in Adrenalin broke that mechanism.
> 
> All of my games work perfectly fine without Freesync, but as soon as I turn it on again, the microstutters are back.


Yes, Freesync is broken with 17.12.1; I had to go back to 17.11.4 as well.


----------



## fursko

Quote:


> Originally Posted by *diabetes*
> 
> Never mind, I managed to diagnose the problem: AMD broke Freesync with this driver upgrade. I verified this with the "Windmill Demo". The "instant FPS" keeps bouncing from well below the target FPS to well above it, instead of being pinned at the set framerate like it should be. When Freesync is turned off, the FPS behaves as it should. It seems AMD was using Chill/FRTC for frametime smoothing with Freesync, and the changes made in Adrenalin broke that mechanism.
> 
> All of my games work perfectly fine without Freesync, but as soon as I turn it on again, the microstutters are back.


Same here: when I enable Freesync, the Windmill demo is stuck at 16 FPS.


----------



## SavantStrike

Quote:


> Originally Posted by *diabetes*
> 
> Does anyone else have microstuttering issues with Adrenalin on a single-GPU configuration? This seems to happen independent of the game, but does not occur in 2D applications or videos. Clean installation of Windows 10 FCU, AOC G2460PF 144Hz Freesync monitor.


AMD stopped producing them so they could provide chips to AIBs. Supply was always limited, so it's a double punch. Miners aren't making it any easier either; even Polaris cards are becoming scarce(ish) again.

The Vega cards I've seen are all price-gouged. Best Buy and Newegg sell them for $700+ and they end up out of stock quickly.


----------



## diabetes

Quote:


> Originally Posted by *SavantStrike*
> 
> Quote:
> 
> 
> 
> Originally Posted by *diabetes*
> 
> ...
> 
> 
> 
> AMD stopped producing them so they could provide chips to AIBs. Supply was always limited, so it's a double punch. Miners aren't making it any easier either; even Polaris cards are becoming scarce(ish) again.
> 
> The Vega cards I've seen are all price-gouged. Best Buy and Newegg sell them for $700+ and they end up out of stock quickly.
Click to expand...

I don't know what this has to do with my post, but OK.


----------



## TwilightRavens

I’m gonna get one as soon as they aren’t stupidly priced lol


----------



## Grummpy

Quote:


> Originally Posted by *fursko*
> 
> I bought 2x RX Vega 64 LC and both of them are noisy as hell, even at 500 RPM. Actually, 2000 RPM is better for noise: at 2000 RPM I'm just hearing air noise, not motor noise. I have Corsair ML fans that are dead silent.


My LC vega 64 is dead silent.


----------



## LionS7

Quote:


> Originally Posted by *fursko*
> 
> I bought 2x RX Vega 64 LC and both of them are noisy as hell, even at 500 RPM. Actually, 2000 RPM is better for noise: at 2000 RPM I'm just hearing air noise, not motor noise. I have Corsair ML fans that are dead silent.


Yeah, it was like that on the R9 Fury X too. I'm running mine at a constant 1950 RPM. I think the fan is the same.


----------



## Trender07

Could anyone else who has Call of Duty WW2 try playing it for like 10-15 minutes in-game while recording with ReLive? It just keeps crashing with whatever volts you set, HBCC on or off; I also tried Balanced mode, and it still crashes my Vega and I have to restart the PC.


----------



## Rei86

Quote:


> Originally Posted by *webhito*
> 
> Same thing that happened with Ethereum: miners are picking them up, and the fact that AMD is only shipping chips and not producing any cards has made getting stock very difficult. eBay has some, but I have seen prices of around $800 for a Vega 64 and $650 for a Vega 56.


The average going rate for used and new ones seems to be around 800~1000 bucks for a 64, and even some 56s are selling for near that limit on eBay, mind you.

I still have mine sealed, and it makes me wonder if I should just unload it too.


----------



## Gambit2K

Quick question, can anybody that has one or two Vegas (any version) in a custom loop tell me what kind of temps they are getting. I suspect mine are running very hot but can't really find any sources to compare to.


----------



## SavantStrike

Quote:


> Originally Posted by *Gambit2K*
> 
> Quick question, can anybody that has one or two Vegas (any version) in a custom loop tell me what kind of temps they are getting. I suspect mine are running very hot but can't really find any sources to compare to.


Give me a week and I'll have an answer for you


----------



## Rei86

Quote:


> Originally Posted by *Gambit2K*
> 
> Quick question, can anybody that has one or two Vegas (any version) in a custom loop tell me what kind of temps they are getting. I suspect mine are running very hot but can't really find any sources to compare to.


I would love to answer that question, but the EKWB block I ordered on Black Friday from Performance PCs is still on back order.


----------



## SavantStrike

Quote:


> Originally Posted by *Rei86*
> 
> I would love to answer that question, but the EKWB block I ordered on Black Friday from Performance PCs is still on back order.


One of my Bykski blocks is stuck in China. Royal pain in the posterior. Nothing to do but wait for Vega.. Again...


----------



## allenwr1505

Quote:


> Originally Posted by *Gambit2K*
> 
> Quick question, can anybody that has one or two Vegas (any version) in a custom loop tell me what kind of temps they are getting. I suspect mine are running very hot but can't really find any sources to compare to.


I have four Vega 64s in a water loop with EKWB blocks and backplates. My temps are about 33°C GPU, 36°C hotspot, and 49°C HBM. The cores are undervolted to between 0.85-0.865 V and running at a 1000 MHz core clock with the HBM at 1100 MHz.


----------



## VicsPC

Quote:


> Originally Posted by *allenwr1505*
> 
> I have four Vega 64s in a water loop with EKWB blocks and backplates. My temps are about 33°C GPU, 36°C hotspot, and 49°C HBM. The cores are undervolted to between 0.85-0.865 V and running at a 1000 MHz core clock with the HBM at 1100 MHz.


Quite a huge difference between core and HBM there. Guessing it's a mining rig, since the cores are so undervolted, lol.


----------



## Grummpy

The latest driver makes my system shut down without warning when running games with Eyefinity mixed mode.
Doom runs OK, but BF1, Alien Isolation, and Assassin's Creed Origins make the system shut down instantly.
I had to run Windows repair at one point and reinstall the game from scratch.
This kind of fault is bad; I could have lost my system install due to bad drivers.
That's kind of disgusting, actually.

Two 19" Dells and a single 32" Samsung.
It works amazingly usually, but not today.
So much for being certified...
I'm having to roll back the driver if I want to game over Eyefinity with a three-screen setup.


----------



## LeadbyFaith21

Quote:


> Originally Posted by *Gambit2K*
> 
> Quick question, can anybody that has one or two Vegas (any version) in a custom loop tell me what kind of temps they are getting. I suspect mine are running very hot but can't really find any sources to compare to.


I've got the 64 with an EK block and the special-edition stock backplate, using a slim 360 EK rad and another slim 360 rad (can't remember the brand), with Corsair MagLev fans at around 1200-1400 RPM inside an In Win 305. Ambient is around 24°C, and at stock I'm getting around 45-50°C on the core depending on the game, and around 70-80°C on the hotspot.


----------



## AlphaC

http://blog.livedoor.jp/wisteriear/archives/1068809121.html




BIOS settings - clockspeeds




Power consumption


----------



## Almost Heathen

Anyone running a Vega GPU on Sandy/Ivy Bridge? I've read that some people are having problems like BSODs from an undetermined issue.

Would appreciate hearing about your experiences with your Vega on Sandy/Ivy, right here, or here: http://www.overclock.net/t/1644060/issues-with-my-build#post_26501756

Thank you much.


----------



## TwilightRavens

Quote:


> Originally Posted by *Almost Heathen*
> 
> Anyone running a Vega GPU on Sandy/Ivy Bridge? I've read that some people are having problems like BSODs from an undetermined issue.
> 
> Would appreciate hearing about your experiences with your Vega on Sandy/Ivy, right here, or here: http://www.overclock.net/t/1644060/issues-with-my-build#post_26501756
> 
> Thank you much.


I have a friend who has a Vega 64 LC with a 3770K, and so far he has had no issues.


----------



## Newbie2009

Quote:


> Originally Posted by *Almost Heathen*
> 
> Anyone running a Vega GPU on Sandy/Ivy Bridge? I've read that some people are having problems like BSODs from an undetermined issue.
> 
> Would appreciate hearing about your experiences with your Vega on Sandy/Ivy, right here, or here: http://www.overclock.net/t/1644060/issues-with-my-build#post_26501756
> 
> Thank you much.


No issues here, 3770k


----------



## Gambit2K

Quote:


> Originally Posted by *LeadbyFaith21*
> 
> I've got the 64 with an EK block and the special-edition stock backplate, using a slim 360 EK rad and another slim 360 rad (can't remember the brand), with Corsair MagLev fans at around 1200-1400 RPM inside an In Win 305. Ambient is around 24°C, and at stock I'm getting around 45-50°C on the core depending on the game, and around 70-80°C on the hotspot.


Thanks for the info. I have a similar setup with two EK 360 PE rads and two Vega 64 cards, and I am getting horrible temps of up to 68-70°C (core) during mining (around 60°C while gaming). So I think I might need to remove the blocks, remount them, and see if I can improve the loop in other ways.


----------



## TwilightRavens

Any idea when Vega pricing will stabilize?


----------



## GroupB

Quote:


> Originally Posted by *Gambit2K*
> 
> Thanks for the info. I have a similar setup with two EK 360 PE rads and two Vega 64 cards, and I am getting horrible temps of up to 68-70°C (core) during mining (around 60°C while gaming). So I think I might need to remove the blocks, remount them, and see if I can improve the loop in other ways.


Both of you guys should redo the paste; those are horrible temps! Those are not proper water-cooling temperatures. The proper range would be around 30-35 core, 40-50 HBM, and 50-60 hotspot for one card in the loop. For two cards, going by my experience with my previous CrossFire R9 setup, I'd say add 10-15°C at most to the temps I listed for 2x 360 rads to be considered normal.

The same setup with 280 + 420 XSPC EX rads gave me 30 core, 42 HBM, 48 hotspot for one Vega (full-load benchmark), and when I had two R9 290Xs on the same setup (and I'm pretty sure two 290Xs produce more heat than two Vegas, since I had them at 1220+ MHz and 1.3 V+), it gave me a core around 45°C and VRMs at 55-60°C for both.

BTW, your temps while mining should be LOWER than while gaming/benchmarking; redo your setup, something is wrong. You can try this for ~1900 hash: lock P6 (min/max) at 1100 MHz, which will reduce the SOC clock to 1028 MHz, then put the HBM at 1025. Less heat, less load on the HBM and SOC (you don't want to degrade them with mining), on the blockchain driver of course. If you're using amd stak, then two threads, one at 1920 and the other at 1620; this setup works wonders for me and I don't feel like I'm wearing the HBM away. Memory degradation from mining is real: my R9 290/390s and RX 580s both degraded over time, and both took three months of mining to start degrading. Since, like you, I have a waterblock, and if I RMA I won't get a reference card back, I try to preserve the HBM the best I can.
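For reference, the two-thread setup described above might look roughly like this in xmr-stak's `amd.txt` (assuming "amd stak" refers to xmr-stak's AMD backend; the miner accepts relaxed JSON with comments, but field names and layout vary by version, so treat this as a sketch rather than a drop-in config):

```json
// Hypothetical xmr-stak amd.txt fragment: two GPU threads on one Vega
// (device index 0), with the 1920/1620 intensity split suggested above.
// "worksize" and "affine_to_cpu" are placeholder values to tune per card.
"gpu_threads_conf" : [
  { "index" : 0, "intensity" : 1920, "worksize" : 8, "affine_to_cpu" : false },
  { "index" : 0, "intensity" : 1620, "worksize" : 8, "affine_to_cpu" : false },
],
```

The asymmetric intensities are the point: two unequal threads keep the compute units busy without pushing the HBM as hard as one maximum-intensity thread would.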


----------



## VicsPC

Quote:


> Originally Posted by *GroupB*
> 
> Both of you guys should redo the paste; those are horrible temps! Those are not proper water-cooling temperatures. The proper range would be around 30-35 core, 40-50 HBM, and 50-60 hotspot for one card in the loop. For two cards, going by my experience with my previous CrossFire R9 setup, I'd say add 10-15°C at most to the temps I listed for 2x 360 rads to be considered normal.
> 
> The same setup with 280 + 420 XSPC EX rads gave me 30 core, 42 HBM, 48 hotspot for one Vega (full-load benchmark), and when I had two R9 290Xs on the same setup (and I'm pretty sure two 290Xs produce more heat than two Vegas, since I had them at 1220+ MHz and 1.3 V+), it gave me a core around 45°C and VRMs at 55-60°C for both.
> 
> BTW, your temps while mining should be LOWER than while gaming/benchmarking; redo your setup, something is wrong. You can try this for ~1900 hash: lock P6 (min/max) at 1100 MHz, which will reduce the SOC clock to 1028 MHz, then put the HBM at 1025. Less heat, less load on the HBM and SOC (you don't want to degrade them with mining), on the blockchain driver of course. If you're using amd stak, then two threads, one at 1920 and the other at 1620; this setup works wonders for me and I don't feel like I'm wearing the HBM away. Memory degradation from mining is real: my R9 290/390s and RX 580s both degraded over time, and both took three months of mining to start degrading. Since, like you, I have a waterblock, and if I RMA I won't get a reference card back, I try to preserve the HBM the best I can.


Yeah, except that's not exactly true. You need to know what they have in the system, what else is in the loop, the case, ambient temps, and so on. I have a 360 and a 240, both in push/pull. My core will never be 30-35°C; it regularly reaches 39°C with an ambient of around 20-21°C, and I also have a 1700X in my loop. I do, however, have better hotspot temps than most people. If you have a core of 30 and HBM of 40, something is already off there; I get a difference of about 3°C between core and HBM, though it varies per game. Redoing the paste won't matter in the least unless it was done poorly in the first place.


----------



## Razkin

Some more LC temps.

One 360x60mm rad with three mediocre fans in a pull config tuned for silence, cooling an i7 4790 and a Vega 64 while pulling ~520 W from the wall at around 20 degrees ambient: core 49, HBM 58, hotspot 70. I can easily drop the temps a bit by running the fans harder. This is after hours of playing Fallout 4 at 2160p, which has my Vega running at the edge of its capabilities most of the time.

My setup is far from ideal, so those with two rads should not get higher temps than mine, unless they have 25°C+ ambient temperatures.


----------



## VicsPC

Quote:


> Originally Posted by *Razkin*
> 
> Some more LC temps.
> 
> One 360x60mm rad with 3 mediocre fans in pull config tuned for silence, cooling an i7 4790 and Vega 64 while pulling ~520 power out of the wall at around 20 degrees ambient. Core 49, HBM 58, Hot spot 70. I can easily drop the temps a bit by running the fans harder. This is after hours of playing Fallout 4 at 2160p which has my Vega running at the edge of its capabilities most of the time.
> 
> My setup is far from ideal, so those with two rads should not get higher temps than my setup is able to, unless they have 25+ ambient temperatures.


One rad for two components isn't going to give you great temps. Playing Siege (which maxes out my GPU and gets my CPU up to 60-70%), I get around 39°C core, 42°C HBM, and a hotspot of around 51°C, and that's with a 1700X at 3.8 GHz in the loop as well. So I'd say your temps are fine; the hotspot is a bit hot, but with little case airflow that's to be expected.


----------



## GroupB

Quote:


> Originally Posted by *VicsPC*
> 
> Yea except that's not exactly true. You need to know what they have in the system, what else is in the loop, the case, ambient temps and so on. I have a 360/240 both in push/pull. My core will never be 30-35°C it regularly reaches 39°C with an ambient of around 20-21°C. I also have a 1700x in my loop. I do however have better hotspot temps then most people, but if you have a core of 30 and HBM of 40 something is already wrong there. I get a difference of 3°C between core and HBM so its a per game basis, redoing the paste won't matter in the least unless it was done poorly in the first place.


Yeah, I forgot to mention I have an i7 6700K @ 4.9 on the same loop as the Vega; the 420 is push/pull with 900 RPM fans, the 280 push-only at 900 RPM. But I had a 1090T at 4.3 GHz with 1.56 V for the longest time with the two R9s on the same rad setup, and temps were still 45°C core and 55-60°C VRM for both R9s. Mainstream CPUs today don't pull that many watts versus old workhorses like the 1090T; Intel or Ryzen doesn't matter much if you have a 360 plus something else, especially mainstream Intel, since the TIM is bad and the CPU's heat has a hard time reaching the loop.

About your 3°C between HBM and core: I guess you are lucky. For most of us with EK blocks it's more like a 10°C difference if you have 1100 HBM, boost in the 1700 range, and give it 142% powertune; that's what the majority of users were reporting at launch, when we first started putting blocks on our cards.


----------



## VicsPC

Quote:


> Originally Posted by *GroupB*
> 
> yah forgot to mention I have a i7 6700k @ 4.9 on the same loop with the vega the 420 is push pull 900 rpm fan, the 280 only push 900 rpm, but I had a 1090T at 4.3 with 1.56v for the longest time with the 2 r9 and same rad setup and temp were still 45C ,55-60vrm for both r9 , mainstream cpu today do not pull that much watt vs those old workhorse like 1090T , intel or rysen it dont matter much if you have a 360 + something else, specially mainstream intel since the paste is bad and the heat of the cpu have a hard time reaching the loop.
> 
> About your 3c between hbm and core I guess you are lucky, must of us with EK block its more a 10C diff if you have 1100 hbm and boost in the 1700 range and give it 142 % powertune, that what the majority of user were saying at launch went we first start putting the block on our card.


Yeah, I haven't messed with an HBM OC yet, but I wouldn't give it a 142% power tune either; from what I've read, past 50% it doesn't make any difference. I've yet to try it, but if anything I'll do a mild OC and see what I can get on stock voltages, maybe only +25%. If your die is unmolded, though, that could easily be why there's a big difference between the two. Don't forget: the more paste you use, the LESS it works.


----------



## GroupB

Quote:


> Originally Posted by *VicsPC*
> 
> Yea i haven't messed with HBM OC yet but i wouldn't give it a 142% tune either, from what I've read after 50% it doesn't make any difference, but I've yet to try it but if anything ill do a mild OC and see what i can get on stock voltages, might only do 25%. If your die is unmolded though that could easily be why there's a big difference between the two. Don't forget, the more paste you use the LESS it works.


Nope, mine is molded, and I use IC7 paste.


----------



## Tyrael

Hey guys,

I need some advice. I modded my RX Vega 56 with a Morpheus II, and now I have issues in fullscreen applications. After a while, they all just crash without an error; there are no entries in the event log. I suspected too-high temperatures, but the HBM and GPU stay at 40°C. For example, PUBG just stops, or in Battlefront 2 the screen looks burned in, and after some movement it just closes.
Next I suspected drivers; I uninstalled and reinstalled them, but it's still the same issue.

What could I have done wrong? Where should I look?

Yours

Tyrael


----------



## 99belle99

Quote:


> Originally Posted by *TwilightRavens*
> 
> I'm gonna get one as soon as they aren't stupidly priced lol


Me too, but I'm stuck in a bad position, as I just sold my Fury X thinking I would simply pick up a Vega 64; little did I know how scarce they are and how overpriced the ones in stock are. OCUK have some in stock for £800, which is ridiculous.

I even got my hopes up when I came across a local ad near me with reference 64s for sale at €500, but they were sold and the seller never took down the ad. Typical.


----------



## SavantStrike

Quote:


> Originally Posted by *99belle99*
> 
> Me too, but I'm stuck in a bad position, as I just sold my Fury X thinking I would simply pick up a Vega 64; little did I know how scarce they are and how overpriced the ones in stock are. OCUK have some in stock for £800, which is ridiculous.
> 
> I even got my hopes up when I came across a local ad near me with reference 64s for sale at €500, but they were sold and the seller never took down the ad. Typical.


Temporary Polaris card? Green team? Sell a kidney?

Options aren't plentiful.


----------



## ontariotl

Quote:


> Originally Posted by *Almost Heathen*
> 
> Anyone running a Vega GPU on Sandy/Ivy Bridge? I've read that some people are having problems like BSODs from an undetermined issue.
> 
> Would appreciate hearing about your experiences with your Vega on Sandy/Ivy, right here, or here: http://www.overclock.net/t/1644060/issues-with-my-build#post_26501756
> 
> Thank you much.


I forget what the other member's CPU was, but it was a PCI-E 2.0-spec platform and his Vega 64 wasn't working right. As soon as he put it in his buddy's system, which was PCI-E 3.0 compliant, it ran fine.

As I suggested to that member before he tested in his buddy's PC, it's more than likely that Vega does not like PCI-E 2.0, so Sandy Bridge and earlier architectures might have issues with a Vega installed. Ivy Bridge shouldn't have many issues, as it was PCI-E 3.0 compliant.


----------



## 99belle99

Quote:


> Originally Posted by *SavantStrike*
> 
> Temporary Polaris card? Green team? Sell a kidney?
> 
> Options aren't plentiful.


I was thinking of a Polaris card, but then I came to my senses, as they cost as much as what I sold my Fury X for. I would never buy a card from Nvidia, and I think I will keep my kidneys.

I'm not really stuck, as I bought a really cheap old AMD card to keep my PC running. I won't be able to play games, but I have my Xbox One X to keep me going anyway.


----------



## webhito

All of a sudden I am now getting a "Radeon host has stopped working" error on 17.11.4. Has anyone else run into this issue?


----------



## pmc25

Quote:


> Originally Posted by *TwilightRavens*
> 
> Any idea when Vega pricing will stabilize?


I think it will go quite a bit higher still.

It's a compute/crypto/hashing monster, and people are back on the cryptocurrency bandwagon.

It demolishes the $3000 Titan Volta in a number of scenarios.


----------



## TwilightRavens

Quote:


> Originally Posted by *pmc25*
> 
> I think it will go quite a bit higher still.
> 
> It's a compute / crypto / hashing monster, and people are back on cryptocurrency bandwagon.
> 
> It demolishes the $3000 Titan Volta in a number of scenarios.


Sad, too, because I really want one; maybe I'll just hold out with my 290X till Navi is closer. I personally will not buy from Nvidia, so the only thing to do is wait.


----------



## ontariotl

Quote:


> Originally Posted by *webhito*
> 
> All of a sudden I am now getting a "Radeon host has stopped working" error on 17.11.4. Has anyone else run into this issue?


I get that issue when an overclock fails or a game crashes and I then go check the options in Crimson. I just close it and reload it again.


----------



## webhito

Quote:


> Originally Posted by *ontariotl*
> 
> I get that issue when an overclock fails or a game crashes and I then go check the options in Crimson. I just close it and reload it again.


It happened once while playing, and then again while just fiddling in the settings.

Now even 3DMark is freezing up / hard locking on me while loading.


----------



## jmoonb

Quote:


> Originally Posted by *Tyrael*
> 
> Hey guys,
> 
> I need some advice. I modded my RX Vega 56 with a Morpheus II, and now I have issues in fullscreen applications. After a while, they all just crash without an error; there are no entries in the event log. I suspected too-high temperatures, but the HBM and GPU stay at 40°C. For example, PUBG just stops, or in Battlefront 2 the screen looks burned in, and after some movement it just closes.
> Next I suspected drivers; I uninstalled and reinstalled them, but it's still the same issue.
> 
> What could I have done wrong? Where should I look?
> 
> Yours
> 
> Tyrael


What are your VRM temps like? It sounds like they may be getting a bit too hot, which is common with Morpheus mods due to the tiny VRM heatsinks.


----------



## ontariotl

Quote:


> Originally Posted by *webhito*
> 
> Happened while playing once and then just fiddling in the settings.
> 
> Now even 3dmark is freezing up/hard locking on me while loading.


Sorry, I don't want to check past posts, but is your card flashed to the liquid or the default BIOS? Is it overclocked at all?


----------



## tarot

I added a 140 rad to my 280 with a little Cougar fan, 2 Vardars at 1600 on the 280, and ambient temps around 35 degrees (yes, with aircon on). After an hour or so of D3 (don't laugh, it pushes hard in 4K) I was getting temps around 60 on RAM and GPU and 80 maximum on the hotspot, so I decided Chill was for me. Now it drops to a top of 50 and 70 and loses about 10 fps, but it's buttery smooth.

I do not think in my case it is the mount, because the air and tubes are hot, and I mean hot, so the heat is getting out. My thought is the weak pump, only pushing 2300 rpm, whereas my overclocked Threadripper with a 420 and a 4300 rpm pump at full blast is handled just fine.

So maybe a faster pump would help? I really don't want to drop another 150 bucks if the general opinion is it won't help (oh, and the GPU is the only thing in that loop; the Threadripper is on its own).


----------



## pengs

Quote:


> Originally Posted by *Almost Heathen*
> 
> Anyone with a Vega GPU on Sandy/Ivy Bridge? I've read some people are having problems like BSOD from an undetermined issue.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Would appreciate hearing about your experiences with your Vega on Sandy/Ivy, right here, or here: http://www.overclock.net/t/1644060/issues-with-my-build#post_26501756
> 
> Thank you much.


3770K, no problems.


----------



## webhito

Quote:


> Originally Posted by *ontariotl*
> 
> Sorry I don't want to check past posts, but is your card flashed to liquid or default bios? Is it overclocked at all?


Nope, stock, not flashed at all.

One thing though: I sold it to a miner and the guy said he was not able to get them working in Crossfire, so he gave them back. I never checked the BIOS version before, but one is reporting
016.001.001.000.008776, which is not recognized on the TPU website.

I wonder if maybe the guy flashed them and messed them up. No hardware swap was done though.


----------



## Almost Heathen

Quote:


> Originally Posted by *ontariotl*
> 
> I forget what another members CPU was but it was on a PCI-E 2.0 spec'ed version and his Vega64 wasn't working right. As soon as he put it in his buddies system which was PCI-E 3.0 compliant it ran fine.
> 
> Same as I suggested to that member before he tested in his buddies PC, it's more than likely Vega does not like PCI-E 2.0. So with Sandybridge and past architecture might bring issues with installing Vega. Ivy bridge shouldn't have many issues as it was PCI-E 3.0 compliant.


+Rep all around for sharing.

That makes sense. Though I always thought PCIE 3.0 was backwards compatible.

I believe Sandy Bridge is not capable of PCIE 3.0 (even if the board is capable). So if anyone has Vega on Sandy Bridge, please share right here or here: http://www.overclock.net/t/1644060/issues-with-my-build . I appreciate it.


----------



## ontariotl

Quote:


> Originally Posted by *Almost Heathen*
> 
> +Rep all around for sharing.
> 
> That makes sense. Though I always thought PCIE 3.0 was backwards compatible.
> 
> I believe Sandy Bridge is not capable of PCIE 3.0 (even if the board is capable). So if anyone has Vega on Sandy Bridge, please share right here or here: http://www.overclock.net/t/1644060/issues-with-my-build . I appreciate it.


AMD cards are unfortunately not as backwards compatible as Nvidia's. Sandy Bridge is PCI-E 2.0 only; the motherboard would only run at 3.0 with an Ivy Bridge CPU installed, since the PCI-E controller is on the CPU die.
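For reference, the bandwidth gap between the two generations can be worked out from the spec rates. A minimal sketch (the transfer rates and encoding overheads are the published PCIe figures; the function name is just for illustration):

```python
# Effective PCIe bandwidth: raw transfer rate x encoding efficiency.
# PCIe 2.0 uses 8b/10b encoding (20% overhead), 3.0 uses 128b/130b.
GENS = {
    "2.0": (5.0, 8 / 10),     # 5 GT/s per lane
    "3.0": (8.0, 128 / 130),  # 8 GT/s per lane
}

def x16_bandwidth_gbps(gen: str) -> float:
    """Usable one-direction bandwidth of an x16 link in GB/s."""
    gtps, eff = GENS[gen]
    # Each transfer moves 1 bit per lane; divide by 8 for bytes.
    return gtps * eff * 16 / 8

print(f"PCIe 2.0 x16: {x16_bandwidth_gbps('2.0'):.1f} GB/s")  # 8.0 GB/s
print(f"PCIe 3.0 x16: {x16_bandwidth_gbps('3.0'):.1f} GB/s")  # ~15.8 GB/s
```

So a 2.0 x16 slot still offers about half the bandwidth of 3.0; games rarely saturate either, which fits the reports in this thread of Vega running fine on Sandy Bridge whenever the card initialises at all.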


----------



## ontariotl

Quote:


> Originally Posted by *webhito*
> 
> Nope, stock, not flashed at all.
> 
> One thing though, I sold it to a miner and the guy said he was not able to get them working in crossfire so he gave them back. Never checked bios version before but one is reporting to be
> 016.001.001.000.008776 which says is not recognized in the tpu website.
> 
> I wonder if maybe the guy flashed them and messed them up. No hardware swap was done though.


Sounds like he could have buggered it up on you and returned it for that reason.


----------



## raysheri

I haven't had any problems running Polaris or Vega on Sandy Bridge - i7 @ 4.6 on an Asus Maximus IV Extreme - although the motherboard BIOS does not appear to identify Vega.
But, as I said, it has not presented any issues; Win 7 and 10 both detect the cards, and I have also flashed Vega numerous times and overclocked it to the max as well.
Having said that, my motherboard was the top-of-the-range P67.
I keep all my drivers updated with "Driver Genius" and got a new chipset driver from Intel only a couple of weeks ago.


----------



## webhito

Quote:


> Originally Posted by *ontariotl*
> 
> Sounds like he could have buggered it up on you and returned it for that reason.


Yeah, wondering the same. Think a flash might fix the issue, or should I just call it a loss? It works fine as long as I don't bench on it, as far as I have tested so far.


----------



## ontariotl

Quote:


> Originally Posted by *raysheri*
> 
> I haven't had any problems running Polaris or Vega on Sandy Bridge - i7 [email protected] 4.6 on asus maximus IV extreme, although the MB bios does not appear to identify Vega.
> But, as I said, it has not presented any issues and Win 7 and 10 both detect the cards and I have also flashed Vega numerous times and overclocked it to the max as well.
> Having said that, my motherboard was the top of the range P67.


That's good to know. You are probably one of the few (on here at least) that has been able to get Vega to run on PCI-E 2.0.

Quote:


> Originally Posted by *webhito*
> 
> Yea, wondering the same, think that a flash might fix the issue or should I just call it a loss? It works fine as long as I dont bench on it as far as I have tested so far.


Wouldn't harm to try flashing it. Or have you tried the secondary BIOS to see how the card reacts with that?


----------



## webhito

Quote:


> Originally Posted by *ontariotl*
> 
> That's good to know. You are probably one of the few (on here at least) that has been able to get Vega to run on PCI-E 2.0.
> Wouldn't harm to try and flash it. Or have you even tried the secondary bios and see how the card reacts with that?


Nice catch, I thought that second switch was just some sort of boost, never bothered even moving it. Will give it a try later today and see if that does anything.

Cheers!


----------



## Almost Heathen

Quote:


> Originally Posted by *ontariotl*
> 
> AMD cards are not as backwards compatible like Nvidia unfortunately. Sandy Bridge is PCI-E 2.0 only, only the motherboard would allow 3.0 when Ivy Bridge was installed. The PCI-E architecture is on the cpu die.


Thank you for clarifying. And congrats on the well deserved flame.








Quote:


> Originally Posted by *raysheri*
> 
> I haven't had any problems running Polaris or Vega on Sandy Bridge - i7 [email protected] 4.6 on asus maximus IV extreme, although the MB bios does not appear to identify Vega.
> But, as I said, it has not presented any issues and Win 7 and 10 both detect the cards and I have also flashed Vega numerous times and overclocked it to the max as well.
> Having said that, my motherboard was the top of the range P67.
> I keep all my drivers updated with "Driver Genius" and had a new chipset driver from Intel only a couple of wks ago.


Thank you for sharing that. I'm glad to see someone on Sandy using Vega without issue. Way underrated GPUs IMO.


----------



## SavantStrike

Quote:


> Originally Posted by *Almost Heathen*
> 
> +Rep all around for sharing.
> 
> That makes sense. Though I always thought PCIE 3.0 was backwards compatible.
> 
> I believe Sandy Bridge is not capable of PCIE 3.0 (even if the board is capable). So if anyone has Vega on Sandy Bridge, please share right here or here: http://www.overclock.net/t/1644060/issues-with-my-build . I appreciate it.


I'm building a Sandy Bridge-E test bench next week. If you'd like, I could throw my Vega in it to see how it fares before I use the rig for its intended purpose.


----------



## Almost Heathen

Quote:


> Originally Posted by *SavantStrike*
> 
> I'm building a sandy bridge e test bench next week. If you'd like I could throw my Vega in it to see how it fares before I use the rig for it's intended purpose.


If it's no trouble, that would be great. Thank you.


----------



## ontariotl

New Drivers are out!

https://videocardz.com/driver/amd-radeon-adrenalin-edition-17-12-2-beta

They finally fixed the black screen with freesync enabled on the Samsung CF791. I wonder whether Freesync itself works, though, since some mentioned it didn't with 17.12.1. I didn't realise that until I tested it as well.


----------



## Grummpy

Quote:


> Originally Posted by *ontariotl*
> 
> New Drivers are out!
> 
> https://videocardz.com/driver/amd-radeon-adrenalin-edition-17-12-2-beta
> 
> They finally fixed the black screen with freesync enabled on the Samsung CF791. I wonder if Freesync works however since some mentioned that it didn't work with 17.12.1. I didn't realise it until I tested it as well.


Thx nice one for the heads up.


----------



## ontariotl

Quote:


> Originally Posted by *Grummpy*
> 
> Thx nice one for the heads up.


No problem. If you have freesync, can you test it for me as I don't want to jump into another driver until it's confirmed. It gets tiring changing from driver to driver too often.


----------



## GroupB

BTW, Vega runs fine on PCI-E 2.0, but you have to stick with Windows 7.


----------



## raysheri

Quote:


> Originally Posted by *GroupB*
> 
> BTW vega run fine on pci-e 2.0 but you have to stick on windows 7


As I posted earlier, Vega runs fine on pcie 2.0 on Win 10 as well.


----------



## VicsPC

Quote:


> Originally Posted by *ontariotl*
> 
> No problem. If you have freesync, can you test it for me as I don't want to jump into another driver until it's confirmed. It gets tiring changing from driver to driver too often.


I have freesync as well but haven't jumped onto 17.12.x yet, as people have said freesync is broken. I did notice that with some past drivers I had to restart my PC twice after an installation to get freesync to work, which was weird; that hasn't been the case lately, though.

I usually run the uninstall utility and a CCleaner registry clean, then restart with my ethernet cable unplugged (so Windows doesn't try to update over the net, and so Bitdefender doesn't cause install issues; I've tried this method and have had no issues installing drivers), then I install, restart, and plug it back in. Seems tedious, but I've had zero issues, so I'm sticking to it.


----------



## punchmonster

I hate going to Reddit to give feedback forcing me to interact with soybrains angry that their massively parallel processor has other uses than drawing triangles.

I can't mention the compute performance on Adrenalin drivers not being optimized without 10 chuds crying about mining when I don't own a mining rig.

Compute performance is simply left on the table in the latest driver and it's silly.


----------



## ontariotl

Quote:


> Originally Posted by *raysheri*
> 
> As I posted earlier, Vega runs fine on pcie 2.0 on Win 10 as well.


I wonder if it works on 2.0 only with certain motherboards. Possibly differences between P67, Z68, or Z77 boards with Sandy Bridge. Would be interesting to know, since some are having issues while others are not.


----------



## ontariotl

Quote:


> Originally Posted by *VicsPC*
> 
> I have freesync as well but haven't jumped into 17.12.x yet as people have said freesync is broken. I did notice though that with some past drivers i had to restart my pc twice after an installation to get freesync to work which was weird, hasn't been the case lately though.
> 
> I usually use uninstall utility, ccleaner registry clean, then restart with my ethernet cable unplugged (so windows doesnt try to update over the net, and so bitdefender doesnt cause install issues, ive tried this method and have had no issues installing drivers) then i install and restart and plug it back in. Seems tedious but I've had zero issues so im sticking to it.


Yeah that seems tedious to make drivers work. AMD drivers are so finicky. For sure, if it works for you keep doing it that way.

I tested the windmill demo on 17.11.4 and I still had issues with freesync. As soon as I disabled my second monitor, it worked fine. I'll have to test this with the latest drivers to see if freesync still doesn't work.

I'm just happy they finally fixed the black out issue for Samsung CF791 ultrawide monitors. However, I can't confirm that yet.


----------



## Grummpy

Quote:


> Originally Posted by *GroupB*
> 
> BTW vega run fine on pci-e 2.0 but you have to stick on windows 7


2700K here using PCI-E 2.0, no bottleneck, on Windows 10.


----------



## ontariotl

Quote:


> Originally Posted by *Grummpy*
> 
> 2700k here using pci e 2.0 and no bottle neck using windows 10


What motherboard chipset are you using? Z77 or Z68?


----------



## Trender07

My games crash with the "device reset" warning; the GPU crashes, gets locked into the P7 state, and I have to restart the PC.
Do you guys think the HBM voltage floor could have caused it? I run 960 mV HBM voltage; should I increase it to around 1000?


----------



## jmoonb

Quote:


> Originally Posted by *Trender07*
> 
> My games crashes with the "device reset" warning because my gpu crashes it gets locked into the p7 state and have to restart pc.
> You guys think the HBM voltage floor could've cause it? I run 960 mv hvm voltage, might I increase it to like 1000?


What exactly are the settings you are using anyways?


----------



## pengs

Quote:


> Originally Posted by *VicsPC*
> 
> I have freesync as well but haven't jumped into 17.12.x yet as people have said freesync is broken. I did notice though that with some past drivers i had to restart my pc twice after an installation to get freesync to work which was weird, hasn't been the case lately though.
> 
> I usually use uninstall utility, ccleaner registry clean, then restart with my ethernet cable unplugged (so windows doesnt try to update over the net, and so bitdefender doesnt cause install issues, ive tried this method and have had no issues installing drivers) then i install and restart and plug it back in. Seems tedious but I've had zero issues so im sticking to it.


I've had some problems with Freesync also since the 17.11.x drivers. I clean-installed 17.12.2 last night and forced freesync in my game profiles (AMD optimized/on/off), and it appears to be working, so I'm not sure whether it was the driver update that helped, forcing freesync, or the game profiles not being detected correctly when first scanned.

I noticed some games' profiles need to be deleted from the get-go, and you'll need to manually add the correct executable.

I've also stopped using AMD's frame limiter at the moment. Freesync+enhanced sync+RTSS frame limiter is the way to go. Night and day smoothness.


----------



## diabetes

I was having framedrops/stutter lately in Unigine Heaven, The Witcher 3 and Dirty Bomb (other games were not tested, but ASUS ROG Furmark works fine without drops in OpenGL and Vulkan). I came as far as to determine that Freesync is probably not what is causing this for me. Here is a GPU-Z screenshot from a few selected scenes in Heaven: https://abload.de/img/heaven_with_fsdrocy.gif

There are three drops in GPU Load, HBM clocks, and Hotspot temp on the graph that correspond to the stutter. They occur on stock 64 settings and with DPM7 locked +maxed powertarget. In Heaven, the stutter always occurs in the same scenes!

I am running an [email protected], 16GB Ram, fresh install of Windows 10FCU, RX Vega56 reference edition with 64 bios on EKWB, Corsair HX 850i single rail PSU - GPU is connected via two separate power cables nevertheless. Chill, FRTC, Enhanced Sync and Freesync are off in the driver.

How do I diagnose this further? Does someone have similar issues?
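For anyone wanting to chase drops like these beyond a screenshot, GPU-Z can write a continuous sensor log to a text file, which is easy to scan for dips. A rough sketch, assuming a comma-separated log with one header row; the column header and log format vary between GPU-Z versions, so treat the names here as assumptions:

```python
import csv

def find_clock_drops(path, column="Memory Clock [MHz]", floor=900.0):
    """Return (sample_index, value) pairs where the clock fell below `floor`.

    Assumes a GPU-Z style sensor log: comma-separated, one header row,
    one sample per line. Check your own log for the exact column name.
    """
    drops = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f, skipinitialspace=True)
        for i, row in enumerate(reader):
            try:
                value = float(row[column])
            except (KeyError, TypeError, ValueError):
                continue  # skip malformed or trailing lines
            if value < floor:
                drops.append((i, value))
    return drops
```

Pointing it at the log (e.g. `find_clock_drops("GPU-Z Sensor Log.txt")`) would list the samples where HBM fell out of its P-state, which you could then line up against the moments the stutter happens.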


----------



## VicsPC

Quote:


> Originally Posted by *pengs*
> 
> I've had some problems with Freesync also since the 17.11.x drivers. I clean installed 17.12.2 last night and forced freesync in my games profile (AMD optimized/on/off) and it appears to be working so I'm not sure if it was the driver update which helped, forcing freesync or the game profiles not detected correctly when first scanned for.
> 
> I noticed some games' profiles need to be deleted from the get-go and you'l need to manually add the correct executable.
> 
> I've also stopped using AMD's frame limiter at the moment. Freesync+enhanced sync+RTSS frame limiter is the way to go. Night and day smoothness.


I haven't had to force it; I leave it on in my monitor and on under the display tab. I do lock my frames anywhere between 73-75 fps (the limit of my freesync range) and haven't had issues. I do have some games where it doesn't work properly, but that's a different story.

The best way for me to test is either Car Mechanic Simulator or Rocket League; they tend to show it the best, probably because the camera is mostly stationary. I may try 17.12.2, but I'd hate to lose freesync. A while back it stopped working for Siege (a game I play a lot and used freesync with at the time) and it took AMD 2-3 months to fix it.


----------



## pengs

Quote:


> Originally Posted by *VicsPC*
> 
> I haven't had to force it, i leave it on in my monitor and on under the display tab, i do lock my frames anywhere between 73-75fps (limit of my freesync) and haven't had issues, i do have some games where it doesn't work properly but different story.
> 
> Best way for me to test is either car mechanic simulator or rocket league they tend to show it the best, probably because its stationary. I may try 17.2.2 but id hate to lose freesync. A while back it stopped working for siege (a game i play a lot and at the time used freesync) took AMD 2-3 months to fix it.


17.12.2 may have fixed my issue.
It's only B-class indie-type PC games that I've had freesync issues with: Unreal Engine / older CryEngine titles, generally lesser-knowns.


----------



## webhito

G'day Gents!

Still having issues with my cards; it seems that Wattman has been buggering them up. If I modify anything, my issues are much more frequent.

Question! Is there any way to modify the BIOS alone to drop consumption and set a more aggressive fan profile? I read somewhere that if the card ran anything but a stock BIOS, the drivers would not load.

Cheers!


----------



## SavantStrike

Quote:


> Originally Posted by *webhito*
> 
> G'day Gents!
> 
> Still having issues with my cards, however it seems that wattman has been buggering them up, if I modify anything my issues are much more frequent.
> 
> Question! Is there any way to modify the bios alone to drop consumption and a more aggressive fan profile? I read somewhere that if the card was anything but stock, drivers would not load.
> 
> Cheers!


The BIOS is locked, so you can't flash anything but a signed BIOS from another Vega card. It's much worse than on Polaris, where there were merely issues installing drivers with a modded card.


----------



## webhito

Quote:


> Originally Posted by *SavantStrike*
> 
> BIOS is locked so you can't flash anything but a signed BIOS from another Vega card. It's much worse than on Polaris, where there were issues installing drivers with a modded card.


So there is no workaround, =(. Oh well, guess I will have to wait it out.


----------



## Trender07

Anyone else on the latest update seeing a lot of tearing and laggy games when using the OSD?


----------



## gupsterg

@0451

Only got this recently, Doom. Set to Vulkan, 1440P, 144 FPS cap, v17.12.2 with reg mod as before.

Doom_V64_OC.zip 124k .zip file


----------



## Grummpy

I'm still shutting down while using Eyefinity: just an instant power off, like the plug has been pulled.
Unable to use it until it's fixed.
I've had to do 2 hard drive scans now; lucky not to lose my operating system.


----------



## hyp36rmax

Quote:


> Originally Posted by *Grummpy*
> 
> Im still shutting down wile using eye finity just instant power off like plug been pulled.
> unable to use it untill its fixed.
> had to do 2 hard drive scans now lucky not to loose my operating system.


How many Vegas? What PSU are you using?


----------



## Grummpy

....


----------



## Trender07

Quote:


> Originally Posted by *jmoonb*
> 
> What exactly are the settings you are using anyways?


Well, I tried with really high volts and low HBM speed.

I usually ran every benchmark without problems or crashes at:

- 1080 MHz HBM @ 955 mV
- p6 1532 MHz @ 975 mV
- p7 1632 MHz @ 1090 mV
- +50% PL and 3100 rpm fan (air)

But because the crashes in games make me restart the whole PC (the Vega LEDs show a locked P7 state), I tried much safer settings:

- 985 MHz HBM @ 1000 mV
- p6 1532 MHz @ 1050 mV
- p7 1632 MHz @ 1150 mV
- +50% PL and 3100 rpm fan (air)

But it still crashes in games eventually, with the screen resetting and the "DEVICE RESET" warning. All I have left is playing on balanced mode (default), and I'm not using PowerPlay tables or anything.


----------



## hyp36rmax

Quote:


> Originally Posted by *Grummpy*
> 
> 800 watt platinum 93% efficient Silverstone PSU.
> The max at the wall I've recorded is 650W peak, so I have plenty left in the tank.
> I will be rolling back to older drivers; I dislike these Adrenalin drivers, they caused me to almost lose my Win 10 install.
> That just isn't acceptable at all.
> 
> I will be sticking with 17.11.1 until they enable primitive shaders and the draw stream binning rasterizer.
> Just isn't worth the risk of losing my operating system install for the sake of a few toys I can live without.
> 
> Well, the problem exists after the rollback, so it isn't the drivers.
> Just a case of figuring it out.


Running only one VEGA right?

I had a similar issue happen with two VEGA 64's with a 1000 Watt PSU. Basically hitting the OCP. I now have a 1600 Watt EVGA T2 Titanium, with no issues.

I didn't have any issues with a single VEGA 64 on the same PSU, though. Have you tried a larger PSU? Even though you're not hitting the PSU's 800 Watt max, something power-wise is causing it to shut down.


----------



## Grummpy

......


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> I found the problem and it's really dumb.
> When fitting the screw to hold the GPU in place, it pulled the card out of the slot.
> Pushing on it, I found the card dropped back into the PCIe slot.
> What an idiot I am.
> Embarrassing really.
> 
> Tested; all seems fine.
> This is set to turbo using older drivers.


Surprised your mobo doesn't have a locking PCIe slot like most do. The card shouldn't be able to pull out that easily, but glad that's solved.


----------



## Grummpy

It does, but it wasn't fully up.
My own stupid fault; I remember having trouble getting the screw in due to the screwdriver not being magnetic.
I will be refitting it again later; the case support needs a mod, as I have to pull inwards to get access to the screw hole.
It's pulling the card out of the slot over time, it seems.


----------



## jmoonb

Quote:


> Originally Posted by *Trender07*
> 
> Well, I tried with really high volts and low HBM speed.
> 
> I usually ran every benchmark without problems or crashes at:
> 
> - 1080 MHz HBM @ 955 mV
> - p6 1532 MHz @ 975 mV
> - p7 1632 MHz @ 1090 mV
> - +50% PL and 3100 rpm fan (air)
> 
> But because the crashes in games make me restart the whole PC (the Vega LEDs show a locked P7 state), I tried much safer settings:
> 
> - 985 MHz HBM @ 1000 mV
> - p6 1532 MHz @ 1050 mV
> - p7 1632 MHz @ 1150 mV
> - +50% PL and 3100 rpm fan (air)
> 
> But it still crashes in games eventually, with the screen resetting and the "DEVICE RESET" warning. All I have left is playing on balanced mode (default), and I'm not using PowerPlay tables or anything.


Try lowering the p7 clock to something like 1592 MHz for 1070 mV, or 1612 MHz for 1150 mV. Many Vegas either can't boost well or, in my case, sip power compared to other cards outside of stock settings. That doesn't mean the card won't boost, but it won't boost to its max potential for the above reasons. Silicon lottery...

For reference, at 1070 mV I use:
p7 1592 MHz @ 1070 mV
p6 1537 MHz @ 1020 mV
HBM 1100 MHz @ 1020 mV, if you can (setting the HBM at the p6 voltage will get you maximum boost clock potential without going over the p7 voltage, but 960 mV works too at a loss of a few MHz)
50% PL

This gets me a max power draw of 240 W (avg 190 W in games) and average clocks of 1565 MHz.
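Those numbers line up with the usual first-order CMOS rule that dynamic power scales roughly with frequency times voltage squared, which is why a small clock sacrifice plus an undervolt pays off so disproportionately. A quick sketch; the 1632 MHz @ 1200 mV reference point is an assumed stock-ish P7, not a measured value:

```python
def relative_dynamic_power(f_new: float, v_new: float,
                           f_ref: float, v_ref: float) -> float:
    """First-order CMOS approximation: dynamic power ~ f * V^2."""
    return (f_new / f_ref) * (v_new / v_ref) ** 2

# Hypothetical comparison: 1592 MHz @ 1070 mV vs an assumed 1632 MHz @ 1200 mV.
ratio = relative_dynamic_power(1592, 1070, 1632, 1200)
print(f"~{(1 - ratio) * 100:.0f}% lower dynamic power")  # roughly 22% lower
```

Leakage and fixed overheads mean real savings land below this estimate, but it shows why a roughly 2% clock drop can buy a double-digit power reduction.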


----------



## Grummpy

.....


----------



## geriatricpollywog

Quote:


> Originally Posted by *Grummpy*
> 
> Seen this...
> HDR video in Netflix on a Windows 10 PC is not possible with an AMD GPU.
> Looks like I will be closing my Netflix account after the end of the month.
> https://tweakers.net/nieuws/133205/netflix-ondersteunt-hdr-videos-in-windows-10-alleen-met-intel-of-nvidia-gpu.html


Netflix is whack on PC. I need to use the Edge browser to get 4K, but ultrawide movies (most movies) have a black border completely surrounding the content. To remove the border I need to use the chrome browser, which doesn't display 4K content in 4K. I would get a better picture if I pirated the content.


----------



## sega4ever

Thanks for posting this; I was wondering why my HBM was stuck at 800 MHz when I set the voltage to 900 mV.


----------



## Trender07

Quote:


> Originally Posted by *sega4ever*
> 
> Thanks for posting this, I was wondering why my hbm was stuck at 800mhz when I set the mv to 900.


I have to set more than 950 mV or my HBM gets locked to 800 MHz too. Kinda weird that at 951 mV it works OK, but at 950 mV it gets locked to 800 MHz like you said.


----------



## Grummpy

...


----------



## TrixX

A quick question guys, I'm getting a friend a Morpheus II cooler, but there's two editions, the standard and the 'CORE' edition. Just wondering which one is more appropriate for Vega?

I know it requires a few extra RAM sinks for the VRMs, but mainly which version to grab is what I'm after.


----------



## owntecx

Quote:


> Originally Posted by *TrixX*
> 
> A quick question guys, I'm getting a friend a Morpheus II cooler, but there's two editions, the standard and the 'CORE' edition. Just wondering which one is more appropriate for Vega?
> 
> I know it requires a few extra ram sinks for the VRM's but mainly which version to grab is what I'm after


Exactly the same; one is black, one is silver.


----------



## Trender07

Anyone else getting stuttering with the latest 17.12.2? I'd say I didn't have this problem with 17.12.1.


----------



## Grummpy

...


----------



## Ragsters

Has anyone had trouble using their RX Vega with the newer Windows 10 Creators Update? Windows is pretty much inoperable when the display drivers get installed at the same time as the Creators Update.


----------



## mouacyk

Any idea when Vega's price will come back down to earth?


----------



## VicsPC

Quote:


> Originally Posted by *mouacyk*
> 
> Any idea when Vega's price will come back down to earth?


When mining stops becoming profitable, so never lol.


----------



## SpecChum

Finally had time to watercool my vega

First impressions are excellent, although I've not played much yet.


----------



## Roboyto

Quote:


> Originally Posted by *mouacyk*
> 
> Any idea when Vega's price will come back down to earth?


Quote:


> Originally Posted by *VicsPC*
> 
> When mining stops becoming profitable, so never lol.


^ This ^

and this: http://www.overclock.net/t/1637514/bnib-sapphire-vega-64-air-cooled-black-package-sealed/0_20


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Finally had time to watercool my vega
> 
> 
> 
> 
> 
> 
> 
> 
> 
> First impressions are excellent, although not played much yet.


Enjoy!

Merry Christmas and happy new year to all!


----------



## SpecChum

Quote:


> Originally Posted by *tarot*
> 
> 
> I was going to ask a question but maybe this is it.
> anyone else have the vega turn off as in no power as in no gpu tach but the computer still going when they open hwinfo(3927beta at the moment)
> it doesn't always do it just now and then and it is a recent thing.
> also doesn't matter if it is stock balanced or custom


I watercooled my Vega today and got this for the first time.

Not ashamed to say I ***** myself; thought I'd fried the card lol. Even a reboot didn't fix it, I had to power down.


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> 
> I watercooled my Vega today and got this for the first time.
> 
> Not ashamed to say I ***** myself, thought I'd fried the card lol Even a reboot didn't fix it, I had to power down.


You could also tell us what caused it!


----------



## SpecChum

Quote:


> Originally Posted by *Aenra*
> 
> You could also tell us what caused it!


I did lol

I quoted the post. It was an older version of HWiNFO; it seems my Vega doesn't like it.


----------



## tarot

Quote:


> Originally Posted by *SpecChum*
> 
> I watercooled my Vega today and got this for the first time.
> 
> Not ashamed to say I ***** myself, thought I'd fried the card lol Even a reboot didn't fix it, I had to power down.


I have not had it again since, so I still think it is simply something to do with system info scans (the same thing happens randomly in 3DMark when it scans... locks up the computer, hard reset).
All this is regardless of temps, power, or overclock.


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> 
> I did lol


Missed that, thank you. These things get me worried, lol


----------



## diggiddi

Just recently my system's RX 580 locked up twice with HWiNFO after flashing; gave me a scare too.


----------



## VicsPC

Quote:


> Originally Posted by *diggiddi*
> 
> Just recently my system rx580 locked up 2x with HWinfo after flashing gave me a scare too


I've had it to where I ended up having to reset my CMOS. I turned on VRM voltage monitoring while it was running and it just straight-up froze. I disabled I2C support and haven't had an issue since. I did notice that VRM temperature monitoring comes and goes, which is a bit weird, but my temps are in check so I'm not worried about it at all.


----------



## R0CK3T

Quote:


> Originally Posted by *ontariotl*
> 
> Sounds like he could have buggered it up on you and returned it for that reason.


Quote:


> Originally Posted by *webhito*
> 
> Nope, stock, not flashed at all.
> 
> One thing though, I sold it to a miner and the guy said he was not able to get them working in crossfire so he gave them back. Never checked bios version before but one is reporting to be
> 016.001.001.000.008776 which says is not recognized in the tpu website.
> 
> I wonder if maybe the guy flashed them and messed them up. No hardware swap was done though.


Found this at TPU: https://www.techpowerup.com/vgabios/196277/196277 - the BIOS seems to be from a Dell card.


----------



## webhito

Quote:


> Originally Posted by *R0CK3T*
> 
> Found this at TPU: https://www.techpowerup.com/vgabios/196277/196277 the Bios seems to be from a Dell card.


Both cards have the same bios.

They are both working fine as long as I don't fiddle with Wattman; just enabling it has caused me crashing and instability. To reduce power usage and temperatures I had to resort to Afterburner, and I have not had a single issue since. 3DMark still crashes, though, but everything else is now rock solid.


----------



## SpecChum

Had a bit of spare time today while the missus was watching Xmas TV.

I have noticed that on water my HBM seems less stable at speeds like 1080 or 1100. Could this be a tighter timing thing as the temps are so low?

I know the HBM has a lower timing trigger at 85C, is there one at, say, 50C or something too?

Saying that, I've also updated to the latest drivers since I last played with HBM. On Air I always used my "quiet" 1020Mhz at 915mV setting.

Oh, I've also increased P7 to 1100mV so my clocks are about 200Mhz more too.

Could be any of these, but setting 1020 on HBM seems to stabilise it, so I assumed my HBM is limiting it.

Guess I need to "retest" on my new WC setup...
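For reference, HBM2 bandwidth scales linearly with the memory clock, so the gain from pushing 1020 to 1100 is easy to estimate. A quick sketch, assuming Vega's 2048-bit bus and double data rate (theoretical peak only, not measured):

```python
BUS_WIDTH_BITS = 2048  # Vega 64 / FE HBM2 bus width (assumed for this sketch)

def hbm2_bandwidth_gbs(mem_clock_mhz: float) -> float:
    """Theoretical peak bandwidth in GB/s: clock x 2 (DDR) x bus width / 8 bits per byte."""
    return mem_clock_mhz * 1e6 * 2 * BUS_WIDTH_BITS / 8 / 1e9

for clk in (945, 1020, 1100):
    print(f"{clk} MHz -> {hbm2_bandwidth_gbs(clk):.0f} GB/s")
```

So 1100 is only about 8% more theoretical bandwidth than 1020; if games crash at 1100, the stable 1020 isn't giving up much.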


----------



## SpecChum

Just putting my new water setup through its paces











Still on Air BIOS tho, not really bothered about flashing the LC one at this point.

NINJA EDIT: That's with HBM at 1020...


----------



## SpecChum

Yikes.



Liking this watercooling malarkey


----------



## pengs

Quote:


> Originally Posted by *tarot*
> 
> have not had it again since, so I still think it is simply something to do with system info scans (same thing happens randomly in 3DMark when it scans... locks up the computer, hard reset)
> all this is regardless of temps or power or overclock.


Same. It's a known issue.
Quote:


> Originally Posted by *SpecChum*
> 
> Had a bit of spare time today while missus watching xmas TV.
> 
> I have noticed that on water my HBM seems less stable at speeds like 1080 or 1100. Could this be a tighter timing thing as the temps are so low?
> 
> I know the HBM has a lower timing trigger at 85C, is there one at, say, 50C or something too?
> 
> Saying that, I've also updated to the latest drivers since I last played with HBM. On Air I always used my "quiet" 1020Mhz at 915mV setting.
> 
> Oh, I've also increased P7 to 1100mV so my clocks are about 200Mhz more too.
> 
> Could be any of these, but setting 1020 on HBM seems to stabilise it, so I assumed my HBM is limiting it.
> 
> Guess I need to "retest" on my new WC setup...


I seem to remember someone mentioning 65°C, but it doesn't really make sense if the LC targets 65°C, given that the HBM usually runs a few degrees hotter than the core...

Yeah, without tweaking any voltage, 1040 is stable in most games, 1030 seems stable, and 1020 is solid.


----------



## Lixxon

Cheers, got my Vega 64 Asus Strix some days ago and have been playing and enjoying at stock settings so far! Thought I should start looking at what I can tweak; where does one start? Start with one specific WattMan change, like putting HBM higher? Do I mess with +power limit before doing this and changing mV, etc.?


----------



## geriatricpollywog

Quote:


> Originally Posted by *Lixxon*
> 
> Cheers, got my Vega64 Asus Strixx some days ago been playing and enjoying at stock settings so far! Thought I should start looking at what I can tweak, where does one start? start with 1 certain wattman change like puttinging hbm higher? Do I mess with +powerlimit before doing this and changing mV etc?


Congrats. Post benches first and a snip of your GPUz graphs under max load.


----------



## Lixxon

Quote:


> Originally Posted by *0451*
> 
> Congrats. Post benches first and a snip of your GPUz graphs under max load.


Hiya, just tested the *turbo* mode for 1 hour playing Tomb Raider. Am I reading this correctly that the fan was maxed at 2400 RPM, i.e. it got hotter than usual and maybe throttled some..?

+How can I crop these images so they take up less space?+


----------



## elox

I would start somewhere at [email protected] for P7 and 1050mV for P6. HBM at ~1000-1050MHz and HBM voltage (which is also the minimum GPU voltage) at 980mV or so. Leave the GPU frequency untouched. Power target +50%.
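The reason undervolting the top P-states pays off: dynamic power scales roughly with V^2 * f, so a voltage drop saves disproportionately at the same clock. A rough illustration (the 1200mV stock figure is an assumption; leakage and load make real savings vary):

```python
def relative_dynamic_power(v_new_mv: float, v_old_mv: float,
                           f_new_mhz: float = 1.0, f_old_mhz: float = 1.0) -> float:
    """Dynamic power ratio under the P ~ V^2 * f approximation."""
    return (v_new_mv / v_old_mv) ** 2 * (f_new_mhz / f_old_mhz)

# Dropping the top P-state from an assumed 1200mV to 1020mV at the same clock:
saving = 1 - relative_dynamic_power(1020, 1200)
print(f"~{saving * 100:.0f}% less dynamic power")  # roughly 28%
```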


----------



## Elmy

Just want to share a couple 2nd place runs on my Vega 64 / 8700K on 3DMark. Timespy and Firestrike extreme.

https://www.3dmark.com/spy/2964841

https://www.3dmark.com/fs/14467141


----------



## webhito

Got another question for you folks,

Those of you that have a limited edition card, hopefully someone has a sapphire, does your serial number on the box match the sticker on the back of the card? I have 2, none of them do.


----------



## Grummpy




----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Yikes.
> 
> 
> 
> Liking this watercooling malarkey


Sweet! What clocks were ya at, chap?


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> Sweet
> 
> 
> 
> 
> 
> 
> 
> , what clocks were ya at chap?


1752/1100 buddy.

My card really does not like 1100 hbm tho, benches ok but games crash in minutes


----------



## gupsterg

Cheers! Highest I've been is 70xx points in SP 4K; this was IIRC 1685/1100.

What with ref cards being scarce, I've been holding off pushing the card, just in case I cause an issue leading to an RMA and then I don't get a Vega card back.


----------



## SpecChum

Quote:


> Originally Posted by *gupsterg*
> 
> Cheers
> 
> 
> 
> 
> 
> 
> 
> . Highest I've been is 70xx points in SP 4K, this was IIRC 1685/1100.
> 
> What with ref cards being scarce, I've been holding off pushing card, just in case I cause an issue leading to RMA and then I don't gain back a VEGA card
> 
> 
> 
> 
> 
> 
> 
> .


You should be OK, even with 1752 I'm at 20mV less than stock Air lol

I can't see increasing the frequency doing much except the occasional lockup and reboot.

I tried 1800 but it falls over right away


----------



## ducegt

Quote:


> Originally Posted by *Grummpy*
> 
> all i did was lower power limit -25 and overclocked the memory. moved the stress away from the gpu and pushed it onto the hbm. saved 64 watts just crazy


I've done the same. I'm at 1080p @ 144hz so I'll be easy on her until she needs to push 4k. Also worth tweaking is the target temperature. With -25%PL, 1K RPM rad fan, and 200w of heat... the exhaust temp of the AIO is very warm. Lowering the targets 10C bumps the automatic fans up ~500rpm. Compared to stock balanced profile, Firestrike was slightly less than 10% slower while SP 1080p extreme was ~20% less.


----------



## Trender07

Quote:


> Originally Posted by *webhito*
> 
> Got another question for you folks,
> 
> Those of you that have a limited edition card, hopefully someone has a sapphire, does your serial number on the box match the sticker on the back of the card? I have 2, none of them do.


I got a Sapphire limited edition and I don't know about the S/N, I can't find it on the box, but the P/N matches on the box and the card.


----------



## SavantStrike

Bykski blocks are finally here and I've been messing around with them. I really wish I had bought proper thermal pads, as there are sections of MOSFETs that seem as though they aren't cooled. Great blocks, but the pads are iffy.


----------



## Grummpy

AMD need to enable primitive shaders and the draw-stream binning rasterizer ASAP.
Getting kind of boring getting owned in benchmarks.


----------



## elox

Quote:


> Originally Posted by *Grummpy*
> 
> AMD need to enable primitive shaders and draw stream binning rasterizer ASAP
> Getting kind of boring getting owned in benchmarks.


If you wait for primitive shaders, you'll have to wait for the Vega refresh.


----------



## Aenra

If you wait for benchmarks, you bought the wrong card.

What kind of a mentality is this anyway.. either you didn't know what you were getting, aka your fault, or you knew and now complain about what? A deficiency you were aware of from the start?

Non intellectually-challenged people use benchmarks to compare/inform themselves on their card's performance, theoretical or otherwise, and to compare with others having the same card. Just so they can tell if what they did is wrong somehow, or not. That's all.

Why have you twisted things to such an extent? It honestly baffles me. And what would a normal, relatively sane person get out of scoring a good benchmark? Talk about shallow.

Your reasons for buying this card should have been average gaming requirements (if any at all, not everyone "games"), mining, work, a willingness to support AMD or a combination thereof. Nothing else.

Apologies for passing the coffee around, but occasionally a wake up call is due.

* 'you' as in figuratively *

** doubly so for someone having bought two of them, consecutively. Talk about zero excuse **


----------



## webhito

Quote:


> Originally Posted by *Trender07*
> 
> I got a sapphire limited edition and idk about the S/N i dont find it on the box but the P/N match the box and the card


Yea the part number is the same for every card, the serial should be on the backplate of the card, where the pci-slot is at the back.


----------



## VicsPC

Quote:


> Originally Posted by *Aenra*
> 
> If you wait for benchmarks, you bought the wrong card.
> 
> What kind of a mentality is this anyway.. either you didn't know what you were getting, aka your fault, or you knew and now complain about what? A deficiency you were aware of from the start?
> 
> Non intellectually-challenged people use benchmarks to compare/inform themselves on their card's performance, theoretical or otherwise, and to compare with others having _the same_ card. Just so they can tell if what they did is wrong somehow, or not. That's all.
> Why have you twisted things to such an extent? It honestly baffles me. And what would a normal, relatively sane person get out of scoring a good benchmark? Talk about shallow.
> 
> Your reasons for buying this card should have been average gaming requirements (if any at all, not everyone "games"), mining, work, a willingness to support AMD or a combination thereof. Nothing else.
> 
> Apologies for passing the coffee around, but occasionally a wake up call is due.
> 
> * 'you' as in figuratively *
> ** doubly so for someone having bought two of them, consecutively. Talk about zero excuse **


An average gaming card would be an RX 580, definitely not a Vega 56/64, but sure. I agree that way too many people care about synthetic benchmarks and gaming benchmarks. In-game benchmarks are notorious for being, well, pure garbage. One look at the Rise of the Tomb Raider benchmark and you can see why; it's SO inaccurate.


----------



## ManofGod1000

Quote:


> Originally Posted by *VicsPC*
> 
> Average gaming card would be an rx 580 def not vega 56/64 but sure. I agree that way too many people care about synthetic benchmarks and gaming benchmarks. In game benchmarks are notorious for well, being pure garbage. One look at Rise of the Tomb Raider benchmark and you can see why, it's SO inacurate.


Yeah, but benchmarking, even on my 1700X and Vega 56, is still fun and cool to do. Then when I overclock, I can compare what I have for increases or at least to see if everything is stable. Heck, I have been overclocking since the 486SX25 days. (Not gloating, just saying that I can never resist doing so.)


----------



## VicsPC

Quote:


> Originally Posted by *ManofGod1000*
> 
> Yeah, but benchmarking, even on my 1700X and Vega 56, is still fun and cool to do. Then when I overclock, I can compare what I have for increases or at least to see if everything is stable. Heck, I have been overclocking since the 486SX25 days. (Not gloating, just saying that I can never resist doing so.)


Yea, they have their uses, but the best way is actual usage. I can tell you that in-game benchmarks are notoriously unreliable; hell, I can get really varying results just between two benchmarks run one right after the other, lol. I'd use Firestrike for stability and to see if FPS improves, but that's about it. With my 1700X I noticed that changing the power plan and core parking will give HUGE boosts in the combined test in Firestrike, meaning it's not even remotely close to being optimized for more than 4 cores, so it's pointless to use on Ryzen (unless you core park).


----------



## ManofGod1000

Quote:


> Originally Posted by *VicsPC*
> 
> Yea they have their uses, but best way is actual usage. I can tell you that in-game benchmarks are notoriously unreliable, hell i can get really varying results just between two benchmarks one right after the other lol. Id use firestrike for stability and to see if fps improves but that's about it, with my 1700x i noticed that changing power plan and core parking will give HUGE boosts in combined tests in firestrike, meaning its not even remotely close to being optimized for more then 4 cores so pointless to use on ryzen (unless you core park).


So, I will get a better score with the combined test in FS if I apply the performance or Ryzen power plan in Windows?


----------



## VicsPC

Quote:


> Originally Posted by *ManofGod1000*
> 
> So, I will get a better score with the combined test in FS if I apply the performance or Ryzen power plan in Windows?


Performance, no. You need to go into the registry and manually unhide the core parking percentage option so you can go into power options > processor power management and change it to 50%. What people don't realize is the number of options (that can be manually changed) that differ between High Performance and Balanced. Once you're in the registry for power plans you'll see what I mean. It's much more complex than just Balanced and High Performance.
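For anyone who'd rather script this than edit the registry by hand: Windows' `powercfg` can unhide and set the core-parking minimum via the `SUB_PROCESSOR` and `CPMINCORES` aliases. A sketch that just builds the commands (verify the aliases on your Windows build; actually running them needs an elevated prompt):

```python
def core_parking_cmds(percent: int) -> list:
    """Build powercfg commands to expose and set 'core parking min cores' to <percent>%."""
    return [
        # Unhide the normally hidden "Processor performance core parking min cores" setting
        ["powercfg", "-attributes", "SUB_PROCESSOR", "CPMINCORES", "-ATTRIB_HIDE"],
        # Set it on the active plan for AC power
        ["powercfg", "-setacvalueindex", "SCHEME_CURRENT", "SUB_PROCESSOR", "CPMINCORES", str(percent)],
        # Re-apply the plan so the change takes effect
        ["powercfg", "-setactive", "SCHEME_CURRENT"],
    ]

# To apply on Windows (admin required):
#   import subprocess
#   for cmd in core_parking_cmds(50):
#       subprocess.run(cmd, check=True)
```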


----------



## ManofGod1000

Quote:


> Originally Posted by *VicsPC*
> 
> Performance, no. You need to go into the registry and manually enable core parking percentage so you can go into power options and processor management and change it to 50%. What people don't realize is the amount of options (that can be manually changed) that differ between high performance and balanced. Once you're in the registry for power plans youll see what i mean. It's much more complex then just balanced and high performance.


Would it be worth doing on a machine that is used every day and is on 24/7? I don't game on this computer I am on now, although I could, since it has an R7 1700 at 3.8GHz and an XFX R9 380.


----------



## VicsPC

Quote:


> Originally Posted by *ManofGod1000*
> 
> Would it be worth doing on a machine that is used everyday and is on 24/7? I don't game on this computer I am on now although I could, since it has a R7 1700 at 3.8Ghz and an XFX R9 380.


Core parking doesn't really matter if the program is going to use more than 4c/8t, as it will push to use all the threads. What it does is effectively turn your CPU from 8/16 into 4/8 in non-CPU-intensive programs; the problem is it's not too reliable, but I have yet to try it on AC Origins. Some games (mostly indie titles) don't play very well with 8 cores; the load is just all over the place and it causes stutter/lag/etc. If your CPU is using all 16 threads and you don't game much, then just leave it on High Performance, BUT for most benchmarks that stress the CPU it's worth playing with.

For example, between the Ryzen and Balanced power plans, Balanced will get 500 points more in physics score. And a 50% core parking High Performance plan does the best.


----------



## Grummpy

Quote:


> Originally Posted by *elox*
> 
> If you wait for primitive shaders you´ll have to wait for vega refresh.


pls share where you heard this, because it sounds very much like nonsense propaganda, something an Nvidia fan would say.


----------



## Hattifnatten

Anyone got one of MSI's Air Boost cards yet? I can't get my hands on a reference Vega, and I'm stuck here with a waterblock. I won't build my new loop before everything is "included". Worried MSI might have changed some SMC, resulting in the card not being compatible with any waterblocks :s


----------



## Trender07

Quote:


> Originally Posted by *webhito*
> 
> Yea the part number is the same for every card, the serial should be on the backplate of the card, where the pci-slot is at the back.


Yeah, I see the SN on the card, it's on a sticker, but I can't find the SN on the box.


----------



## fursko

Quote:


> Originally Posted by *WannaBeOCer*
> 
> It destroys the 1080 in Wolfenstein 2. Vega 64 is faster than a 1080 and that's what it competes with. The 1080 Ti is in its own class. If you mention an overclocked Vega card, don't forget how easy it is to overclock a GTX 1080 Ti. I have no complaints about the RX Vega 64. It delivers performance above the 1080 and looks nice. The new driver overlay is fantastic.


Hello again my friend. https://www.computerbase.de/2017-12/grafikkarten-round-up-2018/2/#diagramm-wolfenstein-2-2560-1440 Check Star Wars too. I told you you were referencing outdated benchmarks.


----------



## webhito

Quote:


> Originally Posted by *Trender07*
> 
> Yeah I see the SN in the card its on a sticker but i can't find the SN in the box


According to newegg, it should be on the box under the barcode. I think they don't match and the only serial available is the one on the back of the gpu.


----------



## Kyozon

Hello guys. Do you have any idea why some RX Vega cards clock at 1750MHz out of the box, while my Frontier Edition LC can't do the same no matter what voltages and power limits I set?


----------



## fursko

Quote:


> Originally Posted by *Kyozon*
> 
> Hello guys. Do you have any idea why some VEGA RX out of the box Clocks at 1750Mhz and my Frontier Edition LC, no matter what Voltages and Power Limits, can't do the same?


My RX Vega 64 LC's default P7 state is 1250mV. This is weird too. It should be 1200.


----------



## Kyozon

Quote:


> Originally Posted by *fursko*
> 
> My RX Vega 64 LC default p7 state mV 1250. This is weird too. It should be 1200.


I have also noticed very strange behavior with my Frontier Edition in gaming benchmarks. Despite running the same drivers as RX Vega, the same memory overclock, and the same power limit, I just can't match RX Vega, and I'm not even sure why. In Superposition, for example, RX Vega is edging me out by 400 points in 1080p Extreme. I can barely match an RX Vega 56.

What could be happening with the Frontier? It doesn't even go beyond 52°C under load, the LC variant. Tested the two BIOSes, same results.


----------



## hellphyre

Quote:


> Originally Posted by *Kyozon*
> 
> I have also noticed a very strange behavior with my Frontier Edition on Gaming Benchmarks. Despite running same Drivers as RX VEGA, same Memory Overclock, same Power Limit. I just can't match RX VEGA, i am not even sure why. RX VEGA on Superposition for an example is edging me out by 400 Points 1080p Extreme. I can barely match VEGA RX 56.
> 
> What could be happening with Frontier? I doesn't even goes beyond 52C under Load, the LC Variant. Tested the 2 BIOSes, same results.


----------



## Kyozon

Quote:


> Originally Posted by *hellphyre*


I was afraid someone would quote that video. I am very unhappy with the results I am getting. Not sure how to proceed going forward.


----------



## ontariotl

Quote:


> Originally Posted by *webhito*
> 
> According to newegg, it should be on the box under the barcode. I think they don't match and the only serial available is the one on the back of the gpu.


Is this in reference to the cards you got back from the miner? Are you thinking he switched something on you?


----------



## Trender07

Quote:


> Originally Posted by *webhito*
> 
> According to newegg, it should be on the box under the barcode. I think they don't match and the only serial available is the one on the back of the gpu.


Yeah just checked and the SN doesnt match


----------



## webhito

Quote:


> Originally Posted by *ontariotl*
> 
> Is this to the reference of the card you got back from the miner? Are you thinking he switched something on you?


Nah, he didn't switch them. Initially I thought he did, since his reason for returning them sounded really flaky, so I checked some old pictures I had and the serials are the same; he just couldn't figure out why one of the cards was working with all red LEDs and one had a green LED. So instead of giving him technical support I asked him to give them back. I would rather sell them to someone who knows what he's doing than have them returned after he burns them.

The box and serial provided on my invoice match, but the serial number on the card doesn't, reason I am asking if anyone else has this or do theirs match.


----------



## webhito

Quote:


> Originally Posted by *Trender07*
> 
> Yeah just checked and the SN doesnt match


Cheers Trender07.


----------



## Digidi

Why is nobody interested in enabling Primitive Shaders and the NGG Fast Path? Why so silent?


----------



## steadly2004

Quote:


> Originally Posted by *Digidi*
> 
> Why nobody is interested in enabling Primitive Shaders and NGG Fast Path? Why so silence?


I think everyone is interested in those being enabled. We just can't influence the AMD driver team. We've accepted the product as is and if it gets a boost later, then great. If not then we have no reason to be sad.


----------



## ontariotl

Quote:


> Originally Posted by *webhito*
> 
> Nah, he didn't switch them, initally I thought he did since his reason to return them sounded really flakey, so I checked some old pictures I had and the serials are the same, he just couldn't figure out why one of the cards was working with all red leds and one had a green led. So instead of giving him technical support I asked him to give them back. I would rather sell them to someone who knows what hes doing than have them returned after he burns them.
> 
> The box and serial provided on my invoice match, but the serial number on the card doesn't, reason I am asking if anyone else has this or do theirs match.


Ok, glad you weren't ripped off. I thought that's what you were starting to get at.

Quote:


> Originally Posted by *Digidi*
> 
> Why nobody is interested in enabling Primitive Shaders and NGG Fast Path? Why so silence?


I'm sure all of us wouldn't mind these features enabled, but we are at the mercy of the AMD driver team. No point beating a dead horse every day, as it's not going to speed things up.

However if the Vega refresh comes out with these features enabled from the start and the original Vega still does not, then I'd be pulling my pitchfork out!


----------



## Rei86

Finally Performance PC sent the EKWB for this thing


----------



## pengs

Quote:


> Originally Posted by *fursko*
> 
> My RX Vega 64 LC default p7 state mV 1250. This is weird too. It should be 1200.


Same


----------



## fursko

Quote:


> Originally Posted by *ontariotl*
> 
> However if the Vega refresh comes out with these features enabled from the start and the original Vega still does not, then I'd be pulling my pitchfork out!


Definitely.


----------



## AlphaC

Gigabyte releases another iffy AMD GPU...

_Another interesting solution is to banish some of the voltage transformers, in this case the low-side VRM from the row on the top, to the back of the board._
http://www.tomshardware.com/news/gigabyte-rx-vega56-vega64-teardown,36177.html


----------



## diabetes

Quote:


> Originally Posted by *elox*
> 
> If you wait for primitive shaders you´ll have to wait for vega refresh.


This was claimed by one guy who was posting about AMD all day on reddit. Not a credible source IMO.

@topic Primitive shaders are actually active and have been since launch. Vega hardware does not support separate shader stages anymore, so vertex shaders and geometry shaders automatically get joined by the shader compiler. The compiler then does an optimization pass over the whole thing before submitting it to the card. These primitive shaders are a prerequisite for the NGG fast path, which is not enabled yet. For NGG, even more compiler work is needed, which is why AMD hired another Senior Shader Compiler Engineer.

https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/drivers/radeonsi/si_shader.h - See long comment at the beginning and table at line 378.

https://jobs.amd.com/job/Frimley-Senior-Shader-Compiler-Engineer/422922700/ - a few days ago this job offer was still open


----------



## tpi2007

I was just reading that article, seems like a lot of cost cutting went into it. Not even one LED to tell the story was left lol.

It's understandable that they aren't even using the Aorus brand though, with the extremely low volume of chips made available, the price and demand for such cards at this point in time is probably not worth more than the current effort. I'm curious to see how that backplate heatpipe works though, I don't think that I've ever seen something like that.

Their phrasing on the number of chips sent to AIBs isn't clear though: did AMD send around 5000 Vega 64 chips to each AIB, or is that the global amount across all AIBs? It seems like the global amount, what do you think?

At least now we have some trustworthy estimate, and considering that we're almost in 2018, that number isn't good.


----------



## geriatricpollywog

Quote:


> Originally Posted by *tpi2007*
> 
> I was just reading that article, seems like a lot of cost cutting went into it. Not even one LED to tell the story was left lol.
> 
> It's understandable that they aren't even using the Aorus brand though, with the extremely low volume of chips made available, the price and demand for such cards at this point in time is probably not worth more than the current effort. I'm curious to see how that backplate heatpipe works though, I don't think that I've ever seen something like that.
> 
> Their phrasing on the amount of chips sent to AIBs isn't clear though, did AMD send around 5000 Vega 64 chips to each AIB or that is the global amount to all AIBs? It seems like the global amount, what do you think?
> 
> At least now we have some trustworthy estimate, and considering that we're almost in 2018, that number isn't good.


Makes me feel warm and fuzzy by the fire knowing I already bought a Vega for a reasonable price. Then again, this card has no future for driver optimization. Might as well be a Voodoo 5.


----------



## Grummpy

> This card has no future for driver optimisation

Is that so.


----------



## gupsterg

@webhito @Trender07

What you are seeing as the serial on the outside of the box is the AIB serial. What you are seeing on the card is the OE serial number. You can see an example in the OP of this thread. It seems as if Sapphire did not apply an AIB "info" sticker to the card, which would then match the label on the outside of the box.


----------



## poisson21

On my 2 MSI RX Vega 64s there isn't any sticker at all. No serial number or anything else.


----------



## webhito

Quote:


> Originally Posted by *gupsterg*
> 
> @webhito @Trender07
> 
> What you are seeing as serial on the outside of the box is AIB serial. What you are seeing on the card is OE serial number. You can see example in OP of this thread. It seems as if Sapphire did not apply an AIB "info" sticker on card, which would then match outside of box label.


Thanks gupsterg. The reason I was worried about this is because 1 of the 2 cards arrived with no serial number; it seems the customs broker in my country removed it. For what reason? No clue. However, the second card had one, but it did not match the box either. Thought maybe someone swapped the cards prior to shipping.

Thanks for the link.

Quote:


> Originally Posted by *poisson21*
> 
> On my 2 MSI rx vega 64 there isn't any sticker at all. No serial number or any others.


Cheers!


----------



## Smitty2k1

Hi all, first time poster around here. I've got some background, some information, and some questions!

Background:
Although I've been building PCs since the early 2000's, I built my first truly high end PC in early 2014 when the Ncase M1 released. 4770k, Asus Impact VI, GTX 780 SuperClock. I was just running a 1920x1200 monitor back then and the GTX 780 served me very well. Fast forward to early 2017 and I bought a 3440x1440 ultrawide Freesync monitor. My GTX 780 was having difficulties driving the higher resolution, so I upgraded to a Vega 56. I was able to get the Vega56 for MSRP off Amazon on November 1.

Information:
I thought the Vega56 would do well in my Ncase M1 with my SFX450w PSU because the GTX 780 it replaced had a higher TDP. However, I very quickly noticed that the Vega56 was overloading my system when on balanced power profile causing hard crashes. After long gaming sessions on power saver it was also crashing my system. I went out and bought a top of the line SX650 SFX PSU for the Vega. Lo and behold the hard crashes continued. After some exploring I found out that the root cause was the Asus "Anti-Surge" feature. This protection is overly sensitive in thinking it was detecting power surges. I took a leap of faith and disabled the feature in the motherboard BIOS and have had no more crashes to this day even after long gaming sessions and running the card in turbo mode! Unfortunately I couldn't return the 650W PSU, but hey at least I have a new PSU and don't have to use adapters to get 2x8pin connectors for the graphics card.

The questions:
I replaced a blower-style GTX 780 with the Vega 56. Despite the Vega having a lower TDP, I find the blower cooler to be significantly louder, and some basic temp monitoring is showing it to run hotter than the GTX 780 as well. I live in a small apartment and the noise is a little much for my wife and I. Therefore, I'd like some suggestions on how to quiet this thing down. I think I have 4 options:
1) Play around with undervolting and fan curves. Does anyone have a good, up-to-date primer on where to start? Specifically looking to run quieter, not push the upper limits of the system.
2) Buy a Morpheus heatsink and two 120x15mm Noctua fans. This would cost around $100 and should give better noise and cooling performance than the blower cooler, but it seems some users are having issues with the small VRM heatsinks, and I haven't seen definitive numbers for the 15mm fans (the thickest the Ncase can accommodate).
3) Buy an AIO water cooler for the Vega. Not sure exactly how I could make this fit in my Ncase since I have a very large CPU heatsink (Noctua C12P SE14). Not interested in a full watercooling setup at this time.
4) Sell the Vega 56 and buy a GTX 1070. Although I have a FreeSync monitor, it is only 60Hz, so I wouldn't be losing out on much.

Thanks
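On option 1: the quiet-first approach is usually a custom fan curve that stays low until the card is genuinely hot, plus a small undervolt. The curve points below are hypothetical examples, not recommendations; the sketch just shows the linear interpolation that WattMan-style tools do between points:

```python
# Hypothetical quiet-leaning fan curve: (temperature in C, fan speed in %)
CURVE = [(40, 20), (60, 30), (75, 45), (85, 100)]

def fan_percent(temp_c: float, curve=CURVE) -> float:
    """Linearly interpolate the fan speed between the two nearest curve points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(70))  # 40.0, between the 60C/30% and 75C/45% points
```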


----------



## fursko

Quote:


> Originally Posted by *Smitty2k1*
> 
> Hi all, first time poster around here. I've got some background, some information, and some questions!
> 
> Background:
> Although I've been building PCs since the early 2000's, I built my first truly high end PC in early 2014 when the Ncase M1 released. 4770k, Asus Impact VI, GTX 780 SuperClock. I was just running a 1920x1200 monitor back then and the GTX 780 served me very well. Fast forward to early 2017 and I bought a 3440x1440 ultrawide Freesync monitor. My GTX 780 was having difficulties driving the higher resolution, so I upgraded to a Vega 56. I was able to get the Vega56 for MSRP off Amazon on November 1.
> 
> Information:
> I thought the Vega56 would do well in my Ncase M1 with my SFX450w PSU because the GTX 780 it replaced had a higher TDP. However, I very quickly noticed that the Vega56 was overloading my system when on balanced power profile causing hard crashes. After long gaming sessions on power saver it was also crashing my system. I went out and bought a top of the line SX650 SFX PSU for the Vega. Lo and behold the hard crashes continued. After some exploring I found out that the root cause was the Asus "Anti-Surge" feature. This protection is overly sensitive in thinking it was detecting power surges. I took a leap of faith and disabled the feature in the motherboard BIOS and have had no more crashes to this day even after long gaming sessions and running the card in turbo mode! Unfortunately I couldn't return the 650W PSU, but hey at least I have a new PSU and don't have to use adapters to get 2x8pin connectors for the graphics card.
> 
> The questions:
> I replaced a blower style GTX 780 with the Vega56. Despite the Vega having a lower TDP I find the blower cooler to be significantly louder, and some basic temp monitoring is showing it to run hotter than the GTX780 as well. I live in a small apartment and the noise is a little much for my wife and I. Therefore, I'd like some suggestions on how to quiet down this thing. I think I have 4 options;
> 1) Play around with undervolting and fan curves. Does anyone have a good up-to-date primer on where to start? Specifically looking to run quieter, not push the upper limits of the system
> 2) Buy a Morpheus heatsink and two 120x15mm Noctua fans. This would cost around $100 and should give better noise and cooling performance over the blower cooler, but it seems some users are having issues with the small VRM heatsinks and I haven't seen definitive numbers from the 15mm fans (the thickest the Ncase can accommodate)
> 3) Buy an AIO water cooler for the Vega. Not sure exactly how I could make this fit in my Ncase since I have a very large CPU heatsink (Noctua C12P SE14). Not interested in a full watercooling setup at this time.
> 4) Sell the Vega56 and buy a GTX 1070. Although I have a FreeSync monitor, it is only 60Hz so I wouldn't be losing out on much.
> 
> Thanks


Best solution is the Morpheus, I think.


----------



## hyp36rmax

Quote:


> Originally Posted by *Smitty2k1*
> 
> Hi all, first time poster around here. I've got some background, some information, and some questions!
> 
> Background:
> Although I've been building PCs since the early 2000's, I built my first truly high end PC in early 2014 when the Ncase M1 released. 4770k, Asus Impact VI, GTX 780 SuperClock. I was just running a 1920x1200 monitor back then and the GTX 780 served me very well. Fast forward to early 2017 and I bought a 3440x1440 ultrawide Freesync monitor. My GTX 780 was having difficulties driving the higher resolution, so I upgraded to a Vega 56. I was able to get the Vega56 for MSRP off Amazon on November 1.
> 
> Information:
> I thought the Vega56 would do well in my Ncase M1 with my SFX450w PSU because the GTX 780 it replaced had a higher TDP. However, I very quickly noticed that the Vega56 was overloading my system when on balanced power profile causing hard crashes. After long gaming sessions on power saver it was also crashing my system. I went out and bought a top of the line SX650 SFX PSU for the Vega. Lo and behold the hard crashes continued. After some exploring I found out that the root cause was the Asus "Anti-Surge" feature. This protection is overly sensitive in thinking it was detecting power surges. I took a leap of faith and disabled the feature in the motherboard BIOS and have had no more crashes to this day even after long gaming sessions and running the card in turbo mode! Unfortunately I couldn't return the 650W PSU, but hey at least I have a new PSU and don't have to use adapters to get 2x8pin connectors for the graphics card.
> 
> The questions:
> I replaced a blower-style GTX 780 with the Vega 56. Despite the Vega having a lower TDP, I find the blower cooler significantly louder, and some basic temp monitoring shows it running hotter than the GTX 780 as well. I live in a small apartment and the noise is a little much for my wife and me. Therefore, I'd like some suggestions on how to quiet this thing down. I think I have 4 options:
> 1) Play around with undervolting and fan curves. Does anyone have a good up-to-date primer on where to start? Specifically looking to run quieter, not push the upper limits of the system
> 2) Buy a Morpheus heatsink and two 120x15mm Noctua fans. This would cost around $100 and should give better noise and cooling performance over the blower cooler, but it seems some users are having issues with the small VRM heatsinks and I haven't seen definitive numbers from the 15mm fans (the thickest the Ncase can accommodate)
> 3) Buy an AIO water cooler for the Vega. Not sure exactly how I could make this fit in my Ncase since I have a very large CPU heatsink (Noctua C12P SE14). Not interested in a full watercooling setup at this time.
> 4) Sell the Vega56 and buy a GTX 1070. Although I have a FreeSync monitor, it is only 60Hz so I wouldn't be losing out on much.
> 
> Thanks


You could just get a GPU block with a 120mm rad, pump and small res. I understand you don't want to do a full loop, but you can upgrade to one pretty easily from there if you ever change your mind. This would be the best way to cool VEGA in the first place. Another alternative is to find someone selling the VEGA LC AIO that they removed.


----------



## ducegt

Underclock it and tame the fan curve. Lower the power limit and maybe increase the target temp. Anything else is likely just going to upset the wife, and the other options aren't the most economical either.
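For anyone weighing the undervolt/underclock route, a rough back-of-the-envelope shows why it helps so much with fan noise: dynamic power scales roughly with f·V² (the usual CMOS approximation). This is illustrative only — real Vega draw also includes leakage and board power, and the reference clock/voltage below are my own placeholder numbers, not AMD's P-state values:

```python
# Rough estimate of how undervolting/underclocking cuts GPU power,
# using the CMOS dynamic-power approximation P ~ f * V^2.
# Illustrative only: real Vega draw also includes leakage and board power.

def relative_power(f_new, v_new, f_ref=1590.0, v_ref=1200.0):
    """Power of (f_new MHz, v_new mV) relative to a reference P-state."""
    return (f_new / f_ref) * (v_new / v_ref) ** 2

# Example: dropping from ~1590 MHz @ 1200 mV to 1400 MHz @ 1000 mV
ratio = relative_power(1400, 1000)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # roughly 39% less
```

A ~12% clock drop buying a ~39% dynamic-power cut is why modest undervolts quiet the blower far more than the frequency loss suggests.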


----------



## Digidi

Quote:


> Originally Posted by *diabetes*
> 
> This was claimed by one guy who was crap posting about AMD all day on reddit. Not a credible source IMO.
> 
> @topic Primitive shaders are actually active and have been since launch. Vega HW does not support separate shader stages anymore, so Vertex Shaders and Geometry Shaders automatically get joined by the shader compiler. The compiler then does an optimization pass over the whole thing before submitting it to the card. These primitive shaders are a prerequisite for the NGG fast path, which is not enabled yet. For NGG even more compiler work is needed, which is why AMD employed another Senior Shader Compiler Engineer.
> 
> https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/drivers/radeonsi/si_shader.h - See long comment at the beginning and table at line 378.
> 
> https://jobs.amd.com/job/Frimley-Senior-Shader-Compiler-Engineer/422922700/ - a few days ago this job offer was still open


Thank you, but I think your first link is mistaken.

PS means pixel shader, not primitive shader. So no primitive shaders at line 378.


----------



## Grummpy

@Digidi
Are you still sticking to propaganda crap talk as a source of legit information, based on nothing but hate?
Come on man, you know better than that.


----------



## diabetes

Quote:


> Originally Posted by *Digidi*
> 
> Thank you, but your first link I think is a failure.
> 
> Ps means pixelshader not primitive shader. So no primitive shaders in line 378.


Of course PS in that table means pixel shader. Primitive shaders are these "->" symbols.
For a reference see the picture I linked in this post and compare it to line 387.

Code:

/* Valid shader configurations:
 *
 * API shaders       VS | TCS | TES | GS |pass| PS
 * are compiled as:     |     |     |    |thru|
 *                      |     |     |    |    |
 * Only VS & PS:     VS |     |     |    |    | PS
 * GFX6 - with GS:   ES |     |     | GS | VS | PS
 *      - with tess: LS | HS  | VS  |    |    | PS
 *      - with both: LS | HS  | ES  | GS | VS | PS
 * GFX9 - with GS:   -> |     |     | GS | VS | PS     /*----> this line! Vertex Shader is merged with next stage, which is the Geometry Shader.*/
 *      - with tess: -> | HS  | VS  |    |    | PS
 *      - with both: -> | HS  | ->  | GS | VS | PS
 *
 * -> = merged with the next stage
 */


----------



## Grummpy

@diabetes
Rekt him


----------



## Ne01 OnnA

Quote:


> Originally Posted by *diabetes*
> 
> Of course PS in that table means pixel shader. Primitive shaders are these "->" symbols.
> For a reference see the picture I linked in this post and compare it to line 387.
> 
> Code:
> 
> /* Valid shader configurations:
> *
> * API shaders       VS | TCS | TES | GS |pass| PS
> * are compiled as:     |     |     |    |thru|
> *                      |     |     |    |    |
> * Only VS & PS:     VS |     |     |    |    | PS
> * GFX6 - with GS:   ES |     |     | GS | VS | PS
> *      - with tess: LS | HS  | VS  |    |    | PS
> *      - with both: LS | HS  | ES  | GS | VS | PS
> * GFX9 - with GS:   -> |     |     | GS | VS | PS     /*----> this line! Vertex Shader is merged with next stage, which is the Geometry Shader.*/
> *      - with tess: -> | HS  | VS  |    |    | PS
> *      - with both: -> | HS  | ->  | GS | VS | PS
> *
> * -> = merged with the next stage
> */


o_0 Great news

So we have:

Pixel Shader aka *PS*
Vertex Shader aka *VS*
Geometry Shader aka *GS*
Hull Shader aka *HS* (tessellation control)
Local Shader aka *LS* (the hardware stage the vertex shader runs as when tessellation is active — not "Lightning")
Export Shader aka *ES* (the hardware stage that feeds a Geometry Shader)
and
Next Gen Primitive Shaders aka the merged "->" stages


----------



## porschedrifter

Well guys, I bid you adieu; I just made a $450 profit selling my RX 56 card used on eBay. Just picked up the 1080 Ti Duke. Lmao


----------



## geoxile

Quote:


> Originally Posted by *porschedrifter*
> 
> Well guys, I bid you adieu; I just made a *$450 profit* selling my RX 56 card used on eBay. Just picked up the 1080 Ti Duke. Lmao


Dayyum. I'm wondering if I should sell too, but eBay's full of scammers now.


----------



## porschedrifter

Quote:


> Originally Posted by *geoxile*
> 
> Dayyum. I'm wondering if I should sell too but ebay's full of scammers now.


I would, man; PayPal and eBay have gotten pretty good with buyer protection and all that. People are buying these at high prices even used. It's amazing.
Just sell to someone with a decent enough feedback rating and ship with insurance.

I put it up not thinking it would sell so high; well, I was wrong, lol.


----------



## SpecChum

Quote:


> Originally Posted by *porschedrifter*
> 
> I would man, paypal and ebay have gotten pretty good with buyer protection and all that. People are buying these at high prices even used. It's amazing.
> Just sell to someone with a decent enough feedback rating and ship with insurance.
> 
> I put it up fully not thinking I would sell it so high, well, I was wrong. lol


I thought about it for a while, but I've stuck a waterblock on it now and really can't be bothered putting it all back lol


----------



## steadly2004

Quote:


> Originally Posted by *SpecChum*
> 
> I thought about it for a while, but I've stuck a waterblock on it now and really can't be bothered putting it all back lol


Just charge another $100 to cover the water block...


----------



## SpecChum

Quote:


> Originally Posted by *steadly2004*
> 
> Just charge another $100 to cover the water block...


I'm pretty sure it has no warranty now tho.

Doesn't really bother me, I knew this before I did it, but if I was buying it second hand it would.


----------



## geoxile

Quote:


> Originally Posted by *SpecChum*
> 
> I'm pretty sure it has no warranty now tho.
> 
> Doesn't really bother me, I knew this before I did it, but if I was buying it second hand it would.


Warranties aren't transferable anyway.


----------



## gupsterg

Quote:


> Originally Posted by *porschedrifter*
> 
> I would man, paypal and ebay have gotten pretty good with buyer protection and all that. People are buying these at high prices even used. It's amazing.
> Just sell to someone with a decent enough feedback rating and ship with insurance.
> 
> I put it up fully not thinking I would sell it so high, well, I was wrong. lol


Nice to read you got a great price. It's not buyer protection that's the issue with eBay, TBH, it's seller protection. It becomes one-sided very easily when you end up with a bad buyer. I have only had it happen once, with something I could afford to lose, but it has left a real bad taste for eBaying after that.


----------



## ducegt

I tried to buy a 56 on eBay 2 months ago for 90 dollars. I've had pricing errors work out for me in the past, so I rolled the dice. A few hours after I paid with PayPal, I got an email from eBay stating the seller's account was compromised and the transaction was cancelled. I challenged the payment with my CC company, and after a month they ruled in the seller's favor, so I challenged again and have been waiting a month since. The seller's account has emails and usernames of 3 different people and never entered a tracking number. I wonder how much a 64 LC would go for.


----------



## geoxile

Quote:


> Originally Posted by *ducegt*
> 
> I tried to buy a 56 on eBay 2 months ago for 90 dollars. I've had pricing errors work out for me in the past, so I rolled the dice. A few hours after I paid with PayPal, I got an email from eBay stating the seller's account was compromised and the transaction was cancelled. I challenged the payment with my CC company, and after a month they ruled in the seller's favor, so I challenged again and have been waiting a month since. The seller's account has emails and usernames of 3 different people and never entered a tracking number. I wonder how much a 64 LC would go for.


You didn't open a claim with paypal? They would've resolved it pretty quickly since it'd keep track of if you were refunded.


----------



## webhito

Quote:


> Originally Posted by *gupsterg*
> 
> Nice to read you got a great price. It's not buyer protection that's the issue with eBay, TBH, it's seller protection. It becomes one-sided very easily when you end up with a bad buyer. I have only had it happen once, with something I could afford to lose, but it has left a real bad taste for eBaying after that.


I have been using a website called MercadoLibre to sell/buy my stuff for over 8 years. Lately they have given buyers the option to return things if they are not happy with the purchase. Sadly this has opened the door to abuse: sell a used item, and the buyer says there is something wrong with it, asks for a refund, ships back a brick, and the website gives them their money back, no questions asked. Sell a new item? They give the buyer a 10-day satisfaction guarantee; if they are not happy they can return it and you end up with a used item, and no restocking fees are covered at all. Instead of making some sort of profit you have to take a hit, and that's if you actually get your item back.

Mexico is as corrupt as it gets, so everyone is trying to pull some sort of trick whenever they can.


----------



## ducegt

Quote:


> Originally Posted by *geoxile*
> 
> You didn't open a claim with paypal? They would've resolved it pretty quickly since it'd keep track of if you were refunded.


Unfortunately I opened the dispute with CC company, but yes it would have been better to have started the process with Paypal.


----------



## IvantheDugtrio

So today I randomly turned on my system with the presumed-dead Vega 56 and it started up. Back in Windows there were still artifacts on the screen, and I didn't push the graphics beyond idle. There was just enough time to get a few updates through. The card won't boot up again, but I think it might have to do with temps, as the system had warmed up to ~30C; when it booted successfully it was at ~15C. I'll try booting again later tonight, but I still plan on getting this card replaced.


----------



## skratos115

So I joined just for this thread; it's so large I can't keep up.

I have a Vega FE. I want to just flash it to a Vega 64 BIOS if I can for now; mining is more important to me.
Has anyone done it, and which BIOS did they use? I tried an XFX 64 and the AMD Vega 64 BIOS from https://www.techpowerup.com/vgabios/194441/amd-rxvega64-8176-170719.
Neither worked for me, and I had to revert to my own BIOS that I had backed up with GPU-Z.

I wanted a Vega 64 and could not find one, got an FE for $750 and thought I could easily convert it. Need help.

I'm only getting 1400 H/s in xmr-stak. I can't disable and re-enable the card like other people have; it throws Code 43 in Device Manager when I try.
Is it too much to ask a Vega FE to perform like a 64? 64s are getting 2100 H/s no problem.

Vega FE:
GPU_P0=852;900
GPU_P1=991;900
GPU_P2=1084;900
GPU_P3=1138;900
GPU_P4=1150;900
GPU_P5=1202;900
GPU_P6=1212;905
GPU_P7=1408;925
Mem_P0=167;900
Mem_P1=500;900
Mem_P2=800;900
Mem_P3=1100;905
Fan_Min=3000
Fan_Max=4900
Fan_Target=55
Fan_Acoustic=2400
Power_Temp=85
Power_Target=0

Machine is a 2008 Mac Pro with 12GB RAM running Windows 10 Creators edition.
Blockchain driver from August.
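The settings block above looks like the `KEY=clock;voltage` per-P-state format that tools such as OverdriveNTool export. If you want to sanity-check or diff profiles between cards programmatically, here's a small parser sketch — the format is assumed from the lines above, not from any tool's documentation:

```python
# Parse an OverdriveNTool-style profile of "KEY=clock;voltage" lines
# into a dict. P-state entries become (MHz, mV) tuples; single-value
# keys like Fan_Min stay plain ints. Format assumed from the post above.

def parse_profile(text):
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if ";" in value:
            clk, mv = value.split(";")
            out[key] = (int(clk), int(mv))
        else:
            out[key] = int(value)
    return out

profile = parse_profile("""\
GPU_P7=1408;925
Mem_P3=1100;905
Fan_Max=4900
""")
print(profile["GPU_P7"])  # (1408, 925)
```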


----------



## diabetes

@skratos115

You cannot flash a VFE to a V64; unfortunately it just isn't possible.


----------



## Jass11

Hello, I have a Vega 64. Is it possible to use liquid metal paste on this card?

Thanks


----------



## Grummpy

A new year and a fresh start to trash-talking AMD, it seems.
These websites don't hang around at the start of the new year:
http://www.guru3d.com/news-story/amd-adrenalin-driver-has-issues-with-older-dx9-games.html


----------



## kondziowy

Very well. We need less people buying Vega and more stock on shelves.


----------



## Aenra

We need fewer miners buying GPUs, you mean. In general.

(It's sad on so many levels, but given the maturity, cultural influences and age spectrum, no reason to go into detail; as pointless as debating RGB lights with an 'adult'.)

As to the old-games news article, it's just five titles; five. I can understand it's disappointing, but honestly it's no big deal.

I can name tens of dozens of classic gems that no longer work because Moneysoft decided never to support 16-bit installers on 32/64-bit OSes. Anyone going into an SJW rage fit over at Reddit, or wherever youngsters waste their time nowadays? No.

I could name older titles that Nvidia sync, fast sync or G-Sync will crash, instantly. Titles that occlusion can crash instantly. Come to think of it, titles that won't even launch if you make the mistake of creating a profile for them in the Nvidia control panel. Did anyone have a similar rage-induced response? No.

And so on.

Clickbait and techpowerup.com. News at eleven, really.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *skratos115*
> 
> So I joined just for this thread; it's so large I can't keep up.
> 
> I have a Vega FE. I want to just flash it to a Vega 64 BIOS if I can for now; mining is more important to me.
> Has anyone done it, and which BIOS did they use? I tried an XFX 64 and the AMD Vega 64 BIOS from https://www.techpowerup.com/vgabios/194441/amd-rxvega64-8176-170719.
> Neither worked for me, and I had to revert to my own BIOS that I had backed up with GPU-Z.
> 
> I wanted a Vega 64 and could not find one, got an FE for $750 and thought I could easily convert it. Need help.
> 
> I'm only getting 1400 H/s in xmr-stak. I can't disable and re-enable the card like other people have; it throws Code 43 in Device Manager when I try.
> Is it too much to ask a Vega FE to perform like a 64? 64s are getting 2100 H/s no problem.
> 
> Vega FE:
> GPU_P0=852;900
> GPU_P1=991;900
> GPU_P2=1084;900
> GPU_P3=1138;900
> GPU_P4=1150;900
> GPU_P5=1202;900
> GPU_P6=1212;905
> GPU_P7=1408;925
> Mem_P0=167;900
> Mem_P1=500;900
> Mem_P2=800;900
> Mem_P3=1100;905
> Fan_Min=3000
> Fan_Max=4900
> Fan_Target=55
> Fan_Acoustic=2400
> Power_Temp=85
> Power_Target=0
> 
> Machine is a 2008 Mac Pro with 12GB RAM running Windows 10 Creators edition.
> Blockchain driver from August.


You could flash the Vega FE AIO BIOS and get the higher clock speeds: http://www.overclock.net/t/1633446/preliminary-view-of-amd-vega-bios/900_100#post_26514867


----------



## skratos115

I put it in my other Windows 10 machine (non-Enterprise), an HP Z620.
I was able to disable and re-enable it, and with the same settings above it went right to 2 kH/s.

So, Mac Pro 2008 running Win 10 Enterprise:
could not disable and re-enable; get error Code 43.

HP Z620 running Windows 10 Home:
I could disable and re-enable, and got my hashrate to 2 kH/s.

Now I just have to get a hash monitor running. Excited just to have it running at 2k for now. For $750 I'll take it; that's cheaper than I can find a Vega 64 for, and about the same price as a Vega 56 right now in the area.

Hope this helps someone.


----------



## SavantStrike

So I've got a Vega 64 card that doesn't want to play nice with other cards. One solid green light on the tach. Any ideas what that might be about?


----------



## Trender07

So guys, how come my card gets 1100 mV on HBM (the voltage floor)? Isn't that too high? xD Most of you use 950 mV. Anyway, I'm trying to undervolt but it keeps crashing. Do you guys think HBM voltage actually affects stability, i.e. should I raise the HBM voltage?


----------



## ducegt

I'm watching what 64 LC sells for on eBay after trying to game tonight.

Wolfenstein 2 with Adrenalin 17.12.2 gives stutters. 17.12.1 frequently crashes the game... Gotta mess with DDU just to get old drivers installed... I'm not even changing any settings


----------



## geoxile

Quote:


> Originally Posted by *ducegt*
> 
> I'm watching what 64 LC sells for on eBay after trying to game tonight.
> 
> Wolfenstein 2 with Adrenalin 17.12.2 gives stutters. 17.12.1 frequently crashes the game... Gotta mess with DDU just to get old drivers installed... I'm not even changing any settings


Just look at the sold listings. Just saw a reference Sapphire Vega 56 that sold for $880. Very tempting but I have trust issues with ebay buyers


----------



## By-Tor

Looked on eBay, and the Powercolor Vega 64 that I paid $465 for a couple of months ago is selling at $1089. WOW.....


----------



## SpecChum

Seem to be going for about £700 here.

It would be a £250 profit for me, I can't say I'm that interested - I'd end up with a 1080Ti so I'd not see the money, just have a faster card. I'm happy as it is to be honest.

Plus I can mine XMR when it's not doing anything - I'm missing the boat on this one, I should step this up!


----------



## cplifj

Strangest thing...

I wanna hear if others have had any of their hardware die since installing Vega and using HBCC.

Somehow I had a one-year-old SSD die, and a somewhat older 2TB HD died as well.

All this happened while running with HBCC on. Now I leave it off, and nothing else seems to be breaking.

I can't prove anything, but if this happened because of a flawed AMD HBCC implementation, I think I would get very upset with AMD.

To the point, even, that I would like to see them BURN for that.

I don't really believe in coincidences, and with 2 different disks going bad REAL FAST after switching HBCC on, and looking at AMD's explanation of what HBCC does... I'm wondering.

Has anyone else experienced something like this?


----------



## SavantStrike

Quote:


> Originally Posted by *By-Tor*
> 
> Looked on eBay and my Powercolor Vega 64 that I paid $465 a couple of months ago is selling at $1089. WOW.....


The miners paying that money are really rolling the dice, paying that much over retail.

Quote:


> Originally Posted by *SpecChum*
> 
> Seem to be going for about £700 here.
> 
> It would be a £250 profit for me, I can't say I'm that interested - I'd end up with a 1080Ti so I'd not see the money, just have a faster card. I'm happy as it is to be honest.
> 
> Plus I can mine XMR when it's not doing anything - I'm missing the boat on this one, I should step this up!


This. You'll end up ahead of a 1080 Ti with a Vega that's paid for itself.

Does anyone here know what the various light codes on the Vega tachometer mean? I have a Vega that's giving me trouble and refuses to do anything but show a single green light. The manual that came with my PowerColor doesn't even cover the tachometer.


----------



## SpecChum

Quote:


> Originally Posted by *SavantStrike*
> 
> Does anyone here know what the various light codes mean on the Vega tachometer? I have a Vega that's giving me trouble and refuses to do anything but show a single green light. The manual that came with my power color doesn't even cover the tachometer.


I didn't even know they went green!


----------



## geoxile

Quote:


> Originally Posted by *cplifj*
> 
> Strangest thing...
> 
> I wanna hear if others have had any of their hardware die since installing Vega and using HBCC.
> 
> Somehow I had a one-year-old SSD die, and a somewhat older 2TB HD died as well.
> 
> All this happened while running with HBCC on. Now I leave it off, and nothing else seems to be breaking.
> 
> I can't prove anything, but if this happened because of a flawed AMD HBCC implementation, I think I would get very upset with AMD.
> 
> To the point, even, that I would like to see them BURN for that.
> 
> I don't really believe in coincidences, and with 2 different disks going bad REAL FAST after switching HBCC on, and looking at AMD's explanation of what HBCC does... I'm wondering.
> 
> Has anyone else experienced something like this?


Nope. Is your RAM stable, or have you done any forced shutdowns? I had an SSD break after forcing my PC to shut down by holding the power button down.


----------



## SavantStrike

Quote:


> Originally Posted by *SpecChum*
> 
> I didn't even know they went green!


Yeah, me neither! I was worried I had somehow killed the card with the full-cover block I installed, but according to a Reddit post the green LED is for ZeroCore power mode, just like on Fury. The third card thinks it's supposed to be off for power saving.

I added this as card number 3 to an X370 machine, and it was successfully detected. For a brief period of time the computer saw all three cards; in fact, the beta blockchain drivers seemed to think CrossFire might be possible. I turned that off, then fired up a mining client, and the whole thing went sour.

It looks like I triggered OCP on the AX1500i and the machine went crazy. I've tried DDU and a fresh install of Windows and had no luck. I'll need to configure the AX1500i to allow extra current on the PCIe connectors, and beyond that I'm not sure what I'm doing yet.

I'll either have to drain the loop and move the third card to another machine, or I'll need to swap motherboards. I don't think Vega likes x4 PCIe slots, but I think it works in them. The two V64 cards are still detected but insta-crash when mining.

I had an unstable memory OC on my CPU and flogged the cards too hard (accidentally). Now this card seems to think it should be turned off. If I can interrogate it under Linux I might learn more.

I'm frustrated, as it takes forever to get this loop drained, and I'm losing money every day this thing doesn't run. I'm not a large-scale mining operation, just an enthusiast augmenting his income.


----------



## alanthecelt

These cards are a nightmare to get working stably with many GPUs.
I've got 4 in a Z97 board, which should by all rights be able to run 6, but no such luck; I had to try all combinations of slots, and the logical ones didn't necessarily work.
Likewise, in an H170 board I had to clock the PCIe lanes down to Gen 1. I've only got 2 cards running in it so far, awaiting 2 more.
The most stable way I've got it running so far is with the blockchain beta drivers only, no AMD management stuff:
as soon as you install the drivers only, hop into the registry and disable CrossFire autodetect,
then set any custom overclocks using OverdriveNTool.

http://vega.miningguides.com/


----------



## SavantStrike

Quote:


> Originally Posted by *alanthecelt*
> 
> These cards are a nightmare to get working stably with many GPUs.
> I've got 4 in a Z97 board, which should by all rights be able to run 6, but no such luck; I had to try all combinations of slots, and the logical ones didn't necessarily work.
> Likewise, in an H170 board I had to clock the PCIe lanes down to Gen 1. I've only got 2 cards running in it so far, awaiting 2 more.
> The most stable way I've got it running so far is with the blockchain beta drivers only, no AMD management stuff:
> as soon as you install the drivers only, hop into the registry and disable CrossFire autodetect,
> then set any custom overclocks using OverdriveNTool.
> 
> http://vega.miningguides.com/


Yeah, nightmare is about how I'd describe it.

I should probably just pull the third card and stick with two in this chassis. It worked flawlessly with two before I water cooled it and added the third.


----------



## Grummpy

I love debunking propaganda.


https://www.reddit.com/r/7ncr3y/regarding_amd_and_the_lack_of_directx_9_support_a/


----------



## gupsterg

Quote:


> Originally Posted by *Grummpy*
> 
> I love debunking propaganda.
> 
> 
> https://www.reddit.com/r/7ncr3y/regarding_amd_and_the_lack_of_directx_9_support_a/
> 
> 
> 
> Spoiler: Warning: Spoiler!


Skyrim works for me. The Witcher 1 does not. Max Payne 2 needed the patch as in the OP, and for The Witcher 2 I have to knock out a logical CPU core for it to work. Assassin's Creed does not work for me in DX9 or DX10; it did on Ryzen/Fiji, but I had not tried it on TR with/without VEGA until yesterday. I'm going through some other old games in my library, bought on deep promos, to fill the time at some point.


----------



## Mumak

Guys, can you please check if there's still a problem when accessing VRM sensors on RX Vega using the latest v17.12.2 drivers ?
AMD seems to think they improved something in those drivers..
Note that if you use HWiNFO v5.70, you will need to enable the "GPU I2C Support Force" option to access the VRMs.
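Side note for anyone testing on Linux instead: the amdgpu driver exposes the basic GPU temperature sensors through hwmon sysfs, with no I2C forcing needed (this does not reach the external VRM controller that HWiNFO polls over I2C on Windows, so it's a complement, not a substitute). A minimal sketch, assuming a single AMD card at `card0`; which `temp*` files exist varies by kernel and GPU:

```python
# Read amdgpu temperatures from Linux hwmon sysfs.
# tempN_input files report millidegrees Celsius.
# Assumes one AMD GPU at card0; available sensors vary by kernel/driver.
from pathlib import Path

def millidegrees_to_c(raw):
    """Convert a raw hwmon tempN_input reading (millidegrees C) to degrees C."""
    return int(raw) / 1000.0

def read_gpu_temps(device=Path("/sys/class/drm/card0/device")):
    temps = {}
    for hwmon in (device / "hwmon").glob("hwmon*"):
        for f in hwmon.glob("temp*_input"):
            label = f.with_name(f.name.replace("_input", "_label"))
            name = label.read_text().strip() if label.exists() else f.stem
            temps[name] = millidegrees_to_c(f.read_text().strip())
    return temps

# e.g. {'edge': 34.0, 'junction': 36.0, 'mem': 35.0} on a Vega-class card
```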


----------



## Grummpy

nvm


----------



## gupsterg

Quote:


> Originally Posted by *Mumak*
> 
> Guys, can you please check if there's still a problem when accessing VRM sensors on RX Vega using the latest v17.12.2 drivers ?
> AMD seems to think they improved something in those drivers..
> Note that if you use HWiNFO v5.70, you will need to enable the "GPU I2C Support Force" option to access the VRMs.


Did not enable "GPU I2C Support Force".


Spoiler: Here is a 1hr run of RB Stress mode, where I forgot what I was doing on the rig, thus 8hrs idle. AMD Chipset v17.40, AMD GPU v17.12.2, W10 Pro FCU x64.


Spoiler: Then here is a rerun later in the morning, but no VRM data.


Spoiler: Later in the day, as I tune The Stilt's 3466MHz preset on ASUS ZE UEFI 0901, it is there.


ProcODT [Auto] (60Ω) FAIL

ProcODT [68.6Ω] FAIL

ProcODT [53.3Ω] PASS (if you want LOG/intermediate info, PM me)


----------



## Mumak

Thanks, but I will need a test with "GPU I2C Support Force" enabled to know if VRM access still causes a problem with the 17.12.2 drivers.
My Vega 64 never showed this problem, so I need results from those who saw issues here.


----------



## gupsterg

Will enable it when I finish the stress-mode run of the TR profile, and will report back.


----------



## Grummpy

yikes
https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=1


----------



## helloimspoon

Hey all, not sure if this is where I should post.

I have a R5 1600 + Vega 56 water cooled setup where I used an EK Fluid Gaming A240R kit. I have noticed my 'hot spot' temperature is usually around 20-25C higher than my core temperature during gaming. e.g. 63c core, 86c hot spot

I have read here and elsewhere of users with the same issue who tried remounting their cooler/waterblock; some reported success and others didn't. Is this a common thing for the Vega GPU? I've tried remounting myself but still have the same issue.

My plan is to buy a 120mm expansion rad to hopefully bring down my CPU and GPU temps (core and HBM mainly). I don't think this will fix the hot spot issue, however.
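If you do try another remount, logging a few core/hot-spot pairs under the same load and comparing the average delta before and after is more telling than eyeballing single readings. A trivial sketch — the sample numbers below are made up for illustration, not measured data:

```python
# Compare hot-spot-to-core temperature deltas across cooler remounts.
# Sample readings are hypothetical (degrees C), not measured data.

def mean_delta(samples):
    """samples: list of (core_c, hotspot_c) pairs logged under load."""
    return sum(hot - core for core, hot in samples) / len(samples)

before = [(63, 86), (65, 89), (64, 88)]   # pre-remount
after  = [(62, 78), (63, 80), (64, 81)]   # post-remount (hypothetical)

print(f"before: {mean_delta(before):.1f} C, after: {mean_delta(after):.1f} C")
```

If the average delta barely moves between mounts, the gap is probably inherent to the die/paste spread rather than a mounting problem.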


----------



## geoxile

Quote:


> Originally Posted by *helloimspoon*
> 
> Hey all, not sure if this is where I should post.
> 
> I have a R5 1600 + Vega 56 water cooled setup where I used an EK Fluid Gaming A240R kit. I have noticed my 'hot spot' temperature is usually around 20-25C higher than my core temperature during gaming. e.g. 63c core, 86c hot spot
> 
> I have read here and other places where some users have the same issue and have tried remounting their cooler / waterblock where some have reported successful results and others failed results. Is this a common thing for the Vega GPU? I've tried remounting myself but still have the same issue.
> 
> My plan is to buy a 120mm expansion rad to hopefully bring down my cpu temps and gpu temps (core, hbm mainly) since I don't think it will 'fix' the hot spot issue.


With my reference cooler and power limit +50% the hotspot hits over 100C while the core hovers around 75-80C. So seems normal.

By the way, how do you drain that thing? Considered getting a kit but apparently the kits don't come with fixings for a drain.


----------



## helloimspoon

Quote:


> Originally Posted by *geoxile*
> 
> By the way, how do you drain that thing? Considered getting a kit but apparently the kits don't come with fixings for a drain.


I disconnect the tubes connected to the pump and drain as much as I can into a bucket/container, then blow into one of the tubes to get the rest of the liquid out. Yes, that sounds weird, but it's in the manual lol.


----------



## geoxile

Quote:


> Originally Posted by *helloimspoon*
> 
> I disconnect the tubes connected to the pump and drain as much as I can into a bucket/container before blowing into one of the tubes to get the rest of the liquid out. Yes, that sounds weird, but it's in the manual lol.


Wow, talk about ghetto. Man, would it kill those guys to just put a valve and some extra connectors in there?


----------



## fallrisk

Well, I finally got my Strix Vega 56 in. Seems like it has the same current/voltage issues as the AMD blowers, limiting HBM overclocking. Anyone else have this card yet?


----------



## rv8000

Anyone here play Warframe and still on 17.12.1 or 17.12.2? I don't know if it's just me, but my PC has been bone stock for some time and this game has been crashing randomly almost every gaming session now (1-2 times in an hour or two). Anyone with similar issues?


----------



## cmogle4

I got an Alphacool Eisbaer 420 for my Ryzen 7, and I also picked up the Alphacool Eiswolf GPU cooler (minus the radiator) and added it into the Eisbaer loop. Vega FE temps never go over 42°C now.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *Mumak*
> 
> Guys, can you please check if there's still a problem when accessing VRM sensors on RX Vega using the latest v17.12.2 drivers ?
> AMD seems to think they improved something in those drivers..
> Note that if you use HWiNFO v5.70, you will need to enable the "GPU I2C Support Force" option to access the VRMs.


I'm on 18.1.1 - no problems yet...



But I've got another one that I had forgotten to report:

I'm running a Corsair Commander Pro here, and every time I start HWiNFO I must close the Link software and restart it in admin mode.


----------



## ducegt

I need Corsair Link to keep the AIO pump at full speed, and I'm not sure if it's the culprit, but I tried the latest HWiNFO64 beta and 3DMark froze before running the test. Afterburner seems to play nice with the new 18.1 alpha driver on my 64 LC, and I also see the VRMs with the stock BIOS as opposed to only with 8774.


----------



## Mumak

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> I'm on 18.1.1 - no problems yet...


Thanks for the feedback. Please keep me updated if it will remain stable or crash.
Quote:


> Originally Posted by *BeetleatWar1977*
> 
> But I've got another one that I had forgotten to report:
> 
> I'm running a Corsair Commander Pro here, and every time I start HWiNFO I must close the Link software and restart it in admin mode.


Oh, Corsair... Their CorsairLink software cannot work alongside any other monitoring tool without risking broken communication. There's a simple way to synchronize with other software, used by HWiNFO and several other tools (which play nice with Corsair devices), except Corsair itself. We proposed that they implement this simple solution. They started work on it but didn't implement it properly; instead of adding just a few lines of code to their software, they wrote documents and specifications about it. Their initial implementation was buggy and caused even more trouble, even though it would have been easy to fix with just a few lines of code. Well, after several months they finally decided to remove this synchronization completely... Why? Ehm...
It's off-topic; I just wanted to explain this so users know a bit of the background on how some companies work...
Anyway, the next HWiNFO Beta build will include a switch which will disable access to Corsair devices.
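The synchronization described above is conceptually simple: every tool takes a shared named lock before touching the hardware bus and releases it afterward, so transactions never interleave. On Windows the real implementations use a named global mutex; here is a minimal cross-platform sketch of the same idea using an exclusive lock file (the lock name is made up for illustration):

```python
import os
import tempfile
import time

# Hypothetical lock name; real monitoring tools agree on a shared global mutex name.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "hw_bus.lock")

def acquire_bus_lock(timeout=2.0):
    """Create the lock file exclusively; retry until we own it or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # O_CREAT | O_EXCL makes creation atomic: only one process can win.
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            time.sleep(0.01)  # another tool is on the bus; wait politely
    return False

def release_bus_lock():
    """Drop the lock so other monitoring tools can talk to the hardware."""
    try:
        os.remove(LOCK_PATH)
    except FileNotFoundError:
        pass

release_bus_lock()  # clear any stale lock from a crashed run
if acquire_bus_lock():
    # ... perform the I2C/SMBus transaction here ...
    release_bus_lock()
```

A tool that skips this handshake can interleave its bus traffic with everyone else's, which is exactly the kind of broken communication described above.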


----------



## ITAngel

Was it stupid to turn down $750 for my Vega 64? lol


----------



## Grummpy




----------



## SpecChum

I paid £455 for my Vega 64









I'm very happy with it, it's going nowhere...


----------



## ITAngel

Yeah, I paid $465 for mine, and it's hard to even consider selling it now that it has an EK block and is doing great in a loop. But if the price were right I could do it; then again, I'd worry about trying to get a decent card that isn't an overpriced NVIDIA card. lol


----------



## 99belle99

I wish I had picked up one of these cards when they were reasonably priced. I was happy with my Fury X so I never bothered; then for some strange reason I sold the Fury X, and now I'm stuck in a hard place waiting for stock at a reasonable price.


----------



## ducegt

@Mumak Still crashing on my LC. I turned off all corsair/afterburner software and Rise of the Tomb Raider instantly crashed once the game was loaded and rendering.

18.1.1 is an improvement for me. I was getting frame-rate dips every so often with the latest 2017 release.


----------



## By-Tor

Quote:


> Originally Posted by *ITAngel*
> 
> Yeah, I paid $465 for mine, and it's hard to even consider selling it now that it has an EK block and is doing great in a loop. But if the price were right I could do it; then again, I'd worry about trying to get a decent card that isn't an overpriced NVIDIA card. lol


I picked up the same card for the same price, with an EK block on it also. Not thinking about selling it or going to the dark side...


----------



## geoxile

Got a 1080TI coming wednesday lol. Then it's sayonara Vega 56 as soon as I find a buyer. My stint with AMD was short at a mere 5 years.


----------



## ITAngel

Quote:


> Originally Posted by *geoxile*
> 
> Got a 1080TI coming wednesday lol. Then it's sayonara Vega 56 as soon as I find a buyer. My stint with AMD was short at a mere 5 years.


I am sorry you had a bad experience. The Vega 64 is a nice card and can go head to head with the 1080 Ti. I think the 1080 Ti is only 15% faster than the Vega 64, but as NVIDIA replaces it soon, the Vega 64 will get better over time, lasting you longer than most NVIDIA cards. I think the rule of thumb for NVIDIA is that once a new card is out, they consider the previous card no longer supported. I could be wrong, but I think that is how NVIDIA plays the game. Someone can correct me if I am wrong.

Anyways, good luck; the 1080 Ti looks like a nice card, and the only ones I like are the EVGA Kingpin edition or the Sea Hawk edition for water cooling.







However, that new MSI card with the LED and 3 fans also looks pretty neat.


----------



## hyp36rmax

I was gonna pick up two VEGA 64s at launch, but it was a pain to secure one at MSRP. I later picked up two EVGA GTX 1080 Tis for SLI, plus water blocks, for my main setup. Best decision ever. Later I had an opportunity to pick up a VEGA 64 at MSRP, at Newegg no less, and another at Microcenter, with EK water blocks. I'm actually impressed with the CrossFire performance at 4K in supporting games. I have them on my test bench at the moment, thinking about mounting them into a build I'm working on right now.


----------



## SavantStrike

So I've got a pair of Vega 64s with full-cover blocks on them, and I cannot mine on the blockchain driver unless I set the power limit to -50 percent. The cards ramp to full on the tach and appear to boost themselves to death.

If I use the newest drivers they work, and I can set power to +30 with no issues, except that at +30 the HBM barely even hits 800MHz.

Has anyone experienced this overboost issue, and if so, how did you fix it?


----------



## geoxile

Quote:


> Originally Posted by *ITAngel*
> 
> I am sorry you had a bad experience. The Vega 64 is a nice card and can go head to head with the 1080 Ti. I think the 1080 Ti is only 15% faster than the Vega 64, but as NVIDIA replaces it soon, the Vega 64 will get better over time, lasting you longer than most NVIDIA cards. I think the rule of thumb for NVIDIA is that once a new card is out, they consider the previous card no longer supported. I could be wrong, but I think that is how NVIDIA plays the game. Someone can correct me if I am wrong.
> 
> Anyways, good luck; the 1080 Ti looks like a nice card, and the only ones I like are the EVGA Kingpin edition or the Sea Hawk edition for water cooling.
> 
> However, that new MSI card with the LED and 3 fans also looks pretty neat.


It wasn't bad, but I'm kinda tired of AMD lagging behind on drivers; their OpenGL drivers on Windows are terrible for emulators. The Vega 56 isn't bad, but it's a reference card and it's loud when gaming. The only reason I switched in the first place was that AMD allowed the use of a program to lock color profiles into the GPU's LUT. But I stopped using it for games anyway, and it only worked with one GPU, so I had to return one of the two HD 7950s I originally bought.


----------



## ducegt

The 1080 Ti is closer to 30% faster, and that's not even when it's OCed. A used 64 LC sold on eBay for $890 tonight. That's $390 more than I paid for mine, but eBay's fee plus shipping is something like $120. If 18.1.1 hadn't fixed the bug I had, I was ready to sell it. CES is next week, and I've been hoping for news about HDMI 2.1 and VRR, but rumor is it won't be coming this year. I don't have a 4K anything yet and may settle for a large true 120Hz TV; a 1080 Ti would be a better match. Hopefully CES brings good news. FreeSync kept me with AMD, and a FreeSync TV or VRR support is what I hope for.


----------



## MediocreKiller

Quote:


> Originally Posted by *SavantStrike*
> 
> So I've got a pair of Vega 64s with full-cover blocks on them, and I cannot mine on the blockchain driver unless I set the power limit to -50 percent. The cards ramp to full on the tach and appear to boost themselves to death.
> 
> If I use the newest drivers they work, and I can set power to +30 with no issues, except that at +30 the HBM barely even hits 800MHz.
> 
> Has anyone experienced this overboost issue, and if so, how did you fix it?


I have the same issue while mining; even in some games or benchmarks it makes itself go to a ~1790 MHz core clock and crashes. It's stable and fine at 1700 MHz, which is what I have set in WattMan. For example, in Fire Strike Extreme it sits fine at a constant 1700 MHz core clock in the first 2 graphics tests, but in the combined test it goes to 1795 and crashes. Any help?


----------



## Razkin

Quote:


> Originally Posted by *SavantStrike*
> 
> So I've got a pair of Vega 64s with full-cover blocks on them, and I cannot mine on the blockchain driver unless I set the power limit to -50 percent. The cards ramp to full on the tach and appear to boost themselves to death.
> 
> If I use the newest drivers they work, and I can set power to +30 with no issues, except that at +30 the HBM barely even hits 800MHz.
> 
> Has anyone experienced this overboost issue, and if so, how did you fix it?


You'll need to control your cards' clocks and power with PowerPlay mod tables. The blockchain driver doesn't play nice with clock changes the way the normal driver does.


----------



## RatusNatus

Hello,

I have 6 Sapphire Vega 64s. I'm mining using the blockchain driver and have never used another one.
They are all the same, with similar serial numbers etc., except that one of them is identified as gfx900 instead of gfx901.
The gfx901 cards mine at 2050, the gfx900 at 1950.
The gfx900 also needs more juice, and it's the only card whose HBM clock drops sometimes.

I think there is some optimization on the miner (xmr-stak) side not being applied to the gfx900.

Does anyone know anything about this?


----------



## Kyozon

Quote:


> Originally Posted by *RatusNatus*
> 
> Hello,
> 
> I have 6 Sapphire Vega 64s. I'm mining using the blockchain driver and have never used another one.
> They are all the same, with similar serial numbers etc., except that one of them is identified as gfx900 instead of gfx901.
> The gfx901 cards mine at 2050, the gfx900 at 1950.
> The gfx900 also needs more juice, and it's the only card whose HBM clock drops sometimes.
> 
> I think there is some optimization on the miner (xmr-stak) side not being applied to the gfx900.
> 
> Does anyone know anything about this?


I think gfx901 is the Vega 10 identity once HBCC is enabled.


----------



## ITAngel

Are the Radeon Vega Frontier Edition 16GB cards any good compared to the 64? I do like the idea of 16GB of memory.


----------



## By-Tor

I tried the on-screen performance monitoring in the new drivers, and though it works well, at 1440p the text is very small and really hard to read.

Has anyone found a way to enlarge the text in the monitoring box?


----------



## diggiddi

Quote:


> Originally Posted by *ITAngel*
> 
> Are the Radeon Vega Frontier Edition 16GB cards any good over the 64? I do like the idea of 16GB memory.


It seems the 64 has an edge in gaming through drivers, at least.
Also, doesn't HBCC make the VRAM size less important?


----------



## ITAngel

Quote:


> Originally Posted by *diggiddi*
> 
> It seems the 64 has an edge in gaming through drivers, at least.
> Also, doesn't HBCC make the VRAM size less important?


The only reason I ask is that I came into one for a good price and decided to grab it since it has more memory. I have seen modded games like Skyrim and other heavily modded games use way past 8GB of memory. It was for sure cheaper than going 1080 Ti at the moment: I would have been paying way past $800 for a GTX 1080 Ti, whereas I grabbed this GPU for a lot less with 16GB of memory.









I plan to play Skyrim soon and I know for a fact I am going to mod the living hell out of that game. Muahahaha!









Plus I have a Threadripper which makes it feel like they belong together.









[Questions]
1. I already have the EK block for it, but I need new thermal pads and thermal paste. What do you guys recommend?

2. Can you flash the BIOS of this card to the LC version? If so, is it the LC 64 version or the LC FE version?


----------



## Grummpy

This stupid hot spot.
I've already destroyed a Vega 64 card because of it, and now look at it.
Just nonsense if you ask me.


----------



## ITAngel

Never mind, I answered my own question that I asked Grummpy about the hot spot. lol

On a side note:

Does anyone have the Vega FE LC bios that they can link up for me?

Thanks!

I did an order for the following items to put on my EK block and the Vega FE card.

1. Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK by mod/smart

2. Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad - 100 x 15 x 1.0 - Thermal Conductivity 17.0 W/mK by mod/smart

3. ARCTIC MX-4 Thermal Compound Paste, Carbon Based High Performance, Heatsink Paste, Thermal Compound CPU for All Coolers, Thermal Interface Material - 4 Grams by ARCTIC


----------



## geoxile

Quote:


> Originally Posted by *Grummpy*
> 
> This stupid hot spot.
> I've already destroyed a Vega 64 card because of it, and now look at it.
> Just nonsense if you ask me.


Don't worry about it.


----------



## Grummpy

..


----------



## webhito

Quote:


> Originally Posted by *Grummpy*
> 
> This stupid hot spot.
> I've already destroyed a Vega 64 card because of it, and now look at it.
> Just nonsense if you ask me.


That 30c difference is just nasty.


----------



## VicsPC

Quote:


> Originally Posted by *webhito*
> 
> That 30c difference is just nasty.


I'd only worry about the hotspot if it reaches 100°C; my VRMs even on water reach like 60°C, so a hotspot of 79°C isn't really that bad. It's hot, but not a worry; my guess is you have some poor case airflow.


----------



## geoxile

Quote:


> Originally Posted by *VicsPC*
> 
> I'd only worry about the hotspot if it reaches 100°C; my VRMs even on water reach like 60°C, so a hotspot of 79°C isn't really that bad. It's hot, but not a worry; my guess is you have some poor case airflow.


Trust me, it routinely reaches over 100°C even on stock settings on my Vega 56.


----------



## VicsPC

Quote:


> Originally Posted by *geoxile*
> 
> Trust me, it routinely reaches over 100°C even on stock settings on my Vega 56.


Mine on air would only reach 95°C or so, and that was with stock settings on a Vega 64. Now it peaks at around 54°C on water, but then again my case airflow for the GPU is pretty ridiculous: three 140mm fans right on top of it (it's mounted vertically).


----------



## Grummpy

This is the LC Gigabyte top-of-the-range Vega 64 GPU using their water cooling.
I think this hot spot reading is utter nonsense.
It doesn't worry me in the slightest.
You can add any reading to a set resistance.
It's to be ignored; it's an NVIDIA app in any case, so it can't be trusted.


----------



## Grummpy

Why don't they do this? It would be neat, I think.
Get a case where the case side is a huge passive heatsink.
Put the socket on the other side of the PCB, so the act of fitting the board into the case is the same as fitting the cooler.


----------



## VicsPC

Quote:


> Originally Posted by *Grummpy*
> 
> This is the LC Gigabyte top-of-the-range Vega 64 GPU using their water cooling.
> I think this hot spot reading is utter nonsense.
> It doesn't worry me in the slightest.
> You can add any reading to a set resistance.
> It's to be ignored; it's an NVIDIA app in any case, so it can't be trusted.


Yeah, I don't trust the hotspot reading; nothing else in my PC gets to 100°C, so I'm not concerned. Apparently it's safe up to 125°C.


----------



## webhito

Quote:


> Originally Posted by *Grummpy*
> 
> Why don't they do this? It would be neat, I think.
> Get a case where the case side is a huge passive heatsink.
> Put the socket on the other side of the PCB, so the act of fitting the board into the case is the same as fitting the cooler.


Thermal paste would have to be involved somehow, and it would probably end up all gooey and nasty.

Maybe thermal pads?


----------



## By-Tor

In games my GPU stays at around 28-30°C and the hot spot hits 46-48°C on water.


----------



## Grummpy

Quote:


> Originally Posted by *webhito*
> 
> Thermal paste would have to be involved somehow, and it would probably end up all gooey and nasty.
> 
> Maybe thermalpads?


Well, it would be fitted the same way a CPU cooler would be, because the case side is the cooler.
Why waste that side of the board when it can be used to fit a passive cooler?
It would need a board and case manufacturer to work together.
Putting the socket on the other side can be done easily.
Use the same amount of paste, a grain-of-rice size.


----------



## geoxile

Edit: misread


----------



## Grummpy

I need to work on my drawing quality. lmao
The cooler for the CPU is the side of the case; it's external, not internal.

Hope it makes sense.
I would love a PC to be built this way; it would make for silent running on the CPU.


----------



## cjc75

What's going on with Vega?
Are they ever going to be available again?
Every time I go to my local Microcenter or Fry's, they never have any, and whenever I ask if they're getting more, they either tell me they do not know, or that they're not.
Newegg never has any either.


----------



## ITAngel

Quote:


> Originally Posted by *cjc75*
> 
> What's going on with Vega?
> Are they ever going to be available again?
> Every time I go to my local Microcenter or Fry's, they never have any, and whenever I ask if they're getting more, they either tell me they do not know, or that they're not.
> Newegg never has any either.


Which Vega version are you looking for? I purchased a Vega FE today, but if you are looking for the Vega 56/64, those have so far been hard to find. I sold mine recently for a lot.


----------



## cjc75

Quote:


> Originally Posted by *ITAngel*
> 
> Which Vega version are you looking for? I purchased a Vega FE today, but if you are looking for the Vega 56/64, those have so far been hard to find. I sold mine recently for a lot.


I can't find any of them, to be honest... but yeah, I was hoping to pick up a 56 within the next month or two.
My GTX 770 FTW 4GB is still decent, but it's starting to show its age, especially when trying to play some of my games at 1440p.
However, newer NVIDIA cards are so overpriced right now, and I'm interested in getting a FreeSync monitor at some point.


----------



## ITAngel

Quote:


> Originally Posted by *cjc75*
> 
> I can't find any of them, to be honest... but yeah, I was hoping to pick up a 56 within the next month or two.
> My GTX 770 FTW 4GB is still decent, but it's starting to show its age, especially when trying to play some of my games at 1440p.
> However, newer NVIDIA cards are so overpriced right now, and I'm interested in getting a FreeSync monitor at some point.


I see; that is the reason why I got another Vega, but this time an FE version, mainly because I also have a Threadripper machine for video editing, audio production, etc., while still being able to play my games. Glad you decided not to go the NVIDIA route this time, since they are way overpriced, plus you get 1-2 years out of them before you have to upgrade again just to keep up with NVIDIA. lol I hated it when I first got my GTX 1070 and 6 months later they started to talk about the replacement cards. Anyway, that's why I've been trying to stick with AMD this time around for my desktop/workstation plus gaming needs.


----------



## cjc75

Well, it all depends on whether I can ever find any AMD cards...
I can't even find the RX 500 series anywhere; Microcenter doesn't have them anymore either and can't tell me when they'll be available again.

Truth is, I wanted to replace my GTX 770 around this time last year, but everyone said wait for Vega, so I waited... Vega was released but cost too much, and the RX 580 looked too much like a sidegrade at the time to be worth it.

Now, a year later, drivers are better and the RX 580 is showing better performance, but both Microcenter and Newegg are pricing them in the $500+ range and don't have any available... plus, at that price range, the RX Vega 56 is the better option.

It's either that or a GTX 1070, which, even though it's still overpriced, is cheaper than ALL of them... and I don't care if NVIDIA says to upgrade every 1-2 years; I run my video cards for a lot longer than that, regardless of when the manufacturer claims they're obsolete.

I had a GTX 275 that I bought when it hit the store shelves, and I only just replaced it with a Radeon HD 6950 last year... that in turn was replaced by my GTX 770 a few years back. The 6950 was originally in my main rig, but when I replaced it, it moved to my girlfriend's PC; then she bought a GTX 960 last year, so I took the 6950 back and used it to replace the GTX 275 in my backup PC.


----------



## gupsterg

Quote:


> Originally Posted by *Grummpy*
> 
> This is the LC Gigabyte top-of-the-range Vega 64 GPU using their water cooling.
> I think this hot spot reading is utter nonsense.
> It doesn't worry me in the slightest.
> You can add any reading to a set resistance.
> It's to be ignored; it's an NVIDIA app in any case, so it can't be trusted.


Which app is nvidia?

If you mean GPU-Z, then you are wrong, as you clearly don't know what W1zzard does to tailor it.


----------



## ITAngel

Quote:


> Originally Posted by *cjc75*
> 
> Well, it all depends on whether I can ever find any AMD cards...
> I can't even find the RX 500 series anywhere; Microcenter doesn't have them anymore either and can't tell me when they'll be available again.
> 
> Truth is, I wanted to replace my GTX 770 around this time last year, but everyone said wait for Vega, so I waited... Vega was released but cost too much, and the RX 580 looked too much like a sidegrade at the time to be worth it.
> 
> Now, a year later, drivers are better and the RX 580 is showing better performance, but both Microcenter and Newegg are pricing them in the $500+ range and don't have any available... plus, at that price range, the RX Vega 56 is the better option.
> 
> It's either that or a GTX 1070, which, even though it's still overpriced, is cheaper than ALL of them... and I don't care if NVIDIA says to upgrade every 1-2 years; I run my video cards for a lot longer than that, regardless of when the manufacturer claims they're obsolete.
> 
> I had a GTX 275 that I bought when it hit the store shelves, and I only just replaced it with a Radeon HD 6950 last year... that in turn was replaced by my GTX 770 a few years back. The 6950 was originally in my main rig, but when I replaced it, it moved to my girlfriend's PC; then she bought a GTX 960 last year, so I took the 6950 back and used it to replace the GTX 275 in my backup PC.


Not a bad deal, my friend; keep up that routine, it seems to be working well enough for you. There were only two GTX 1070s I ever wanted, and I did own both before redoing my entire setup: the ZOTAC GTX 1070 AMP Extreme and the GALAX GTX 1070 HOF. After that I wanted an EVGA GTX 1080 Ti Kingpin or the GALAX GTX 1080 Ti HOF OC Lab Edition, but they were either sold out or way too expensive, which is why I then grabbed the AMD Radeon Vega Frontier Edition 16GB 2048-bit HBM2 video card, which was cheaper than what I sold my AMD Vega 64 Air for.

I still made some side money to buy better thermal paste and better thermal pads to install my EK block on it. For now I will hold onto it, unless mining continues to be crazy, the GPU market has nothing left, and people are willing to pay a lot more for the Vega FE.









Either way I will keep it unless the price is right.


----------



## Grummpy

Quote:


> Originally Posted by *gupsterg*
> 
> Which app is nvidia?
> 
> If you mean GPU-Z then you are wrong as clearly don't know what W1zzard does to tailor it.


my mistake


----------



## gupsterg

@Grummpy

NP







.

@Mumak

Disregard my reply in PM regarding your request about my experience with i2c enabled on the GPU. I cleared the i2c cache and am using forced i2c; so far all good. In this ZIP is ~2hrs of testing; sort the files by timestamp. Initial testing was without closing HWiNFO, but zeroing it before another test. Secondary testing was with closing/reopening HWiNFO between tests. Tertiary was a system repost between each test; as Windows Fast Startup is disabled, that means a fresh kernel, etc. Windows 10 Pro x64 FCU with all current updates, AMD Chipset v17.40, AMD GPU v17.12.2.

VRM temp values via non-i2c access are still hit and miss; in my HWiNFO layout they are configured to appear in the space below the HBM temp.

I will carry on with other tests and some gaming and update how it goes. Still on the v17.12.2 driver; I will do the same on v18.1.1.
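For anyone wanting to triage a batch of HWiNFO CSV logs like these, a short Python sketch that sorts the files by timestamp and prints each file's peak reading for one sensor column. The column header below is a guess for illustration; match it to whatever your log's header actually says:

```python
import csv
import glob
import os

def max_sensor(path, column="GPU VRM Temperature [°C]"):  # header name is a guess
    """Return the highest reading of one sensor column in a HWiNFO CSV log."""
    peak = None
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            try:
                value = float(row[column])
            except (KeyError, TypeError, ValueError):
                continue  # missing column or non-numeric cell; skip the row
            peak = value if peak is None else max(peak, value)
    return peak

# Oldest log first, matching the sort-by-timestamp order of the ZIP contents.
for log in sorted(glob.glob("*.CSV"), key=os.path.getmtime):
    print(log, max_sensor(log))
```

Comparing peaks across the initial/secondary/tertiary runs makes it easy to spot which test configuration produced the outliers.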







.

All the best
Gup


----------



## Mumak

Quote:


> Originally Posted by *gupsterg*
> 
> @Mumak
> 
> Disregard my reply in PM regarding your request about my experience with i2c enabled on the GPU. I cleared the i2c cache and am using forced i2c; so far all good. In this ZIP is ~2hrs of testing; sort the files by timestamp. Initial testing was without closing HWiNFO, but zeroing it before another test. Secondary testing was with closing/reopening HWiNFO between tests. Tertiary was a system repost between each test; as Windows Fast Startup is disabled, that means a fresh kernel, etc. Windows 10 Pro x64 FCU with all current updates, AMD Chipset v17.40, AMD GPU v17.12.2.
> 
> VRM temp values via non-i2c access are still hit and miss; in my HWiNFO layout they are configured to appear in the space below the HBM temp.
> 
> I will carry on with other tests and some gaming and update how it goes. Still on the v17.12.2 driver; I will do the same on v18.1.1.
> 
> 
> 
> 
> 
> 
> 
> .
> 
> All the best
> Gup


Thanks for the results! It looks like AMD indeed improved something to stabilize access to the VRMs in the latest drivers, but I will wait for more results to have this confirmed...
In the past (with older drivers) you had problems with VRM access, like system crashes, right?


----------



## gupsterg

NP







.

Here is more info. Same configuration.

Encountered 3 GPU black screens (i.e. the GPU disconnected from the system, with no tach LEDs) while using HWiNFO with forced i2c for the GPU. This occurred at varying stages. The first was right as HWiNFO was about to open the sensors window. The second time the sensors window did open, but as soon as a 3D load was applied I had a black screen. Later, without any changes to the system, I was able to run a 3D load, the 3DMark FS demo looped fullscreen.



3DM_FS_i2c_CSV.zip 184k .zip file


Today will be dedicated to testing as you/we need







.

On older drivers I had the same experience as on v17.12.2. I could have no issues and then encounter them back to back, or rarely and intermittently; it's not limited to a usage case, but occurs only when i2c is being used. I have logs/screenshots from the past several days where VRM temps appeared via non-i2c access and I had no issues, but the sensors appearing via this method is totally pot luck. They could be there when you launch HWiNFO, or appear later, or you could run the rig ~24hr+ continuously with monitoring and not see them at all.

*** edit ***

Without a repost, I reopened HWiNFO and successfully ran another loop of the 3DMark FS demo fullscreen.



3DM_FS_i2c_CSV_2.zip 167k .zip file


----------



## SpecChum

Actually, while @Mumak is looking at this thread (and it kinda relates to Vega, as my pump and fans are cooling my Vega lol):

I had an issue over the weekend where using the beta AISuite III and HWiNFO together seems to prevent the PWM from changing speed (my pump got locked at 800RPM), while if I don't open HWiNFO the fans change speed as normal. Everything seems fine with just Afterburner open.

My motherboard is an ASUS C6H.

Bear in mind this is far from scientific; I only really started testing last night, and it was pushing 1am and I had to go to bed as I was up early for work. Also bear in mind that this version of AISuite is currently a BETA.

Just wondered if anyone had noticed anything similar?

Regardless of the cause (it possibly isn't even HWiNFO), it's still by far my favourite monitoring app.


----------



## Mumak

Quote:


> Originally Posted by *gupsterg*
> 
> NP
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Here is more info. Same configuration.
> 
> Encountered 3 GPU black screens (i.e. the GPU disconnected from the system, with no tach LEDs) while using HWiNFO with forced i2c for the GPU. This occurred at varying stages. The first was right as HWiNFO was about to open the sensors window. The second time the sensors window did open, but as soon as a 3D load was applied I had a black screen. Later, without any changes to the system, I was able to run a 3D load, the 3DMark FS demo looped fullscreen.
> 
> 
> 
> 3DM_FS_i2c_CSV.zip 184k .zip file
> 
> 
> Today will be dedicated to testing as you/we need
> 
> 
> 
> 
> 
> 
> 
> .
> 
> On older drivers I had the same experience as on v17.12.2. I could have no issues and then encounter them back to back, or rarely and intermittently; it's not limited to a usage case, but occurs only when i2c is being used. I have logs/screenshots from the past several days where VRM temps appeared via non-i2c access and I had no issues, but the sensors appearing via this method is totally pot luck. They could be there when you launch HWiNFO, or appear later, or you could run the rig ~24hr+ continuously with monitoring and not see them at all.
> 
> *** edit ***
> 
> Without repost, reopened HWiNFO another run of 3DM FS demo looped fullscreen successfully.
> 
> 
> 
> 3DM_FS_i2c_CSV_2.zip 167k .zip file


Oh, so it's still crashing even with new drivers. That's what I needed to know. Will submit to AMD and check with them what can be done...
Thanks for the test, I think there's nothing more needed to test here for now.


----------



## Mumak

Quote:


> Originally Posted by *SpecChum*
> 
> Actually, while @Mumak is looking at this thread (and it kinda relates to Vega, as my pump and fans are cooling my Vega lol):
> 
> I had an issue over the weekend where using the beta AISuite III and HWiNFO together seems to prevent the PWM from changing speed (my pump got locked at 800RPM), while if I don't open HWiNFO the fans change speed as normal. Everything seems fine with just Afterburner open.
> 
> My motherboard is an ASUS C6H.
> 
> Bear in mind this is far from scientific; I only really started testing last night, and it was pushing 1am and I had to go to bed as I was up early for work. Also bear in mind that this version of AISuite is currently a BETA.
> 
> Just wondered if anyone had noticed anything similar?
> 
> Regardless of the cause (it possibly isn't even HWiNFO), it's still by far my favourite monitoring app.


Well, ehm, AISuite....







How to say it nicely... this software is, well... a bit odd...








If you really, REALLY need to use it, then you should at least avoid running it alongside any other monitoring tool. AISuite doesn't support synchronization with such tools (yet) and can cause problems in that case.


----------



## SpecChum

Quote:


> Originally Posted by *Mumak*
> 
> Well, ehm, AISuite....
> 
> 
> 
> 
> 
> 
> 
> How to say it nice... this software is, well... a bit odd...
> 
> 
> 
> 
> 
> 
> 
> 
> If you really, REALLY need to use it, then you should at least avoid running it with any other monitoring tool. AISuite doesn't support synchronization with such tools (yet) and can cause problems in this case.


Completely agree

It's complete pants, but it's the only way I can have more than 1 fan profile, so I can switch between silence and performance when needed.

I might just set an "in-between" profile in BIOS and uninstall this abomination.

Thanks for the reply tho


----------



## gupsterg

Quote:


> Originally Posted by *Mumak*
> 
> Oh, so it's still crashing even with new drivers. That's what I needed to know. Will submit to AMD and check with them what can be done...
> Thanks for the test, I think there's nothing more needed to test here for now.


NP; no, thank you for taking your time out for us to highlight the issue to AMD.

No repost, reopened HWiNFO only, AMD Chipset v17.40, AMD GPU v17.12.2, Wolfenstein 2 ~30min no issues, etc.



Wolf2_i2c_CSV.zip 283k .zip file


*** edit ***

No repost, reopened HWiNFO, same config as before, DOOM ~30min no issues, etc.



DOOM_i2c_CSV.zip 286k .zip file


*** edit ***

Did some SWBF, idle testing of rig with HWiNFO open and 1hr of [email protected], still OK.


----------



## Smitty2k1

I'm itching to replace my Vega56 blower with the Morpheus II cooler as some other users have done here. Question to the group:

1) Is there any reason not to just use the thermal compound that comes with the Morpheus II? I'm talking for the VGA chip itself, not the thermal pads for the VRM heatsinks, etc.
2) If the answer to #1 is yes and if I'm just looking to go the 'easy' route and not get the absolute best performance, what thermal compound should I buy?
3) Do I need anything special to remove the old cooler/thermal compound besides alcohol? (Such as a heat gun). Do I need to setup anything special in the BIOS to get fans to run correctly? I plan on using a VGA-PWM adapter to a PWM splitter to x2 Noctua 120x15mm (slim) fans in my Ncase M1.

Thanks


----------



## webhito

Quote:


> Originally Posted by *Smitty2k1*
> 
> I'm itching to replace my Vega56 blower with the Morpheus II cooler as some other users have done here. Question to the group:
> 
> 1) Is there any reason not to just use the thermal compound that comes with the Morpheus II? I'm talking for the VGA chip itself, not the thermal pads for the VRM heatsinks, etc.
> 2) If the answer to #1 is yes and if I'm just looking to go the 'easy' route and not get the absolute best performance, what thermal compound should I buy?
> 3) Do I need anything special to remove the old cooler/thermal compound besides alcohol? (Such as a heat gun). Do I need to setup anything special in the BIOS to get fans to run correctly? I plan on using a VGA-PWM adapter to a PWM splitter to x2 Noctua 120x15mm (slim) fans in my Ncase M1.
> 
> Thanks


Most of the time, the general consensus is that the goo that comes pre-applied is not of the highest quality or conductivity, so it's always suggested to change it if you want the best possible results. It will work, but you will probably drop a few degrees by using something better.

I clean the paste off with isopropyl alcohol and some toilet paper, afterwards I use a microfiber cloth to remove any leftover residue.

Not sure how the vga adapter works as I have never had one, but I would assume that afterburner or whatever software you use to give the card a manual profile should do the job.

Just checked an unboxing of the heatsink, nothing is pre-applied and I can't seem to find any info on it having paste included in the packaging.


----------



## Smitty2k1

Quote:


> Originally Posted by *webhito*
> 
> Most the time, the general consensus is that the goo that comes pre-applied is not of the highest quality nor conductivity, so its always suggested to change it if you want the best possible results. It will work, but you will probably drop a few degrees by using something better.
> 
> I clean the paste off with isopropyl alcohol and some toilet paper, afterwards I use a microfiber cloth to remove any leftover residue.
> 
> Not sure how the vga adapter works as I have never had one, but I would assume that afterburner or whatever software you use to give the card a manual profile should do the job.
> 
> Just checked an unboxing of the heatsink, nothing is pre-applied and I can't seem to find any info on it having paste included in the packaging.


Thanks for the tip. Any easy-to-apply paste you suggest? Preferably from Amazon prime!

The Morpheus II product page on the Raijintek website shows thermal paste (in addition to the thermal pads), but I haven't really seen it mentioned elsewhere.


----------



## webhito

Quote:


> Originally Posted by *Smitty2k1*
> 
> Thanks for the tip. Any easy-to-apply paste you suggest? Preferably from Amazon prime!
> 
> The Morpheus II product page on the Raijintek website shows thermal paste (in addition to the thermal pads), but I haven't really seen it mentioned elsewhere.


I always use mx4, amazon should have it in stock.

Yea, on the website it has one of those plastic packets with thermal grease, no wonder I missed it, was sure it would come in a small syringe.

Not sure how good it is though.


----------



## ITAngel

Quote:


> Originally Posted by *Smitty2k1*
> 
> Thanks for the tip. Any easy-to-apply paste you suggest? Preferably from Amazon prime!
> 
> The Morpheus II product page on the Raijintek website shows thermal paste (in addition to the thermal pads), but I haven't really seen it mentioned elsewhere.


Hi there,

I did an order from Amazon for the following items to put on my EK block on the Vega FE card.

1. *Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad* - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK by mod/smart

2. *Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad* - 100 x 15 x 1.0 - Thermal Conductivity 17.0 W/mK by mod/smart

3. *ARCTIC MX-4 Thermal Compound Paste*, Carbon Based High Performance, Heatsink Paste, Thermal Compound CPU for All Coolers, Thermal Interface Material - 4 Grams by ARCTIC

The entire order was about $47.97 US Dollars.


----------



## Smitty2k1

Quote:


> Originally Posted by *webhito*
> 
> I always use mx4, amazon should have it in stock.
> 
> Yea, on the website it has one of those plastic packets with thermal grease, no wonder I missed it, was sure it would come in a small syringe.
> 
> Not sure how good it is though.


Thanks! That looks reasonable.
Quote:


> Originally Posted by *ITAngel*
> 
> Hi there,
> 
> I did an order from Amazon for the following items to put on my EK block on the Vega FE card.
> 
> 1. *Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad* - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK by mod/smart
> 
> 2. *Fujipoly / mod/smart Ultra Extreme XR-m Thermal Pad* - 100 x 15 x 1.0 - Thermal Conductivity 17.0 W/mK by mod/smart
> 
> 3. *ARCTIC MX-4 Thermal Compound Paste*, Carbon Based High Performance, Heatsink Paste, Thermal Compound CPU for All Coolers, Thermal Interface Material - 4 Grams by ARCTIC
> 
> The entire order was about $47.97 US Dollars.


Thanks for the suggestions! The thermal pads are for the VRM heatsinks and the thermal compound is for the GPU/HBM package, right? Those thermal pads seem really pricey! I figured the VRM heatsinks would be significantly less important.


----------



## SpecChum

Grizzly Kryonaut is probably the best performing paste you can get outside of liquid metal.

However, as with pretty much anything in watercooling, the difference between that and, say, MX-4 will be minimal, perhaps 1 or 2C.

I just used the supplied EK-Ectotherm when I put my block on the Vega. I've even got some MX-4 here but really don't see the point.


----------



## VicsPC

Quote:


> Originally Posted by *SpecChum*
> 
> Grizzly Kryonaut is probably the best performing paste you can get outside of liquid metal.
> 
> However, as with pretty much anything in watercooling, the difference between that and, say, MX-4 will be minimal, perhaps 1 or 2C.
> 
> I just used the supplied EK-Ectotherm when I put my block on the Vega. I've even got some MX-4 here but really don't see the point.


I got mine for so cheap I bought a 5g tube of Kryonaut. EK's thermal paste is your basic Intel-quality thermal paste. Good for testing pressure, but I threw mine in the garbage when I got it.


----------



## SpecChum

Quote:


> Originally Posted by *VicsPC*
> 
> I got mine for so cheap i bought a 5g tube of Kryonaut. EKs thermal paste is your basic intel quality thermal paste. Good for testing pressure but i threw mine in the garbage when i got it.


Please stop spreading nonsense unless you can back it up; it's posts like this that cause confusion and myths. My GPU being about 4C above the water temp would suggest otherwise.

Ectotherm is about on par with MX-4.

https://www.kitguru.net/components/cooling/dominic-moass/thermal-paste-head-to-head-does-it-matter-which-brand-you-use/2/

Edit: To throw a qualifier into this, I can see some reviews from a few years ago where Ectotherm didn't seem to perform as well as it does now (that comparison above is from last year); I suspect the formula has changed since then.

My point still stands, saying it's "Intel quality paste" is just hyperbole.


----------



## gupsterg

Was just about to link that.

Using the pads that EK supplied with the block; only tried Thermal Grizzly Hydronaut as it was included with another block FOC, or else I would have just slapped on the usual AS5 or the included EK Ectotherm TBH. Frankly, for the minor differences I really can't see the point in spending on TIM beyond a few quid.

*** edit ***

@SpecChum

Here's one from 2yrs ago, differing site 2015 and then 2017 testing.


----------



## SpecChum

Yeah, it does fall a bit flat on its face on that one.

Even there tho, it's only 2C behind AS5 when overclocked.

I'm certainly not saying it's the best, far from it, but if you've got nothing else it's perfectly adequate.


----------



## gupsterg

Really can't justify spend on TIM for minor temp differences in the context of my uses. I'd rather put that £ towards improving the cooling solution.

I have read people advise getting a new tube when going for a build; TBH I never do. I have had AS5 for over 3yrs+ and not noted any separation. On my i5 4690K, which did 4.9GHz on air for 1.5yrs+, fully stability tested, I initially used a tube of AS5 from god knows when. As that chip was pretty golden for clocking I decided to splurge on a new tube of AS5 later on in ownership; did I see any difference in temp? None that made me go "WOW".

I went for el cheapo rads and fans for this build. Frankly luv'ing them, and in the past I have splurged on decent fans, even though I'd not done WC before.


----------



## ducegt

Variable Refresh Rate, a feature of the HDMI 2.1 spec, is confirmed for RX products. It's like FreeSync over HDMI for TVs.
Quote:


> AMD also announced that Radeon™ Software will add support for HDMI 2.1 Variable Refresh Rate (VRR) technology on Radeon™ RX products in an upcoming driver release.


http://www.amd.com/en-us/press-releases/Pages/ces-2018-2018jan07.aspx

Rumor is a Samsung executive has said their 2018 QLED TVs won't have full HDMI 2.1, but do have VRR, HDR10, and eARC, leaving out 8K and 4K at high refresh rates. Hopefully they can do 4K @ 120Hz, but even 60 with VRR would make for a good experience on a 55-65'' screen.


----------



## SavantStrike

Quote:


> Originally Posted by *gupsterg*
> 
> Really can't justify spend on TIM for minor temp differences for context of my uses. I'd rather put that £ towards improving cooling solution.
> 
> I have read people advise get a new tube when going for a build, TBH I never do. I have had AS5 for over 3yrs+ and not noted any separation. My i5 4690K, which did 4.9GHz on air for 1.5yrs+, fully stability tested (and have
> 
> 
> 
> ) I initially used a tube of AS5 from god knows when. Just as that chip was pretty golden for clocking I decided to splurge on a new tube of AS5 later on in ownership, did I see any difference in temp? none that made me go "WOW".
> 
> I went for el cheapo rads and fans for this build. Frankly luv'ing them and in the past have splurged on decent fans, even if not done WC before.


That's why I use nt-h1 exclusively. On the lower end of the price scale and within a degree of the best on the market in most tests. I was on AS5 until nt-h1, every other designer paste was too viscous and didn't spread well.

Fans are an area where I've always just bought industrial designs and called it a day. Every time I try something else it fails to impress me.


----------



## porschedrifter

Quote:


> Originally Posted by *gupsterg*
> 
> Nice to read you got a great price
> 
> 
> 
> 
> 
> 
> 
> . It's not the buyer protection which is an issue with ebay TBH it's seller protection. It becomes one sided very easily when you end up with bad buyer. I have only had it one time with something I could afford to lose, has left a real bad taste to ebaying after that.


Yeah, funny you mention that; I just went through a situation on eBay. I sold my previous Strix R9 390 to get the Vega, and about a month later I got a charge-back, with the buyer telling PayPal someone made an unauthorized purchase on their account.

I had the amount instantly frozen in my PP until it was resolved. They decided in the buyer's favor but still let me keep the money since I had proof of shipment.


----------



## gupsterg

Quote:


> Originally Posted by *SavantStrike*
> 
> That's why I use nt-h1 exclusively. On the lower end of the price scale and within a degree of the best on the market in most tests. I was on AS5 until nt-h1, every other designer paste was too viscous and didn't spread well.
> 
> Fans are an area where I've always just bought industrial designs and called it a day. Every time I try something else it fails to impress me.


Recently, when I did the Vega block, I was interested in NT-H1 and Be Quiet DC1, ~£3-4 IIRC. Then, as I had TGH sitting around, I thought I'd give that a whirl. Only gripe was that spreading the stuff was awful, even when I warmed the tube with a hairdryer. Happy with the temps; I used shares from here to form an opinion on that.

I went Arctic Cooling F12 PWM, cost £4. Initially wanted Phantek PH-F120MP, best price at the time IIRC was £10. I needed 6 fans for build, so quite a saving by going AC F12. I use my rig quite a bit, past few weeks it's been on 24/7. I have had the fans since ~Sep 17 and been "_sound as a pound_".
Quote:


> Originally Posted by *porschedrifter*
> 
> Yeah funny you mention that, I just went through a situation on ebay, I sold my previous strix r9 390 to get the vega, and about a month later I got a charge-back with the buyer telling paypal someone made an unauthorized purchase on their account.
> 
> 
> 
> 
> 
> 
> 
> I had the amount instantly frozen in my PP until it was resolved. They decided in the buyers favor but still let me keep the money since I had proof of shipment.


Sweet.


----------



## VicsPC

Quote:


> Originally Posted by *SpecChum*
> 
> Please stop spreading nonsense unless you can back it up; it's posts like this that cause confusion and myths. My GPU being about 4C above the water temp would suggest otherwise.
> 
> Ectotherm is about on par with MX-4.
> 
> https://www.kitguru.net/components/cooling/dominic-moass/thermal-paste-head-to-head-does-it-matter-which-brand-you-use/2/
> 
> Edit: To throw a qualifier into this, I can see some reviews from a few years ago where ectotherm didn't seem to perform as well as it does now (that comparison above is from last year); I suspect a formula has changed since then.
> 
> My point still stands, saying it's "Intel quality paste" is just hyperbole.


Not nonsense when I've done my own testing (under 3 different pressures, using pressure paper to verify all 3), and like I said it's about the same as the Dow Corning TIM Intel uses (which, believe it or not, is still NOT the issue with Intel CPUs running hot). Again, this is from personal testing running bare die and delidded, under water and on air. EK Ectotherm is about the same; the longevity of it is what most tests don't test, so in my eyes they are not valid in the least.

Here's what I mean, so it's not nonsense, far from it. I don't just look at stock temps; you need to look at a proper overclock as well, and with people pushing their CPUs further and further I don't use it. GPUs run even hotter, so the difference is quite a bit bigger.
Quote:


> Originally Posted by *SavantStrike*
> 
> That's why I use nt-h1 exclusively. On the lower end of the price scale and within a degree of the best on the market in most tests. I was on AS5 until nt-h1, every other designer paste was too viscous and didn't spread well.
> 
> Fans are an area where I've always just bought industrial designs and called it a day. Every time I try something else it fails to impress me.


I used to use NT-H1 until it did poorly on bare die and GPUs (longevity-wise), and I switched to using either liquid metal or Kryonaut on most customer builds I do.


----------



## By-Tor

Quote:


> Originally Posted by *gupsterg*
> 
> I went Arctic Cooling F12 PWM, cost £4. Initially wanted Phantek PH-F120MP, best price at the time IIRC was £10. I needed 6 fans for build, so quite a saving by going AC F12. I use my rig quite a bit, past few weeks it's been on 24/7. I have had the fans since ~Sep 17 and been "_sound as a pound_".
> Sweet
> 
> 
> 
> 
> 
> 
> 
> .


I have been using 12 of these fans on 2 Hardware Labs 360 radiators for the past 3 years and love them, never an issue.


----------



## SpecChum

F12's are awesome; used them in my 4690k build.

I had a bit more expendable cash for this build (thanks PPI lol) so I've gone Corsair Mag Lev all round


----------



## gupsterg

Quote:


> Originally Posted by *By-Tor*
> 
> I have been using 12 of these fans on 2 Hardware Labs 360 radiators for thee past 3 years and love them, never an issue..


Nice.
Quote:


> Originally Posted by *SpecChum*
> 
> F12's are awesome; used them in my 4690k build.
> 
> I had a bit more expendable cash for this build (thanks PPI lol) so I've gone Corsair Mag Lev all round


Sweet.

How are peeps finding the v18.1.1 driver?


----------



## SpecChum

I've not really had a need to try it yet.

I've got a week off next week tho, and I intend to play a few games so maybe something will come up


----------



## Robotmind

Greetings!

Wanted to catch up on all the posts in this thread before I introduced myself.

However, I read the post and hoped that my experience may help you out with the CF791 issues that you are having.

I ran the gamut on this issue myself.

Ultimately, the best solution I could find was simply to turn off the monitor and turn it back on using the jog button on the back of the monitor.

Before that, I was being driven mad by black screens on boot, starting games, and streaming video fullscreen in the browser.

Also, not sure if anyone else has tried, but with the Adrenalin drivers I am able to run the HBM at 1200MHz @ 1050mv. It goes to at least 1250 as well, but with less stability.

UV: P6 1642 w/ 1025 and P7 1697 w/ 1075. Flat and dead-stable clocks, ruler flat. V64 air on EKWB.

Hope the info helps your black screen issues!

Cheers!

https://www.3dmark.com/spy/2995034

https://www.3dmark.com/search#/?mode=advanced&url=/proxycon/ajax/search2/cpugpu/spy/P/2184/1152/500000?minScore=0&cpuName=Intel+Core+i7-7700K&gpuName=AMD+Radeon+RX+Vega+64+Liquid&gpuCount=1


----------



## JasonMZW20

Finally got around to disassembling my other Vega 64 card. This one is molded, made in Taiwan. It performs pretty similarly to my unmolded Korean-made one, except in HBM. The molded one seems to have more issues running at 1100MHz*, but that was with some of the HBM chips not well covered with thermal paste (if you look in the album). I'm currently testing it with reapplied paste (Thermal Grizzly on GPU1 - molded, GC Extreme on GPU0 - unmolded). They perform similarly, though the molded one has 1-2C lower hotspot temps.










Album here:


http://imgur.com/mY52x


*Over a 36 hour mining period, GPU1 had 39 incorrect ETH shares whereas GPU0 had none. I attributed these to memory errors, as I'm running low clocks and voltages (~1200MHz, 0.894v with 1100MHz HBM) and ETH is quite memory intensive. I used to use 0.875v, but GPU1 kept throwing fits and zeroing out (crashing). So, we'll see how it responds to new thermal paste application.


----------



## VicsPC

Quote:


> Originally Posted by *JasonMZW20*
> 
> Finally got around to disassembling my other Vega64 card. This one is molded from Taiwan. It performs pretty similarly to my unmolded Korean made one, except in HBM. Molded one seems to have more issues running at 1100MHz, but that was with some of the HBM chips not well covered with thermal paste (if you look in the album). I'm currently testing it with reapplied paste (Thermal Grizzly on GPU1 - molded, GC Extreme on GPU0 - unmolded). They perform similarly, though molded has 1-2C lower hotspot temps.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Album here:
> 
> 
> http://imgur.com/mY52x


Yea, 1-2°C is within margin of error; glad to see there's not much temp difference between the 2. I believe the unmolded one needs a bit more paste, but 40 microns isn't much anyways.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> I've not really had a need to try it yet.
> 
> I've got a week off next week tho, and I intend to play a few games so maybe something will come up


Taken the plunge.

@Mumak

Rest of the day on v17.12.2 went without incident. Moved to v18.1.1, restarted rig, reset the i2c cache several times and saw no VRM info until a Valley test, ZIP link (3DM FS Demo loop was the fullscreen test). Will update if I have a blackscreen/GPU removal.


----------



## Robotmind

Greetings,

I would like to introduce myself: Robotmind. Just an average hobbyist who enjoys reading and sharing knowledge. I regret not introducing myself sooner, but I honestly have only gotten halfway through this thread. I must say, I have thoroughly enjoyed being part of this forum since purchasing my Vega 64 air at launch. I cannot thank all the thread contributors enough for all the helpful information!

I previously was very active testing various overclock settings and methods, and have been moderately successful. As seems to be well known, with each driver/software update the settings for a stable overclock seem to change dramatically. I won't waste time rehashing my trials and tribulations, but if someone is curious I will share all that I remember of my tuning.

Currently,

"XFX" Vega 64 air on EKWB w/ backplate

UV: P6 1642MHz @ 1050mv, P7 1697MHz @ 1075mv, HBM 1200MHz @ 950mv

Using wattman only for overclock settings, with Power at +180% (regedit)

Max power draw approx 270w under 100% usage

https://www.3dmark.com/fs/14507067

Since the Adrenalin update, I have been able to safely increase HBM clock from 1105MHz to 1200MHz and beyond.









However, my current UV provides the greatest compromise between performance and power usage for MY card. YMMV

https://www.3dmark.com/fs/14439767

^^^^ 30/45 min Stress test @ 1200MHz HBM after Adrenalin update...

You can find the somewhat extensive testing history on various benchmarking sites under username Robotmind...

In the above cases, you could also search the 3DMark site for the single Vega 64 Liquid and i7 7700K GPU/CPU combo.

Many early tests were done overvolted and overclocked, with the power target anywhere from stock to +275 or more...









Hope to have some time to post a couple pictures of my rig with my card so I am able to join the exalted owners' list.

Any questions feel free to ask!

Cheers!


----------



## helloimspoon

Quote:


> Originally Posted by *Robotmind*
> 
> Any questions feel free to ask!


Is it possible you could share your BIOS? I have an XFX Vega 56 in a watercooling loop, so I think it's time to try and overclock.

Assuming you still have it, as I noticed your 3DMark score says you have the liquid Vega 64.


----------



## Robotmind

Quote:


> Originally Posted by *helloimspoon*
> 
> Is it possible if could you share your bios? I have a XFX Vega 56 in a watercooling loop so I think it's time to try and overclock.
> 
> Assuming you still have it as I noticed in your 3Dmark score it says you have the liquid Vega 64.


It doesn't matter which AIO BIOS you flash on your card; they are all the same on the reference cards. I believe the BIOS I used was from an MSI, perhaps...

I would like to hear your results if you try my settings. Hope that helps. Have a great day!

Cheers!


----------



## kondziowy

I can't believe my Sapphire Vega 64 Nitro is actually shipping.

It was a full-time job to get one - refreshing retail shops all the time - and I got it in the 30-minute window when it was available at Caseking. It was almost the price of a 1080 Ti, and I intend to run it underclocked. Sad times.

But if it is not stolen by a miner while being shipped, I will be a very happy man.


----------



## Newbie2009

Look at those Vega 64 prices!

Glad I pulled trigger on the £450 launch day price


----------



## Smitty2k1

Quote:


> Originally Posted by *Smitty2k1*
> 
> I'm itching to replace my Vega56 blower with the Morpheus II cooler as some other users have done here. Question to the group:
> 
> 1) Is there any reason not to just use the thermal compound that comes with the Morpheus II? I'm talking for the VGA chip itself, not the thermal pads for the VRM heatsinks, etc.
> 2) If the answer to #1 is yes and if I'm just looking to go the 'easy' route and not get the absolute best performance, what thermal compound should I buy?
> 3) Do I need anything special to remove the old cooler/thermal compound besides alcohol? (Such as a heat gun). Do I need to setup anything special in the BIOS to get fans to run correctly? I plan on using a VGA-PWM adapter to a PWM splitter to x2 Noctua 120x15mm (slim) fans in my Ncase M1.
> 
> Thanks


Well, pulled the trigger last night on the Morpheus, Noctua fans and MX compound. Here's hoping I don't destroy that sensitive chip when taking off the old cooler. I checked XFX's warranty, because they are known to allow aftermarket coolers, and they specifically say on their site that Vega cards CANNOT have the cooler swapped because of how sensitive the chip is... yikes!


----------



## kondziowy

Nitro 64 was at overclockers.co.uk at £899 and I saw the stock go dry. Every time it sells out they raise the price. It will be £999 soon.


----------



## rancor

Quote:


> Originally Posted by *Smitty2k1*
> 
> Well, pulled the trigger last night on the Morpheus and Noctua fans and MX compound. Here is hoping I don't destroy that sensitive chip when taking off the old cooler. I checked XFX's warranty, because they are known to allow aftermarket coolers and they specifically say on their site that Vega cards CANNOT have the cooler swapped because of how sensitive the chip is... yikes!


I don't see where they disallow it; they just highly recommend against it.

http://www.xfxforce.com/en-us/support/xfx-warranty
Quote:


> For VEGA class products, it is recommended not to touch the cooling solution or thermal paste, the VEGA GPU and HBM memory are very sensitive and can be easily damaged compared to previous GPUs.


----------



## SpecChum

Quote:


> Originally Posted by *Newbie2009*
> 
> Look at those Vega 64 prices!
> 
> Glad I pulled trigger on the £450 launch day price


Ooh, you beat me by £5.

I paid £455 for my 64 lol


----------



## Efilnikuf

Quote:


> Originally Posted by *SpecChum*
> 
> Ooh, you beat me by £5.
> 
> I paid £455 for my 64 lol


Feel like I got a steal: paid 424 USD for mine (open box), and got a deal on an Acer Predator 1440p, 144Hz 27" monitor as well. I love you, Microcenter.


----------



## By-Tor

Same here at $465 US, which is 343 GBP.


----------



## Efilnikuf

Quote:


> Originally Posted by *By-Tor*
> 
> Same here at $465 US which is 343 GBP.


I feel like they get screwed on prices for computer parts overseas; basically they pay the same in euros that we pay in USD, regardless of the difference in value of the currencies.


----------



## SpecChum

Is that inclusive of tax tho?

Mine was £455 all in


----------



## ducegt

Quote:


> Originally Posted by *Efilnikuf*
> 
> Feel like I got a steal, paid 424 USD for mine (open box,)


Nice. My open box 64 Liquid Cooled was 500 plus 5 for shipping.

A new Sapphire aftermarket-cooler 64 just sold for $1463 on eBay. Best Buy also let 56s sell for only 750 when used ones are going for more. Now that AMD hasn't released any news regarding refreshes and node improvements... hard to imagine the values going anywhere but up.


----------



## By-Tor

No tax from Newegg, and no shipping charge.

I remember when I was stationed in Germany back in the late 80s, we had forms to fill out so we didn't have to pay the local VAT taxes, which were crazy.


----------



## stewwy

HeHeHe, I paid £300 for my Sapphire RX Vega 64 LE; Insight had them up for preorder late last year and I took a risk.


----------



## ITAngel

I know, those prices are crazy. I got mine for $465.00 about a month or two ago, sold it for $900, and got a Vega FE for $749.99. XD Not bad, huh?


----------



## Grummpy

Seen this?
http://www.bbc.co.uk/news/technology-42619812

AMD don't even need this patch, and yet they are the ones in the headlines.
£$£$Tel to blame.
I hate misinformation and lies. Intel's defence is clearly to hurt AMD, to mask the fact that it is them who are at fault with their weak design open to attack.


----------



## By-Tor

I like how Intel's CEO sold all the stock he could while still remaining CEO... hhhmmmmm

https://arstechnica.com/information-technology/2018/01/intel-ceos-sale-of-stock-just-before-security-bug-reveal-raises-questions/

I have been wanting to come back to AMD and this seems like a good time...


----------



## raysheri

Quote:


> Originally Posted by *ducegt*
> 
> Nice. My open box 64 Liquid Cooled was 500 plus 5 for shipping.
> 
> A new Sapphire aftermarket cooler 64 just sold for $ 1463 on eBay. Bestbuy also let 56s sell for only 750 as well when used ones are going for more. Now that AMD hasn't released anyone news regarding refreshes and node improvements... Hard to imagine the values going anywhere but up


AMD recently released their 2018-2020 CPU and GPU roadmap at CES: https://www.techpowerup.com/240384/amd-reveals-cpu-graphics-2018-2020-roadmap-at-ces
Looks like Vega is going to 7nm (not 12nm) with release approx early 2019, and Navi in 2020.


----------



## Trender07

Well, even tho it's alpha, I installed the 18.1.1 drivers and have had no problems.


----------



## diabetes

Is anyone with C++ programming knowledge and advanced Linux skills in this thread?

If one were to clone the complete AMDVLK driver (every git repo here: https://github.com/GPUOpen-Drivers) and the drm-next branch of the Linux kernel, and remove the following lines of code from https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9SettingsLoader.cpp

Code:

if (chipProps.gfxLevel == GfxIpLevel::GfxIp9)
{
        m_gfx9Settings.nggMode = Gfx9NggDisabled;
}

then probably edit some more files and fix the compiler errors, one could try out the NGG fast path (if it actually is implemented and has not been stripped out of the codebase). Most likely Mesa needs to be recompiled too and pointed towards the new LLVM installation, as otherwise GL and X.Org would break. The driver is designed to work with Ubuntu 16.04.3 or RHEL 7.4.

It is funny that this one if-clause is the only thing that is not documented in that file. Everything else has a comment saying why it was done, except for that bracket.

I also found this in https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9WorkaroundState.cpp - seems kinda severe, although it could be manageable if the card does not have to switch back and forth between the legacy pipeline and the NGG pipeline all the time:


Spoiler: Warning: Spoiler!



// When a transition from a legacy tessellation pipeline (GS disabled) to an NGG pipeline, the broadcast logic
// to update the VGTs can be triggered at different times. This, coupled with back pressure in the SPI, can cause
// delays in the RESET_TO_LOWEST_VGT and ENABLE_NGG_PIPELINE events from being seen. This will cause a hang.
// NOTE: For non-nested command buffers, there is the potential that we could chain multiple command buffers
// together. In this scenario, we have no method of detecting what the previous command buffer's last bound
// pipeline is, so we have to assume the worst and insert this event.


----------



## Emmott

I got a "new - tested working" Vega Frontier card from eBay. It is appearing as a "Radeon Instinct MI25" in Device Manager. I'm worried the seller may have tried flashing the wrong BIOS, but I have no idea how to check or attempt to fix it. Any help would be greatly appreciated.


----------



## diabetes

@Emmott Turn off the computer, change the BIOS switch on the card to its alternate position and start it again. If the card is recognized as a Frontier Edition card after that, you can be sure that it has been flashed. If that is the case, change the switch back, back up the MI25 BIOS with GPU-Z in case someone here on OCN wants it, and grab an FE BIOS from the Vega BIOS thread or from the TechPowerUp VGA BIOS collection. Flash using atiwinflash 2.77.


----------



## Emmott

It won't let me download the bios. Also the switch is broken, there is no plastic knob to adjust anymore on the side of the card.


----------



## itxgamer

Quote:


> Originally Posted by *Emmott*
> 
> 
> It won't let me download the bios. Also the switch is broken, there is no plastic knob to adjust anymore on the side of the card.


You can probably still save it through atiwinflash if GPU-Z doesn't work; just click the save button and enter a file name. It should be 256KB.


----------



## Newbie2009

Quote:


> Originally Posted by *SpecChum*
> 
> Is that inclusive of tax tho?
> 
> Mine was £455 all in


Those jammy dodgers don't pay VAT


----------



## ManofGod1000

Quote:


> Originally Posted by *Emmott*
> 
> 
> It won't let me download the bios. Also the switch is broken, there is no plastic knob to adjust anymore on the side of the card.


I would return it then and get a refund, if possible. A broken switch, a questionable BIOS, who knows what else might be wrong with it.


----------



## BoMbY

Quote:


> Originally Posted by *diabetes*
> 
> Is anyone with C++ programming knowledge and advanced Linux skills in this thread?
> 
> If one were to clone the complete AMDVLK driver (every git repo here: https://github.com/GPUOpen-Drivers), the drm-next branch of the Linux Kernel and remove the following lines of code from https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9SettingsLoader.cpp
> 
> Code:
> 
> if (chipProps.gfxLevel == GfxIpLevel::GfxIp9)
> {
> m_gfx9Settings.nggMode = Gfx9NggDisabled;
> }
> 
> probably edit some more files, and fix the compiler errors, he could try out NGG fast path (if it actually is implemented and has not been stripped out of the codebase). Most likely Mesa needs to be recompiled too and pointed towards the new LLVM installation, as otherwise GL and X.Org would break. The driver is designed to work with Ubuntu 16.04.3 or RHEL 7.4.
> 
> It is funny that this one if-clause is the only thing that is not documented in that file. Everything else has a comment saying why it was done, except for that bracket.
> 
> I also found this in https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9WorkaroundState.cpp - seems kinda severe, although it could be managable if the card does not have to switch back and forth between legacy pipeline and NGG pipeline all the time:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> // When a transition from a legacy tessellation pipeline (GS disabled) to an NGG pipeline, the broadcast logic
> // to update the VGTs can be triggered at different times. This, coupled with back pressure in the SPI, can cause
> // delays in the RESET_TO_LOWEST_VGT and ENABLE_NGG_PIPELINE events from being seen. This will cause a hang.
> // NOTE: For non-nested command buffers, there is the potential that we could chain multiple command buffers
> // together. In this scenario, we have no method of detecting what the previous command buffer's last bound
> // pipeline is, so we have to assume the worst and insert this event.


You don't need to remove it. From what I can tell this is an option setting, which can be set from the outside. Under Windows it should be a registry setting, but I can't figure out exactly where to put the keys.

From https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9PalSettings.cfg :

Code:

DefineEnum = "'Gfx9NggMode' : ('Gfx9NggDisabled',       '0x00'),
                              ('Gfx9NggEnableInternal', '0x01'),
                              ('Gfx9NggEnableExternal', '0x02'),
                              ('Gfx9NggEnableClient',   '0x04'),
                              ('Gfx9NggEnableAll',      '0x07')";

And from https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/hw/gfxip/gfx9/gfx9SettingsLoader.cpp :

Code:

void SettingsLoader::HwlReadSettings()
{
    // read HWL settings from the registry or configure file
    Gfx9ReadSettings(&m_gfx9Settings);
}

Edit: See also here https://github.com/GPUOpen-Drivers/pal/blob/28a98ba3e787278dad958afd2cadbdabf28bacfc/inc/core/palDevice.h :

Code:

    /// Reads a specific setting from the operating system specific source (e.g. registry or config file).
    ///
    /// @param [in]  pSettingName Name of the setting. Must be null-terminated.
    /// @param [in]  settingScope The scope of settings accessible.
    /// @param [in]  valueType    The type of the setting to return (e.g. bool or int).
    /// @param [out] pValue       Buffer to write data that was read. Must be non-null.
    /// @param [out] bufferSz     Size of string buffer (pValue). Only necessary for ValueType::Str.
    ///
    /// @returns True if the read of specified setting is successful. False indicates failure.
    virtual bool ReadSetting(
        const char*     pSettingName,
        SettingScope    settingScope,
        Util::ValueType valueType,
        void*           pValue,
        size_t          bufferSz = 0) const = 0;


----------



## diabetes

The problem is that SettingsLoader::HwlValidateSettings() is run after SettingsLoader::HwlReadSettings() and always disables NGG unless that if statement is removed.


----------



## 113802

Quote:


> Originally Posted by *Emmott*
> 
> It won't let me download the bios. Also the switch is broken, there is no plastic knob to adjust anymore on the side of the card.




It's kinda difficult to flip the switch up on Vega cards. That's actually pretty sweet; you have an Instinct MI25 with video output!


----------



## Trender07

So many Vegas, even on Intel. I think they're doing it so developers optimize for Vega as a platform and get performance boosts like in Wolfenstein 2. Don't you guys think of Vega as a platform?


----------



## By-Tor

Quote:


> Originally Posted by *Trender07*
> 
> So many Vegas even on Intel I think


Yep...

Was going to jump back to AMD now, but will wait for the Zen+ CPUs and X400 motherboards.


----------



## SlushPuppy007

Hi to all!!

I just saw this post on guru3d:

http://www.guru3d.com/news-story/samsung-starts-volume-production-2-4-gbps-8gb-hbm2-stacks.html

Do you reckon that if AMD were to use this new, faster HBM2 on RX Vega, it would give a substantial performance increase?

Like 5% to 10% perhaps?


----------



## gupsterg

@Mumak

Been on v18.1.1 Alpha since the 8th, late evening. So far not a single issue in regard to i2c access for VRM info. Used it multiple times with varied apps, with and without a system repost. Will let you know if I have an issue.


----------



## SpecChum

Might have to try the new driver.

Saying that, probably won't be long before a new version comes out...


----------



## Mumak

Quote:


> Originally Posted by *gupsterg*
> 
> @Mumak
> 
> Been on v18.1.1 Alpha since 08th late evening. So far not a single issue in regard to i2c access for VRM info. Used multiple times and varied apps, with and without system repost. Will let you know if have an issue.


Interesting, hard to believe they fixed it between 17.12.2 and 18.1.1... Thanks for the feedback; please keep me updated if you encounter any issues.


----------



## gupsterg

Quote:


> Originally Posted by *SpecChum*
> 
> Might have to try the new driver.
> 
> Saying that, probably won't be long before a new version comes out...


Initially did some GPU-Z render tests, Heaven, Valley and 3DM. Then had multiple sessions on Wolfenstein 2 and SWBF, no issues; will be gaming in the lunch break. Also done ~7hrs F@H and ~19.5hrs other monitoring when doing P95 & Y-Crunching. After lunch will be some RealBench on CPU/GPU.

Quote:


> Originally Posted by *Mumak*
> 
> Interesting, hard to believe they fixed it between 17.12.2 - 18.1.1.. Thanks for the feedback, please keep me updated if you encounter any issues.


Indeed. NP and will do.


----------



## Grummpy

This is great for the Vega architecture:
https://wccftech.com/samsung-hbm2-aquabolt-2-4-gbps/

GDDR6, RIP.


----------



## gedoze

Quote:


> Originally Posted by *Grummpy*
> 
> this is great for vega architecture,
> https://wccftech.com/samsung-hbm2-aquabolt-2-4-gbps/
> 
> ddr6 rip


no, not really.
If I understood AMD at CES 2018, there will be no Vega refresh for gaming.
These new HBM2.2 stacks are a hardware-level change, so new chips would have to be made = a brand new GPU.
Now let's consider: HBM is already superior to GDDR, so there won't be a huge performance uplift from an HBM2 refresh; my bet is a 5-10% uplift, not more.
The problem is the core.
Do a Vega RX 12nm or 7nm refresh with this new HBM2 and Nvidia would be dethroned; sadly I didn't see that in AMD's plans.

Either way, I'm waiting for Sapphire's Vega 64 Nitro+ to be in stock, then some temperature baseline tests, and then an instant fan replacement to either 2x 120mm Noctua Industrial 3000rpm PWM or 3x 92mm Noctua A9 PWM fans (which are doing an amazing job on my Sapphire R9 270X Toxic card) and a thermal pad replacement to 17W/mK, along with Thermal Grizzly Kryonaut paste.

P.S. And this is coming from an owner of an FX-9590 clocked at 5GHz+... TDP madness experience, yet the ultimate TDP build, or best-ever radiator project.


----------



## gupsterg

@Mumak

Some of what I have been up to today is in this post. I was just going to rerun IBT; loaded HWiNFO and all of a sudden had a GPU disconnect on v18.1.1 when using i2c access. Here is the log if it's of any use:-

18.1.1_V64_Off.zip 1k .zip file


Is running in debug mode any use to you, to fire info at AMD?


----------



## flippin_waffles

This looks awesome. The HBC and HBCC on Vega seem to be working very well.
Quote:


> well, for starters, in many compute environments, the bottleneck of compute is actually memory size, there are moments where there are spare compute power, but the memory has reached its capacity so it will have to page file the overflow. HBCC allows me to essentially remove the memory limit by dedicating system memory to VRAM and 16GB HBM2 acting like a cache instead of VRAM




https://www.reddit.com/r/7pqef1/pushing_vega_fe_hbcc_to_the_limit_147gb_of_ram/


----------



## TrixX

Quote:


> Originally Posted by *gedoze*
> 
> no, not really.
> If I understood AMD from CES2018, there will be no vega refresh for gaming.
> These new HBM2.2 stacks are hardware lvl = so new chips would have to be made = brand new gpu.
> Now lets consider, HBM is already supperior to gddr, so there woun't be a huge performance uplift from hbm2 refresh, my bet is 5-10% uplift, not more.
> The problem is the core.
> Do a vega rx 12nm or 7nm refresh with this new hbm2 refresh and nvidia would be dethroned, sadly i didn't see that in amd's plans
> 
> either way, i'm waiting for sapphire's vega 64 nitro+ to be on stock, then some temperature baseline tests and then instant fan replacement to either 2x 120mm noctua industrial 3000rpm pwm or 3x 92mm noctua A9 pwm fans (who are doing amazing job on my sapphire r9 270x toxic card) and thermal pad replacement to 17w/mk, along with thermal grizlly kryonaut paste
> 
> P.S. and this is coming from fx-9590 clocked at 5ghz+ owner...TDP madness experience, yet ultimate TDP build or best ever radiator project


It's interesting, as that HBM could easily be implemented on the 7nm Vega refresh this year. It would be a good performance boost as long as there's no other bottleneck in the GPU itself.

However, this year and next are looking very tasty from a tech perspective.


----------



## SpecChum

Not so tasty from my wallet's perspective


----------



## By-Tor

Quote:


> Originally Posted by *SpecChum*
> 
> Not so tasty from my wallet's perspective


Dilly dilly


----------



## gedoze

Quote:


> Originally Posted by *gedoze*
> 
> no, not really.
> If I understood AMD from CES2018, there will be no vega refresh for gaming.
> These new HBM2.2 stacks are hardware lvl = so new chips would have to be made = brand new gpu.
> Now lets consider, HBM is already supperior to gddr, so there woun't be a huge performance uplift from hbm2 refresh, my bet is 5-10% uplift, not more.
> The problem is the core.
> Do a vega rx 12nm or 7nm refresh with this new hbm2 refresh and nvidia would be dethroned, sadly i didn't see that in amd's plans
> 
> either way, i'm waiting for sapphire's vega 64 nitro+ to be on stock, then some temperature baseline tests and then instant fan replacement to either 2x 120mm noctua industrial 3000rm pwm or 3x 92mm noctua A9 pwm fans (who are doing amazing job on my sapphire r9 270x toxic card) and thermal pad replacement to 17w/mk, along with thermal grizlly kryonaut paste


Quote:


> Originally Posted by *TrixX*
> 
> It's interesting as that HBM could easily be implemented on the 7nm Vega refresh this year. Would be a good performance boost as long as there's no other bottleneck in the GPU itself.
> 
> However this year and next are looking very tasty from a tech perspective


Could you please elaborate and share your knowledge with the rest of us?
Is the memory the bottleneck, not the core? (I don't want to call it the GPU, because I think HBM + core = GPU.)


----------



## TrixX

Quote:


> Originally Posted by *gedoze*
> 
> could you please elaborate and share your knowledge with the rest of us?
> memory is the bottleneck? not the core? (don't want to call it gpu, because i think hbm+core=gpu)


From what I've been reading on r/realAMD, it seems that the memory is partially a bottleneck, and the boost in memory speed would yield a nice gain for the GPU. I could have interpreted what I read incorrectly, but I don't think I did.
Quote:


> Originally Posted by *AMDominance*
> I imagine we'll be looking at sustained clocks of around 1750MHz with power consumption somewhere just south of 210W on a 330mm2 die.
> While 1750MHz might not sound significant, that's exactly where Vega's raw bandwidth sweet spot ends up with Samsung's updated HBM2 - 42.5GB/s per tflop vs the horribly bandwidth starved LC Vega 64 @ 35.2GB/s per tflop. This also permits an 8-pin reference design for both 56 and 64 refresh.


----------



## ducegt

AMD has said Radeon products are not susceptible to Meltdown or Spectre; Nvidia products are susceptible. Good thing for AMD gamers, as I'm sure the driver team is already losing resources/funding/motivation given that so many cards are only used for mining.

About the latest HBM2 news... progress is always good news, but unless I'm mistaken, nobody on the 64 LC or custom water is seeing significant gains from OCing the memory, so it seems obvious that Vega gaming performance isn't memory bandwidth starved.


----------



## Trender07

Does anyone have this issue like me: I run 3DMark benchmarks without problems at 1100 MHz HBM (I didn't try anything higher because of fan noise, as HBM needs low temps and I'm on air), but whatever 3DMark stress test I run, Time Spy or Fire Strike, it crashes unless I set HBM to around 1025 MHz.


----------



## Grummpy

Does Nvidia use speculative execution?
They already upload private data of their users over an unsecured connection, and they don't seem to care.

The answer is yes; they have been open to attack from day one.
http://nvidia.custhelp.com/app/answers/detail/a_id/4611


----------



## cmogle4

I had this issue on my Frontier Edition until I put an Alphacool block on it. 1025 was the highest I could get. I can bench 1100 now with no crash, but in games it tops out at 1070.


----------



## Grummpy

I'm loving my Vega.
I'm using a 32" and two 19" screens, flipped.
The drivers allow me to set mixed Eyefinity, flipping the two other displays and selecting full screen in games.
Can't do that on Nvidia hardware.


----------



## gedoze

Quote:


> Originally Posted by *ducegt*
> 
> AMD has said Radeon products are not susceptible to Metldown or Spectre; nVidia products are susceptible. Good thing for AMD gamers as I'm sure the driver team is already losing resources/funding/motivation being so many cards are only used for mining.
> 
> About the latest HBM2 news..Progress is always good news, but unless I'm mistaken, *nobody on 64 LC or custom water is seeing significant gains from OCing the memory so it seems obvious that Vega gaming performance isn't memory bandwidth starved*.


that's what I have been saying; it must be the core, not the HBM.

about the Vega refresh:
"The first 7nm AMD product, a Radeon "Vega" based GPU built specifically for machine learning applications." source

so I don't know what to do: wait for the Vega refresh until Pinnacle Ridge gets released = full new system (been planning for this since the Ryzen launch), or get the Nitro+ Vega 64 when available for my FX-9590 build to make it a complete radiator build (and I bet my 650W be quiet! PSU won't be enough for it)?


----------



## Smitty2k1

Hey everyone, need some help! I did the Morpheus 2 cooler swap on my Vega 56. Upon booting back up, my monitor did not recognize an input. The fans for the Morpheus were spinning (powered by a VGA-to-PWM splitter) and there is a red LED lit on the side of the card by the power connectors.

Switching to the i7 iGPU and setting the BIOS to use the iGPU, I was able to boot into Windows. I reset and plugged the Vega power back in but left the iGPU as the display. Device Manager recognizes the Vega and Wattman allows fan speed control. However, I still can't seem to get display output; Wattman reads 0% GPU utilization.

Stock BIOS on the Vega, tried both switch positions. Still no luck. Can't seem to figure out what the red LED on the side of the card means, but I'm assuming it's bad. Any advice?

EDIT: Ok, heart attack over. Not sure what changed but after trying an HDMI cable instead of the display port cable I've always used, it started outputting display again. Switched back to the display port cable and everything is operating as intended.


----------



## Razkin

Quote:


> Originally Posted by *gedoze*
> 
> that's what i have been saying, it must be the core, not hbm.
> 
> about vega refresh:
> "The first 7nm AMD product, a Radeon "Vega" based GPU built specifically for machine learning applications." source
> 
> so i don't know what to do, wait for vega refresh, untill pinacle ridge gets released = full new system (been planing for this from ryzen launch) or get nitro+ vega 64 when available for my fx-9590 build to make it complete radiator build (and i bet my 650w bequiet! PSU woun't be enough for it)?


I played Fallout 4 and XCOM 2 at 4K with a Vega 64 (EK block) at around 1700MHz core clock, and the difference between stock 945MHz HBM and 1050MHz is big enough that you'll notice it. Even the switch to 1080MHz is noticeable, but to a far lesser extent.


----------



## kondziowy

Hell yeah I got it!! It "only" took a week of refreshing online stores full time.

Size didn't impress me at all. Barely bigger than the Fury Strix. Maybe a little bit too thick though.

+100W noted immediately at the wall in Balanced mode at ~1600MHz. Time to undervolt.

Power Save gives the same power draw as the Fury while running at 45-55% higher clocks, and I can still undervolt. This cooling with a custom fan curve can run Power Save frikin' silently.


----------



## drchoi21

So has the Radeon Instinct MI25 BIOS been extracted from the Vega FE? I would like to use it if possible, because the Radeon Instinct MI25 allows SR-IOV and multi-user virtualization on a single GPU!


----------



## Ne01 OnnA

Quote:


> Originally Posted by *kondziowy*
> 
> Hell yeah I got it!! It "only" took a week of refreshing online stores full time.
> 
> Size didn't impress me at all. Barely bigger than Fury Strix. Maybe a little bit too thick though
> 
> +100W noted immediately at the wall on Balanced mode ~1600mhz. Time to undervolt.
> 
> Power Save gives the same power draw as Fury while running at +45-55% higher clocks, and I still can undervolt. This cooling with custom fan curve can run power save frikin silently.


Keep this baby rockin'!

From Caseking? Which store?


----------



## itxgamer

Quote:


> Originally Posted by *drchoi21*
> 
> So has the Radeon Instinct MI25 BIOS been extracted from Vega FE? I would like to use it if possible because Radeon Instinct MI25 allows SR-IOV and Multi-user Virtualization on single GPU!


I'm also interested in whether it lets you use SR-IOV on the FE, mainly because I don't like using Windows as my host. But no, it hasn't been dumped; he hasn't been on here since then. He might have already returned it without dumping it because of the broken switch. He might still be able to ask the seller for the BIOS, though.


----------



## kondziowy

Quote:


> Originally Posted by *Ne01 OnnA*
> 
> Keep this baby rockin'
> 
> From caseking? which store?


Caseking. It looks like there will be nothing else getting stocked up until February, and who knows what price it will have.


----------



## cephelix

Finally done with my initial round of tweaking my Vega 56 with a 64 BIOS (air).
Card is under water, cooled by 5x 120mm radiators, running 17.12.1.
Only tested it in Superposition 1080p Extreme though. Will do further testing later.
P6 (MHz/mV): 1536/1050
P7 (MHz/mV): 1640/1150, actual 1605MHz
HBM (MHz/mV): 1200/950
Power Target: 50%
Highest score in Superposition is 4197. Don't know how it compares to the rest, but at stock I was only scoring 3431.

Being under water, temps are quite good. Core temps hovered at 47C, hotspot at 68C, and HBM temps were 49C max.
Is there any way for me to stabilise 1650MHz? At these settings 1650MHz has artifacts. Any other tips and tricks to get a better score and/or better MHz?


----------



## Efilnikuf

Quote:


> Originally Posted by *cephelix*
> 
> Finally done with my initial round of tweaking my Vega 56 with a 64 BIOS(Air).
> Card is under water, cooled by 5 x 120mm radiators running 17.12.1.
> Only tested it in Superposition 1080p Extreme though. Will do further testing later.
> P6 (mHz/mV): 1536/1050
> P7(mHz/mV): 1640/1150, actual 1605mHz
> HBM(mHz/mV): 1200/950
> Power Target: 50%
> Highest Score in Superposition is 4197. Don't know how it compares to the rest but at stock I was only scoring 3431.
> 
> Being under water temps are quite good. Core temps hovered at 47C. Hotspot is 68C and HBM temps were 49C max.
> Is there any way for me to stabilise 1650mHz? At these settings 1650mHz has artifacts. Any other tips and tricks to get a better score and/or better mHz?


It's most likely the memory artifacting. 1200 is pretty damn high. You should be able to push the core higher, but drop the memory a bit.


----------



## ITAngel

Question: I purchased the 1.0mm and the 0.5mm Fujipoly thermal pads for my GPU; however, for the 1.0mm they only gave me a single strip. For how much I paid for both of these (about $40 US) I would have thought I would get more. My question is: if I use what I have, can I still use the EK pads on other areas, since I still have some of those left? Or would that be bad? Also, which chips get hotter, the VRM or the MOSFETs? Thanks!


----------



## cephelix

Quote:


> Originally Posted by *ITAngel*
> 
> Question, I purchase the 1.0 and the 0.5 Fujipoly thermal pad for my GPU however the 1.0 they only gave me a single strip. For how much money I paid for both of these about $40 US dollars I would had though I would get more. My question is, if I use what I have, can I still use the EK one on other areas sine I still have some of that left? or would that be bad? Also which chips get hotter the VRM or the MOSFET? Thanks!


Yeah, Fujipoly is expensive but worth it. The VRMs are the ones you would typically want to use it on; the MOSFETs not so much.

Quote:


> Originally Posted by *Efilnikuf*
> 
> It's most likely the memory artifacting. 1200 is pretty damn high. Should be ablt to push the core higher, but drop the memory a bit.


Thanks! Trying it now. Yeah, I was surprised; I just kept increasing by 10MHz and testing till it crashed.


----------



## Call

Hey guys! I'm new to the forums and overclocking, so bear with my noobiness Lol!

Okay so I just got a Vega Frontier (air). I bought it primarily to game since I have a FreeSync monitor, but I also 3D model so I went with the frontier. I'm super excited to hopefully get all this sorted out and running smoothly ^_^

I think my PSU isn't good enough... It's shut down once, and I don't think it was due to overheating. (EVGA 600B; and I have the Ryzen 1400 OC'd to 3.8GHz, which I assume takes a lot of that power.)
I ordered an 850W Seasonic Platinum to solve that problem...

So I just "undervolted" it and ran a benchmark, and it seemed to be fine. It scored low... I'm sure I'm doing something wrong. But at least it hasn't shut down yet.

*What settings work well for gaming, in Wattman/registry/etc.? What do I have to do to get this benching closer to 15k rather than 10k?* Lol
Thanks so much guys!

Here are my settings that I ran the bench with and am currently using and then the link to the bench results:


https://www.passmark.com/baselines/V9/display.php?id=95770841810

Thankies guys! ^_^


----------



## ITAngel

Quote:


> Originally Posted by *cephelix*
> 
> Yeah,fujipoly is expensive but worth it. the VRMs are the ones you would want to use it on typically. The mosfets not so much.
> Thanks!trying it now.yeah i was surprised. i just kept increasing 10mHz and testing till it crashed


Hey, are the bigger chips the VRMs or the smaller ones?

My thought is the bigger ones are the VRMs and the smaller ones are the MOSFETs?


----------



## cephelix

Quote:


> Originally Posted by *ITAngel*
> 
> Hey are the bigger chips the VRM or the smaller one?
> 
> My though is the bigger one are the VRM and the smaller ones are the MOSFET?


EKWB would explain it better


----------



## ITAngel

Quote:


> Originally Posted by *cephelix*
> 
> EKWB would explain it better


Thanks, I figured it out, and learned more than I needed to. lol


----------



## cephelix

Quote:


> Originally Posted by *ITAngel*
> 
> Thanks I figure it out and more than I needed. lol


Glad you found it out.


----------



## Efilnikuf

Quote:


> Originally Posted by *Call*
> 
> Hey guys! I'm new to the forums and overclocking, so bear with my noobiness Lol!
> 
> Okay so I just got a Vega Frontier (air). I bought it primarily to game since I have a FreeSync monitor, but I also 3D model so I went with the frontier. I'm super excited to hopefully get all this sorted out and running smoothly ^_^
> 
> I think my PSU isn't good enough... It's shut down once, and I don't think it was due to overheating. (EVGA 600B; and I have the Ryzen 1400 OC'd to 3.8GHz which I assume takes a lot of that power)
> I ordered an 850w seasonic platinum to solve that problem...
> 
> So I just "undervolted" it and ran a benchmark and seemed to be fine. It scored low... I'm sure I'm doing something wrong. But at least it hasn't shutdown yet.
> 
> *What settings work well, setting up for gaming, in wattman/registry/etc.? What do I have do to get this benching closer to 15k rather than 10k?* Lol
> Thanks so much guys!
> 
> Here are my settings that I ran the bench with and am currently using and then the link to the bench results:
> 
> 
> https://www.passmark.com/baselines/V9/display.php?id=95770841810
> 
> Thankies guys! ^_^


Well, the first two things I would do: make your fan curve more reasonable, maybe a 1200-1500 RPM minimum and about a 3000-3400 RPM maximum; you can tweak the settings from there to your liking. Second, set your power limit to +50%; you can also tweak from there. Not even getting into undervolting or flashing the BIOS atm, as well as other tweaks. Those are quick and easy, they just may draw some wattage.

Edit: A decent 650W should handle this GPU. And make sure you manually enter a max and target temp; sometimes certain custom settings do not apply unless you do. Same with the fan curve: enter it manually. OverdriveNTool is nice to have as well, and GPU-Z for monitoring too.


----------



## Efilnikuf

Quote:


> Originally Posted by *cephelix*
> 
> Yeah,fujipoly is expensive but worth it. the VRMs are the ones you would want to use it on typically. The mosfets not so much.
> Thanks!trying it now.yeah i was surprised. i just kept increasing 10mHz and testing till it crashed


Thought I had 1200 too at one point... then 1180, but 1140 seems about as stable as I can get. 1180 works on some things, crashes on others, but it is close; maybe a driver update will help. On the LC 8774 BIOS with an XSPC waterblock on a 64 Air, on a custom loop. 1732/1140.


----------



## cephelix

Ok, as per your suggestion @Efilnikuf I lowered my HBM.
New settings are P7 (1670MHz/1155mV), with actual clocks being 1645MHz, and HBM (1080MHz/950mV).

Of course I'll have to do more testing, but for synthetics it's been quite stable. Thanks a bunch!


----------



## Efilnikuf

double post


----------



## Efilnikuf

Quote:


> Originally Posted by *cephelix*
> 
> Ok, as per your suggestion @Efilnikuf I lowered my HBM
> New settings are P7 (1670mHz/1155mV) with actual clocks being 1645mHz and HBM (1080mHz/950mV).
> 
> Of course I'll have to do more testing but for synthetics, it's been quite stable. Thanks a bunch!


Glad I could help.


----------



## Roboyto

Quote:


> Originally Posted by *Call*
> 
> Hey guys! I'm new to the forums and overclocking, so bear with my noobiness Lol!
> 
> Okay so I just got a Vega Frontier (air). I bought it primarily to game since I have a FreeSync monitor, but I also 3D model so I went with the frontier. I'm super excited to hopefully get all this sorted out and running smoothly ^_^
> 
> I think my PSU isn't good enough... It's shut down once, and I don't think it was due to overheating. (EVGA 600B; and I have the Ryzen 1400 OC'd to 3.8GHz which I assume takes a lot of that power)
> I ordered an 850w seasonic platinum to solve that problem...
> 
> So I just "undervolted" it and ran a benchmark and seemed to be fine. It scored low... I'm sure I'm doing something wrong. But at least it hasn't shutdown yet.
> 
> *What settings work well, setting up for gaming, in wattman/registry/etc.? What do I have do to get this benching closer to 15k rather than 10k?* Lol
> Thanks so much guys!
> 
> Here are my settings that I ran the bench with and am currently using and then the link to the bench results:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.passmark.com/baselines/V9/display.php?id=95770841810
> 
> Thankies guys! ^_^


Your PSU was likely the primary contributor to your issues, especially if it had some miles on it. These cards have no problem drawing massive amounts of power: 300, even 400 watts if you push them with additional voltage or power limit. Good call buying a larger, more efficient power supply.

I feel your Ryzen CPU, even overclocked, is more efficient than you're giving it credit for. Vega, under most circumstances, is a power-hungry beast.

The first thing you should know about overclocking/tweaking Vega is that it is fickle; small changes can drastically affect performance and/or stability. Don't let this scare you; just be methodical in your attempts and you will be able to see what does and doesn't improve performance.

Undervolting is your friend on Vega, especially with the factory blower cooler. Reducing voltages, and/or increasing the power limit, generally leads to an increase in core clock. I have not tried the Frequency % increase, so I can't comment on its effectiveness, as I enter clock/voltage values for P6/P7 manually. Generally, if you are going to increase/decrease any value, it should be done in small-ish increments; 25MHz or 25mV, for example.
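To see why undervolting frees up clock headroom under a fixed power limit, a rough back-of-the-envelope dynamic-power model (P = C·V²·f) can be sketched. The constant and the numbers below are made up purely for illustration; they are not measured Vega values:

```python
def max_freq_mhz(power_limit_w, vcore_v, c=1.25e-7):
    """Rough dynamic-power model: P = C * V^2 * f.
    Solves for the highest sustainable frequency under a power cap.
    `c` is an arbitrary illustrative constant, not a real Vega figure."""
    return power_limit_w / (c * vcore_v ** 2) / 1e6

# Dropping core voltage raises the frequency the same power budget can
# sustain -- the basic reason undervolting helps a power-limited Vega.
f_stock = max_freq_mhz(220, 1.20)   # ~1222 MHz in this toy model
f_undervolted = max_freq_mhz(220, 1.10)  # higher, at the same 220W
```

Since power scales with V² but only linearly with f, even a small voltage drop buys a disproportionate amount of frequency headroom.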

It appears you are still running stock HBM speed. You can likely increase this value a fair bit. I don't know how responsive Frontier cards are these days, but Vega 64s are doing quite well and most, IINM, are surpassing 1100MHz without much trouble thanks to driver improvements. My Vega 64, with a full-cover block, happily hums along between 1150-1175MHz, depending on what it is doing.

*** It is very important to note that the voltage slider right under the HBM clock adjustment has NOTHING to do with the performance of the HBM; it is quite misleading. ***

That particular voltage adjustment has become known as the 'GPU floor voltage', meaning the GPU core will get no less than whatever value is set there. For example, if you set P6 (and/or P7) to 900mV but had the HBM/floor voltage set to 1000mV, the core would never see that 900mV because the floor voltage is higher.
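The floor behaviour described above boils down to a `max()`; a two-line sketch with hypothetical values (nothing here is read from a real card):

```python
def effective_core_mv(pstate_mv, floor_mv):
    # The "HBM voltage" slider actually acts as a GPU floor voltage:
    # the core never runs below it, whatever the P-state asks for.
    return max(pstate_mv, floor_mv)

# P7 set to 900mV but the floor slider left at 1000mV: core gets 1000mV,
# so the intended undervolt silently never happens.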

Not many people here rely on PassMark to bench the GPU. You will want something from 3DMark or Unigine, amongst others, to compare with other members here.

I saw you mentioned a FreeSync monitor. I just picked one up myself: a 2560x1080 34" ultrawide LG with 75Hz FreeSync. Very happy with the purchase, especially for under $300. With screen tearing a thing of the past, I also decided to give FRTC (Frame Rate Target Control) a try and was quite impressed. If you're not familiar, it tells the GPU to only work as hard as is necessary to achieve the FPS cap set in the Radeon software. I set mine to 75 FPS to match the refresh rate, and saw a glorious reduction in power consumption and GPU temps. I don't know what your monitor's specs are, but if you want to cut down on power/heat/noise while gaming, setting up FRTC could make a large difference.
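Conceptually, FRTC behaves like a frame limiter: each frame gets a fixed time budget, and the remainder of the budget is spent idle instead of rendering frames the display can't show. A toy sketch of that idea (this is not AMD's implementation, just the general technique):

```python
import time

def run_capped(render, frames, target_fps):
    # Each frame gets a fixed time budget; whatever the render call
    # doesn't use is slept off, so the hardware idles instead of
    # burning power on frames beyond the target rate.
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render()
        leftover = budget - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)
```

With a fast `render`, almost the entire budget becomes idle time, which is exactly where the power and temperature savings come from.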

I mentioned being methodical with your tweaking and testing. I keep a spreadsheet recording GPU settings and benchmark scores; it lets you see exactly what is helping, or hurting, performance so you can really dial it in.
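The spreadsheet habit is easy to automate; a minimal sketch that appends each tuning run to a CSV (the column names and file name are my own, not from any tool):

```python
import csv
import os

def log_run(path, core_mhz, core_mv, hbm_mhz, score):
    # Append one tuning run to a CSV; write a header row the first time.
    first_write = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        w = csv.writer(f)
        if first_write:
            w.writerow(["core_mhz", "core_mv", "hbm_mhz", "score"])
        w.writerow([core_mhz, core_mv, hbm_mhz, score])
```

After a session, sorting the file by score makes it obvious which settings actually paid off.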

Be patient and you should be happy with the performance achieved









If you have questions, feel free to ask.


----------



## gupsterg

Quote:


> Originally Posted by *ITAngel*
> 
> Question: I purchased the 1.0 and the 0.5 Fujipoly thermal pads for my GPU; however, for the 1.0 they only gave me a single strip. For how much I paid for both of these, about $40 US, I would have thought I would get more. My question is, if I use what I have, can I still use the EK one on other areas since I still have some of that left, or would that be bad? Also, which chips get hotter, the VRM or the MOSFETs? Thanks!


I used the EK-supplied pads here. They moulded well around the inductors. No issues TBH; temps on the VRM seem vastly below what they can take, for sure. The driver ICs on the back of the card I plan to "connect" to the backplate with thermal pads (I use the stock Limited Edition backplate).


----------



## VicsPC

Quote:


> Originally Posted by *gupsterg*
> 
> I used EK supplied here. Moulded well around inductors. No issues TBH, temps on VRM seem vastly below what they can take for sure. The drivers on the back of card I plan to "connect" to backplate with thermal pads (I use stock Limited Edition backplate).


Same here. I don't see spending 20€, or whatever 17 W/mK pads cost these days, making a huge difference for VRM temps, especially considering they can reach 125°C and not be an issue. I think mine reach 50-60°C under water, so I'm not worried.


----------



## cephelix

If you're under water it's fine. I just had spares from when I was trying to tame my 290 on air.


----------



## gupsterg

@VicsPC










@cephelix








Yeah, on Hawaii, where the phase count was low, I saw higher temps than on a card with a higher count; for example, comparing the reference-PCB Tri-X vs the Vapor-X. Fiji was a little better than Hawaii IMO; even when I used the Fury Tri-X it wasn't at all bad vs the Fury X, very comparable if memory serves me correctly.

Ref VEGA VRM is just so







.


----------



## FastMHz

Just got my VEGA FE the other day (add me to the club). I use it in a bunch of different ways and am pleased with the performance:

1. Windows 7, Adobe Creative Suite, media production, stock speeds
2. Windows 10, Gaming, with gaming driver, stock speeds.
3. Windows 10, Mining, with blockchain driver, overclocked, about $13 per day on NiceHash

Things I've found so far that annoy me:

1. Blockchain driver is broken on Win7, tried clean install multiple times, no dice.
2. Can't switch to gaming driver on Win7. Not too big a deal but would've been nice to test.

That aside, the thing is a beast for my applications. I may try some minor OC for gaming, even though the games I play (Wolf2, Doom, BF4, BF1, Rise of the Tomb Raider) run at maxed settings in 4K and rarely fall below 60fps.


----------



## Smitty2k1

Quote:


> Originally Posted by *Call*
> 
> Hey guys! I'm new to the forums and overclocking, so bear with my noobiness Lol!
> 
> Okay so I just got a Vega Frontier (air). I bought it primarily to game since I have a FreeSync monitor, but I also 3D model so I went with the frontier. I'm super excited to hopefully get all this sorted out and running smoothly ^_^
> 
> I think my PSU isn't good enough... It's shut down once, and I don't think it was due to overheating. (EVGA 600B; and I have the Ryzen 1400 OC'd to 3.8GHz which I assume takes a lot of that power)
> I ordered an 850w seasonic platinum to solve that problem...
> 
> So I just "undervolted" it and ran a benchmark and seemed to be fine. It scored low... I'm sure I'm doing something wrong. But at least it hasn't shutdown yet.
> 
> *What settings work well, setting up for gaming, in Wattman/registry/etc.? What do I have to do to get this benching closer to 15k rather than 10k?* Lol
> Thanks so much guys!
> 
> Here are my settings that I ran the bench with and am currently using and then the link to the bench results:
> 
> 
> https://www.passmark.com/baselines/V9/display.php?id=95770841810
> 
> Thankies guys! ^_^


I was having issues with shutdowns when I first got my Vega, and I wasn't overclocking or anything. It turned out not to be my power supply, but the 'anti surge' feature on my ASUS motherboard. I disabled anti-surge in the mobo BIOS and have had no more crashes.

May be worth a shot before you commit the money to the PSU.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *cephelix*
> 
> Finally done with my initial round of tweaking my Vega 56 with a 64 BIOS(Air).
> Card is under water, cooled by 5 x 120mm radiators running 17.12.1.
> Only tested it in Superposition 1080p Extreme though. Will do further testing later.
> P6 (mHz/mV): 1536/1050
> P7(mHz/mV): 1640/1150, actual 1605mHz
> HBM(mHz/mV): 1200/950
> Power Target: 50%
> Highest Score in Superposition is 4197. Don't know how it compares to the rest but at stock I was only scoring 3431.
> 
> Being under water temps are quite good. Core temps hovered at 47C. Hotspot is 68C and HBM temps were 49C max.
> Is there any way for me to stabilise 1650mHz? At these settings 1650mHz has artifacts. Any other tips and tricks to get a better score and/or better mHz?


Maybe try a lower HBM clock first? I can't get anything over 1107 stable; it seems it can't take the higher SoC clocks.
I'm running 1530/950, 1590/1005, 1650/1068, all stable.

BTW, over 8 hours of HWiNFO without issues.


----------



## cephelix

Quote:


> Originally Posted by *gupsterg*
> 
> @VicsPC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> @cephelix
> 
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Yeah Hawaii where phase count was low I saw higher temps than a card with higher, for example if I compared the ref PCB Tri-X vs Vapor-X. Fiji was a little better than Hawaii IMO, even when I used Fury Tri-X it wasn't at all bad vs Fury X, very comparable if memory serve me correct.
> 
> Ref VEGA VRM is just so
> 
> 
> 
> 
> 
> 
> 
> .


Is HWiNFO able to monitor VRM temps now? Last I read there were problems.


----------



## geriatricpollywog

Just went to my local electronics store and bought an RX Vega 64 for $499.99. They had a few 56 cards for less, but I figured more is better. How do I overclock this thing?


----------



## gupsterg

Quote:


> Originally Posted by *cephelix*
> 
> Is hwinfo able to monitor vrm temps now? Last i read there were problems


The I2C method is; it shows the sensors 99.9% of the time. This access method is disabled in HWiNFO by default but can be forced in settings. The non-I2C method has yet to have an issue, but it is very hit and miss whether you will see the sensors or not.

Non-I2C temps appear where all the other info is, and I2C temps appear under a heading ending in CHiL/IR; see the example screenie.
Quote:


> Originally Posted by *0451*
> 
> Just went to my local electronics store and bought an RX Vega 64 for $499.99. They had a few 56 cards for less, but I figured more is better. How do I overclock this thing?


Nice







Seen some recently go on eBay for ~£500; seemed crazy when they appear to have sold for more as well. Thought you had a VEGA before?


----------



## geriatricpollywog

Quote:


> Originally Posted by *gupsterg*
> 
> I2C method is, which shows the sensors 99.9% of the time. This "access" method is disabled in HWINFO but can be forced in settings. The non i2c method yet to have an issue but it is very hit'n'miss if you will see them or not.
> 
> Non i2c appear where all the other info is and i2c will appear under a heading which has CHiL/IR at the end, see example screenie.
> Nice
> 
> 
> 
> 
> 
> 
> 
> . Seen some recently go on ebay for ~£500, seemed crazy when they seem to have sold for more as well. Thought you had VEGA before?


Just trolling. I was looking on eBay and they are going for $1k used.


----------



## cephelix

@gupsterg well, I compared my HWiNFO to your screenies, and it's not showing any VRM temps... sucks.


----------



## fallrisk

Quote:


> Originally Posted by *cephelix*
> 
> @gupsterg well, compared my hwinfo to your screenies..and it's not showing any vrm temps....sucks


I have both reference and Strix Vega 56s, and only my Strix has VRM temps available. I haven't tried that hard to see VRM temps on my reference, though; they're a non-issue. Never seen people so obsessed with VRM temps lol


----------



## VicsPC

For those who don't see VRM temps: I have the same issue, BUT sometimes after a restart or a cold boot they'll show up. If they show up once, just stress test the GPU and see what max temps you get; it's not something that needs to be monitored constantly anyway.


----------



## gupsterg

Quote:


> Originally Posted by *cephelix*
> 
> @gupsterg well, compared my hwinfo to your screenies..and it's not showing any vrm temps....sucks


Try this; if the GPU disconnects (i.e. black screen, no GPUTach LEDs on), then disable item 2.

1. Click "Settings".
2. Tick box "GPU I2C Support Force".
3. Click "Reset GPU I2C Cache".



4. Then click "OK" and "Run".

*** edit ***


__
https://www.reddit.com/r/7qce6g/display_your_amd_adrenalin_performance_logs_with/


----------



## cephelix

Quote:


> Originally Posted by *fallrisk*
> 
> I have both reference and Strix Vega 56's and only my Strix has VRM temps available. I haven't tried that hard to get to see vrm temps on my reference though, they're a non issue.. Never seen people so obsessed with VRM temps lol


Well, just curious what it is really. At least after finding out, you could plan your next step, be it OC-ing or UV-ing.
Quote:


> Originally Posted by *VicsPC*
> 
> For those who don't see VRM temps, i have the same issue BUT sometimes after a restart or a cold boot they'll show up. If they show up once just stress test the gpu and see what max temps you get, it's not something that needs to be monitored constantly anyways.


Quote:


> Originally Posted by *gupsterg*
> 
> Try this, if GPU disconnect (ie blackscreen, no GPUTach LEDs on) then disable item 2.
> 
> 1. Click "Settings".
> 2. Tick box "GPU I2C Support Force".
> 3. Click "Reset GPU I2C Cache".
> 
> 
> 
> 4. Then click "OK" and "Run".
> 
> *** edit ***
> 
> 
> __
> https://www.reddit.com/r/7qce6g/display_your_amd_adrenalin_performance_logs_with/


Alright guys thanks. I will try out the various methods when I get back home. Will keep you updated.


----------



## gupsterg

@cephelix

NP







.

If the VRM temps ever show without I2C forced, just remember not to hide the blanked-out "line" when they don't show, because they will reappear on their own.

A run earlier today without them.



Later appear.



If I were to inadvertently hide the blanked-out "line", the only way to get it back is to restore the sensor order to the original; when I last checked, the sensors seem to disappear from the hidden sensors section rather than being restorable from there.


----------



## cephelix

@gupsterg ah, ok. I have a section with blanks that I was wondering about. Haven't played around with the custom layouts in HWiNFO. Hoping to get Rainmeter with the HWiNFO plugin running so I don't have to keep scrolling.


----------



## fallrisk

Quote:


> Originally Posted by *cephelix*
> 
> Well, just curious what it is really. At least after finding out, you could plan your next step, be it OC-ing or UV-ing.
> 
> Alright guys thanks. I will try out the various methods when I get back home. Will keep you updated.


The VRMs on here are practically bulletproof, and your GPU would be far overheated before you ran into VRM temp issues, unless you're doing LN2 OCing and not cooling your VRMs. VRM temps definitely are not an issue when the card is this well designed! My Strix (with a 64 BIOS) VRM peaks at 113°C and I've still got plenty of room to spare, according to buildzoid's statements on VRM temps.


----------



## cephelix

Quote:


> Originally Posted by *fallrisk*
> 
> The VRMs on here are practically bulletproof and your gpu would be far over heated before you run into VRM temp issues, unless you're doing LN2 oc'ing and not cooling your vrms.. VRM temps definitely are not an issue when this well designed! My strix (with a 64 bios) VRM peaks at 113c and I've still got plenty of room to spare according to buildzoid's statements on VRM temps.


While that may be true, I personally wouldn't feel comfortable if my VRM temps were that high. Right now my V56 is already pulling almost 300W, granted it is under water. If I can get a gauge of the temps without too much hassle, why wouldn't I? It also helps me decide whether I should go with the V64 liquid BIOS or not.


----------



## fallrisk

Quote:


> Originally Posted by *cephelix*
> 
> While that may be true, I personally wouldn't feel comfortable if my vrm temps were to be that high. Right now my V56 is already pulling almost 300W, granted it is under water. If I can get a gauge of the temps without it being too much of a hassle, why wouldn't I? Also helps me in making a decision as to whether I should go with the V64 liquid bios or not.


Hmm? That high? The danger point for VRMs is around 125-140°C. My VRMs peaking at 110°C is like running a Vega core at 55-70°C, if you call 85°C the "danger" point where you should worry about thermals. But yeah, I think I missed the part where you had a block on it. Your temps will probably be lower than this, though! Weird that you can get it working while others can't, though.


----------



## cephelix

Could you repeat your last statement? I don't get it. Get what working?
Currently my V56 with the V64 Air BIOS runs at 1640MHz/1150mV (P7) and 1080MHz/950mV (HBM); I will have to test further in Heaven/Valley. So far it performs well enough in Superposition without artifacts. I've been playing too much BioShock, and that puts practically no load on the card.


----------



## fallrisk

Ah, am I mixing users up now? I thought you were the one that couldn't see their VRM temps.


----------



## cephelix

Well, not to say I can't. I haven't tried gupsterg's method yet, which I will do when I get back if the cold-boot thing doesn't work out.


----------



## Grummpy

My VRM temps hang around 63°C, no worries at all.

Odd seeing my fan RPM decline, though. What's that all about?


----------



## gupsterg

Quote:


> Originally Posted by *fallrisk*
> 
> ... My VRMs at 110c peak ...


If that's limited to a few instances under load I'd say it's not too bad, but if it's consistently at that temp then you're very close to throttling from VRM temp.

Code:


73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
73 00 (115°C)   USHORT usTemperatureLimitVrMem;
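Those `73 00` entries are little-endian USHORTs, so the bytes can be decoded to confirm the 115°C limit; a quick sketch in Python:

```python
import struct

def decode_ushort_le(raw):
    # PowerPlay table fields such as usTemperatureLimitVrSoc are stored
    # as little-endian unsigned 16-bit values (low byte first).
    return struct.unpack("<H", raw)[0]

# bytes 73 00 -> 0x0073 -> 115 (degrees C)
limit_c = decode_ushort_le(b"\x73\x00")
```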

Quote:


> Originally Posted by *fallrisk*
> 
> Weird that you can get it working while others can, though.


Non-I2C is very hit and miss. I just had my rig running overnight and I had VRM temps; I do a reboot this morning, fire up HWiNFO, and they're not there.

I2C is more reliable, in the sense of the sensors being there, but it can cause the GPU to disconnect. Just like Martin is reporting to AMD, I have also raised the issue twice with AMDMatt on OCuk.


----------



## os2wiz

I bought a Vega 56 card a couple of months back. I have it under water with an Alphacool Eiswolf GPX 120 waterblock, and I flashed the BIOS to Vega 64. I live in New York City (Brooklyn) and the computer is in a heated but poorly insulated sunroom, so even midday in this cold weather the room is often below room temperature (72 Fahrenheit). Right now at 2:55 AM the room is 61F. My GPU diode is 17 Celsius, my HBM temp is 16 Celsius, and my hotspot is 19 Celsius. Under full load running either Furmark or the AOS Escalation benchmark, the GPU rarely exceeds 40 Celsius.

My question is that I have had a hard time doing much better than what Wattman's Balanced preset gives me for frame rates. In AOS at 4K it gives me 74 FPS; with custom settings the best I have ever achieved is 77 FPS. But I really do not know what I am doing as far as undervolting and overclocking the memory and GPU. I have heard some people here say to set the GPU to about -5%, but is that in play with a 64 BIOS flash or not? I have no idea how to set the proper voltage, or the methodology for ramping up the HBM speeds vs the voltage settings. Also, do I really need +50% power on custom with such low temps and with the 64 BIOS on my Vega? I feel a little overwhelmed with all the variables in play. I really wish there were a guide on how to do this in a systematic way for my situation: a watercooled card with the enhanced BIOS.
Some thoughtful advice from an experienced user who has their card under water with the 64 BIOS on a Vega 56 would be greatly appreciated. Please, no conjecture from those who have a different situation; I need clarity, not confusion. Thanks.


----------



## cephelix

Quote:


> Originally Posted by *os2wiz*
> 
> I bought a Vega 56 card a couple of months back. I have it under water with Alphacool Eiswolf GPX 120 waterblock. I flashed the bios to Vega 64 . I live in new York City (Brooklyn) amd the computer is in a heated but poorly insulated sunroom so even midday in this cold weather the room is often below room temperature 72 Farenheit. Right now at 2:55 am the room is 61 F. My gpu diode is 17 Celcius my HBM temp is 16 Celcius and my Hot spot is 19 Celcius. Under full load running either Furmark or AOS Escalation benchmark the gpu rarely exceeds 40 Celcius.
> 
> My question is I have had a hard time doing much better than what wattman balanced gives me for frame rates. ON AOS at 4K it gives me 74 FPS. With custom settinsg the best I have ever achieved is 77 FPS. But I really do not know what I am doing as far as undervolting and overclocking the memory and gpu. I have heard some people here say set gpu to about -5% but is that in play with a 64 bios flash or not I have no idea how to set the proper voltage and the methodology to ramp up the HBM speeds vs the voltage settinhgs. Also do I really need +50% for power on custom with such low temps and with the 64 bios on my Vega ? I feel a little overwhelmed with all the variables in play. I really wish there was a guide on how to do this in a systematic way with my situation with watercooled card with the enhanced bios.
> Some thoughtful advice from an experienced user who has their card under water and with the 64 bios on a Vega 56 would be greatly appreciated. Please no conjecture from those who have a different situation I need clarity not confusion. Thanks.


Well, since I was in a similar situation to you, let me try to explain what I did. Of course, this is by no means exhaustive, or even the best way, but this is what I did.
1. For monitoring temps, use HWiNFO.
2. For benchmarking/initial stability tests I used Superposition. I used the 1080p Extreme test since my monitors are 1080p, but you could use whichever you want, provided it's the same throughout your testing.
3. For overclocking/undervolting I used OverdriveNTool.
4. Launch the above-mentioned programs, set the power limit to 50%, and benchmark your card with 'all states unlocked' and with 'all states except P7 and HBM P3 locked'. Record your scores and see if there's a difference. *I did this just to see if there's any difference. All tests after step 4 are done with 'all states except P7 and HBM P3 locked'.* You could also benchmark at 0% and 50% power limit. Here you will see your scores differ by quite a bit, but your power consumption and heat will of course increase.
5. Keep in mind that 'HBM voltage' does not mean what it says. The HBM voltage for your card/BIOS is a constant 1.35V, and what is labelled 'HBM voltage' is actually the voltage floor. This is usually set to the same voltage as P6, but could be lower. I set mine to 950mV.
6. Note that your core voltages for P6 and P7 should not be set to the same value. Unsure about the newer drivers, but older driver versions had a bug where, if the P6 and P7 voltages were the same, the card would get stuck in P6.
7. Knowing (5) and (6), I started increasing my HBM P3 clock 10MHz at a time and running one pass of Superposition: making sure temperatures were in check, recording power consumption, watching for any image corruption (purple/yellow flashes and artifacts), and finally noting scores. Do that until you run into image corruption, then back down by 10MHz. Use the data you've obtained to find a nice balance between power, heat, and performance.
8. After you've got your HBM tweaked, you can work on your core clocks, specifically P7. I started lowering the P7 voltage until I started seeing corruption. Again, I kept track of actual MHz in Superposition, score, temps, and power consumption. Lower the P6 voltage by 30mV or so and you're set.
9. Now, if you want to overclock your core, start from the voltage you obtained in (8). Increase the P7 clock 10MHz at a time, running Superposition each time. If you see any image corruption during the run, increase the voltage by 10mV. After a certain point, increasing the P7 voltage will still result in artifacts/corruption. At that point you can either back off on the core clock, or decrease the HBM clock so you can increase the P7 clock further; that is up to you. Here is where I use the scores from before to decide what to do.
10. Once you've settled on P7 and HBM clocks, test one more time in Superposition to make sure it's stable. If that passes, move on to other benchmarks like Heaven or Valley to see if your clocks are stable there. Once everything is set, the final test is to play a game and see if your new clocks hold up.

* If you ever experience a driver crash or black screen/screen flickering, make sure to restart your system before continuing to OC/UV your card.

Doing that, I got P7 to 1640MHz/1150mV and HBM to 1080MHz/950mV. As stated in my previous post, I've yet to test these clocks in anything intensive since I've been preoccupied with just playing BioShock.
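Steps 7-9 above are essentially an increment-until-unstable search. As pseudocode-ish Python, with a hypothetical `is_stable()` callback standing in for "one Superposition pass with no artifacts" (real stability testing obviously needs the actual card):

```python
def find_max_clock(start_mhz, is_stable, step=10, limit=2000):
    # Bump the clock one step at a time, "bench" it, and stop at the
    # last value that passed -- mirroring the bench/observe/record loop.
    # is_stable() is a hypothetical stand-in for a real benchmark run.
    mhz = start_mhz
    while mhz + step <= limit and is_stable(mhz + step):
        mhz += step
    return mhz

# e.g. if (hypothetically) everything up to 1100MHz passes:
# find_max_clock(945, lambda m: m <= 1100) -> 1095
```

The same loop applies to both the HBM pass (step 7) and the core pass (step 9); only the starting point and the stability check change.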


----------



## os2wiz

Quote:


> Originally Posted by *cephelix*
> 
> Quote:
> 
> 
> 
> Originally Posted by *os2wiz*
> 
> I bought a Vega 56 card a couple of months back. I have it under water with Alphacool Eiswolf GPX 120 waterblock. I flashed the bios to Vega 64 . I live in new York City (Brooklyn) amd the computer is in a heated but poorly insulated sunroom so even midday in this cold weather the room is often below room temperature 72 Farenheit. Right now at 2:55 am the room is 61 F. My gpu diode is 17 Celcius my HBM temp is 16 Celcius and my Hot spot is 19 Celcius. Under full load running either Furmark or AOS Escalation benchmark the gpu rarely exceeds 40 Celcius.
> 
> My question is I have had a hard time doing much better than what wattman balanced gives me for frame rates. ON AOS at 4K it gives me 74 FPS. With custom settinsg the best I have ever achieved is 77 FPS. But I really do not know what I am doing as far as undervolting and overclocking the memory and gpu. I have heard some people here say set gpu to about -5% but is that in play with a 64 bios flash or not I have no idea how to set the proper voltage and the methodology to ramp up the HBM speeds vs the voltage settinhgs. Also do I really need +50% for power on custom with such low temps and with the 64 bios on my Vega ? I feel a little overwhelmed with all the variables in play. I really wish there was a guide on how to do this in a systematic way with my situation with watercooled card with the enhanced bios.
> Some thoughtful advice from an experienced user who has their card under water and with the 64 bios on a Vega 56 would be greatly appreciated. Please no conjecture from those who have a different situation I need clarity not confusion. Thanks.
> 
> 
> 
> Well, since I was in a similar situation to you, let me try to see if I can explain what I did. Of course, this is by no means exhaustive or even the best way but this is what I did.
> 1. For monitoring temps, use HWiNFO.
> 2. For benchmarking/initial stability tests I used superposition. I used 1080p extreme test since my monitors are 1080p but you could use whichever you want provided that it's the same throughout your testing.
> 3. For overclocking/undervolting I used OverdriveNTool.
> 4. Launch the above-mentioned programs, set power limit to 50% and do a benchmark for your card with 'all states unlocked' and 'all states except P7 and HBM P3 locked'. Record your scores and see if there's a difference. *I did this just to see if there's any difference. *All tests after step 4 are done with 'all states except P7 and HBM P3 locked'. You could also try to do a benchmark with 0% and 50% power limit. Here you will see your scores differ by quote a bit but your power consumption and heat will of course increase.
> 5. Keep in mind that HBM voltage does not mean that. HBM voltage for your card/bios is at a constant 1.35v and what is labelled as 'HBM voltage' is actually the voltage floor. This is usually set to the same voltage as P6 but could be lower. I set mine to 950mv.
> 6. Note that for your core voltage for P6 and P7 should not be set to the same value. Unsure about the newer drivers but the older driver versions had a bug that if P6 and P7 voltages were the same that it would get stuck in P6.
> 7. Knowing (5) and (6), I started increasing my HBM P3 clocks by 10mhz at a time and running one run of Superposition. Making sure that temperatures were in check, recording power consumption, observing for any image corruption (purple/yellow flashes and artifacts) and finally noting scores. Do that until you run into image corruption and back down by 10mhz. Use the data you've obtained to find a nice medium between power, heat and performance.
> 8. After you've gotten your HBM tweaked, you could work on your core clocks, specifically P7. I started lowering P7 voltage until I started seeing corruption. Again I kept track of actual mHz in Superposition, score, temps, power consumption. Lower P6 voltage by 30mv or so and you're set.
> 9. Now if you want to overclock your core start from the voltage you obtained in (8). Start increasing P7 clocks 10mHz at a time and running Superposition. If you see any image corruption during the run, increase mHz by 10mv. After a certain point, increasing P7 voltages will still result in artifacts/corruption. At this point you could choose to back off on the core clocks or decrease HBM clocks so you could increase P7 clocks. That is up to you. Here is where I use scores from before to determine what I should do.
> 10. Once you've settled on P7 and HBM clocks, test one more time in Superposition to make sure it's stable. If that passes, you can move to other benchmarks like Heaven or Valley to see if your clocks are stable there. Once everything is set, the final test is to play a game and see if your new clocks are stable.
> 
> * If you ever experience a driver crash or black screen/ screen flickering make sure to restart your system before continuing to OC/UV your card.
> 
> Doing that, I got P7 to 1640mHz/1150mv and HBM to 1080mHz/950mv. As stated in my previous post, I've yet to test these clock in anything intensive since I've been preoccupied with just playing Bioshock.
Click to expand...

I have been using HWiNFO64 since you were wearing diapers.
I am using Wattman. Wattman has all the settings necessary to undervolt and overclock.
I do not like the Superposition benchmark, though I have the paid version, as it is strictly DX11. I do not know what you mean by unlocking different states. In Wattman you are able to modify P7 and P6 core voltage, and only P7 HBM voltage. See, this is why I have difficulty here. I need guidance with using Wattman, not to be told to use a different tool. What you like is subjective; what I need is objective help with what I am using.


----------



## cephelix

@os2wiz you could use Wattman as well. As stated, what I described is what I've done personally, and since I've not used Wattman, I cannot advise on that particular software. But others here have noted that it is irrelevant whether you use Wattman or OverdriveNTool.

edit: I'm out. I've given you the way I did things. It's up to you to play around with things a bit. Maybe someone else could help you specific to the software you use.


----------



## SpecChum

Quote:


> Originally Posted by *SpecChum*
> 
> Yikes.
> 
> 
> 
> Liking this watercooling malarkey


Cheeky bugger on guru3d is passing this off as their own!

https://forums.guru3d.com/threads/rx-vega-owners-thread-tests-mods-bios-tweaks.416287/page-19#post-5504973

EDIT: Looking at the 3DMark scores he posted right after this, it seems to be @Robotmind

Robotmind, care to explain?


----------



## gupsterg

That's @Ne01 OnnA; I've not read the linked thread yet. I would think he is using your data as an example of VEGA performance; he should clarify the source, etc.


----------



## SpecChum

Ah, his 3dmark username is RobotMind, at least according to the links he's posted.

Sorry @Robotmind

Reading more of that thread it's a bit weird, he posts links to scores and benchmarks, without giving any reference, so it looks like they're his, but then states he's on Fiji and waiting for Vega 12nm, so I guess you're right.

Was just a bit weird.

I really don't know why this bothered me as much as it did lol


----------



## gupsterg

You'll get used to his flair!







So the 3DM links must not be his then, either.


----------



## fallrisk

Quote:


> Originally Posted by *gupsterg*
> 
> If that's limited to few instances when under load I'd say not too bad, but if consistently at that temp then your very close to throttling occurring from VRM temp.
> 
> Code:
> 
> 
> 
> Code:
> 
> 
> 73 00 (115°C)   USHORT usTemperatureLimitVrSoc;
> 73 00 (115°C)   USHORT usTemperatureLimitVrMem;
> 
> Non I2C is very hit'n'miss, I just had my rig running overnight, I had VRM temps. I do a reboot this morning, fire up HWINFO and not there.
> 
> I2C is more reliable, in the context of being there, but can cause GPU to disconnect. Just like Martin is reporting to AMD I have also stated twice the issue to AMDMatt on OCuk.


I could've sworn the temp limit was higher, but looks like you're right. Either I've got bad VRMs or the Strix is just badly built. My card specifically has a fudged paint job anyway, and I was planning on getting a replacement. Just asked another friend what his VRM temps sit at.
(These are VRM temps under 100% constant load mining a highly inefficient algorithm, however.) And I just checked Prey at 4K; both raise VRM temps to approximately 110°C with over 200W at 1.2V Vcore... There's gotta be something wrong here.
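As an aside, the `73 00` entries gupsterg quoted from the PowerPlay table are little-endian unsigned shorts, which is how 115°C falls out of those two bytes. A minimal decode sketch (the field name is simply taken from the quoted table dump):

```python
import struct

# Two raw bytes from the quoted PowerPlay table dump:
# 73 00 (usTemperatureLimitVrSoc)
raw = bytes([0x73, 0x00])

# "<H" = little-endian unsigned 16-bit integer
(limit_c,) = struct.unpack("<H", raw)
print(limit_c)  # 115
```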


----------



## Naeem

So, any new developments with Vega 64? Anything new come up in the past few weeks, other than 17.12.1 and 17.12.2 dropping scores in benchmarks like 3DMark and SP?


----------



## ducegt

Quote:


> Originally Posted by *os2wiz*
> 
> I have been using hwinfo64 since you have been wearing diapers.










Uncalled for and certainly not the case.

Overclocking Vega is underwhelming even if you've installed a custom water cooling solution and ambient temperatures are low. I have the AIO version and I subjected my entire PC to ~32F winter air and I saw no significant gains over the balanced profile.


----------



## fallrisk

Quote:


> Originally Posted by *ducegt*
> 
> 
> 
> 
> 
> 
> 
> 
> Uncalled for and certainly not the case.
> 
> Overclocking Vega is underwhelming even if you've installed a custom water cooling solution and ambient temperatures are low. I have the AIO version and I subjected my entire PC to ~32F winter air and I saw no significant gains over the balanced profile.


Well, my bud's got a great chip doing 1900mhz+ on core with 1.2v and it does some pretty hefty lifting from what I've seen.


----------



## ducegt

Quote:


> Originally Posted by *fallrisk*
> 
> Well, my bud's got a great chip doing 1900mhz+ on core with 1.2v and it does some pretty hefty lifting from what I've seen.


Your anecdote is missing the details that would make it entertainable.

That's surely a compute workload while most everyone here is discussing gaming. Mine out of the box did 1800+ during compute.


----------



## rancor

So I have been having terrible problems with Vega rebooting my system whenever I try to make any setting changes.

Trying to just enable custom mode in Wattman will crash the computer most of the time. For the most part this happens when trying to change memory clocks: in Wattman, OverdriveNTool, and Afterburner, changing the memory clock is an almost guaranteed lockup and reboot of the system.

This has happened for a few driver versions now, through a Windows reinstall, and through a redone overclock of my CPU/memory. Currently I am on 17.12.2.
At this point I am thinking about an RMA of the GPU. Anyone have any ideas what's wrong?


----------



## fallrisk

Quote:


> Originally Posted by *ducegt*
> 
> Your anecdote is missing the details that would make it entertainable.
> 
> That's surely a compute workload while most everyone here is discussing gaming. Mine out of the box did 1800+ during compute.


Well, I sure as hell can't get over 1750 either way. Yes, it is indeed a compute workload and I'm not sure if he uses that for gaming.


----------



## Robotmind

@SpecChum

Indeed, those are my benchies in the links you posted, but the picture in your post is not mine. I have an i7 7700K, not a Ryzen 7.









It does appear that he was quoting my post from a couple pages back.

Cheers!


----------



## TrixX

Quote:


> Originally Posted by *os2wiz*
> 
> I have been using hwinfo64 since you have been wearing diapers.
> I am using wattman . Wattman has all the settings necessary to undervolt and overclock .
> I do not like superposition benchmark. , though I have the paid for version as it is strictly DX11. I do not know what you mean by unlocking different states. In wattman you are able to modify p7 and p6 on core voltage and only p7 on HBM voltage. See this is why I have difficulty here. I need guidance with using wattman not telling me to use a different tool. What you like is subjective, what I need is objective help with what I am using.


Seeing as OverdriveNTool uses the same numerical values as Wattman and makes it far easier to see what you are changing, I'd recommend giving it a go. Wattman is fine, though I'd recommend switching the options to numerical values instead of using the sliders, so you can be accurate with the numbers used.

Cephelix's comments apply to Wattman and OverdriveNTool interchangeably, as they use the same numerical values. So I'd suggest re-reading his response.

The other thing is that Superposition is a good initial test of stability; Firestrike is a better test of memory stability for Vega cards.

One last thing: you may need to read into the topic more. Cephelix gave a ton of good information which is applicable to your situation; the tool used is irrelevant. I think brushing up on the basics may be required before going further.


----------



## SpecChum

Quote:


> Originally Posted by *Robotmind*
> 
> @SpecChum
> 
> Indeed, those are my benchies in the links you posted, but the picture in your post is not me. I have a i7 7700k not a Ryzen 7
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It does appear that he was quoting my post from a couple pages back.
> 
> Cheers!


I know the image wasn't yours, it was mine lol

I was just a bit annoyed that he was passing those off as his, but that didn't appear to be the case; he's just posting info but really needs to be a bit more clear


----------



## pengs

Anyone with a V64 LC have the 17.12.2 driver defaulting to 1250mV? It's usually 1200mV.

I wonder if this was done on purpose by AMD, or what's going on. Waiting for an official version to update drivers.
On another note, I'm boosting up to 1730-1740MHz sometimes with that extra 50mV.


----------



## ITAngel

Quote:


> Originally Posted by *pengs*
> 
> Anyone with a V64 LC have the 17.12.2 driver defaulting 1250mV? Usually 1200mV.
> 
> I wonder if this was done on purpose by AMD or whats going on. Waiting for an official version to update drivers.
> On another note I'm boosting up to 1730-1740MHz sometimes with that extra 50mV.


Nope, I manually set mine to 1250mV for testing but moved it back to 1200mV. I can double-check which driver version I'm running once I'm home from work; I believe I'm on 17.12.2, but let me verify that once home.


----------



## Trender07

Quote:


> Originally Posted by *rancor*
> 
> So I have been having terrible problems with Vega rebooting my system whenever I try to do any setting changes.
> 
> Trying to just enable custom mode in watman will crash the computer most of the time. For the most part this is happening when trying to change memory clocks. In watman, overdriventool, and Afterburner changing memory clock is an almost guaranteed lockup and reboot of the system.
> 
> This has happened now for a few driver versions, through a windows reinstall, and a redone overclock of my CPU/memory. Currently I am on 17.12.2.
> At this point I am thinking about a RMA of the gpu. Anyone have any ideas of what's wrong?


idk man, mine also crashes every day or 2 and gets locked to P7, and I have to restart the PC. I don't know what could be going on. Do you also use GPU extension cables?


----------



## diabetes

Quote:


> Originally Posted by *Trender07*
> 
> idk man mine also crashes every day or 2 and gets locked to p7 and have to restart PC i dont know what could go on, do u also use gpu extension cables?


My V56 also does this, but only in Elite: Dangerous. No extension cables; the card is plugged directly into the motherboard. It was fine the first week of the year, and I think the issue was introduced with the Windows updates for Meltdown and Spectre.


----------



## Grummpy

Using a 32" display and 2 x 19" displays in portrait-mode Eyefinity in full screen.
Getting a 50"+ ultrawide experience for less than 160 pounds extra on the 2 x 19" monitors with arm stands.
Can't do that on any other GPU.








These are the settings I used to draw 100 watts less power without affecting performance.
As a consequence, temps on the HBM2 went up 3 to 4°C.

https://drive.google.com/open?id=1-wd6tc66LZzcU8BPq-oaB9gQpddFESdS




Drop clock speed 5% and tweak Vcore, then regain the performance by overclocking the memory.
Lower temps with less power usage, and latency that keeps performance the same while using much, much less power.

As you close the game, the 2 outer screens go to sleep mode and turn off.
You run a game, they turn on; it's fantastic.


----------



## SpecChum

Quote:


> Originally Posted by *Grummpy*
> 
> Using 32" display and 2 x 19" displays in portrait mode eyefinity in full screen.
> Getting 50" + ultra wide experience for less than 160 pounds extra on the 2 x 19 monitors with arm stands.
> Cant do that on any other gpu
> 
> 
> 
> 
> 
> 
> 
> 
> These are the settings i used to use 100 watt less power without effecting performance.
> consequence temps on the hbm2 went up 3 to 4 c
> 
> https://drive.google.com/open?id=1-wd6tc66LZzcU8BPq-oaB9gQpddFESdS
> 
> 
> 
> 
> drop clock speed 5% and tweak v core then regain performance by overclocking the memory.
> lower temps with less power usage and latency that keeps performance the same wile using much much less power.
> 
> As you close the game the 2 other outer screens goto sleep mode and turn off.
> you run game they turn on its fantastic.


Really glad to see everything going fine for you now buddy; I'm aware you've not had the best of luck thus far!


----------



## NI6HTHAWK

Quote:


> Originally Posted by *pengs*
> 
> Anyone with a V64 LC have the 17.12.2 driver defaulting 1250mV? Usually 1200mV.
> 
> I wonder if this was done on purpose by AMD or whats going on. Waiting for an official version to update drivers.
> On another note I'm boosting up to 1730-1740MHz sometimes with that extra 50mV.


Yeah, I noticed this too; my guess is they did it for better clocks or maybe stability. I haven't crashed from clock speeds ramping up too high with these drivers; I only crash occasionally because of my triple-display setup. This is what my Wattman shows when I take it from Balanced to Custom.


----------



## hellm

AFAIK it was 1250mV Vcore from the start, because that is what is stored in the LC BIOS, in the ASIC_Profiling table. The 1200mV came from the PowerPlay table, or the driver itself. With a Polaris 580 BIOS, no matter what Vcore you had, Wattman showed 1150mV for the last 3 states.


----------



## Grummpy

@SpecChum
Yeah, I killed my card, didn't I.
It was my own stupid fault for not supporting the board.
So I went out and got another lol.
Spent over 1100 now lmao...




This Forza runs well.
Power usage is great; so much for Vega being greedy.
ONLY IF YOU LET IT.

Shows how robust the hardware build quality is.


----------



## Trender07

Quote:


> Originally Posted by *diabetes*
> 
> My V56 does also do this, but only in Elite: Dangerous. No extension cables, card is directly plugged into the motherboard. It was fine the first week of the year and I think the issue was introduced with the Windows updates for Meltdown and Spectre.


Well, if it's only recently because of the patches then it's OK I guess; mine has been crashing since day one.


----------



## os2wiz

Quote:


> Originally Posted by *ducegt*
> 
> Quote:
> 
> 
> 
> Originally Posted by *os2wiz*
> 
> I have been using hwinfo64 since you have been wearing diapers.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Uncalled for and certainly not the case.
> 
> Overclocking Vega is underwhelming even if you've installed a custom water cooling solution and ambient temperatures are low. I have the AIO version and I subjected my entire PC to ~32F winter air and I saw no significant gains over the balanced profile.

That is what I have noticed also. But how does one convert the Wattman slider from percentages to actual frequency for the core?


----------



## fallrisk

Quote:


> Originally Posted by *os2wiz*
> 
> That is what I have noticed also. But how does one convert the wattman slider from percentages to actual frequency for core???


Huh? There's a toggle *right* where it says Frequency (%). Same for voltage control.
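For anyone who does want the numbers by hand, the percentage slider is just a relative offset from the stock P-state clock, so the conversion is one multiply. A quick sketch (the 1630MHz base clock here is only an illustrative value, not any card's actual stock P7):

```python
def pct_to_mhz(base_mhz, pct_offset):
    """Convert a Wattman-style percentage offset to an absolute clock (MHz)."""
    return base_mhz * (1 + pct_offset / 100)

# Hypothetical examples: +5% and -10% offsets on a 1630MHz base
print(pct_to_mhz(1630, 5))    # 1711.5
print(pct_to_mhz(1630, -10))  # 1467.0
```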


----------



## tomogotchi

Is HBM temp at 95°C on stock settings normal? I just received my Vega FE (air-cooled) and started testing it with Time Spy, and my HBM temp is always at 95°C under load in HWiNFO. The HBM frequency remains at 945MHz throughout all my Time Spy runs, so no throttling, I assume? However, 95°C is making me worried. Would undervolting the HBM help? If so, is there a guide? Because I read that the HBM voltage next to the frequency is a core floor voltage and not the actual HBM voltage. I also searched around and found Tom's review had similar HBM temps of 95°C, but they never revisited the issue. Others here claim they reach 92°C while gaming, or only a couple of degrees above their core temp. Should I open it up and repaste, or is that just how it is for Vega FEs?


----------



## TrixX

Quote:


> Originally Posted by *os2wiz*
> 
> That is what I have noticed also. But how does one convert the wattman slider from percentages to actual frequency for core???


Use OverdriveNTool. No need for conversion or using clunky Wattman.

https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/

Same functions as Wattman just without all the nonsense of sliders. Input the numbers required in the cells and voila.


----------



## pengs

Quote:


> Originally Posted by *NI6HTHAWK*
> 
> Yeah i noticed this too, my guess is they did it for better clocks or maybe stability. I haven't crashed from clock speeds ramping up too high with these drivers, I only crash occasionally because of my triple display setup. This is what my wattman shows when i take it from Balanced to Custom.
> 
> 


Quote:


> Originally Posted by *hellm*
> 
> afaik it was 1250mV Vcore from the start. Because this is what is stored in the LC BIOS, in the ASIC_Profiling table. The 1200mV came from the PowerPlay table, or the driver itself. With Polaris 580 BIOS, no matter what Vcore you had, 1150mV for the last 3 states is what wattman showed.


I see. I just reinstalled 17.12.1 and it's the same as 17.12.2. So prior to Adrenalin, the driver was undervolting the card... which probably contributed to the instability from release until November-ish. So the LC version is effectively voltage-maxed as far as Wattman is concerned, if this is correct.

BTW, I'm missing the workload toggle within the settings. Anyone else?


----------



## SavantStrike

Quote:


> Originally Posted by *pengs*
> 
> I see. I just reinstalled 17.12.1 and it's the same as 17.12.2. So prior to Adrenalin the driver was undervolting the card... which probably attributed to the instability from release to November-ish. So the LC version is effectively voltage maxed as far Wattman is concerned if this is correct.
> 
> BTW, I'm missing the workload toggle within the settings. Anyone else?


I've never seen the workload toggle in the wild; it's supposedly automatic on Vega and only needed for earlier architectures. Evidence still suggests that the blockchain drivers are better for compute tasks, too.


----------



## cg4200

I grabbed another Vega Frontier card to run CrossFire when gaming. Got a great deal before the gouging began again... as all Vega 64s are overpriced and 8GB, I figured I would get the 16GB FE for a little more.
The gaming option isn't there with 2 cards... the driver install and switch option is missing.
If I unplug 1 card, install the same drivers, turn off the computer, add the second card, and restart the computer,
then it shows I am in FirePro mode with the driver switch option now there, and it works; I now have 32GB playing GTA with Wattman there.
Is anyone running 2 FE cards with an easier way? Also, Afterburner tweaks out with two FEs but not one??
Also, has anyone flashed an FE to a regular Vega 64 BIOS? Thanks


----------



## RoughHex

*Rig:*
Taichi X370
Ryzen 1700 @ 3.85 Ghz
32GB TridentZ z170 @ 14-14-14-34-50-1t 2933Mhz
XFX RX Vega 56 air with Morpheus II heatsink on Sapphire 64 Air bios (016.001.001.000.008730)

*Short Backstory:*
I received the original card late October. I flashed the bios with the one listed and proceeded to OC/UV to see the range of operations for the card from low power mining to top end gaming. At first I was able to get a GPU down to 950mV at the stock 64 clock with memory overclocked to 1050Mhz at 950mV. Timespy scores were in the 7200-7300 range and card was running stable for weeks. Tragedy strikes with a lightning storm frying my surge protector. When I was able to get the PC back on, I could no longer UV below 1150mV on GPU or find a stable overclock for the RAM over 945Mhz no matter what voltage I tried.

I ended up sending the card in for RMA because of an issue with the display signal dropping that turned out to be caused by my Samsung KS8000 (I found a temporary workaround, if anyone has the Samsung One Connect issue).

The "new" card that XFX returned to me can't be OC'd on the GPU at all without immediate crashes. Any GPU UV below 1170mV causes a driver crash and/or computer lock, though I am able to run the RAM at 1060MHz as long as the VRAM voltage is above 1070mV. This is true for all drivers I've tried, from 17.12.2 back to 17.11.4. Temps never exceed 100°C in any of the runs. The crashes occur almost instantly, sometimes just from opening the 3DMark launcher.

*Question:*
Is the new card just the biggest loser of the GPU lottery or is there something I'm missing in my methods of OC/UV?
My current method: UV the Vcore in 5mV increments until instability > return to the last stable Vcore > increase GPU frequency by 5MHz until instability, then lower until stable. The same general method is used for the RAM.
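That stepping procedure can be written down as a pair of simple search loops. This is only a sketch of the method described above; `is_stable` stands in for an actual benchmark pass (Timespy, a game run, etc.) and is not a real API:

```python
def tune_down_voltage(start_mv, is_stable, step=5):
    """Lower voltage in fixed steps until instability, then
    return the last value that still passed the stability test."""
    mv = start_mv
    while is_stable(mv - step):
        mv -= step
    return mv

def tune_up_clock(start_mhz, is_stable, step=5):
    """Raise clock in fixed steps until instability, then back off."""
    mhz = start_mhz
    while is_stable(mhz + step):
        mhz += step
    return mhz

# Toy stand-ins: pretend the card is stable down to 1100mV / up to 1630MHz
print(tune_down_voltage(1170, lambda mv: mv >= 1100))  # 1100
print(tune_up_clock(1600, lambda mhz: mhz <= 1630))    # 1630
```

In practice each `is_stable` check is minutes of benchmarking, which is why the 5mV/5MHz step sizes quoted above are a reasonable trade-off between precision and time.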

The main goal is to achieve maximum OC for gaming while keeping that pesky hotspot temp below 95°C. Currently my core and HBM temps peak around 60°C ± 5°C at full load (Timespy @ 4K), but the hotspot easily approaches 95°C, especially if I don't frame-rate limit to under 120Hz in WattMan (when gaming).

Thanks for any feedback.


----------



## SavantStrike

Quote:


> Originally Posted by *cg4200*
> 
> I grabbed another vega frontier card to run crossfire when gaming. Got great deal before gouging began again...as all 64 vega overpriced and 8 gb figured.. I would get fe 16gb for little more..
> Gaming option not there with 2 cards.... driver install and switch is missing..
> If I unpug 1 card and install same drivers turn off computer add second card restart computer ..
> then it shows I am in firepro with driver switch option now there and it works as I now have 32 gb ram playing gta with wattman there..
> Is anyone running 2 fe cards have easier way.. also afterburner tweeks out with two fe but not one ??
> Also anyone flash a fe to regular vega 64 bios??thanks


You can't flash the V64 BIOS to an FE, or vice versa.


----------



## webhito

Just cashed out on my Vega 64, sold it for $900, got myself a 1080 ti Kingpin. Almost felt bad for selling it that high but the buyer left quite happy lol.

I wish thee the best!


----------



## cephelix

Quote:


> Originally Posted by *webhito*
> 
> Just cashed out on my Vega 64, sold it for $900, got myself a 1080 ti Kingpin. Almost felt bad for selling it that high but the buyer left quite happy lol.
> 
> I wish thee the best!


GPU prices now are insane! Great that you could make money from it.


----------



## rancor

Quote:


> Originally Posted by *Trender07*
> 
> idk man mine also crashes every day or 2 and gets locked to p7 and have to restart PC i dont know what could go on, do u also use gpu extension cables?


I don't have GPU extension cables, but I figured out the problem. It's my three screens that are screwing up the card. If I disconnect two of the DP 2560x1440 144Hz displays it's completely stable. So I just have to unplug two displays whenever I need to change Wattman settings.


----------



## SpecChum

Quote:


> Originally Posted by *webhito*
> 
> Just cashed out on my Vega 64, sold it for $900, got myself a 1080 ti Kingpin. Almost felt bad for selling it that high but the buyer left quite happy lol.
> 
> I wish thee the best!


Very nice, and yet I still have no interest in selling mine.

I suspect I'm finally losing the plot lol


----------



## Aenra

Quote:


> Originally Posted by *SpecChum*
> 
> Very nice, and yet I still have no interest in selling mine.
> 
> I suspect I'm finally losing the plot lol


Not in the slightest... I'm not selling mine either, or considering it for that matter.

You've grasped the plot just fine, don't worry 

This book has never been about some measly overall FPS difference. A good book too, if I may say so


----------



## webhito

Quote:


> Originally Posted by *SpecChum*
> 
> Very nice, and yet I still have no interest in selling mine.
> 
> I suspect I'm finally losing the plot lol


Nah, you're happy with your purchase, nothing wrong with that.

The reason I gave in was noise. I hate watercooling, and keeping mine cool enough required a really strong fan profile; my Kingpin is in a whole different ballpark, whisper quiet, and doesn't break 65°C with the fans on auto. My hard drive is the noisiest thing in my build now.


----------



## TrixX

Quote:


> Originally Posted by *webhito*
> 
> Nah, you are happy with your purchase, nothing wrong with that.
> 
> The reason I gave in was in favor to noise, I hate watercooling, and to keep mine cool enough required a really strong fan profile, my kingpin is on a whole different ballpark, whisper quiet, doesn't break 65c with the fans on auto. My hard drive is the noisiest thing in my build now.


Odd thing to hate watercooling. Expensive, but definitely not something to hate


----------



## By-Tor

I love water cooling and look at it as a hobby within a hobby...


----------



## webhito

Quote:


> Originally Posted by *TrixX*
> 
> Odd thing to hate watercooling. Expensive, but definitely not something to hate


It's the leaks that I hate. I had a close encounter a few years back with a custom build, and now I don't even go near AIOs.

Just air for me!


----------



## ITAngel

After being such a big fan of air cooling, I caved in once to try water cooling on my hardware but ran away back to air cooling. This time around, I am not sure what happened, but I wanted to water cool the most expensive system I have ever owned. lol







I am pretty happy though.

Here are some shots of the Vega FE water cooled.









Sorry for the poor photo quality; blame my poor lighting and head lamp, plus my phone.


----------



## newbminer

FE owner here. Is there a BIOS that works well for overclocking GPU mining?


----------



## ITAngel

Quote:


> Originally Posted by *newbminer*
> 
> FE owner here. Is there a BIOS that works well for overclocking GPU mining?


I flashed my FE to the LC BIOS and then use the 17.12.2 Gaming drivers to modify the card.


----------



## fallrisk

Well, turns out my friend's 1950 (actually 1980) at 1200mV and +50% power limit is for *gaming and compute*... He's got a hell of a card.
I also just found out that my Strix 56 has Hynix memory... no wonder I can hardly get it over 900 on a 64 BIOS. What a damn shame. Looks like I'll just flip this thing on eBay.


----------



## cephelix

Whoa..1980 @ 1200?! That's insane. I'm only managing 1640 @ 1150


----------



## Grummpy

Undervolting rewards


----------



## BeetleatWar1977

Quote:


> Originally Posted by *cephelix*
> 
> Whoa..1980 @ 1200?! That's insane. I'm only managing 1640 @ 1150


i get [email protected] - but [email protected] or [email protected]; nearly 2k is really a big prize in the lottery


----------



## cephelix

Quote:


> Originally Posted by *BeetleatWar1977*
> 
> i ģet [email protected] - but [email protected] or [email protected], nearly 2k is Really a big price in the lottery


How is everyone getting low voltages here?! Lol. Granted, mine is a 56 flashed to a 64 BIOS. Could it be linked to my HBM? Since mine is at 1080MHz, I think.


----------



## cplifj

After installing the official release of 18.1.1, all is working as it was in 17.12.2.

EXCEPT I now have 7 Watts idle consumption on the GPU core instead of the 3 Watts I had with previous drivers (that's more than double the power usage).

So what is it that AMD is actually doing?

Has anyone else noticed this?


----------



## Grummpy

Try this.
Do something unforgivable: lower your clock speed by 5% and reduce Vcore from 1.2 to 1.0,
then overclock the memory to 1100 at stock voltages.
You will maintain the same performance but use 100 watts less power.
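As a rough worked example of the arithmetic in that recipe (the 1750MHz P7 clock is an assumed figure for illustration, not a measured one):

```python
p7_clock = 1750              # assumed stock P7 target, MHz (illustrative)
new_clock = p7_clock * 0.95  # drop clock speed by 5%
vcore_drop = 1.2 - 1.0       # volts shaved off the core
hbm_clock = 1100             # memory overclock target, MHz, at stock voltage
print(round(new_clock), round(vcore_drop, 1), hbm_clock)  # 1662 0.2 1100
```

The memory overclock is what claws back the performance lost to the 5% core drop.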


----------



## Grummpy

Quote:


> Originally Posted by *cplifj*
> 
> after installing the official release of 18.1.1 all is working as it was in 17.12.2.
> 
> EXCEPT, I now have 7 Watt's idle consumption on the gpucore instead of the 3 Watts I had with previous drivers...(that's over double, 200% power usage)
> 
> So what is it that AMD is doing actually ?
> 
> Has anyone else noticed this then ?


You honestly worried about 4 watts?
4 watts over 1000 years would cost you 50 cents.

LMAO
"that's over double, 200% power usage"
REALLY, omg, call the fire brigade, I'm going to have an electric fire.
Jees man, who cares, it's 4 friggin' watts. No big deal; making a fuss over nothing, I think.
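For what it's worth, the arithmetic on a 4-watt delta is easy to check (the $0.12/kWh rate is an assumption; it varies by region), and it works out to pocket change per year:

```python
extra_w = 7 - 3                 # extra idle draw reported, watts
hours_per_year = 24 * 365       # 8760
kwh_per_year = extra_w * hours_per_year / 1000
rate = 0.12                     # assumed electricity price, $/kWh
cost_per_year = kwh_per_year * rate
print(round(kwh_per_year, 2), round(cost_per_year, 2))  # 35.04 4.2
```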


----------



## Grummpy

Undervolting results, 100% stable so far.
It slashes more than 100 watts of power usage without affecting performance.


----------



## cephelix

@Grummpy noted. Thanks for that. I have a spreadsheet somewhere that lists super scores and why I chose the settings I did. I will definitely look over it and if I haven't already tried your settings, you can be sure I will and report back


----------



## SpecChum

How are we measuring this? Cos my undervolt is 1632 @ 915mV, but obviously the clocks don't stay there.

On a full load, like 4K SP, it gets 1480 to 1490 at about 180W, and on something lighter, like CS:GO, it can go to 1600 at about 100W.

Going the other way, 1752 is fine at 1180mV.


----------



## VicsPC

Anybody else notice that on 18.1.1, disabling FRTC in a game profile doesn't work? I just tried it on Siege and Farming Siege and it gets stuck at 73fps (what I have my FRTC set to, as I run a FreeSync monitor). A workaround would be to set the fps limit to 300fps, but that's a real downer for me.


----------



## VicsPC

Quote:


> Originally Posted by *cplifj*
> 
> after installing the official release of 18.1.1 all is working as it was in 17.12.2.
> 
> EXCEPT, I now have 7 Watt's idle consumption on the gpucore instead of the 3 Watts I had with previous drivers...(that's over double, 200% power usage)
> 
> So what is it that AMD is doing actually ?
> 
> Has anyone else noticed this then ?


I'm at 3-4W at idle. That's with everything closed: Steam, Uplay, and Chrome. You must have something open while checking your idle measurement.


----------



## cplifj

I'm sure I really did check everything, and even shutdowns/reboots/restarts gave the same result. Now, the day after, I booted and everything seems back to normal with 3 Watts at idle. Very strange. Next time something happens I shall wait till the next day and see if it fixes itself.


----------



## cplifj

Quote:


> Originally Posted by *Grummpy*
> 
> you honestly worried about 4 watts ?
> 4 watts over 1000 years would cost you 50 cents
> 
> LMAO
> that's over double, 200% power usage
> REALY omg call the fire brigade im going to have an electric fire,
> Jees man who cares its 4 frig in watts no big deal making a fuss over nothing i think.


You just make sure you don't break any cards.

Of course 4 watts is very little, but when idle draws double its usual power, no matter how small, something is wrong. Purely technical, without any marketing ploys involved.


----------



## Grummpy

Quote:


> Originally Posted by *cephelix*
> 
> @Grummpy noted. Thanks for that. I have a spreadsheet somewhere that lists super scores and why I chose the settings I did. I will definitely look over it and if I haven't already tried your settings, you can be sure I will and report back


Save it, then load it into Wattman.
https://drive.google.com/open?id=1-wd6tc66LZzcU8BPq-oaB9gQpddFESdS

I suppose it all depends on the card.
I'm using the LC Vega 64.


----------



## Grummpy




----------



## tarot




Quote:


> Originally Posted by *Grummpy*





I tried your setup and it just told me to do something with my mamma, overboosted, and crashed like always.

I seriously think I have a possessed card. I can run 1660 all day long (sits on 16385 most times) at 1160, but if I lower that to even 1150 it overboosts and crashes... stupid card









Now, one thing: the 2 x 8-pin power connectors. How do people have them set up? A single cable, double cables, different rails, what? Because I think my problems may be stemming from how I have it set up.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *cephelix*
> 
> How is everyone getting low voltages here?! Lol. Granted mine is a 56 flashed to a 64 bios. Could it be linked to my hbm? Since mine is at 1080mhz.i think


Got a 56 and can run the HBM up to 1107; mine won't take a higher SoC clock. I'm using the BIOS for the 64 as well.


----------



## cephelix

Quote:


> Originally Posted by *Grummpy*
> 
> save then load into wattman.
> https://drive.google.com/open?id=1-wd6tc66LZzcU8BPq-oaB9gQpddFESdS
> 
> suppose it all depends on the card.
> im using the LC vega 64


So use Wattman to OC instead?


----------



## Grummpy

Quote:


> Originally Posted by *cephelix*
> 
> So use wattman instead? to OC?


It's what I use now.
The fact that I can save a profile makes it good to use now.


----------



## os2wiz

I have had my RX Vega 56 card since early October and installed custom dedicated water cooling for it in November. I installed the Vega 64 BIOS on it. I have spent 2 and 1/2 months trying to optimize settings. It runs very cool, but the biggest performance boost I can get over Wattman's balanced settings is 4%. Hardly worth the 100+ hours of effort I have expended. My max score in the AOS Escalation benchmark was 77 fps at 4K high quality; with balanced mode, 74 fps. Every time the driver is upgraded the scores change, sometimes lower and sometimes a bit higher, but with no significant improvement.

My conclusion is that this card does NOT live up to expectations. Even with the Vega 56 BIOS it can never sustain the rated fps that AMD gives it. My water cooling may prolong the life of the GPU through cooling but has not given any real advantage in overall performance. The card cost me $599 with 2 games included. The AlphaCool waterblock was $190 including shipping. The $80 I had to pay for installation, as I am elderly and could not install the thermal pads properly, brings the total expense to $869. Of course I could have had a factory-installed water-cooled high-end 1080 Ti for less than that. But I can live with this disappointment.

Next time I will be more careful before buying into the hype of AMD graphics cards. I am retired and this was a good chunk of money for a working class guy like me. No more hours devoted to trying to squeeze a 4% improvement in fps out of a poorly designed card. This card should never have been produced on the 14LP process either. Unfortunately AMD has a punitive contract with GF that should never have been agreed to. When it moves to 7nm this summer I doubt AMD will issue an enthusiast version of it. Navi may be a different story in 2019, but I am NOT holding my breath.


----------



## cephelix

Quote:


> Originally Posted by *Grummpy*
> 
> its what i use now.
> the fact i can save profile make it good to use now.


Alright. I'll give it a try. thanks!


----------



## tarot

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *os2wiz*
> 
> I have my RX Vega 56 card since early October and installed custom dedicated water cooling for it in November. I installed the Vega 64 bios on it. I have spent 2 and 1/2 months trying to optimize settings. It runs very cool But the biggest performance boost I can get over wattman balanced settings is 4%. Hardly worth the 100+ hours of effort I have expended. My max score in AOS Escalation benchmark was 77 fps at 4k high quality. With balanced mode 74 fps. Every time the driver is upgraded the scores will change sometimes lower and sometimes a bit higher but no significant improvement. My conclusion is this card does NOT live up to expectations. Even with the Vega 56 bios it never can sustain the rated fps that AMD gives it. My water cooling may prolong the life of the gpu through cooling but has not given any real advantage in overall performance. Card costed me $599 with 2 games included. The AlphaCool waterblock was $190 including shipping. The $80 I had to pay for installation as I am elderly and and could not install the thermal pads properly brings the total expense $869. Of course I could have a factory installed water-cooled high end 1080 Ti for less than that. But I can live with this disappointment. Next time I will be more careful before buying into the hype of AMD graphics cards . I am retired and this was a good chunk of money for a working class guy like me. No more hours devoted to trying to squeeze 4% improvement in fps out of a poorly designed card. This card should never have been produced on the14LP process as well. Unfortunately AMD has a punitive contract with GF that should never been agreed to. When it moves to 7nm this summer I doubt AMD will issue an enthusiast version of it. Navi may be a different story in 2019. but I am NOT holding my breath.






I agree and disagree at the same time
my card cost(xfx 64) $729 aus the block $140 aus
400 less than a 1080Ti with block (apples to apples right)
and around $200 less than 1080 with block.

the main reason I cooled it was the broke-ass cooler, plus I like things cool and quiet.
Also, with the fan at 100 percent on the stock cooler I never reached 1630; it was close, but sitting at 80 degrees was not fun, and that was not summer. Add an easy 10 to 15 degrees ambient and it would have throttled all over the place.

What I do agree on is that overclocked there is little in it performance-wise, but I also found that on the Fury X I had. BUT just throw the RAM up to 1100, leave it stock running balanced, and it performs very well without the excess heat (under water, that is; with the stock cooler you still have issues unless you undervolt).

So while it could have been better and definitely could have been cheaper, I don't see it as an abject failure, more like a meh release.
The other thing is it meant I did not have to go near NVidia, who I hate.








Now, as for expectations based on what it was supposed to clock to, I agree completely in my case... 1652 is not what I would have liked; I would have preferred over 1700. BUT the RAM did exceed my expectations, happily cruising at 1100, so a bit of give and take there.

Now, I did not pay for installation or anything, but that has nothing to do with the card and the value really, now does it?


----------



## sega4ever

Quote:


> Originally Posted by *Grummpy*


Hello, I see that you undervolt to 1006mV and I was wondering why not go to 950mV? Do you feel the lowered clocks are not worth it? I was doing 900mV, but after monitoring power usage with AMD Link I saw there was no difference between 900mV and 950mV. I guess 950mV is the floor, due to it being the default memory voltage control floor.


----------



## Grummpy

Quote:


> Originally Posted by *sega4ever*
> 
> hello, i see that you undervolt to 1006mv and i was wondering why not go to 950mv? do you feel that the lowered clocks are not worth it? i was doing 900mv but after monitoring power usage with amd link i saw that there was no difference between 900mv and 950mv. i guess 950mv is the floor due to it being the default memory voltage control floor whatever.


Stability.
Yes, I can take it down lower, but random crashes and instability make the extra 20 watts worth keeping.
It's all trial and error; I'm sure I could take it down lower, but I'm happy with how this is performing now.
Just have to play, test, play, test.


----------



## Grummpy

Will it play 5K?
I can see the HBCC is working; I'm up to 9000MB of GPU memory.


----------



## Grummpy

People don't seem to realise that with AMD Eyefinity on Vega you can add 2 extra displays,
flip them 90 degrees, and run them in *FULL screen* mode.
I personally think 3 x 16:9 screens are too wide; I don't want to render pixels I'm not looking at.
This is something you can't do with other card manufacturers, so leverage your advantage.
Me, I'm using a 32" Samsung and 2 19" Dell displays flipped 90 degrees.
It allows me to game in widescreen at a massively reduced price.
It cost me 50 pounds each for my 19" Dell screens and the stands cost me 25 each,
so that's 150 pounds for the widescreen experience.
It's worth looking into because it's cheap to do.
50" gaming for an extra 150 pounds? I'll take that, thank you.
Give it a go, what have you got to lose?
Just pick the right screens:
the side screens' horizontal pixels must match the centre screen's vertical pixels, and keep the sizes as close as possible.
For me it was 2560x1440 / 1440x900,
32" and 19".
*1440*

Honestly, when you are gaming the borders vanish because they are in your peripheral view.
It adds immersion.
I have mine set so when I run a game they turn themselves on, and turn off when I quit.
That's kind of nice.
But they are also useful for chat and other apps if you don't want to use them for gaming.


Hope some of you give it a go.
If anyone wants help picking screens if they choose to give it a go im happy to help.
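The screen-matching rule above is easy to sanity-check before buying panels. A quick illustrative sketch (the helper function is my own, not from any AMD tool): a side panel rotated 90 degrees contributes its native *width* as height, and that is what has to line up with the centre screen.

```python
# Hypothetical helper: does a portrait-rotated side monitor line up with
# the centre screen in an Eyefinity layout like the one described above?
def portrait_flank_matches(center, side):
    """center/side are (width, height) native resolutions in pixels."""
    cw, ch = center
    sw, sh = side
    # After a 90-degree flip the side panel is sh wide and sw tall.
    return sw == ch

# Grummpy's combo: 32" 2560x1440 centre, 19" 1440x900 sides
print(portrait_flank_matches((2560, 1440), (1440, 900)))   # True
# A 1080p side panel would not line up with a 1440p centre:
print(portrait_flank_matches((2560, 1440), (1920, 1080)))  # False
```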


----------



## TrixX

Quote:


> Originally Posted by *os2wiz*
> 
> Rant which needs paragraphs...


Thanks for your insight. It's highly possible you just lost out on the silicon lottery with your 56. My experience has been anything but similar, though I had a 64, so possibly I lucked out a bit. At stock settings I was pulling close to 260W with average core speeds around the 1500s due to temp throttling, so performance wasn't pretty. It was passable, but not what I expected from the card. After some tweaking with the time-saving OverdriveNTool (Wattman was bugged to hell when I was first testing), I managed to get the core up to 1752MHz stable and dropped the power draw to around 180W for daily use; if I really felt like burning money I could hit 300W to reach that 1752MHz limit. Since then I've settled on a middle ground around the 1680MHz range, pulling around 180W with the undervolt I run. Performance-wise the initial difference isn't huge, but over time the gain is massive, as there are no throttling limitations and temps stay cool.

Your experience with a single card is just that: a single data point, not the average.


----------



## rdr09

Quote:


> Originally Posted by *Grummpy*
> 
> People dont seem to realise with AMD eyefinity on vega it allows you to add 2 extra displays
> and flip them 90 degree and run them in *FULL screen* mode.
> I personally think 3 x 16/9 screens are to wide i dont want to render pixels im not looking at.
> This is somthing you cant do with other card manufacturers so leverage your advantage.
> Me im using a 32" samsung and 2 19" dell displays flipped 90 degree.
> It allows me to game in wide screen at a massive reduced price.
> It cost me 50 pound each for my 19" dell screens and the stands cost me 25 each.
> so thats 150 pounds for wide screen experience.
> Its worth looking into because its cheap to do.
> 50" gaming for a extra 150 pounds i take that thank you.
> Give it a go what you got to loose.
> Just pick the right screen.
> horizontal pixels must match the vertical and keep the size as close as possible.
> For me it was 2560x1440/ 1440x900.
> 32" and 19"
> *1440*
> 
> Honestly when you are gaming the borders vanish because they are in your peripheral view
> It adds immersion.
> I have mine set so when i run game they turn themselves on and turn of when i quit.
> Thats kind of nice.
> But they are also useful for chat and other apps if you dont want to use them.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> Hope some of you give it a go.
> If anyone wants help picking screens if they choose to give it a go im happy to help.


Can you show off your setup?


----------



## cephelix

How are you guys monitoring GPU wattage? I used to just use GPU-Z, but that shows only the core. Also don't know how accurate that is. Anyway, my HBM is 1180MHz with P7 at 1640MHz/1155mV. Just checked my datasheet yesterday..


----------



## os2wiz

Quote:


> Originally Posted by *tarot*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *os2wiz*
> 
> I have my RX Vega 56 card since early October and installed custom dedicated water cooling for it in November. I installed the Vega 64 bios on it. I have spent 2 and 1/2 months trying to optimize settings. It runs very cool But the biggest performance boost I can get over wattman balanced settings is 4%. Hardly worth the 100+ hours of effort I have expended. My max score in AOS Escalation benchmark was 77 fps at 4k high quality. With balanced mode 74 fps. Every time the driver is upgraded the scores will change sometimes lower and sometimes a bit higher but no significant improvement. My conclusion is this card does NOT live up to expectations. Even with the Vega 56 bios it never can sustain the rated fps that AMD gives it. My water cooling may prolong the life of the gpu through cooling but has not given any real advantage in overall performance. Card costed me $599 with 2 games included. The AlphaCool waterblock was $190 including shipping. The $80 I had to pay for installation as I am elderly and and could not install the thermal pads properly brings the total expense $869. Of course I could have a factory installed water-cooled high end 1080 Ti for less than that. But I can live with this disappointment. Next time I will be more careful before buying into the hype of AMD graphics cards . I am retired and this was a good chunk of money for a working class guy like me. No more hours devoted to trying to squeeze 4% improvement in fps out of a poorly designed card. This card should never have been produced on the14LP process as well. Unfortunately AMD has a punitive contract with GF that should never been agreed to. When it moves to 7nm this summer I doubt AMD will issue an enthusiast version of it. Navi may be a different story in 2019. but I am NOT holding my breath.
> 
> 
> 
> 
> 
> 
> 
> I agree and disagree at the same time
> my card cost(xfx 64) $729 aus the block $140 aus
> 400 less than a 1080Ti with block (apples to apples right)
> and around $200 less than 1080 with block.
> 
> the main reason I cooled it was the broke ass cooler plus I like things cool and quieter.
> also with the fan at 100 percent on the stock I never reached 1630 it was close but sitting on 80 degrees was not fun and that was not summer add an easy 10 to 150 degrees ambient and it would have throttled all over the place. what I do agree on is the performance overclocked there is little in it but I also found that on the fury x I had BUT just throw the ram up to 1100 leave it stock running balanced and it performs very well without the excess heat(under water that is with the stock one you still have issues unless you undervolt)
> 
> so while it could have been better and definitely could have been cheaper I don't see it as an abject failure more like a meh release.
> the other thing is it meant I did not have to go near NVidia who I hate
> 
> 
> 
> 
> 
> 
> 
> 
> now as for expectations based on what it was supposed to clock to I agree completely in my case...1652 is not what I would have liked I would have preffered over 1700 BUT the ram did exceed my expectations happily cruising at 1100 so a bit of give and take there.
> 
> now I did not pay installation or anything but that has nothing to do with the card and the value really now does it?
Click to expand...

You cannot overclock the memory and leave it on balanced. That is impossible. If you leave the GPU at the stock 64 frequency, which is overclocked for an actual Vega 56 with a 64 BIOS, you do not have room to overclock the memory much at all beyond 945MHz. So tell me how you managed?


----------



## steadly2004

Quote:


> Originally Posted by *os2wiz*
> 
> You can not overclock the memory and leave it on balanced. That is impossible. If you leave the gpu on stock 64 frequency , which is overlocked for an actual Vega 56 with a 64 bios, you do not have room to overclock the memory much at all beyond 945mhz. So tell me how you managed????


Maybe he has it on balanced and then adjusts the memory with another tool? Or has it on balanced, then goes custom and adjusts only the memory?


----------



## By-Tor

Anyone using Freesync having any issues with screen blackout in game for about 2 sec.?


----------



## VicsPC

Quote:


> Originally Posted by *By-Tor*
> 
> Anyone using Freesync having any issues with screen blackout in game for about 2 sec.?


How often does it happen?


----------



## By-Tor

Every now and then in game..


----------



## Cannon19932006

Quote:


> Originally Posted by *By-Tor*
> 
> Every now and then in game..


I used to have this problem in CS:GO with my Fury, but it hasn't happened since I got my Vega. I solved it by switching the refresh rate on my Freesync monitor from 144Hz to 120Hz, and it wouldn't do it at that refresh rate. I've also heard a higher-quality DisplayPort cable can sometimes stop this from happening.


----------



## TrixX

Quote:


> Originally Posted by *steadly2004*
> 
> Maybe he has it on balanced then adjusts the the memory with another tool? Or have it on balanced and then go custom and adjust only the memory?


I know some cards don't like OC'ing the memory, but mine's happy at 1050MHz and there are many who've reported good results with 1100+ all the way to 1200MHz depending on the silicon and voltage performance of the card.


----------



## BeetleatWar1977

Quote:


> Originally Posted by *os2wiz*
> 
> You can not overclock the memory and leave it on balanced. That is impossible. If you leave the gpu on stock 64 frequency , which is overlocked for an actual Vega 56 with a 64 bios, you do not have room to overclock the memory much at all beyond 945mhz. So tell me how you managed????


Sure it is possible, e.g. with Afterburner.


----------



## Grummpy

The Vega 64 is a strange card.
I can get my wall power meter to read 700 watts if I overclock and push the GPU hard,
but I can also get it to run at 400 watts at the wall with no more than a 5% performance drop.
It really hates being overclocked but loves an undervolt.

I noticed that since the new drivers my GPU says 1250mV in Wattman, up from 1200, but in tests it hasn't increased at all.
All that has happened is I have lost the ability to raise my GPU voltage by 50mV, because I can't go over 1250.
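That trade-off is easy to put numbers on. A back-of-envelope perf-per-watt comparison (the 5% figure is from the post above; the exact performance split is illustrative, and wall watts stand in for GPU power):

```python
# Back-of-envelope: how much efficiency does giving up ~5% performance
# buy back when wall power drops from 700W to 400W?
def perf_per_watt(relative_perf, wall_watts):
    return relative_perf / wall_watts

stock_oc    = perf_per_watt(1.00, 700)   # overclocked, pushed hard
undervolted = perf_per_watt(0.95, 400)   # ~5% slower, 400W at the wall
print(round(undervolted / stock_oc, 2))  # ~1.66x the perf per watt
```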


----------



## Grummpy

Quote:


> Originally Posted by *TrixX*
> 
> I know some cards don't like OC'ing the memory, but mine's happy at 1050MHz and there are many who've reported good results with 1100+ all the way to 1200MHz depending on the silicon and voltage performance of the card.


I found that if you pull power away from the GPU by underclocking and undervolting, you can push the memory clocks higher without touching the stock voltage.
They are very much connected, even though the VRMs are in different locations.


----------



## cephelix

Quote:


> Originally Posted by *Grummpy*
> 
> I found if you pull power away from the gpu by under clocking and under volting you can push the memory clocks higher without touching the stock volt.
> they are very much connected even thou the vrm are different locations.


Hmm, this got me wondering. If I set my HBM to 1200MHz, I couldn't overclock my P7 at all; basically the core has to be left at stock V64 clocks or I get artifacts. But every time I decrease the HBM clock by 10MHz, I can raise P7 by 10MHz, albeit with a slight bump in P7 voltage. Question is, how are they connected when HBM has its own set of voltages?


----------



## papillon121

Hi Guys and Girls,

i bought a vega64 for mining at ebay germany.

I set the fans to min 3000 RPM and max 4900, and when I start mining I get a temp of 90°C on the hotspot and around 78°C on the GPU sensor.

After some time, the mining rate drops to half.

Does anyone have an idea what I can do?

Best regards!


----------



## cephelix

Quote:


> Originally Posted by *papillon121*
> 
> Hi Guys and Girls,
> 
> i bought a vega64 for mining at ebay germany.
> 
> i set the fans to min 3000 rpm and max 4900 and when i starting to mine, i get a temp of 90 *C on HotSpot and arround 78 *C on GPU Temp.
> 
> After some time, the mining rate drops down to the half.
> 
> Do anyone of you have an idea what i can to?
> 
> Best regards!


You just set the fans? didn't undervolt? did you use the blockchain driver(or is it bios?)


----------



## papillon121

Quote:


> Originally Posted by *cephelix*
> 
> You just set the fans? didn't undervolt? did you use the blockchain driver(or is it bios?)


I don't use a BIOS mod, just Overdrive. Yes, I use the Aug 23 blockchain drivers.

I'm thinking about replacing the TIM... my card is getting hotter and hotter. It's at 4300 RPM right now and still at 83°C.


----------



## TrixX

Quote:


> Originally Posted by *papillon121*
> 
> i dont use bios, just overdrive. yes, i use blockchain drivers aug 23.
> 
> im thinking about replace the TIM... my card gettin hotter and hotter. its on 4300 rpm right now and still at 83 *C


Try undervolting the GPU so it doesn't generate as much heat. I've seen a lot of miners aiming for between 800mv and 900mv for best efficiency, however some cards are incapable of that (my min is 950mv for instance).


----------



## cephelix

Quote:


> Originally Posted by *papillon121*
> 
> i dont use bios, just overdrive. yes, i use blockchain drivers aug 23.
> 
> im thinking about replace the TIM... my card gettin hotter and hotter. its on 4300 rpm right now and still at 83 *C


Quote:


> Originally Posted by *TrixX*
> 
> Try undervolting the GPU so it doesn't generate as much heat. I've seen a lot of miners aiming for between 800mv and 900mv for best efficiency, however some cards are incapable of that (my min is 950mv for instance).


As Trixx mentioned, you have to undervolt the card. I don't think it's about getting the most power out of the card, but instead a nice ratio between power consumption, heat and hash rate.


----------



## papillon121

Quote:


> Originally Posted by *cephelix*
> 
> as what Trixx has mentioned, you have to undervolt the card. I don't think it's about getting the most power out of the card but instead a nice ratio between power consumption. heat and hash rates


ok ,so where in overdrive should i undervolt? (sorry for the noob question)


----------



## cephelix

papillon121 said:


> Quote:Originally Posted by *cephelix*
> 
> 
> as what Trixx has mentioned, you have to undervolt the card. I don't think it's about getting the most power out of the card but instead a nice ratio between power consumption. heat and hash rates
> 
> 
> ok ,so where in overdrive should i undervolt? (sorry for the noob question)


You should tweak the P7 clock and voltage. For mine I did 1640MHz/1155mV.
You could also lower the P6 voltage; mine is 1100mV.
For HBM, I'm unsure if it's useful for mining; I overclocked mine to 1080MHz. Take note that the "HBM voltage" setting isn't actually the HBM voltage: it is the voltage floor (mine is at 950mV). The HBM itself runs at a constant 1.25V or 1.35V depending on whether you have a Vega 56 or 64.

Of course with all this, you have to test your clocks and undervolt for stability.
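For anyone lost in the P-state talk, the table can be pictured like this. A toy model only (plain Python, not AMD's actual API; the voltages are the ones quoted above, the P6 clock is a placeholder):

```python
# Toy model of the Vega core P-state table: each state is a
# frequency/voltage pair, and an undervolt just lowers the mV column.
pstates = {
    "P6": {"mhz": 1538, "mv": 1100},   # 1100mV from the post; clock is a placeholder
    "P7": {"mhz": 1640, "mv": 1155},   # P7 clock/voltage from the post
}

def undervolt(states, delta_mv, floor_mv=900):
    """Drop every state's voltage by delta_mv, clamped to a floor."""
    return {name: {"mhz": s["mhz"], "mv": max(floor_mv, s["mv"] - delta_mv)}
            for name, s in states.items()}

# A further 50mV shave would put P7 at 1105mV and P6 at 1050mV:
print(undervolt(pstates, 50))
```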


----------



## punchmonster

cephelix said:


> you should tweak P7 clocks and voltage. For mine I did 1640mHz/1155mV.
> Also you could lower P6 voltage, mine is 1100mv.
> for HBM, I'm unsure if it's useful for mining, I overclocked it to 1080mhz. Take note that HBM voltage isn't actually for HBM. It is the "voltage floor" (mine is at 950mV). HBM runs at a constant 1.25V or 1.35V depending on if you have a vega 56 or 64.
> 
> Of course with all this, you have to test for stability of your clocks and undervolt.


For mining CryptoNight coins (Monero, etc.) he wants a voltage of 900mV and a core clock of 1400, with the memory clock as high as possible.

For Ethereum he wants 1000MHz @ 800mV, with the memory clock as high as possible.


----------



## cephelix

punchmonster said:


> for mining cryptonight coins (monero, etc) he wants a voltage of 900mV and core clock of 1400, memory clock as high as possible.
> 
> For Ethereum he wants 1000Mhz @ 800mV with memory clock as high as possible.


Ohh, well, I don't mine at all so I wouldn't know. Don't think it's even worth mining with my single V56, actually....


----------



## Naeem

papillon121 said:


> Hi Guys and Girls,
> 
> i bought a vega64 for mining at ebay germany.
> 
> i set the fans to min 3000 rpm and max 4900 and when i starting to mine, i get a temp of 90 *C on HotSpot and arround 78 *C on GPU Temp.
> 
> After some time, the mining rate drops down to the half.
> 
> Do anyone of you have an idea what i can to?
> 
> Best regards!




You need to undervolt and downclock your GPU. I have tested mining on my Vega 64 LC: I downclocked the core to about 1000MHz, dropped the core voltage by 150mV, and overclocked the HBM2 to 1100MHz, and I was getting about 43MH/s. With HBM2 at 1150MHz I get about 45MH/s. The GPU clock speed does not do much over 1000MHz, and the card will run cooler with this setting.

This is for ETH mining.


----------



## punchmonster

cephelix said:


> ohh, well, I don't mine at all so I wouldn't know. Don't think it's even worth to mine with my single v56 actually....


definitely worth it.

If 10 cards are worth it so is 1. Every single card has to be worth it for the system to be worth it.


----------



## uncivil_engineer

papillon121 said:


> Quote:Originally Posted by *cephelix*
> 
> 
> as what Trixx has mentioned, you have to undervolt the card. I don't think it's about getting the most power out of the card but instead a nice ratio between power consumption. heat and hash rates
> 
> 
> ok ,so where in overdrive should i undervolt? (sorry for the noob question)


What are you mining? If you're on the CryptoNight algo, you'll probably need to implement the Soft Power Play Table mod in order to undervolt the card down to 900mV.


----------



## elox

papillon121 said:


> Hi Guys and Girls,
> 
> i bought a vega64 for mining at ebay germany.
> 
> i set the fans to min 3000 rpm and max 4900 and when i starting to mine, i get a temp of 90 *C on HotSpot and arround 78 *C on GPU Temp.
> 
> After some time, the mining rate drops down to the half.
> 
> Do anyone of you have an idea what i can to?
> 
> Best regards!


For Ethereum I lock the card's core to the P1 state at 800mV with OverdriveNTool, lock the memory at the highest state at 1100MHz with an 870mV "HBM voltage" (minimum voltage), and set the power limit to -38%.
That gives a constant hash rate of 44MH/s with 130W GPU power draw on the 18.1 driver.
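Those numbers work out like this; hash rate per watt is the figure miners usually compare after an undervolt. The "stock-ish" comparison line uses made-up but plausible numbers, just to show why the tuning matters:

```python
# Hash rate per watt: the efficiency number miners compare after tuning.
def mh_per_watt(hashrate_mh, gpu_watts):
    return hashrate_mh / gpu_watts

print(round(mh_per_watt(44, 130), 3))  # tuned card above: ~0.338 MH/s per watt
print(round(mh_per_watt(40, 220), 3))  # illustrative stock-ish card: ~0.182
```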


----------



## cephelix

punchmonster said:


> definitely worth it.
> 
> If 10 cards are worth it so is 1. Every single card has to be worth it for the system to be worth it.


Electricity costs 21.56 cents SGD per kWh here... gotta input those numbers.
Seems like I'll be losing money instead..
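That "gotta input those numbers" step is a one-liner. A sketch using the 21.56 SGD-cents/kWh tariff mentioned above; the revenue-per-MH figure is a placeholder you would pull from a live mining calculator, not a real quote:

```python
# Daily profit = mining revenue minus electricity for the card alone.
def daily_profit(hashrate_mh, revenue_per_mh_day, card_watts, price_per_kwh):
    revenue = hashrate_mh * revenue_per_mh_day
    power_cost = card_watts / 1000 * 24 * price_per_kwh
    return revenue - power_cost

# Hypothetical Vega: 44 MH/s at 180W, SGD 0.10 revenue per MH per day
print(round(daily_profit(44, 0.10, 180, 0.2156), 2))  # ~3.47 SGD/day
```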


----------



## Ne01 OnnA

By-Tor said:


> Anyone using Freesync having any issues with screen blackout in game for about 2 sec.?


Turn off ReLive DVR (if you don't record).


----------



## VicsPC

So far so good with 18.1.1, no issues. I'm still not using the overlay, as I have my Afterburner and HWiNFO set up just the way I like.


----------



## Delijohn

cephelix said:


> Anyway my HBM is 1180mhz with P7 at 1640mHz/1155mV. Just checked my datasheet yesterday..


Are you on water? I installed an Alphacool Eiswolf and I want to test its limits, since the card now runs cooler.. do you have any ready-made profiles? Thanks!
On Ethereum I can mine at a stable 43-45MH/s.. but games are different..


----------



## kondziowy

Brace yourselves: today Vegas got another price hike of £100 (~$140).
£949 for a V64 Red Devil? https://www.overclockers.co.uk/pc-components/graphics-cards/amd/radeon-rx-vega-64
LoL. Hey, at least they are in stock.
You could have bought 2x reference V64 for that price at my place last year.

Titan XP from Nvidia site is going to be a bargain soon. 

Why is Vega FE not sold out yet? The price is great(not everywhere though) and it has 16GB of HBM2.


----------



## cephelix

Delijohn said:


> Are you on water? I installed an alphacool eiswolf and I want to test its limits, since now it's cooler as a card.. do you have any ready profiles? Thx!
> On ethereum i can mine with 43-45Mh/s stable rate.. but games are different..


I'm on water. Profiles for mining? Nope. Never even mined before.


----------



## newbminer

I have 2 FEs. I cannot get them to 4k; 3k-3.1k is all they will do. I can take one away and it does 2-2.1k, swap in the other and it does 2-2.1k, put them together, and it's back to 3k. I'm running P7: 1407/940 and P3: 1100/950. Also, most of the time if I stop mining, reset the cards with ODNT and then do a devcon restart, the 2nd card never comes back; I have to reboot.


----------



## Rangerscott

Newegg has these in stock with a code for $899.


----------



## os2wiz

Grummpy said:


> Quote:Originally Posted by *TrixX*
> 
> I know some cards don't like OC'ing the memory, but mine's happy at 1050MHz and there are many who've reported good results with 1100+ all the way to 1200MHz depending on the silicon and voltage performance of the card.
> 
> 
> I found if you pull power away from the gpu by under clocking and under volting you can push the memory clocks higher without touching the stock volt.
> they are very much connected even thou the vrm are different locations.


Of course I can overclock the memory, but only on custom and by reducing the core frequency between 3 and 5%. That leaves room when undervolting to boost memory speed. Overclocking the GPU is a lost cause, and that means keeping a Vega 56 with a 64 BIOS at 64 frequency settings is a lost cause no matter how much voltage you throw at it. Even undervolting the GPU will not allow you to keep it at 1640MHz: in any benchmark or game it right away falls into the 1300s. That occurs on both balanced and custom; even if I drop frequency by 5% and undervolt, it will not hold at 1640 minus the 82MHz drop. It plunges into the 1300s. That happens when undervolting the GPU by 60mV or whatever values I choose.

Memory is much easier to undervolt and overclock if you reduce the GPU frequency by close to 5%, as I and others have noted. You can undervolt both the GPU and the memory floor. It extends the life of the GPU and memory, but is not wildly successful in upping the fps performance. That is what I see from my own experience. Vega is a real dog compared to Ryzen. It was a waste of money putting it under water. I love the thermals, but they get me absolutely nowhere.


----------



## Rei86

Hey guys, just got my RX Vega 64 up and running two days ago, and its stock in-game performance has been more than enough for me.
However, the whole notion of fiddling with the card is something I want to do (this is overclock.net after all).

So I was wondering: is Wattman the better tool for overclocking this card, or should I switch over to the handy-dandy MSI Afterburner?

This is my first AMD card since the ATi 9600 Pro, so thanks in advance.


----------



## os2wiz

Delijohn said:


> Are you on water? I installed an alphacool eiswolf and I want to test its limits, since now it's cooler as a card.. do you have any ready profiles? Thx!
> On ethereum i can mine with 43-45Mh/s stable rate.. but games are different..



I have the Eiswolf GPX Pro 120 on my Vega 56 with the 64 BIOS. Unless you want to drive yourself crazy with frustration, I recommend using balanced in Wattman. You can try custom: raise the power slider to +50% and play with upping the HBM speed to about 1100MHz with an undervolt. Most times you will have to reduce the GPU clock by 4 to 5% in order to raise memory speed. The best you will achieve in most cases is a 4% improvement in fps over sticking with balanced. It is a complete waste of time trying to overclock the GPU: the fact is that AMD's stock GPU frequency is higher than the card can really sustain, so when you test with benchmarks or in game play the max GPU frequency drops considerably from the rated speed.

My thermals are excellent with the Eiswolf; they never exceed 41 Celsius under full load. I use Wattman. It really can do everything Overdrive and the other tools can, but everyone has their own preferences. There are some cards that will outperform the average, and my card is NOT below average: I achieved 77 fps in the Ashes of the Singularity: Escalation benchmark, putting my card 21st out of over 600 people on their website. I have been unable to match that run, or even come closer than 74 fps, since then. I have a 4K monitor and do all my games and benches at 4K. The card's overall performance is at or near a 1080. I have done better in some benchmarks than most 1080s but worse in others, and in games usually slightly lower.

You can try incrementally undervolting and overclocking the memory, and it is fun the first week or two. Good luck. The Vega design just was not up to snuff to challenge the 1080 Ti, and they chose to manufacture on Global Foundries' 14nm LPP process, which is not particularly good for achieving high frequencies. When Navi comes on 7nm it may be significantly better, but that is just a wild guess.


----------



## os2wiz

Rangerscott said:


> Newegg has these in stock with code for $899.


A month ago Newegg was selling them for $849.


----------



## TinyRichard

Serious question, who exactly is the target audience for a $899 card that performs on par with a 1070 TI?

Masochists?


----------



## os2wiz

TinyRichard said:


> Serious question, who exactly is the target audience for a $899 card that performs on par with a 1070 TI?
> 
> Masochists?


Not true, it performs better than a 1070 Ti. My Vega 56 is just about at 1080 performance level. It is NOT an $899 card; my reference model was $499. Mining has reduced the consumer supply and allowed the ripoff pricing. I think your comments reflect a childish lack of maturity. This forum is not for braggarts and gloaters; it is to discuss achievements and challenges with our Vega cards. Obviously you do not own one, so please stick to forums that reflect what you are involved in. There is no forum for misbehaving kiddies.


----------



## Rei86

TinyRichard said:


> Serious question, who exactly is the target audience for a $899 card that performs on par with a 1070 TI?
> 
> Masochists?


Miners.


----------



## TrixX

os2wiz said:


> Of course I can overclock the memory, but on custom and by reducing frequency between 3 and 5%. That leaves room when undervolting to boost memory speed. Overclocking the GPU is a lost cause, and that means keeping a Vega 56 with the 64 BIOS at 64 frequency settings is a lost cause no matter how much voltage you throw at it. Even undervolting the GPU will not allow you to keep it at 1640MHz. In any benchmark or game it right away falls into the 1300s. That occurs on both balanced and custom; even if I drop frequency by 5% and undervolt, it will not hold at 1640 minus the 82MHz drop. It plunges into the 1300s. That happens with undervolting the GPU by 60mV or whatever values I choose. Memory is much easier to undervolt and overclock if you reduce GPU frequency close to 5%, as I and others have noted. You can undervolt both the GPU and the memory floor. It extends the life of the GPU and memory, but is not wildly successful in upping the fps performance. That is what I see from my own experience. Vega is a real dog compared to Ryzen. It was a waste of money putting it under water. I love the thermals, but they get me absolutely nowhere.


Well from my testing on my card with water cooling, thermals dropped nicely and I could sustain 1050MHz Mem clock and 1700+ on the core with it set to 1752MHz. Pulls around 260-280W though so dropping it down to 200-240W I was getting around ~1690MHz with it still set to 1752MHz on P7. This was with power set to +140% via reg key and using the Water Cooled BIOS. Using the voltage as the bottleneck to control the Core Speeds seemed to work best instead of messing with the Temp Targets and Power Limiter. Basically remove all blocks to the GPU except floor and max voltage and it's pretty easy to tune.

For reference I cannot run that high on air with reasonable noise levels, and I can't run that high on air for benching at all, so there's a noticeable bump in performance, but my card maxes out at 1752 on P7. Can get around 1140MHz stable on memory but waiting on some parts from the USA before I can re-install the waterblock.


----------



## os2wiz

TrixX said:


> Well from my testing on my card with water cooling, thermals dropped nicely and I could sustain 1050MHz Mem clock and 1700+ on the core with it set to 1752MHz. Pulls around 260-280W though so dropping it down to 200-240W I was getting around ~1690MHz with it still set to 1752MHz on P7. This was with power set to +140% via reg key and using the Water Cooled BIOS. Using the voltage as the bottleneck to control the Core Speeds seemed to work best instead of messing with the Temp Targets and Power Limiter. Basically remove all blocks to the GPU except floor and max voltage and it's pretty easy to tune.
> 
> For reference I cannot run that high on air with reasonable noise levels and I can't run that high on air for benching at all, so there's a noticeable bump in performance but my card maxxes out at 1752 on P7. Can get around 1140MHz stable on memory but waiting on some parts from the USA before I can re-install the waterblock.


You did not state your voltage settings, nor what model card you have (56 or 64), and whether the water cooling was the Vega water-cooled model or a custom block like mine (the Alphacool Eiswolf GPX Pro 120). Without that information your remarks have little value to users here.


----------



## SavantStrike

os2wiz said:


> Not true it performs better than a 1070 Ti. My Vega 56 is just about at 1080 performance level. It is NOT a $899 card. My reference model was at $499. Mining has reduced the consumer supply and allowed the ripoff pricing. I think your comments reflect a childish lack of maturity. This forum is not for braggarts and gloaters, it is to dicuss achievements and challenges with our Vega cards. Obviously you do not own one. So please stick to forums that reflect what you are involved in. There is no forum for misbehaving kiddies.


I saw no bragging in that user's post. At current prices, there isn't a compelling reason to buy a Vega - this is just a fact.

And before you jump down my throat, I own a Vega 64 with a full cover block.


----------



## Naeem

SavantStrike said:


> I saw no bragging in that users post. At current prices, there isn't a compelling reason to buy a Vega - this is just a fact.
> 
> And before you jump down my throat, I own a Vega 64 with a full cover block.




Reasons to buy a Vega 56 over a 1070 Ti:

1 : You own a FreeSync screen or plan to buy one in the future
2 : You like owning a powerful compute card that can do much more than just gaming
3 : You care about the longevity of your buy, as there is a good chance it will outlive the 1070 Ti over the next 2-3 years
4 : You actually like AMD's open-source ecosystem
5 : You love to tweak and tune your hardware because you are a hardware nerd


----------



## TrixX

os2wiz said:


> You did not state your voltage settings Nor what model card you have (56 or 64 and whether the water cooling was the Vega watercoooled model or a custom watercool like mine the Alphacool Eiswolf GPX Pro 120). Without that information your remarks have little value to users here.


Well, thank you. Seeing as most of my comments were already in this thread, with more data, well before your arrival, I really think you're taking things a little harshly.

My card, as stated more than a few times in this thread and more than once to you directly, is a HIS Vega 64 reference edition from the second batch of cards in September. It has a moulded GPU too.

Anyway, my voltages are 1050mV floor and 1100mV P7 on air. I can go up to 1200mV for full performance, but I didn't see any real benefit above 1150mV on P7 as the sustained clocks were very similar, and 1200mV actually introduced a crash issue with very high core clocks, so I stuck to around 1150mV on P7 even with the water block. Temps were not an issue with water; fan speed/temps are a large limiting factor on air.

Running a custom loop with CPU and GPU on the same loop, using an Aquacomputer waterblock on the GPU and a 360XE and 240PE rad setup from EK. Though I think for better performance I may need a second pump.

I should also add that if I want to run on maximum power saving and still get good performance, I can actually run the card at a 950mV stable floor and 970mV stable on P7. I have to disable the intermediate power states from P2 to P5 to get it to work as intended, though, as they have hard-set voltages higher than 950mV.
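The P-state plan above, sketched in code. The stock per-state voltages in the table are illustrative placeholders, not values dumped from a real Vega BIOS:

```python
# Sketch of the P-state plan described above: 950 mV floor, 970 mV on P7,
# and any intermediate state whose hard-set voltage sits above the floor
# disabled. DEFAULT_MV values are made-up placeholders for illustration.

DEFAULT_MV = {0: 800, 1: 900, 2: 1000, 3: 1050, 4: 1100, 5: 1125, 6: 1150, 7: 1200}

def plan_pstates(floor_mv=950, p7_mv=970):
    plan = {}
    for state, stock_mv in DEFAULT_MV.items():
        if stock_mv <= floor_mv:
            plan[state] = ("enabled", stock_mv)    # already under the floor
        elif state >= 6:
            # top states get custom voltages instead of their stock ones
            plan[state] = ("enabled", p7_mv if state == 7 else floor_mv)
        else:
            plan[state] = ("disabled", None)       # hard-set voltage above the floor
    return plan

for state, (status, mv) in plan_pstates().items():
    print(f"P{state}: {status}" + (f" @ {mv} mV" if mv is not None else ""))
```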


----------



## cephelix

TrixX said:


> Well thankyou, seeing as most of my comments were already in this thread with more data well before your arrival I really think you're taking things a little bit harshly.
> 
> My card as stated more than a few times in this thread and more than once to you directly is a HIS Vega64 reference edition from the second batch of cards in September. Has a moulded GPU too.
> 
> Anyway my voltages are 1050mv floor and 1100mv P7 on air and I can go up to 1200mv for full performance but I didn't see any real benefit above 1150mv on P7 as the sustained clocks were very similar though 1200mv actually introduced a crash issue with very high core clocks so I stuck to around 1150 on P7 even with the water block. Temps were not an issue with water and fan speed/temps are a large limiting factor on air.
> 
> Running a custom loop with CPU and GPU on the same loop when under water using an aquacomputer waterblock on the GPU. Using a 360XE and 240PE rad setup from EK. Though I think for better performance I may need a second pump.
> 
> I should also add that if I want to run on maximum power saving and still get good performance I can actually run the card at 950mv stable for floor and 970mv stable on P7. I have to disable the intermediate Power steps from P2 to P5 to get it to work as intended though as they have hard set voltages higher than 950mv.


"It's ok, let it go. It's not worth it. If you punched him, your hand would just smell like ointment and pee."
-Louise, Bob's Burgers

But seriously, os2wiz, why so confrontational? You asked for help, and when people do help, you complain that it's not tailored to exactly how you do it. You could've just elaborated on your initial question and people would then have replied accordingly. Then, without reading through the thread, you dismiss people's replies as worthless. You go through all the effort of putting your card under water, then lament the purchase. Why did you not read up on the card before ever making the initial purchase?


----------



## os2wiz

SavantStrike said:


> I saw no bragging in that users post. At current prices, there isn't a compelling reason to buy a Vega - this is just a fact.
> 
> And before you jump down my throat, I own a Vega 64 with a full cover block.


The guy lied when he said the Vega 56 card performs lower than a 1070 Ti. That is completely false. He also called us some ugly name. You consider that mature???


----------



## os2wiz

SavantStrike said:


> I saw no bragging in that users post. At current prices, there isn't a compelling reason to buy a Vega - this is just a fact.
> 
> And before you jump down my throat, I own a Vega 64 with a full cover block.


As I said above, he was trolling, and trolls do not belong on any of these threads; moderation should come down like a ton of bricks on such nonsense. He also implied the normal price for the Vega product is $899. I agree the prices are ridiculous. It is not a good time to be buying graphics cards.

I am a bit annoyed at AMD about Vega. I expected a bit more from them. It has very poor overclocking characteristics. They would have done better delaying release until now and doing it on 12nm LP. It would have been a better product. The architecture is also obviously limiting. I am not an engineer, but it is obvious that not all of Vega's weaknesses are process related.


----------



## TrixX

os2wiz said:


> I am a bit annoyed at AMD about Vega. I expected a bit more from them. It has very poor overclocking characteristics.


Either yours has poor OC characteristics, or everyone else who's got a good one for OC'ing is lying.

There's a wide spread of quality among Vega GPUs; you seem to have either got a dud or still don't know what you are doing when OC'ing these things.

Not sure what you are doing for OC'ing, but as I suggested: remove all the different limitations such as the Power Limit and so on, and just use the voltages to tune the P7 MHz (again using Superposition, as it's a very easy test of the card). Then you may find you'll get a balance between core OC and memory OC.

I should add that Superposition is used to check the OC settings of the card, not the final score: seeing what MHz the core is running at, as well as the mem. For memory stability, test using Firestrike.


----------



## punchmonster

SavantStrike said:


> I saw no bragging in that users post. At current prices, there isn't a compelling reason to buy a Vega - this is just a fact.
> 
> And before you jump down my throat, I own a Vega 64 with a full cover block.


Categorically untrue. A Vega, even at the inflated price, will pay back the difference (through mining) faster than a 1070 Ti/1080, which you also can't find at MSRP.
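Rough payback arithmetic behind that claim. Every number below is a hypothetical placeholder (markup, hash rate, revenue per MH/s, power draw, electricity rate), so treat it as a template rather than a forecast:

```python
# Back-of-envelope payback comparison: how many days of mining it takes
# to recoup the markup over MSRP. All inputs are illustrative guesses.

def payback_days(card_price, msrp, mhs, rev_per_mhs_day, watts, kwh_price):
    """Days of mining needed to earn back the markup over MSRP."""
    daily_revenue = mhs * rev_per_mhs_day
    daily_power_cost = watts / 1000 * 24 * kwh_price
    daily_profit = daily_revenue - daily_power_cost
    return (card_price - msrp) / daily_profit

# Hypothetical: a tuned Vega 56 (~43 MH/s at ~170 W) bought at $899 vs
# $499 MSRP, $0.09/day revenue per MH/s, $0.12/kWh electricity.
days = payback_days(899, 499, 43, 0.09, 170, 0.12)
print(f"~{days:.0f} days to mine back the markup")
```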


----------



## By-Tor

Ne01 OnnA said:


> Turn off ReLive DVR (if you don't record)


Never installed it...


----------



## Grummpy

The amount of power I can save is just insane. Using the World of Tanks benchmark: 150 watts at the wall without losing any performance. The last run was a manual voltage set; all the others are a 1% clock drop with auto voltage.
https://i.imgur.com/8qxTNF5.jpg
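What a 150W wall-power saving is worth over a year, as a quick sketch; the hours per day and electricity rate are my own assumptions, so adjust to taste:

```python
# Yearly cost of the power saved at the wall. Usage hours and the
# electricity price are assumed placeholders, not from the post above.

def yearly_savings(watts_saved=150, hours_per_day=4, price_per_kwh=0.15):
    kwh_per_year = watts_saved / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

print(f"~${yearly_savings():.2f}/year at 4 h/day and $0.15/kWh")
```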


----------



## Grummpy

*crazy*

double post


----------



## Trender

My Vega just keeps crashing, and I've tried everything you can find on the Internet: power save, balanced, undervolting, locked states, fresh Windows format and reinstall, always with DDU... nothing.

I also tried increasing "TrdDelay" but it crashes the same; that didn't fix it. I also got this file: atikmdag_dce.log in C:/AMD.

When I'm playing, the driver resets (black screen, but it's really just turning off and on), and if I don't restart fast, the computer freezes in less than a minute. There's no audio problem; it just resets the display and the Vega GPU, and in about a minute the PC freezes, so I have to hold the power button to shut it down. It doesn't crash in the 3DMark stress test at 300W, so power isn't the issue, as it also crashes in power save.


----------



## estarkey7

TinyRichard said:


> Serious question, who exactly is the target audience for a $899 card that performs on par with a 1070 TI?
> 
> Masochists?


I wanted a card with 16 GB of VRAM. I do video editing primarily, but also some mechanical engineering and a little gaming. 

The Vega FE exceeded my expectations, and the HBCC gives me more memory than any Nvidia card can.

Sent from my SM-G950U using Tapatalk


----------



## surfinchina

estarkey7 said:


> I wanted a card with 16 GB of VRAM. I do video editing primarily, but also some mechanical engineering and a little gaming.
> 
> The Vega FE exceeded my expectations and the HBCC gives me more memory than any NVidia card can.
> 
> Sent from my SM-G950U using Tapatalk


I do CAD on a hackintosh. The Vega is the only card that is both powerful and runs natively on OS X.
Also, the memory lets me manipulate huge models without any lag.


----------



## Hanjin

Finally got around to installing the EK A240R. This is my first ever attempt at a loop, so the tubing is meh, but I'll be re-doing it with hard tubing later on. I also tried flashing the Gigabyte liquid BIOS, but my Gigabyte Air couldn't handle it, as it's a terrible overclocker, so I reverted back to the stock BIOS.

Temps are pretty damn good considering it's summer here in Australia and my aircon is broken, making my room 30c ambient: GPU temp is maxing out at around 58c and HBM at 60c, with my Vega 64 running at 1630mhz GPU / 1100mhz HBM.


----------



## os2wiz

cephelix said:


> "It's ok, let it go. It's not worth it. If you punched him, your hand would just smell like ointment and pee."
> -Louise, Bob's Burgers
> 
> But seriously oz2wiz, why so confrontational? You asked for help and when people do help, you complain saying that's it's not tailored to exactly how you do it. You could've just elaborated on your initial question and people then would reply accordingly. Then without reading through the thread, you accuse people's reply as being worthless. You go through all the effort of putting your card under water then lamenting the purchase. Why did you not read up on the card first before ever making the inital purchase?


I am sorry. The frustration level with this card is just phenomenal for me. I apologize for not thanking you when you gave me help a week ago. It matters not what tools I use; I just can't get close to the performance I had expected with water cooling. I had settings that were 3 to 4% better in fps than the balanced setting, but unfortunately I failed to make a profile for them and cannot now come close to that performance. It may be all me, but likely the driver updates have not made things easier. I also see that the longer my computer is on, and the more benchmarks I run, the worse the results get.

Also, I use HWiNFO64 to monitor temps, and I notice it no longer reports the hotspot temperature as it did a couple of weeks back. I installed the latest version about that time, so it may be a bug. The other possibility is that my sensor for it is faulty. So I am just thwarted at all levels now. I have no idea if I can get the best out of this card. By the way, I did have enough sense to let the graphics card cool down for a few minutes before attempting another benchmark run.


----------



## cephelix

os2wiz said:


> I am sorry. The frustration level with this card is just phenomenal for me. I apologize for not thanking you when you gave me help a week ago. It matters not what tools I use I just can't get close to the performance I had expected with water cooling. i had settings that were 3 to 4% better in fps than the balanced setting, but unfortunately I failed to make a profile for it and can not now come close to that performance. It may be all me, but likely the driver updates have not made things easier. I also see the longer my computer is on and the more benchmarks I run the results get worse. Also I use hwinfo64 to monitor temps and I notice it no longer reports the hotspot temperature as it did a couple of weeks back. I installed the latest version about that time so it may be a bug. The other possibility is my sensor for it is faulty. So I just am thwarted at all levels now. I have no idea if I can get the best out of this card. By the way I did have enough sense to let the graphics card cool down for a few minutes before attempting another bench mark run.


Apology accepted. I just think RTG pushed the card too far to try to compete with Nvidia's Pascal, and as such, clocks and voltages were already at their limit. And for those of us not lucky enough to get golden chips, the best way to go is actually to undervolt the card, so you get similar performance to stock but at a much lower wattage and thus heat output.

Don't know whether it means much, but at my tweaked settings I gained close to 1000 points in Superposition 1080p Extreme while consuming about 100 to 200W less system power. Then again, for me at 1080p raw power isn't critical, but it's still nice to know that I've gained more thermal headroom.
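A first-order sketch of why undervolting pays off so well: dynamic power scales roughly with frequency times voltage squared, so a voltage drop at the same sustained clock cuts power quadratically. The clocks and voltages below are illustrative, not measured:

```python
# First-order CMOS dynamic-power model: P ~ f * V^2.
# A sketch of the scaling argument, not a measurement of any real card.

def relative_power(f_new, f_old, v_new_mv, v_old_mv):
    """P_new / P_old under the P ~ f * V^2 approximation."""
    return (f_new / f_old) * (v_new_mv / v_old_mv) ** 2

# e.g. the same ~1500 MHz sustained clock at 1050 mV instead of 1200 mV:
ratio = relative_power(1500, 1500, 1050, 1200)
print(f"~{(1 - ratio) * 100:.0f}% less core power at the same clock")
```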


----------



## cephelix

Naeem said:


> You need to downvolt and downclock your GPU. I have tested mining on my Vega 64 LC: I downclocked the core to about 1000MHz, undervolted the core by -150mV, and overclocked the HBM2 to 1100MHz, and I was getting about 43MH/s; with HBM2 at 1150MHz I get about 45MH/s. GPU clock speed does not do much over 1000MHz, and the card will run cooler with this setting.
> 
> This is for ETH mining.


I know this is late, but what do you use to test stability of downclocks? Same firestrike?


----------



## LicSqualo

*Superstition is not a good test for my HBM overclock*

Hi guys,
just to share my experience. I've tried to overclock and undervolt with success, and I'm now running my Vega 64 LC at these settings: HBM at 1080MHz and P6-P7 at 1100 and 1150mV, for a maximum speed of 1750MHz (the stock speed setting).
But, for me, Superstition is not a good tester for HBM speed. I can pass without artifacts at 1200MHz memory, but in The Witcher 3 I have to lower my HBM speed to 1080 to not get artifacts (which are NOT present in the Superstition test).
Just to point this out.
In The Witcher 3, after 40 or more seconds, if the RAM is not "stable" you can see a lot of artifacts (the sky shines, in my case).
Now my question is: is this valid only for my system, or does this happen with you too? (Can you run The Witcher 3 and test?)


----------



## TrixX

LicSqualo said:


> Hi guys,
> just to share my experience. I've tried to overclock and undervolt with success and I'm now running my Vega64 LC at these settings: HBM to 1080 Mhz and P6-P7 to 1100 and 1150 mVolts for a maximum speed of 1750 Mhz (stock settings of speed).
> But, for me, Superstition is not a good tester for HBM speed. I can pass with success without artifacts at 1200 Mhz memory, but in The Witcher 3 I've to lower my HBM speed to 1080 to not have artifacts (that are NOT present in Superstition test).
> Just to point this.
> In the witcher 3, after 40 or more seconds, if the ram is not "stable" you can see a lot of artifacts (sky shines in my case).
> Now my question is: this is valid only for my system or this happen also with you? (can be run the witcher3 and test?)


Agreed, Superposition is not reliant on HBM clocks and can run much higher HBM clocks than other applications can cope with. Personally I think Firestrike is probably the best benchmark for sussing out HBM stability, as well as Timespy.


----------



## LicSqualo

*Oops, wrong name*

"Superstition" instead of "Superposition". Sorry for my mistake.
And thanks for your reply; I'm glad to know that I'm not the only one.


----------



## VicsPC

TrixX said:


> Agreed Superposition is not reliant on HBM clocks and can run much higher than other applications can cope with for HBM clocks. Personally I think Firestrike as a benchmark is probably the best for sussing out HBM stability as well as Timespy.


I think gaming is an even better benchmark for stability. Most games, uncapped, will hit your HBM AND core clock, and together that is a good test. Something like Rainbow Six Siege, or anything multiplayer that will also load the CPU, is as good a test as any.


----------



## TrixX

VicsPC said:


> I think gaming is an even better benchmark for stability. Most games uncapped will hit your hbm AND core clock and that together is a good test. Something like rainbow six siege or anything multiplayer that will also use cpu usage is probably a good test as any.


It goes without saying that testing the real applications you use daily is the best form of test; however, a quick benchmark run can knock out most of the inconsistencies. PUBG, for instance, is my go-to for full stability testing. In my card's case it doesn't like Firestrike much at all but remains stable in everything else, so I have to ignore the Firestrike issue as something else is likely the cause. At the end of the day it's slightly different for each machine.


----------



## xkm1948

I am surprised that nobody posted this piece of news here:

https://www.techpowerup.com/240879/amd-cancels-implicit-primitive-shader-driver-support

TL;DR: implicit primitive shader driver support is canceled. From now on, any primitive shading has to be implemented on the game developers' side, so no magic performance-boosting driver in the foreseeable future for the Vega architecture.


----------



## LicSqualo

xkm1948 said:


> I am surprised that nobody posted this piece of news here:
> 
> https://www.techpowerup.com/240879/amd-cancels-implicit-primitive-shader-driver-support
> 
> TL, DR: Implicit Primitive Shader driver support is canceled. From now on any primitive shading has to be implemented from game developers' side. So no magic performance boosting driver in the forseable future for Vega architecture.


...and what about Vega/Raja at Intel (the next-gen Intel APU)?
Perhaps from that partnership we can also hope for better future performance for our Vegas?


----------



## Grummpy

xkm1948 said:


> I am surprised that nobody posted this piece of news here:
> 
> https://www.techpowerup.com/240879/amd-cancels-implicit-primitive-shader-driver-support
> 
> TL, DR: Implicit Primitive Shader driver support is canceled. From now on any primitive shading has to be implemented from game developers' side. So no magic performance boosting driver in the forseable future for Vega architecture.


It's no different for Nvidia; the developer has to implement it on their side as well, I hear.


----------



## Trender

Trender said:


> My Vega just keeps crashing, and I've tried everything you can find on the Internet: power save, balanced, undervolting, locked states, fresh Windows format and reinstall, always with DDU... nothing.
> 
> I also tried increasing "TrdDelay" but it crashes the same; that didn't fix it. I also got this file: atikmdag_dce.log in C:/AMD.
> 
> When I'm playing, the driver resets (black screen, but it's really just turning off and on), and if I don't restart fast, the computer freezes in less than a minute. There's no audio problem; it just resets the display and the Vega GPU, and in about a minute the PC freezes, so I have to hold the power button to shut it down. It doesn't crash in the 3DMark stress test at 300W, so power isn't the issue, as it also crashes in power save.


Slight bump. If my card is faulty, given how much effort I put into buying it, I'll just have to say goodbye to her, as they aren't even manufactured anymore.


----------



## FastMHz

Is it possible to get Gaming Driver 18.1.1 to install on Vega FE? Driver Options only sees 17.12.2.


----------



## Naeem

cephelix said:


> I know this is late, but what do you use to test stability of downclocks? Same firestrike?



Mining is not as hard-hitting as 3D rendering. I don't mine on my Vega 24/7, but sometimes I leave it running for 6 to 8 hours, and I also play games in between, so I can't hard-lock it to a lower power state. I use the following MSI AB settings right now. I have a Vega 64 Liquid Edition.


----------



## cephelix

Naeem said:


> Mining is not as hard-hitting as 3D rendering. I don't mine on my Vega 24/7, but sometimes I leave it running for 6 to 8 hours, and I also play games in between, so I can't hard-lock it to a lower power state. I use the following MSI AB settings right now. I have a Vega 64 Liquid Edition.


Thanks!!!


----------



## Trender

How bad is this?


----------



## LicSqualo

Trender said:


> How bad is this?


A lot, by my standards. Can you give us more information?
Is it an overclocked card, or stock?
If the first, immediately lower your clocks.
If the second, it's RMA time.
My 2 cents.


----------



## os2wiz

Grummpy said:


> Its no different for nvidia
> developer has to implement it with them aswell i here


The difference being that Nvidia has close to 90% market share, so they can get developers to build API support for the shaders into their engines in a way AMD cannot. This is something AMD should have worked out with the major game developers prior to the release of Vega. They are doing a poor job of catching up with Nvidia, with no sign of significant improvement on the horizon. I see no compelling reason in the Navi design to indicate the gap will close: 7nm without major architectural improvements means little. Nothing has been revealed about the Navi architecture yet, so there is little reason for optimism. If 7nm Navi is not high performance, it will be another Vega disaster.


----------



## Scorpion49

I just got one of the dual-fan XFX Vega 56 cards, and I do not recommend this thing to anyone. It has the original Vega problem with the TIM being hard as a rock and not touching the HBM, because the stacks are lower than the GPU die, and even if you repaste and flood it so it makes contact, the fans are so excessively loud that you could only dream of the silence of a blower. This thing is all-around terrible, and to add insult to injury, their tiny "custom" PCB has no water blocks available, and it coil-whines worse than any card I've heard in 20+ years of doing this.


----------



## ducegt

Trender said:


> How bad is this?


I last saw something like that on a 9600 PRO that I hard volt modded. I permanently damaged the core/mem. 

Anyway, you have been complaining for weeks about problems while at the same time mentioning undervolting... Maybe it's worth trying a fresh OS install and leaving it at 100% stock to see if you have issues, before concluding the hardware is defective. Did you ever remove the cooler or anything?


----------



## ducegt

Scorpion49 said:


> I just got one of the dual fan XFX Vega 56 cards, and I do not recommend this thing to anyone. It has the original Vega problem with the TIM beind hard as a rock and not touching the HBM because the stacks are lower than the GPU die, and even if you repaste and flood it so it makes contact, the fans are so excessively loud that you could only dream for the silence of a blower. This thing is all around terrible, and to add insult to injury their tiny "custom" PCB has no water blocks available and it coil whines worse than any card I've ever heard in 20+ years of doing this.


That sucks, but thanks for sharing. My 64LC makes noises like nothing I've ever heard before. It's very annoying, but subtle enough that I normally don't notice it. Doesn't bother me with headphones on of course.


----------



## Scorpion49

ducegt said:


> That sucks, but thanks for sharing. My 64LC makes noises like nothing I've ever heard before. It's very annoying, but subtle enough that I normally don't notice it. Doesn't bother me with headphones on of course.


Yeah, I jumped on it because it was "affordable" for a custom cooled Vega these days and the RX 580 was way too slow for 1440p 144hz. I can't explain why this card is so loud, the temp stays under 50C for both core and HBM, and the hotspot is only around 15C more (it was hitting 100C before I repasted it). But the fans still crank right to 95% within 30 seconds of running a game, no matter what. The coil whine is SO BAD, I can't even describe it. I've never heard anything like this, it sounds like a cicada sitting inside of the rig.


----------



## SavantStrike

Trender said:


> How bad is this?


Have you messed with the card at all? Power play mods, over clock, bios flash etc?

If I had a card do that out of the box I'd RMA it if it wasn't a game engine glitch.


----------



## Ne01 OnnA

Trender said:


> How bad is this?


I've experienced this bug also in SW BF2 (Starfighter Assault).
Already back to the old good 17.9.3 WHQL -> all problems solved for me.

Do note that the WDDM 2.3 drivers are not fully ready yet!
Yes, for any ATI GPU.

For Fury HBM the best is 17.9.3 Oct. 2 (WDDM 2.2)
For VEGA it is 17.11.4 (WDDM 2.3)


----------



## kondziowy

Scorpion49 said:


> Yeah, I jumped on it because it was "affordable" for a custom cooled Vega these days and the RX 580 was way too slow for 1440p 144hz. I can't explain why this card is so loud, the temp stays under 50C for both core and HBM, and the hotspot is only around 15C more (it was hitting 100C before I repasted it). But the fans still crank right to 95% within 30 seconds of running a game, no matter what. The coil whine is SO BAD, I can't even describe it. I've never heard anything like this, it sounds like a cicada sitting inside of the rig.


Hotspot was 100°C on stock paste? What were the HBM and core temps? Are they not testing temps before shipping at all?


----------



## SavantStrike

kondziowy said:


> Hot spot was 100*C on stock paste? What was HBM temp and core temp? Are they not testing temps before shipping at all?


I don't think manufacturers do much testing before shipping, except for cards like the Kingpin or the HOF.


----------



## Scorpion49

kondziowy said:


> Hot spot was 100*C on stock paste? What was HBM temp and core temp? Are they not testing temps before shipping at all?


Look at the picture I posted. The HBM makes zero contact with the heatsink or TIM - totally unacceptable IMO, especially for a problem we knew about last year.

This came out months ago and should have been fixed (especially considering these things sell out in less than 10 minutes; it's not like they're sitting on old stock):


----------



## VicsPC

Scorpion49 said:


> Look at the picture I posted. The HBM makes zero contact with the heatsink or TIM, totally unacceptable IMO especially for a problem we knew about last year.
> 
> This came out months ago and should have been fixed (especially considering these things are sold out in less than 10 minutes, its not like they're sitting on old stock):
> 
> 
> https://www.youtube.com/watch?v=y_OgK_fUoH0


The only way to fix this would be for card manufacturers to lower part of the heatsink by a few micrometers, but they would have to change tooling, and for them it's not worth it. I would LOVE to see someone with an unmolded die try to heat up the copper and push it down a bit where it meets the HBM. You'd need a good amount of pressure and to do it slowly. I would love to do it if I had an unmolded 56, but I'm on water with a 64. I think whoever posted about the XFX said 50°C HBM and core temps, and I don't see that as humanly possible on air, especially when the paste isn't even touching the HBM.



kondziowy said:


> Hot spot was 100°C on stock paste? What were the HBM and core temps? Are they not testing temps before shipping at all?


Hotspot temp is such an iffy measurement I don't even bother to look at mine. 100°C hotspot is normal for air cooling and poor case airflow; on water it's totally different as well. Mine is usually 12°C above core on water, though I've seen some go up to 20°C above core on water.


----------



## Newbie2009

I think the memory timings have been changed on the updated Vega 64 water BIOS. I flashed my card; the HBM cannot break 1110MHz on the air BIOS, but will do 1.2GHz on the water-cooled BIOS.


----------



## ducegt

Newbie2009 said:


> I think the memory timings have been changed on the updated Vega 64 water BIOS. I flashed my card; the HBM cannot break 1110MHz on the air BIOS, but will do 1.2GHz on the water-cooled BIOS.


You mean 8774, which is several months old, or is there a newer one?


----------



## gupsterg

Scorpion49 said:


> I just got one of the dual-fan XFX Vega 56 cards, and I do not recommend this thing to anyone. It has the original Vega problem with the TIM being hard as a rock and not touching the HBM because the stacks are lower than the GPU die, and even if you repaste and flood it so it makes contact, the fans are so excessively loud that you could only dream of the silence of a blower. This thing is all-around terrible, and to add insult to injury their tiny "custom" PCB has no water blocks available and it coil-whines worse than any card I've ever heard in 20+ years of doing this.


OMG, that PCB seems gimped on VRM phases, have you got any wider shots of front/back?


----------



## seniorfallrisk

So, to anyone who has the Strix Vega 56 or is thinking of getting it: it seems that either the Vega 56s are all Hynix HBM2, or they have mixed Hynix and Samsung, or the Vega 64s are _only_ Samsung HBM2. I've been running into issues with my Strix 56 flashed to the Strix 64 vbios and recently have had nothing but trouble trying to run any games, so I decided to do some benches.

https://www.3dmark.com/compare/spy/3234552/spy/3234582/spy/3234632

The first result is the Strix 64 "P mode" (right side, high-TDP) BIOS at stock settings, the second is the Strix 56 P mode, and the third is the 56 P mode with an undervolt to 1100 and an overclock to 1650 core with 900 on memory.

Funny stuff. My 56 bios overclocked is rock solid and performs MUCH better than the 64 bios, and I, sadly, have Hynix on this card.

FYI, this is paired with 16gb of Corsair 3000c15 at 3000c14 tightened timings and a 1700 doing 3.8ghz.


----------



## Scorpion49

VicsPC said:


> The only way to fix this would be for card manufacturers to lower part of the heatsink by a few micrometers, but they would have to change tooling, and for them it's not worth it. I would LOVE to see someone with an unmolded die try to heat up the copper and have it push down a bit where it meets the HBM. You'd need a good amount of pressure and you'd have to do it slowly. I would love to try it if I had an unmolded 56, but I'm on water with a 64. I think whoever posted about the XFX said 50°C HBM and core temps, and I don't see that as humanly possible on air, especially when the paste isn't even touching the HBM.
> 
> 
> 
> Hotspot temp is such an iffy measurement I don't even bother to look at mine. 100°C hotspot is normal for air cooling and poor case airflow; on water it's totally different as well. Mine is usually 12°C above core on water, though I've seen some go up to 20°C above core on water.


They already fixed it at the other fab that builds them; those have the gap filled and the HBM level with the die. After my 3rd repaste I have the HBM at 80C under load, which is around stock blower temps. 




gupsterg said:


> OMG, that PCB seems gimped on VRM phases, have you got any wider shots of front/back?


Yes, it is very gimped. This is possibly what was meant to be Vega Nano, but it never really came to market because they can't keep up with demand in the first place.


----------



## SavantStrike

Scorpion49 said:


> They already fixed it at the other fab that builds them; those have the gap filled and the HBM level with the die. After my 3rd repaste I have the HBM at 80C under load, which is around stock blower temps.
> 
> 
> 
> 
> Yes, it is very gimped. This is possibly what was meant to be Vega Nano, but it never really came to market because they can't keep up with demand in the first place.


That card with a single-slot IO plate and a full-cover block is an SFF PC dream card.

I can't see how an air-cooled Vega Nano could ever work well given Vega's fiery temperament.


----------



## bill1971

I'm thinking of buying the Alphacool Eiswolf 120 for my Vega 56. What's your opinion, does it give good temps?


----------



## SavantStrike

bill1971 said:


> I'm thinking of buying the Alphacool Eiswolf 120 for my Vega 56. What's your opinion, does it give good temps?


Compared to the stock blower it's guaranteed to drop temps. Do they sell a 240?


----------



## bill1971

SavantStrike said:


> Compared to the stock blower it's guaranteed to drop temps. Do they sell a 240?


Yes, but I'd have to wait 4-5 weeks, and the 120 is 45mm thick vs. 30mm for the 240, so there is no big difference, and both have two fans. Also, I'm afraid the 240's coolant tubes aren't long enough to reach the top of the case. Compared with a custom loop, e.g. with an EK waterblock, which is the better solution?


----------



## Grummpy

http://benchmark.finalfantasyxv.com...bc585db87&Resolution=1920x1080&Quality=Middle


----------



## Grummpy

Using less power and faster than the 1080. What more do you need?


----------



## SavantStrike

bill1971 said:


> Yes, but I'd have to wait 4-5 weeks, and the 120 is 45mm thick vs. 30mm for the 240, so there is no big difference, and both have two fans. Also, I'm afraid the 240's coolant tubes aren't long enough to reach the top of the case. Compared with a custom loop, e.g. with an EK waterblock, which is the better solution?


A custom loop is always better, but the Alphacool uses copper components in the block itself, so it's still miles ahead of 90 percent of the AIOs out there. The Alphacool is also cheaper than building a loop, I'm pretty sure.

If you want to add your CPU to the loop, then build one; otherwise the Alphacool is a lot nicer than the liquid cooler that AMD provided on the V64 Liquid.


----------



## os2wiz

bill1971 said:


> I'm thinking of buying the Alphacool Eiswolf 120 for my Vega 56. What's your opinion, does it give good temps?


I have the Eiswolf GPX 120 and it does an excellent job with thermals. Do NOT expect big overclocks though; that is Vega's issue. The best thing you can do is flash the Vega 64 BIOS. That gives you more than a 10% performance gain over the 56 BIOS. You may be able to squeeze another 2 or 3% by playing with undervolting and overclocking the HBM2 memory. It is a complete crap shoot; most of the time you're going to feel shortchanged. By undervolting and cooling you can extend the life of the GPU. No matter what game or benchmark I run, the GPU temp NEVER goes above 40 Celsius under full load. So the Eiswolf does a great job.


----------



## plywood99

Any way to limit the max boost on Vega? Sometimes it will jump to 1800+ in certain situations and cause a CTD or Wattman reset.


----------



## geriatricpollywog

Grummpy said:


> Using less power and faster than the 1080. What more do you need?
> https://www.youtube.com/watch?v=km9uwqYspQM


FF XV looks and runs like crap on the PC. This was exactly the case when FF VII was released on PC 20 years ago, the year after its initial console release.


----------



## os2wiz

bill1971 said:


> Yes but i have to wait 4-5 weeks, and the 120 is 45 mm tickness vs 30 mm 240,so there is No big diffrence and both Has two fans and finally i am afraid if the 240 refrigarator tubes is Long to touch the top of the case.if i Compared With a custom lopp eg With ek waterblock, which is better solution?


The Eiswolf GPX 240 for Vega is overkill, totally unnecessary. My temps under full load on graphics benchmarks never exceed 40 Celsius with my Eiswolf GPX 120 for Vega.


----------



## os2wiz

Scorpion49 said:


> I just got one of the dual-fan XFX Vega 56 cards, and I do not recommend this thing to anyone. It has the original Vega problem with the TIM being hard as a rock and not touching the HBM because the stacks are lower than the GPU die, and even if you repaste and flood it so it makes contact, the fans are so excessively loud that you could only dream of the silence of a blower. This thing is all-around terrible, and to add insult to injury their tiny "custom" PCB has no water blocks available and it coil-whines worse than any card I've ever heard in 20+ years of doing this.


XFX has been releasing poorly designed and built cards for the past 3 to 4 years. It really was not worth the risk. I would demand a refund for the card; it is not performing up to their own fraudulent specs, so fight them.


----------



## Grummpy

Nvidia, good show.
Some never change their spots.


----------



## Delijohn

os2wiz said:


> I have the Eiswolf GPX 120 and it does an excellent job with thermals. Do NOT expect big overclocks though; that is Vega's issue. The best thing you can do is flash the Vega 64 BIOS. That gives you more than a 10% performance gain over the 56 BIOS. You may be able to squeeze another 2 or 3% by playing with undervolting and overclocking the HBM2 memory. It is a complete crap shoot; most of the time you're going to feel shortchanged. By undervolting and cooling you can extend the life of the GPU. No matter what game or benchmark I run, the GPU temp NEVER goes above 40 Celsius under full load. So the Eiswolf does a great job.


What are your best settings so far for stable gaming? 
I got the same WC package, but sometimes it's still unstable and I get black screens, so I have to restart. I recently flashed with the WC BIOS of the Vega 64 but it became more "sensitive". Memory speed can easily stay at 1130-1150, but what about core clocks?


----------



## os2wiz

Delijohn said:


> What are your best settings so far for stable gaming?
> I got the same WC package, but sometimes it's still unstable and I get black screens, so I have to restart. I recently flashed with the WC BIOS of the Vega 64 but it became more "sensitive". Memory speed can easily stay at 1130-1150, but what about core clocks?


I have no black screens. What I have is severely limited overclocking. I have spent almost 100 hours trying to get a good custom overclock. I had one that was fairly good, but I failed to save the profile. I find that most of the time just using the balanced Wattman power profile is best. You can do better, but it is small potatoes, not worth the aggravation in my opinion. I could never replicate my best custom performance.


----------



## Scorpion49

So I've been messing with the XFX Vega 56 double-fan card, trying to get decent levels of performance out of it. I've managed to solve the fans ramping up to 100% all the time, but performance is still really bad. In the 3 games I play it's slower than my 3GB 1060. In PUBG it gets around 50fps at very low settings. In Destiny 2 it manages around 70 while my 1060 can get 100-110 at the exact same settings. 

One thing I notice is that every time I boot the computer the HBM speed is set very low in Wattman, usually 500MHz, though I've seen it set at 700 a few times (stock for the card is 800; why the crap AMD can't run their cards at spec in their own damn software is a mystery to me).

Core overclocking seems impossible; even giving it a 1-5% boost locks the driver up. The HBM artifacts badly at even 50MHz over stock. I'm considering either A) returning it and having nothing to play games on or B) trying to flash a Vega 64 BIOS to it, though I'm not sure it will even work because it only has a 4-phase VRM.


----------



## TrixX

Scorpion49 said:


> So I've been messing with the XFX Vega 56 double-fan card, trying to get decent levels of performance out of it. I've managed to solve the fans ramping up to 100% all the time, but performance is still really bad. In the 3 games I play it's slower than my 3GB 1060. In PUBG it gets around 50fps at very low settings. In Destiny 2 it manages around 70 while my 1060 can get 100-110 at the exact same settings.
> 
> One thing I notice is that every time I boot the computer the HBM speed is set very low in Wattman, usually 500MHz, though I've seen it set at 700 a few times (stock for the card is 800; why the crap AMD can't run their cards at spec in their own damn software is a mystery to me).
> 
> Core overclocking seems impossible; even giving it a 1-5% boost locks the driver up. The HBM artifacts badly at even 50MHz over stock. I'm considering either A) returning it and having nothing to play games on or B) trying to flash a Vega 64 BIOS to it, though I'm not sure it will even work because it only has a 4-phase VRM.


Honestly it doesn't sound like an AMD issue, more an XFX issue with their board design. A 4-phase VRM doesn't even come close to the stock VRM for the V56. For OC help I'd personally drop the voltages as low as possible and work up to where they plateau. The HBM should be running at 950MHz I believe, not 700 or 800MHz, as 800MHz is the P2 value for my V64 with P3 set to 950MHz. If it's not hitting max power levels then heat isn't being dissipated fast enough, so lowering voltage may give some headroom for the HBM to clock to its stock performance level. It does sound like a very poorly put together card by XFX though.
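The "drop the voltages as low as possible and work up" approach is really a search for the lowest stable millivolt value. A toy sketch of doing that search by bisection; the `is_stable` callback is a stand-in for an actual stress-test pass, which in practice you'd run by hand in Wattman after each change:

```python
# Hedged sketch: binary-search the lowest stable voltage, assuming stability
# is monotonic (stable at `hi`, unstable below some unknown threshold).
def lowest_stable_mv(lo: int, hi: int, is_stable) -> int:
    """Narrow [lo, hi] until it converges on the lowest mV that passes."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_stable(mid):
            hi = mid          # mid passed the stress test, try lower
        else:
            lo = mid + 1      # mid crashed, need more voltage
    return lo

# Demo with a fake card whose real threshold is 1000 mV:
print(lowest_stable_mv(850, 1200, lambda mv: mv >= 1000))  # -> 1000
```

Each probe halves the range, so even a wide 850-1200mV window converges in about nine stress-test runs instead of stepping 5mV at a time.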


----------



## maxrealliti

Hey everyone. Can anyone tell me if the random black-screen problem on Vega cards has been solved? Can anyone tell me where to look for the cause, or whether it can be fixed? The problem shows up on all drivers; the card barely runs at all. I would be grateful for any help in solving this problem.


----------



## Scorpion49

TrixX said:


> Honestly it doesn't sound like an AMD issue, more an XFX issue with their board design. A 4-phase VRM doesn't even come close to the stock VRM for the V56. For OC help I'd personally drop the voltages as low as possible and work up to where they plateau. The HBM should be running at 950MHz I believe, not 700 or 800MHz, as 800MHz is the P2 value for my V64 with P3 set to 950MHz. If it's not hitting max power levels then heat isn't being dissipated fast enough, so lowering voltage may give some headroom for the HBM to clock to its stock performance level. It does sound like a very poorly put together card by XFX though.


I have dropped the voltage down, it runs at 1.000V with no problem at all. 

This is the card I have: https://videocardz.net/xfx-radeon-rx-vega-56-8gb-double-edition/

The stock HBM is 800mhz, V64 cards have much higher memory clocks out of the box which is why a lot of people flash the BIOS on the V56. I just don't know if this card can handle the increased voltages of the V64 vBIOS.


----------



## seniorfallrisk

TrixX said:


> Honestly it doesn't sound like an AMD issue, more an XFX issue with their board design. A 4-phase VRM doesn't even come close to the stock VRM for the V56. For OC help I'd personally drop the voltages as low as possible and work up to where they plateau. The HBM should be running at 950MHz I believe, not 700 or 800MHz, as 800MHz is the P2 value for my V64 with P3 set to 950MHz. If it's not hitting max power levels then heat isn't being dissipated fast enough, so lowering voltage may give some headroom for the HBM to clock to its stock performance level. It does sound like a very poorly put together card by XFX though.





Scorpion49 said:


> I have dropped the voltage down, it runs at 1.000V with no problem at all.
> 
> This is the card I have: https://videocardz.net/xfx-radeon-rx-vega-56-8gb-double-edition/
> 
> The stock HBM is 800mhz, V64 cards have much higher memory clocks out of the box which is why a lot of people flash the BIOS on the V56. I just don't know if this card can handle the increased voltages of the V64 vBIOS.


I highly recommend checking out the Preliminary Vega BIOS thread, as there are PowerPlay tables there for all the AIB Vegas that have had their vbios posted (I think). Once you get your hands on one, set your voltage values from P2 (I think?) to P5 at 900 or lower, which will effectively drop your minimum memory and core voltage to 900.

At stock, the lowest working voltage is 1050mv at P5 (for the Strix, at least). This is a ridiculously high voltage, but it is governed by the value the PPT/vbios sets for *P5 voltage*. As soon as you lower P5's voltage, you can set HBM and core as low as P5 is set.
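As a toy illustration of what capping the upper P-states does to the voltage table (the state names and stock millivolt values below are made up for the example, not read from a real PowerPlay table):

```python
# Illustration only: cap the voltage of the chosen P-states at a ceiling,
# the way you'd edit P2-P5 in a soft PowerPlay table. Values are fictional.
def cap_voltages(pstates: dict, ceiling_mv: int,
                 states=("P2", "P3", "P4", "P5")) -> dict:
    """Return a copy of the table with the chosen states capped at ceiling_mv."""
    return {p: (min(mv, ceiling_mv) if p in states else mv)
            for p, mv in pstates.items()}

stock = {"P0": 800, "P1": 900, "P2": 950, "P3": 1000, "P4": 1050, "P5": 1100}
print(cap_voltages(stock, 900))  # P0/P1 untouched, P2-P5 capped at 900 mV
```

The point is just that the ceiling is set per state: once the top state's value drops, everything you dial in Wattman can go that low too.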


----------



## gupsterg

Newbie2009 said:


> I think the memory timings have been changed on updated vega 64 water bios, flashed my card, cannot break 1110mhz on air bios HBM, will do 1.2ghz on watercooled bios HBM.


I have not noted any differing timings between any of the VBIOSes I've viewed so far; please share the files and I will compare.
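For anyone wanting to run the comparison themselves, a minimal sketch of byte-diffing two VBIOS dumps to see which regions (memory timing straps included) actually differ. File handling is omitted and the sample bytes are placeholders; in real use you'd read two saved `.rom` files instead:

```python
# Hedged sketch: report every byte offset where two VBIOS dumps differ.
def diff_roms(rom_a: bytes, rom_b: bytes):
    """Return (offset, byte_a, byte_b) for each position where the dumps differ."""
    return [(i, a, b) for i, (a, b) in enumerate(zip(rom_a, rom_b)) if a != b]

# Demo on tiny in-memory stand-ins for an air and a water dump:
air   = bytes([0x55, 0xAA, 0x40, 0x11, 0x22])
water = bytes([0x55, 0xAA, 0x40, 0x99, 0x22])
for off, a, b in diff_roms(air, water):
    print(f"0x{off:06X}: {a:02X} -> {b:02X}")  # prints 0x000003: 11 -> 99
```

If only the clock/voltage tables changed between the air and water BIOS, the diffs cluster in a few small regions; timing changes would show up as extra clusters.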



Scorpion49 said:


> Yes, it is very gimped. This is possibly what was meant to be Vega Nano, but it never really came to market because they can't keep up with demand in the first place.


They knocked the GPU VCORE VRM down to 7 phases from 12 :/ , and some things moved around, but it's all the same components. My own belief is that Vega had a bigger VRM to cope with variation in loading; to me it always seems Vega jumps up/down more at the wall meter than Fiji & Hawaii did.



plywood99 said:


> Any way to limit the max boost on Vega? Sometimes it will jump to 1800+ in certain situations and cause a CTD or Wattman reset.


Right-click DPM 7 (the highest state) in WattMan and set it as the maximum state; that may possibly work.


----------



## Newbie2009

Best score I've managed with Vega 64, air card, WC Bios and block. 1750/1165 @ 1175mv

https://www.3dmark.com/fs/14795768


----------



## Trender

Guys, do you get that full black screen in Overwatch, you know the one? Is it because of high temps with a high HBM clock on air?


----------



## seniorfallrisk

Trender said:


> Guys, do you get that full black screen in Overwatch, you know the one? Is it because of high temps with a high HBM clock on air?


Unstable HBM as well as core clocks create that issue and it's a god damned pain in the ass. I can't play with "Turbo" on wattman, and my overclock/undervolt has to be very precise or else I constantly blackscreen in the midst of matches.


----------



## RatusNatus

seniorfallrisk said:


> Unstable HBM as well as core clocks create that issue and it's a god damned pain in the ass. I can't play with "Turbo" on wattman, and my overclock/undervolt has to be very precise or else I constantly blackscreen in the midst of matches.


Thank you for your feedback. I've been asking around for months about this and no one dared to guess.
But no, it's not that.
https://wccftech.com/amd-vega-10-gpu-milestone/

gfx900 is Vega as gfx800 is Polaris. It could be any Vega.

This card is probably just a low-binned card, but I want a more specific answer. I know because it's the only one giving me errors.
The production date is similar to the other 5: week 33 of 2017. I just wonder why it was sold as a 64 and not a 56 if it's low-binned... this is a mystery to me.

I'm just guessing, but maybe it was like this because the core is fine and the memory is the low-binned part.
If it was the opposite, it would have been a 56.

PS: I've posted in the wrong topic...


----------



## Ne01 OnnA

os2wiz said:


> I have no black screens. What I have is severely limited overclocking. I have spent almost 100 hours trying to get a good custom overclock. I had one that was fairly good, but I failed to save the profile. I find that most of the time just using the balanced Wattman power profile is best. You can do better, but it is small potatoes, not worth the aggravation in my opinion. I could never replicate my best custom performance.


Hi

Please try this tool -> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116

You'll be happy 

I'm using this tool for my GPU.


----------



## Scorpion49

Just an FYI, my short PCB Vega 56 cannot be flashed with a normal Vega 64 vBIOS, it has a product mismatch because this card actually does show up as Vega Nano. Does anyone have a V64 with a short PCB that I could get a vBIOS from?


----------



## bill1971

This week, maybe next, I will put my Vega 56 under a waterblock in a custom watercooling loop. If I flash the Vega 64 Liquid BIOS, will it work? What's your opinion?


----------



## SavantStrike

bill1971 said:


> This week, maybe next, I will put my Vega 56 under a waterblock in a custom watercooling loop. If I flash the Vega 64 Liquid BIOS, will it work? What's your opinion?


Check and see if it's stable at the V64 liquid clocks first. Only flash if you know it can handle the clocks.


----------



## bill1971

SavantStrike said:


> Check and see if it's stable at the V64 liquid clocks first. Only flash if you know it can handle the clocks.


I have already flashed the V64 air BIOS with good results, but I didn't try the liquid one because of the higher clocks, temps, etc. That's why I'm hoping it's stable first.


----------



## seniorfallrisk

Scorpion49 said:


> Just an FYI, my short PCB Vega 56 cannot be flashed with a normal Vega 64 vBIOS, it has a product mismatch because this card actually does show up as Vega Nano. Does anyone have a V64 with a short PCB that I could get a vBIOS from?


You're the only person I, or anyone I know, have seen with that card. Do NOT flash the Vega 64 vbios, as you:

1. Might run into compatibility problems if you have Hynix and the V64 has Samsung.
2. Already have stability and temperature problems, don't you?

I've only ever seen one other XFX Vega like yours, and it was on eBay. Where did you get it?? For some reason, I feel that your XFX Vega is a pre-production model since NO ONE has reviews or information on it, and there's nowhere to find the model.


----------



## Robotmind

seniorfallrisk said:


> I've only ever seen one other XFX Vega like yours, and it was on eBay. Where did you get it?? For some reason, I feel that your XFX Vega is a pre-production model since NO ONE has reviews or information on it, and there's nowhere to find the model.


The only place I have seen those for sale is BestBuy.com (the retail chain store) and ebay (seller is also BestBuy, price is the same).

And of course resellers. 

Price has gone up about $40 since the last batch I saw for sale.

Cheers!


----------



## Scorpion49

seniorfallrisk said:


> You're the only person I, or anyone I know, have seen with that card. Do NOT flash the Vega 64 vbios, as you:
> 
> 1. Might run into compatibility problems if you have Hynix and the V64 has Samsung.
> 2. Already have stability and temperature problems, don't you?
> 
> I've only ever seen one other XFX Vega like yours, and it was on eBay. Where did you get it?? For some reason, I feel that your XFX Vega is a pre-production model since NO ONE has reviews or information on it, and there's nowhere to find the model.


It's sold at Best Buy, and yes - don't flash this with a normal V64 BIOS; ask me how bricked my secondary is. I can't find any trick to unbrick it like the Polaris cards have (shorting the legs of the BIOS chip).
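For what it's worth, before any risky flash you can at least sanity-check the ROM file itself. A hedged sketch based on the legacy PCI option-ROM header rules (0x55AA signature, size-in-512-byte-blocks at offset 2, bytes of the image summing to 0 mod 256); this catches a truncated or corrupted file, but it says nothing about whether the BIOS actually matches your board:

```python
# Hedged pre-flash sanity check on a legacy option-ROM image. Passing this
# check does NOT mean the BIOS is safe for your specific card.
def rom_looks_valid(rom: bytes) -> bool:
    if len(rom) < 3 or rom[0] != 0x55 or rom[1] != 0xAA:
        return False                  # missing 0x55AA signature
    size = rom[2] * 512               # declared image size in bytes
    if size == 0 or size > len(rom):
        return False                  # truncated or nonsensical size
    return sum(rom[:size]) % 256 == 0 # legacy images checksum to zero

# Demo: a minimal well-formed 512-byte image (0x55+0xAA+0x01 = 0x100, 0 mod 256)
good = bytes([0x55, 0xAA, 0x01]) + bytes(509)
bad  = b"\x00" + good[1:]             # clobbered signature
print(rom_looks_valid(good), rom_looks_valid(bad))  # True False
```

It won't save you from flashing the wrong (but well-formed) BIOS onto a mismatched board, which is exactly what bricked this card.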


----------



## RatusNatus

You don't need any trick, since all cards have dual BIOS and the second one, as far as I know, can't be flashed.
All AMD high-end cards have had it since the R9 290 (2013). 
It's hard to brick these cards, but the world is big enough...


----------



## JasonMZW20

VicsPC said:


> The only way to fix this would be for card manufacturers to lower part of the heatsink by a few micrometers, but they would have to change tooling, and for them it's not worth it. I would LOVE to see someone with an unmolded die try to heat up the copper and have it push down a bit where it meets the HBM. You'd need a good amount of pressure and you'd have to do it slowly. I would love to try it if I had an unmolded 56, but I'm on water with a 64. I think whoever posted about the XFX said 50°C HBM and core temps, and I don't see that as humanly possible on air, especially when the paste isn't even touching the HBM.
> 
> 
> 
> Hotspot temp is such an iffy measurement I don't even bother to look at mine. 100°C hotspot is normal for air cooling and poor case airflow; on water it's totally different as well. Mine is usually 12°C above core on water, though I've seen some go up to 20°C above core on water.


It's easier than that. All manufacturers would have to do is sand off heatsink material on the GPU contact side, lowering that surface enough to let it also contact the HBM, since the GPU die is only slightly taller (about 40 micrometers). That takes an extra step though, so most would rather use TIM to fill the gap on the HBM side.

My unmolded reference Vega64 had decent hotspot temps, roughly 2-4C higher than my molded 64 (around 78-80C). 

Unfortunately, my unmolded Vega64 recently suffered HBM failure (in Wolfenstein 2; temps showed good, but HBM downclocked to 500MHz at 75C like it hit 95C, then hard locked PC and didn't initialize display from then on). GPU-Z reported 0MB HBM2 when used as secondary card and could not be used in Crossfire or independently (not shown in Radeon Settings and Device Manager showed yellow exclamation point). As primary, it'd never initialize the monitor, but mobo would still boot into Windows without display. I figured the actual GPU was still good, but had no access to framebuffer memory. Anyway, it was sent back to XFX, but with Chinese New Year going on and overall Vega shortages, I don't expect any news for a while, sadly.


----------



## seniorfallrisk

Scorpion49 said:


> Its sold at Best Buy, and yes - don't flash this with a normal V64 bios, ask me how bricked my secondary is. I can't find any trick to unbrick it like the Polaris cards have (shorting the legs of the BIOS chip).


Boot with your working bios activated. Once you're in Windows, switch to the bricked bios and then flash it using the proper utility. Voila, you're back to your normal bioses.

Edit:

So, I swapped my Strix 56 for a replacement and now it's correctly shown as having Hynix HBM... Man, what a *dud* of a card this replacement is. Max OC is about 1650 on core and 850 on mem set in wattman, even with 400w TDP and 100% power limit set on my soft PPT. These Hynix HBM2 modules are just awful..


----------



## STEvil

Anyone know if there's a fix to allow installing drivers on windows 10 non-uefi systems, or why AMD seems to be ignoring the issue? Got a Vega FE but unfortunately no uefi on my EX58-UD5...


----------



## Sunsoar

Hey all - just typing here to see if I can get any similar responses or advice. 

Gigabyte RX 64 Liquid

My Vega has been giving me random issues since I bought it, like the video display crashing during gaming and YouTube (music). The PC would stay on, but the monitor would blink out and go to a solid color. All mobo lights on, constant power; just the GPU resetting. So I quit doing YouTube music and gaming at the same time... which leads to the next issue.

With only gaming, the PC randomly shuts off completely. I have to pull the plug for ~1 minute to get it to reset. It did this multiple times in different games. Monitoring the temps, the reported GPU temps are okay, usually about 67C in Heroes of the Storm. Witcher 3 is like 65C. Now in Kingdom Come: Deliverance it gets up to 67-70C. The reported HBM and liquid temps are 99C. GPU hotspot is 95C. The card, radiator and fan get so hot that it is of course raising the temps of all the other components in the PC. 

So I took it out and put it in my older machine. I ran FurMark for 1.5 hours, where the card still hit the same temps, and I lost mouse-click functionality. I could use the keyboard but everything was pretty slow. I finally got Task Manager to open using the keyboard only and FurMark crashed. The ambient temp of the CPU rose 15C just from how hot this card was running.

Does this sound like a bad card to you all? I am back on my MSI GTX 970 4G LE, which runs just fine but doesn't do too well with gaming at 4K.

New PC:
Ryzen 1800X cooled AIO Captain 240EX RGB 
Asrock Taichi X370
16GB G.Skill 14C RGB
Samsung 960 NVMe
1000W Seasonic Titanium
Fractal Meshify C
5 Corsair HD120MM Cooling

Old PC
i7 950
Asus X58 Sabertooth
12GB 1600 G.Skill
750W XFX Black PSU
Various HDD/SSD combos
Cooler Master HAF932 with too many fans and no dust filters


----------



## steadly2004

Sunsoar said:


> Hey all - just typing here to see if I can get any similar responses or advice.
> 
> Gigabyte RX 64 Liquid
> 
> My Vega has been giving me random issues since I bought it, like the video display crashing during gaming and YouTube (music). The PC would stay on, but the monitor would blink out and go to a solid color. All mobo lights on, constant power; just the GPU resetting. So I quit doing YouTube music and gaming at the same time... which leads to the next issue.
> 
> With only gaming, the PC randomly shuts off completely. I have to pull the plug for ~1 minute to get it to reset. It did this multiple times in different games. Monitoring the temps, the reported GPU temps are okay, usually about 67C in Heroes of the Storm. Witcher 3 is like 65C. Now in Kingdom Come: Deliverance it gets up to 67-70C. The reported HBM and liquid temps are 99C. GPU hotspot is 95C. The card, radiator and fan get so hot that it is of course raising the temps of all the other components in the PC.
> 
> So I took it out and put it in my older machine. I ran FurMark for 1.5 hours, where the card still hit the same temps, and I lost mouse-click functionality. I could use the keyboard but everything was pretty slow. I finally got Task Manager to open using the keyboard only and FurMark crashed. The ambient temp of the CPU rose 15C just from how hot this card was running.
> 
> Does this sound like a bad card to you all? I am back on my MSI GTX 970 4G LE, which runs just fine but doesn't do too well with gaming at 4K.
> 
> New PC:
> Ryzen 1800X cooled AIO Captain 240EX RGB
> Asrock Taichi X370
> 16GB G.Skill 14C RGB
> Samsung 960 NVMe
> 1000W Seasonic Titanium
> Fractal Meshify C
> 5 Corsair HD120MM Cooling
> 
> Old PC
> i7 950
> Asus X58 Sabertooth
> 12GB 1600 G.Skill
> 750W XFX Black PSU
> Various HDD/SSD combos
> Cooler Master HAF932 with too many fans and no dust filters


Is this overheating at stock or while overclocked? If it's happening at stock you should be able to send it in for RMA. Also, are you pushing the hot air (from the radiator) into the case without an adequate exit? Like... what's the planned airflow?


----------



## diaaablo

seniorfallrisk said:


> So, I swapped my Strix 56 for a replacement and now it's correctly shown as having Hynix HBM... Man, what a *dud* of a card this replacement is. Max OC is about 1650 on core and 850 on mem set in wattman, even with 400w TDP and 100% power limit set on my soft PPT. These Hynix HBM2 modules are just awful..


I have two cards with this Hynix HBM2 mess. The first one is the same as yours, and the second is a Sapphire RX Vega 56 Nitro+ Limited Edition. Funny thing: I replaced the vbios on the Strix 56 card with the Strix 64 BIOS without any trouble and it works pretty normally undervolted at 1580/1020, but the Sapphire version just bricked after flashing, so I had to flash the original back. If someone knows which 64 BIOS is compatible with the Nitro+ LE, please let me know.


----------



## Sunsoar

steadly2004 said:


> Is this overheating at stock or while overclocked? If it's happening at stock you should be able to send it in for RMA. Also, are you pushing the hot air (from the radiator) into the case without an adequate exit? Like... what's the planned airflow?


Everything is stock. No OC on any PC or component. 

I have fresh air coming inside the front of the Meshify C with 3 fans and the Vega venting out the back and AIO venting out the top. Ryzen 1800X usually sits at 50-55C under gaming. Pretty much same setup on old PC but the CPU is cooled by air.


----------



## STEvil

STEvil said:


> Anyone know if there's a fix to allow installing drivers on windows 10 non-uefi systems, or why AMD seems to be ignoring the issue? Got a Vega FE but unfortunately no uefi on my EX58-UD5...


I won't be able to test this for a while, so here's the solution... maybe.

Download the Win7-64 drivers and install them on Win10-64. The catch is whether the install triggers are actually different or not. Given that Vega can be installed on identical non-uefi hardware in Win7 but not Win10, it's possible.


----------



## seniorfallrisk

diaaablo said:


> I have two cards with this HBM2 Hynix mess. First one is same, as you have and second is Sapphire RX Vega 56 Nitro+ Limited Edition. Funny thing: I've replaced vbios on Strix56 card with Strix64 bios without any trouble and it work pretty normal undervoltaged 1580/1020, but Sapphire version after flashing just bricked up, so I had to flash original back. If someone know which one 64bios is compatible with Nitro+ LE, please let me know


Oooh boy, this is good news! I think my first card was just a real dud. If you're stable at 1020 HBM, I shouldn't have a problem flashing then. Gonna go ahead and try this again. Let me know what your scores are on a benchmark (3DMark, Heaven, whatever) before and after flashing!


----------



## Scorpion49

seniorfallrisk said:


> Boot with your working bios activated. Once you're in Windows, switch to the bricked bios and then flash it using the proper utility. Voila, you're back to your normal bioses.
> 
> Edit:
> 
> So, I swapped my Strix 56 for a replacement and now it's correctly shown as having Hynix HBM... Man, what a *dud* of a card this replacement is. Max OC is about 1650 on core and 850 on mem set in wattman, even with 400w TDP and 100% power limit set on my soft PPT. These Hynix HBM2 modules are just awful..


Nope, doesn't work. Some clever people on reddit thought I should force the flash even though I said it wouldn't work, and now the BIOS is totally bricked. You CAN NOT flash it any more; same error as Polaris cards get: 0FL01 Rom not erased.

Which is why I asked about a hard reset... again, like Polaris.


----------



## VicsPC

JasonMZW20 said:


> It's easier than that. All manufacturers would have to do is sand off heatsink material on the GPU side which would lower the heatsink surface area on GPU side, which would also allow surface area to contact HBM, since GPU is only slightly taller (40 micrometers). That takes an extra step though, so most would rather use TIM to fill the gap on HBM side.
> 
> My unmolded reference Vega64 had decent hotspot temps, roughly 2-4C higher than my molded 64 (around 78-80C).
> 
> Unfortunately, my unmolded Vega64 recently suffered HBM failure (in Wolfenstein 2; temps showed good, but HBM downclocked to 500MHz at 75C like it hit 95C, then hard locked PC and didn't initialize display from then on). GPU-Z reported 0MB HBM2 when used as secondary card and could not be used in Crossfire or independently (not shown in Radeon Settings and Device Manager showed yellow exclamation point). As primary, it'd never initialize the monitor, but mobo would still boot into Windows without display. I figured the actual GPU was still good, but had no access to framebuffer memory. Anyway, it was sent back to XFX, but with Chinese New Year going on and overall Vega shortages, I don't expect any news for a while, sadly.


Yeah, my molded 64 on water has its hotspot sitting around 54°C today, with the core at 41°C and HBM at 42°C, after playing theHunter: Call of the Wild for the past couple of hours. It's chilly in southern France so no window open, but if I opened one my hotspot and water temps would drop a bit. In the summer with no AC it gets a tiny bit warmer, but not that much different. This is with a case temp of 25°C, a water temp of 35°C, and my Ryzen 1700X in the same loop, so I honestly can't complain.


----------



## seniorfallrisk

Scorpion49 said:


> Nope, doesn't work. Some clever people on reddit thought I should force the flash even though I said it won't work, and now the BIOS is totally bricked. You CAN NOT flash it any more, same error as Polaris cards get: 0FL01 Rom not erased.
> 
> Which is why I asked about a hard reset.. again like Polaris.


Crap, that's rough. Even forcing the bios overwrite isn't working? :doh:

Also, my new Strix 56 does indeed work with Strix 64 bioses this time around! Last time, flashing any Vega 64 bioses would lead to my Strix 56 being insanely unstable as well as having lower performance on 3DMark and gaming in general. This time around, my card is properly reporting Hynix HBM2 modules on GPU-Z too. Overclocking sucks, but being able to run 945 mclock is nice.


----------



## diaaablo

seniorfallrisk said:


> Oooh boy this is good news! I think my first card was just a real dud. If you're stable at 1020 HBM, I shouldn't have a problem flashing then. Gonna go ahead and try this again. Let me know what your scores on a benchmark (3dmark, heaven, whatever) before flashing and after flashing!


Before: https://www.3dmark.com/fs/14796234
After: https://www.3dmark.com/fs/14812694


----------



## steadly2004

Sunsoar said:


> Everything is stock. No OC on any PC or component.
> 
> I have fresh air coming inside the front of the Meshify C with 3 fans and the Vega venting out the back and AIO venting out the top. Ryzen 1800X usually sits at 50-55C under gaming. Pretty much same setup on old PC but the CPU is cooled by air.


Yea, sounds like you need to RMA. That sucks.


----------



## Scorpion49

seniorfallrisk said:


> Crap, that's rough. Even forcing the bios overwrite isn't working? :doh:
> 
> Also, my new Strix 56 does indeed work with Strix 64 bioses this time around! Last time, flashing any Vega 64 bioses would lead to my Strix 56 being insanely unstable as well as having lower performance on 3DMark and gaming in general. This time around, my card is properly reporting Hynix HBM2 modules on GPU-Z too. Overclocking sucks, but being able to run 945 mclock is nice.


Yeah, forcing it does nothing. I'm really disappointed in this card: its fan runs at 75% all the time if I leave it on auto, and it doesn't OC at all; even 10MHz on the HBM causes artifacts. I think it runs a lower voltage because of the small PCB. It seems to play games at around the same FPS as my 3GB 1060, which is really disappointing. I expect to have to run low settings with a $50 GPU, not one I paid $800 for in this over-inflated mining economy. Well, now I know why miners didn't buy this one and I was able to get it.


----------



## MapRef41N93W

Hey guys, has anyone figured out how to fix hotspot temps? My RX 64 on a Bykski Ice Dragon WB will max out at about 43-44°C core while mining (1400/1100), but the hotspot will hit 90°C after about 20 minutes. I've tried re-applying MX-4 with the credit-card spread method and the X method (on all 3 dies), but same end result. I also put a dab of paste in the gap between the HBM and core.

Also, does anyone else have this Bykski waterblock? Apparently the X backplate bracket helps with mounting pressure, but it's physically impossible to mount on my waterblock. I've tried even with 2 people (one person holding, the other screwing in) and the screws don't thread. I know the block is installed correctly otherwise, as I've re-checked it on numerous occasions: pads make contact, all the washers are in place, screws are all secure, etc. I did replace the tiny screws that come on the X plate with the ones that come with the GPU bracket, as instructed.

Thanks


----------



## seniorfallrisk

diaaablo said:


> Before: https://www.3dmark.com/fs/14796234
> After: https://www.3dmark.com/fs/14812694


Jeeze, your card's able to hit 1000 on HBM, meanwhile anything over 955 is an insta-crash for my card... And nice OC on that 1700X! I'm at 3.6GHz here because I'm on stock cooling. No money to spend on coolers now lol.



Scorpion49 said:


> Yeah forcing it does nothing. I'm really disappointed in this card, its fan run at 75% all the time if I leave it on auto. Doesn't OC at all, even 10mhz on the HBM causes artifacts. I think it runs a lower voltage because of the small PCB. It seems to play games at around the same FPS as my 3GB 1060, which is really disappointing. I expect to have to run low settings with a $50 GPU, not one I paid $800 for in this over-inflated mining economy, well now I know why miners didn't buy this one and I was able to get it.


Yeah, I've been running into pretty much the same thing with my Strix cards on the HBM. Luckily my replacement card is working well with the 64 bios flashed on it, but any overclock above 955 (a 10MHz OC) is an instacrash for my HBM and I can't OC my core whatsoever. If I undervolt, I need to use 1592 max instead of the 64's 1632 core clock. I wonder if the newest drivers are creating overclocking issues for us? For example, I have used a few reference Vega 56's and *every single one* can hit 1000MHz on the HBM with a 64 bios (on older 17.X drivers), and yet neither of my Strix cards could on any of the 18.X drivers.


----------



## xkm1948

MapRef41N93W said:


> Hey guys, has anyone figured out how to fix hotspot temps? My RX 64 on Bykski Ice Dragon WB will max out at about 43-44c core while mining (1400/1100) but the hotspot will hit 90c after about 20 minutes. I've tried re-applying MX-4 with credit card spread method and X method (on all 3 dies), but same end result. I also put a dab of paste in the gap of the HBM/Core.
> 
> Also does anyone else have this Bykski waterblock? Apparently the X backplate bracket helps with mounting pressure, but it's physically impossible to mount on my waterblock. I've tried even with 2 people (one person holding, other screwing in) and the screws don't thread. I know the block is installed correctly otherwise as I re-checked it on numerous occasions. Pads make contact, all the washers are in place, screws are all secure, etc. I did replace the tiny screws that come on the X plate with the ones that come with the GPU bracket as instructed as well.
> 
> Thanks



MX-4 is a pretty old paste. It used to be the top choice, but it doesn't stand a chance against newer and better TIMs. I would recommend either Noctua NT-H1 or Thermal Grizzly Kryonaut.


----------



## vmlinuzz

Went from an RX 480 4GB to a Vega FE Air, and since file attachment/picture uploading is broken at the moment I can't upload any proof I own it. Anyways, this lovely little bit of engineering is in the mail:

https://www.alphacool.com/shop/neue...olf-240-gpx-pro-amd-rx-vega-m01-black?c=20540

Big OCs coming soon. Also, if you use an HX Series PSU, set it to single-rail mode with the little switch: multi-rail was occasionally tripping the over-current protection at the start of 3DMark runs and shutting the system down with the Vega in.


----------



## diaaablo

*seniorfallrisk*, reference cards use Samsung memory instead of the Hynix garbage. Below are my "safe" settings for the Strix Vega 56 with the 64 bios. Maybe they will help you a bit. And don't forget about the silicon lottery: there is always a chance of getting a weak overclocker.


----------



## Trender

Ever since I changed my sleeved GPU cable extensions my Vega doesn't crash anymore lol. I think those d*** extension cables were the problem.


----------



## MapRef41N93W

xkm1948 said:


> MX-4 is a pretty old paste. Used to be the top choice but does not stand a chance against newer and better TIM. I would recommend either Noctua NT-H1 or ThermalGrizzly Kryonaut.


Noctua NT-H1 is within 0.3°C of MX-4. There is virtually no real-world difference between any of the major brands of non-silicone, non-conductive pastes besides how easily they spread, so that certainly doesn't explain my issue. Also, I tried NT-H1 and the stuff was a nightmare: it got EVERYWHERE when I went to spread it, and it turns into goop that just smears around when you go to clean it. I went back to tried-and-trusted MX-4 after using that crap. Thermal Grizzly costs 5x per gram what MX-4 costs for a tiny performance increase. I'll pass.


----------



## VicsPC

MapRef41N93W said:


> Hey guys, has anyone figured out how to fix hotspot temps? My RX 64 on Bykski Ice Dragon WB will max out at about 43-44c core while mining (1400/1100) but the hotspot will hit 90c after about 20 minutes. I've tried re-applying MX-4 with credit card spread method and X method (on all 3 dies), but same end result. I also put a dab of paste in the gap of the HBM/Core.
> 
> Also does anyone else have this Bykski waterblock? Apparently the X backplate bracket helps with mounting pressure, but it's physically impossible to mount on my waterblock. I've tried even with 2 people (one person holding, other screwing in) and the screws don't thread. I know the block is installed correctly otherwise as I re-checked it on numerous occasions. Pads make contact, all the washers are in place, screws are all secure, etc. I did replace the tiny screws that come on the X plate with the ones that come with the GPU bracket as instructed as well.
> 
> Thanks


Hot spot has more to do with ambient temp and case airflow. My core peaks at 41°C while playing Origins but the hotspot only gets to 54°C. If I open a window and get 13°C airflow in the room, the hotspot drops by a couple of degrees.

As far as paste goes: I used NT-H1 and found that on GPUs it was garbage; too thin a paste for a GPU with high mounting pressure (which waterblocks usually have). I use Kryonaut as it's the same price as NT-H1 here in France and I got a big tube of it. We're not sure where the hotspot sensor actually sits, but my VRMs are also at 48°C VDDC and 56°C MVDD, so the hotspot is probably somewhere between the VRMs; educated guess. This is all while playing AC Origins with a case temp of 24°C and a water temp probably close to ~32°C.



MapRef41N93W said:


> Noctua NT-H1 is within .3c of MX-4. There is virtually no real world difference between any of the major brands of non-silicone non-conductive pastes besides how easy they spread. That certainly doesn't explain my issue. Also I tried NT-H1 and the stuff was a nightmare. It got EVERYWHERE when I went to spread it and it turns into goop that just spreads around when you go to clean it. I went back to tried and trusty MX-4 after using that crap. ThermalGrizzly costs 5x per gram what MX-4 costs for a tiny performance increase. I'll pass.


Btw, in most of those tests people test on a CPU cooler; rarely do I see pastes tested on a GPU waterblock, so I doubt it's just a 0.3°C difference. On my 390 Nitro, going from Noctua to GC-Extreme was worth 3-4°C on its own. On my 1700X with an EKWB block I dropped 2°C going from NT-H1 to Kryonaut; it used to max out at 51°C and now it's closer to 47°C.


----------



## SavantStrike

VicsPC said:


> Hot spot has more to do with ambient temp and case airflow. My core peaks at 41°C while playing origins but hotspot only gets to 54°C. If i open a window and get 13°C airflow in the room hotspot drops by a couple degrees.
> 
> As far as paste goes. I used NH-T1 and found that on GPUs it was garbage, too thin of a paste to be used on a gpu with high mounting pressure (usually what waterblocks are). I use kryonaut as its the same price as NH-T1 here in France and i got a big tube of it. We're not sure where hotspot temps are but my VRMs are also 48°C VDDC and 56°C MVDD so hotspot is probably somewhere in between the VRMs somewhere educated guess. This is all while playing AC origins with case temp at 24°C and water temp probably close to ~32°C.
> 
> 
> 
> Btw most of those tests the majority of people test it on a CPU cooler and rarely do i see it tested on a waterblock for a gpu so i doubt its just .3°C of a difference. On my 390 Nitro going from Noctua to GC Extreme was 3-4°C on it;s own. On my 1700x ekwb i dropped 2°C going from NH-T1 to kryonaut. Used to max out at 51°C on my ryzen now it's closer to 47°C


Without doing multiple applications of each TIM, it's impossible to rule out better application as the reason for improvement.


----------



## MapRef41N93W

VicsPC said:


> Hot spot has more to do with ambient temp and case airflow. My core peaks at 41°C while playing origins but hotspot only gets to 54°C. If i open a window and get 13°C airflow in the room hotspot drops by a couple degrees.
> 
> As far as paste goes. I used NH-T1 and found that on GPUs it was garbage, too thin of a paste to be used on a gpu with high mounting pressure (usually what waterblocks are). I use kryonaut as its the same price as NH-T1 here in France and i got a big tube of it. We're not sure where hotspot temps are but my VRMs are also 48°C VDDC and 56°C MVDD so hotspot is probably somewhere in between the VRMs somewhere educated guess. This is all while playing AC origins with case temp at 24°C and water temp probably close to ~32°C.
> 
> 
> 
> Btw most of those tests the majority of people test it on a CPU cooler and rarely do i see it tested on a waterblock for a gpu so i doubt its just .3°C of a difference. On my 390 Nitro going from Noctua to GC Extreme was 3-4°C on it;s own. On my 1700x ekwb i dropped 2°C going from NH-T1 to kryonaut. Used to max out at 51°C on my ryzen now it's closer to 47°C


Well, I have a fan controller with a thermal probe and case ambient temp isn't an issue at all. I have 600mm worth of rads, all blowing intake. That doesn't explain why I have a 50°C difference between hotspot and core temp. Someone told me on another board that you have to use the stock screws with the X backplate and the Bykski block, unlike with the Barrow version. I did this and it made absolutely no difference in the hotspot temps vs just using the regular screws with no backplate.

Your temps may have nothing to do with the compound and everything to do with how you apply it. Some pastes like IC Diamond require a small blob, while others need to be spread.

The only time I used NT-H1 was on my 1950X and it was an absolute disaster. I ended up having to remove the CPU and clean everything up because the paste had gotten everywhere, including under the chip and onto the pins while I was cleaning it. It also left some annoying stains after only a few hours of use that I had to scrub off the chip. Yes, I understand that Kryonaut is a better paste than MX-4 for temps, but MX-4 is cheap, reliable, and incredibly easy to work with. If I were going all out on an expensive paste, I'd buy IC Diamond 24. I also know that using MX-4 is totally unrelated to my issue.


----------



## VicsPC

SavantStrike said:


> Without doing multiple applications of each TIM, it's impossible to rule out better application as the reason for improvement.


And that's exactly what I did lol. That's about a couple of months' worth of research strictly sticking to both TIMs. I also used it on an older 3350P, a 15W-or-so processor, both under and above the IHS; temps were about the same. Key point worth noting: Kryonaut didn't fail after a couple of weeks under the IHS. Much better paste.



MapRef41N93W said:


> Well I have a fan controller with a thermal probe and case ambient temp isn't an issue at all. I have 600mm worth of rads all blowing intake. Doesn't explain why I have a 50c difference between hotspot and core temp. Someone told me on another board that you have to use the stock screws with the X backplate and the Bykski block unlike with the Barrow version. I did this and it made absolutely no difference in the hotspot temps vs just using the regular screws with no backplate.
> 
> Your temps may have nothing to do with the compound and everything to do with how you apply them. Some pastes like IC Diamond require you do a small blob, while others need to be spread.
> 
> The only time I used NT-H1 was on my 1950x and it was an absolute disaster. I ended up having to remove the CPU and clean everything up because the paste had gotten everywhere including under the chip and onto the pins when I was cleaning it. It also left some annoying stains after only a few hours of use that I had to scrub off the chip. Yes I understand that Kryonaut is a better paste than MX-4 for temps, but MX-4 is cheap, reliable, and incredibly easy to work with. If I'm going to buy an expensive paste because I was going all out, I'd buy IC Diamond 24. I do also know that using MX-4 is totally unrelated to my issue.


I tested for a couple of months when I first got my 1700X, and the method of application made ZERO difference with either paste. You have 600mm of rad space all set to intake, meaning it's blowing all the hot air INSIDE your case; if you haven't felt how hot the air coming off rads is, you'll be VERY surprised how warm it can get. If you have 600mm of rad space, I also hope you have around 600mm worth of exhaust. I have both of mine set to EXHAUST and 3 140mm fans set to intake right above my card (vertically mounted cube case; my 3 140mm fans sit right above and next to my Vega 64). Core X5 cube case.


----------



## Grummpy

Still no option to increase memory voltage on this app


----------



## Trender

My undervolt:


Yeah, I can get much more HBM speed but I'm running on air so temps are a problem. I'm not sure if I can OC the clock a bit at my voltages.


----------



## seniorfallrisk

Trender said:


> My undervolt:
> 
> 
> Yeah I can get much more HBM speed but Im running on Air so temps are a problem, Im not sure if I may oc a bit the clock with my voltages


As long as you edited your powerplay tables, looks fine to me. If you didn't edit your powerplay tables, you're still running 1050 or 1100 mV on your core/HBM as the lowest voltage possible. You have to change the P5 voltage to get lower HBM/core voltages to stick.
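The floor behavior described above can be sketched as a toy model (this is an illustration of the reported behavior, not AMD's actual DPM logic; the numbers are just examples):

```python
# Simplified sketch of the Vega voltage-floor behavior discussed above
# (NOT AMD's actual power-management code): the memory-controller voltage
# acts as a floor, so the core never runs below the HBM voltage even if a
# Wattman P-state requests less.
def effective_core_mv(core_p_state_mv: int, hbm_mv: int) -> int:
    """Voltage the core actually receives for a given P-state request."""
    return max(core_p_state_mv, hbm_mv)

# Example: P7 undervolted to 900 mV in Wattman, HBM left at 950 mV.
# The core still runs at 950 mV; the 900 mV setting silently doesn't stick.
print(effective_core_mv(900, 950))   # 950
print(effective_core_mv(1000, 950))  # 1000
```

This is why editing the table entry that holds the highest floor (the P5/HBM voltage) is what makes lower core voltages take effect.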


----------



## Trender

seniorfallrisk said:


> As long as you edited your powerplay tables, looks fine to me. If you didn't edit your powerplay tables, you're still running 1050 or 1100 on your core/HBM as the lowest voltage possible. You have to change the P5 voltage to get lower HBM/core voltages to stick.


Yeah, you're right, I'm not using the powerplay tables as I prefer to use Wattman.


----------



## StellarX88

May I know which copy of the Vega 64 bios you used to flash? I have been trying the reference Vega 64 bios on my Strix Vega 56, but it can only be done via force flash and the card couldn't boot after that, so I reverted to the original bios. My Strix Vega 56 is working fine with its existing bios, but it would be great to be able to further OC it with a 64 bios if I ever get the itch to tweak and play around with it in the very far future.

Thanks again!


----------



## MapRef41N93W

VicsPC said:


> And that's exactly what i did lol. That's about a couple months worth of research strictly sticking to both TIMs. Also used it on an older 3350p but its a 15w or so processor used under and above the IHS temps were about the same, key point worth noting, Kryonaut didn't fail after a couple weeks under the IHS. Much better paste.
> 
> 
> 
> I've tested for a couple months when i first got my 1700x and method of application made ZERO difference in both pastes. You have 600mm of rad space both set to intake meaning it's blowing all the hot air INSIDE your case, if you haven't felt how hot air is coming off rads you'll be VERY surprised how warm/hot it can get. If you have 600mm of rad space i also hope you have around 600mm worth of exhaust space exhausting it out. I have both of mine set to EXHAUST and 3 140mm fans set to intake right above my card (vertically mounted cube case, my 3 140mm fans sit right above and next to my vega 64). Core x5 cube case.


Took my entire loop out of my case and set it up on a table right next to a window AC unit feeding it fresh cool air. Hotspot temp is still 85°C+ while the core isn't breaking 38°C.


----------



## kondziowy

Almost all Vega models are available to buy in Europe; this is the first time that's happened. And it's a "fire sale"!
-5% to -15%!

http://www.overclock.net/forum/attachment.php?attachmentid=90673&thumb=1

/joke mode off

http://www.overclock.net/forum/attachment.php?attachmentid=90681&thumb=1

2x MSRP -15% = .. profit??


----------



## Grummpy

I still can't use Wattman.
If I do, I get instant random system shutdowns.
If I use other programs I can push hard without any problems, so it's Wattman for sure.
It's tripping some safety feature that just makes it unusable; not exactly sure what.
Even at stock, Wattman can cause this problem, which is rather annoying.
Hope AMD fixes it soon; I dislike having to use other software.


----------



## hyp36rmax

Just finished moving my Crossfire VEGA 64's from another build with a 1700X test bench to an 8700k Gaming HTPC

*Source: * Build Log Link


----------



## SavantStrike

MapRef41N93W said:


> Took my entire loop out of my case and set it up on a table right next to a window AC unit feeding fresh cool air. Hotspot temp still 85c+ while core isn't breaking 38c.


I've got three V64s and Bykski blocks. I had the blocks on the cards and hit random restarts, which turned out to be because I didn't get the VRMs cooled properly. I don't remember having hotspot issues, but now I wonder if that wasn't part of my problem. I swapped the cards back over to air for troubleshooting until I get parts for the chassis I intend to put them in, and haven't switched them back.

I'm going to try moving one of them back to water and put it on an H140X for testing. In my experience the X bracket didn't fit either, but there were a lot more screws around the GPU (like 8 vs just 4 with the X). I do wonder if there might not be a mounting pressure issue though.


----------



## VicsPC

MapRef41N93W said:


> Took my entire loop out of my case and set it up on a table right next to a window AC unit feeding fresh cool air. Hotspot temp still 85c+ while core isn't breaking 38c.


What did your VRMs end up reading?


----------



## Grummpy

@hyp36rmax
Get yourself one of these and stop that distortion:
https://www.google.co.uk/search?q=1...w=1073&bih=747&dpr=1.31#imgrc=ZXufDbhE-7u4iM:

Can I say: every 90-degree elbow used adds roughly 500 cm of equivalent pipe resistance, which increases the pressure requirement on the pump, so it's better to avoid them if you can.
Well-thought-out build; I like it, well done.
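The "500 cm per elbow" figure above is an equivalent-length rule of thumb; here is how it adds up for a loop (a sketch using the rule of thumb from the post, with illustrative numbers; real head loss depends on fitting geometry and flow rate):

```python
# Rough equivalent-length estimate for loop restriction, using the
# rule-of-thumb figure quoted above (not measured data): each 90-degree
# elbow counts as ~500 cm of extra straight tube the pump must push through.
ELBOW_EQUIV_CM = 500

def equivalent_tube_cm(straight_tube_cm: float, num_elbows: int) -> float:
    """Total tube length the pump 'sees', counting elbows as extra tube."""
    return straight_tube_cm + num_elbows * ELBOW_EQUIV_CM

# Example: a loop with 150 cm of actual tubing and 4 rotary elbows
# behaves like ~2150 cm of straight tube as far as the pump is concerned.
print(equivalent_tube_cm(150, 4))  # 2150
```

Under this estimate, even a few elbows dominate the restriction of the tubing itself, which is why avoiding them where possible pays off.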


----------



## MapRef41N93W

SavantStrike said:


> I've got three v64's and Bykski blocks. I had the blocks on the cards and hit random restarts which turned out to be because I didn't get the VRMs cooled properly. I don't remember having hot spot issues, but now I wonder if that wasn't part of my problem. I switched the cards back over to air for the time being until I get parts for the chassis I intend to put them in.
> 
> I swapped the cards back over to air for troubleshooting and haven't switched them back.
> 
> I'm going to try moving one of them back over to water and put it on an h140X for testing. In my experience the x bracket didn't fit for me either, but there were a lot more screws around the GPU (like 8 vs just 4 with the x). I do wonder if there might not be a mounting pressure issue though


I'm having this same problem right now: random restarts at odd times. What made you think the VRMs weren't cooled right? I've opened the block numerous times and there appears to be contact on the thermal pads. I'm using the stock pads and thought about ordering Fujipoly to rule that out, but wasn't sure which thickness I was supposed to get. I guess I need to go back to stock temporarily and rule out the card having issues.



VicsPC said:


> What did your VRMs end up reading?


I don't have a way to check that. I don't own a temp gun or anything.


----------



## VicsPC

MapRef41N93W said:


> I'm having this same problem right now. Random restarts at odd times. What made you think the VRMs weren't cooled right? I've opened the block numerous times and it appears there is contact on the thermal pads. I am using the stock and thought about ordering fujipoly to rule that out, but wasn't sure which size I was supposed to get. I guess I need to go back to stock temporarily and rule out the card having issues.
> 
> 
> 
> I don't have a way to check that. I don't own a temp gun or anything.


HWiNFO64 should be reporting VRM temps. I do notice with mine that if I don't open my window and warm air just recirculates, my hotspot temp gets a few degrees warmer than if I have the window open, which leads me to believe it's either a spot between the VRMs that doesn't get cooled by the water, or just a random sensor that's possibly reading wrong. If it's not affecting performance I really wouldn't worry about it.


----------



## MapRef41N93W

VicsPC said:


> Hwinf064 should be reporting VRM temps. I do notice with mine that if i don't open my window and have warm air just recirculate my hotspot temp gets a few degrees warmer then if i have the window open, which leads me to believe its either a spot between the VRMs that doesn't get cooled by the water or just a random sensor thats possibly reading wrong. If it's not affecting performance i really wouldnt worry about it.



Hmm, I don't see my VRM temps anywhere. I just downloaded HWiNFO64 as well to check, but nope. Here's what it shows:

https://imgur.com/a/rvYGz


----------



## vmlinuzz

The kit arrived 

https://m.imgur.com/a/S5lXQ


----------



## SavantStrike

MapRef41N93W said:


> I'm having this same problem right now. Random restarts at odd times. What made you think the VRMs weren't cooled right? I've opened the block numerous times and it appears there is contact on the thermal pads. I am using the stock and thought about ordering fujipoly to rule that out, but wasn't sure which size I was supposed to get. I guess I need to go back to stock temporarily and rule out the card having issues.
> 
> 
> 
> I don't have a way to check that. I don't own a temp gun or anything.


HWMonitor. You need a relatively new driver version and it will give you VRM temperatures.

I added extra thermal pads to the cards before assembling them, to cool parts of the cards that had machined surfaces on the blocks but no instructions from Bykski to cool them. I also didn't cut squares for the VRMs (just installed long strips and screwed through them); I think this might have led to suboptimal contact. On one card I also forgot to remove the plastic backing from one of the pads. I kept tearing my loop down and removing a card at a time, and it wasn't until the last card that I found the unstable one. The first two I removed were probably fine; it was the last one that was the troublemaker.

The drivers were so bad the machine would lock up and the cards wouldn't come back without a fresh install of Windows, even after using DDU. The behavior was just odd: the cards could mine for about 30-60 seconds before the machine would die spectacularly. They worked better on the Adrenalin drivers, though.

I've never had so many problems with a GPU, and other than the one piece of errant plastic I didn't do a bad job of installation. The cards would boost themselves too far and crash; it had to be VRM temperatures, as the core and HBM were fine.

Vega is a temperamental architecture.


----------



## MapRef41N93W

SavantStrike said:


> Hwmonitor. You need a relatively new driver version and it will give you VRM temperatures.
> 
> I added extra thermal pads to the cards before assembling them to cool parts of the cards that had machined surfaces on the blocks but didn't have instructions to cool them from Bykski. I also didn't cut squares for the VRMs (just installed long strips and screwed through them). I think this might have led to suboptimal contact. One of the pads on one card I also forgot to remove the plastic backing on. I kept tearing my loop down and removing a card at a time, and it wasn't until the last card that I found the unstable card. The first two I removed were probably fine, it was the last one that was the trouble maker.
> 
> The drivers were so bad the machine would lock up and cards wouldn't come back without a fresh install of windows, even after using DDU. The behavior was just odd - the cards could mine for about 30-60 seconds before the machine would die spectacularly. They worked better on the adrenalin drivers though.
> 
> I've never had so many problems with a GPU, and other than the one piece of errant plastic I didn't do a bad job of installation. The cards would boost themselves too far and crash - it had to be VRM Temperatures as the core and hbm were fine.
> 
> Vega is a temperamental architecture.


I just installed the latest Adrenalin (I was on the blockchain beta before) and I still don't have any VRM temperatures.

There were no instructions that came with mine (there was a booklet with a bunch of instructions, all of which were for non-Vega cards), so I simply followed a video online for the Barrow block, which is almost identical. I always cut squares for the square connectors and the small areas of VRMs, but leave a long strip for the long row of VRMs on the right side. I also tried placing the pads on the block vs. directly onto the card and found some areas didn't make contact properly if placed onto the block. The sad thing is my friend has an RX Vega 64 with a Barrow block and it runs like a dream. I installed the block for him myself. He does 2050h/s+ without even using a timing table, just by upping his card's mem to 1100MHz... His hotspot runs super cool as well. I really regret not spending the extra $4 for a Barrow block.


----------



## sega4ever

I'm trying to find more info about the strange behavior of my vega card.

msi vega 64 lc edition

gpu:
p6 1667mhz 900mv
p7 1672mhz 900mv

memory:
1100 950mv

So I get that the memory voltage setting acts as the voltage floor, so the core won't go lower than 950mV in my case. What I want to know is why setting the voltage floor to 951mV causes such a drastic jump in clock speeds when a 1mV increase should be tiny. I like using the PUBG lobby screen as a test, and I use Wattman to undervolt.

950mv:
1519mhz core
195w

951mv:
1565mhz core
220w

Now that I'm checking there seems to be steps to voltage with nothing in between.

900mv* to 950mv:
1519mhz core
195w

951mv to 1000mv:
1565mhz core
220w

1001mv to 1050mv:
1602mhz core
250w

1051mv to 1100mv:
1611mhz core
285w

*900mv causes the hbm speed to drop. Does this undervolting behavior happen to anyone else?
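The stepped behavior above can be summarized as a small lookup: the voltage floor appears to snap to 50mV bins, each with a fixed clock/power result. This is just a sketch encoding the numbers from this post (the bin edges and values come from the measurements above, not from any AMD documentation); other cards will land on different values.

```python
# Sketch of the discrete voltage/clock steps reported in the post above.
# Bin edges (950/1000/1050/1100 mV) and the clock/power values are taken
# from the post's measurements; real cards will differ per sample.
def observed_state(floor_mv: int):
    """Map a requested voltage floor (mV) to the observed (core MHz, board W) step."""
    steps = [
        (950, (1519, 195)),   # 900-950 mV all land on the same step
        (1000, (1565, 220)),  # 951-1000 mV
        (1050, (1602, 250)),  # 1001-1050 mV
        (1100, (1611, 285)),  # 1051-1100 mV
    ]
    for upper, state in steps:
        if floor_mv <= upper:
            return state
    raise ValueError("above tested range")
```

So `observed_state(950)` and `observed_state(951)` fall in different bins, which would explain why a nominal 1mV change produces a ~46MHz / 25W jump.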


----------



## SavantStrike

MapRef41N93W said:


> I just installed the latest Adrenalin ( I was on the blockchain beta before) and I still don't have any VRM temperatures.
> 
> There were no instructions that came with mine (there was a booklet with a bunch of instructions, all of which were for non-Vega cards), so I simply followed a video for the Barrow block which is almost identical online. I always cut squares for the square RJ-64 connectors and the small areas of VRMs, but leave a long strip for the long area of VRMs on the right side. I also checked both placing the pads on the block vs directly onto the card and found some areas didn't make contact properly if placed onto the block. The sad thing is my friend has an RX Vega 64 with a Barrow block and it runs like a dream. I installed the block for him myself. He does 2050h/s+ without even using a timing table by just upping his card mem to 1100MHz... His hotspot runs super cool as well. I really regret not spending the extra $4 for a Barrow block.


While I had assumed the blocks were identical (they appear to be on the Nvidia side), there is a difference between Barrow and Bykski for Vega. They are still extremely similar and both use M2.5 screws, so the main difference may be the mounting hardware and the Barrow blocks covering a couple of extra chokes. I don't think the performance should be that radically different.


----------



## MapRef41N93W

SavantStrike said:


> While I had assumed the blocks were identical (they appear so on the nvidia side) there is a difference between Barrow and Bykski for the Vega. They are still extremely similar and both use m2.5 screws, so the main difference may be mounting hardware and the Barrow blocks covering a couple extra chokes. I don't think the performance should be that radically different.


The only difference I could find is that the Bykski block requires the stock X-plate screws while the Barrow block has you swap them for the silver spring screws you use everywhere else. But whatever else is different in the actual block design seems to make the Barrow block work with zero hassle while the Bykski one causes all sorts of headaches. I may contact the seller about my Bykski block and see if he will swap me for a Barrow one.


----------



## VicsPC

For those who don't get VRM temps: I only get them every so often. I'll restart my PC and have them, then cold boot and not have them. It only takes having the readings show up once to see what temps you're getting.


----------



## bill1971

My Vega 56 with custom watercooling, flashed to the Vega 64 Liquid BIOS, scores 17500 points in 3DMark. Is that good? What do you think?
https://www.3dmark.com/3dm/25294782


----------



## whiteskymage

Guys, I got a small question:
What are Socket PowerLimit, Battery PowerLimit and Small PowerLimit? Which of them, when changed, do you think will let the GPU use more power, like in Steve's video:




I am trying to remove as much power restriction as possible from Vega in order to get all the performance I can, at whatever power cost. I bought an EK waterblock for my Vega and I am cooling it with 2 radiators, so cooling isn't an issue. I don't care about power consumption or my power bill, because I will be playing games with this unrestrained Vega 64.

For now, this is what my soft PowerPlay table looks like:
"PP_PhmSoftPowerPlayTable"=hex:82,02,08,01,00,5C,00,22,07,00,00,03,2B,00,00,1B,\
00,48,00,00,00,80,A9,03,00,F0,49,02,00,*96*,00,08,00,00,00,00,00,00,00,00,00,\
00,00,00,00,00,02,01,5C,00,1B,02,12,02,94,00,6A,01,B4,00,FE,00,7A,00,8C,00,\
88,01,00,00,00,00,3E,02,00,00,90,00,74,02,39,01,0F,01,63,01,00,71,02,00,71,\
02,02,02,00,00,00,00,00,00,08,00,00,00,00,00,00,00,05,00,07,00,03,00,05,00,\
00,00,00,00,00,00,01,08,20,03,84,03,B6,03,E8,03,1A,04,4C,04,7E,04,B0,04,01,\
01,46,05,01,01,84,03,00,06,60,EA,00,00,00,40,19,01,00,01,DC,4A,01,00,02,00,\
77,01,00,03,90,91,01,00,04,6C,B0,01,00,05,00,08,D0,4C,01,00,00,00,80,00,00,\
1C,83,01,00,01,00,00,00,00,88,BC,01,00,02,00,00,00,00,B4,EF,01,00,03,00,00,\
00,00,90,0E,02,00,04,00,00,00,00,80,32,02,00,05,00,00,00,00,E0,54,02,00,06,\
00,00,00,00,00,71,02,00,07,00,00,00,00,00,03,60,EA,00,00,00,40,19,01,00,00,\
80,38,01,00,00,00,08,28,6E,00,00,00,2C,C9,00,00,01,F8,0B,01,00,02,80,38,01,\
00,03,90,5F,01,00,04,F4,91,01,00,05,D0,B0,01,00,06,C0,D4,01,00,07,00,08,6C,\
39,00,00,00,24,5E,00,00,01,FC,85,00,00,02,AC,BC,00,00,03,34,D0,00,00,04,68,\
6E,01,00,05,08,97,01,00,06,EC,A3,01,00,07,00,01,68,3C,01,00,00,01,04,3C,41,\
00,00,00,00,00,50,C3,00,00,01,00,00,80,38,01,00,02,00,00,24,71,01,00,03,00,\
00,01,08,00,98,85,00,00,78,B4,00,00,60,EA,00,00,50,C3,00,00,01,80,BB,00,00,\
60,EA,00,00,94,0B,01,00,50,C3,00,00,02,78,FF,00,00,40,19,01,00,B4,27,01,00,\
50,C3,00,00,03,B4,27,01,00,DC,4A,01,00,DC,4A,01,00,50,C3,00,00,04,DC,4A,01,\
00,90,5F,01,00,90,5F,01,00,50,C3,00,00,05,00,77,01,00,90,91,01,00,00,77,01,\
00,50,C3,00,00,06,90,91,01,00,6C,B0,01,00,00,77,01,00,50,C3,00,00,07,6C,B0,\
01,00,6C,B0,01,00,90,91,01,00,50,C3,00,00,01,18,00,00,00,00,00,00,00,0B,E4,\
12,D0,07,D0,07,50,00,0A,00,54,03,90,01,90,01,90,01,90,01,90,01,90,01,90,01,\
00,00,00,00,00,02,04,31,07,DC,00,DC,00,DC,00,*90,01*,00,00,59,00,69,00,49,00,\
49,00,5F,00,73,00,73,00,64,00,40,00,90,92,97,60,96,00,90,55,00,00,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,00,02,02,D4,30,00,00,02,10,60,EA,00,00,02,\
10
As you can see, the Socket PowerLimit, Battery PowerLimit and Small PowerLimit are all 220W. I have raised the TDC limit to 400A (90,01) from the stock 300A (2C,01).
What would happen if I increase the Socket, Battery and Small power limits?
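For anyone poking at these tables: values like the TDC limit are stored as little-endian integers in the byte string, so `90,01` decodes to 0x0190 = 400 and `2C,01` to 0x012C = 300. A quick decoding sketch (the field meanings here are as described in the post; the byte offsets themselves vary between PowerPlay table versions, so treat any offset as an assumption):

```python
# Soft PowerPlay tables are flat byte strings; multi-byte fields such as
# the TDC limit are little-endian. This decodes the "lo,hi" hex-pair
# notation used in the registry dump above.
def decode_le16(byte_pair: str) -> int:
    """Decode a '90,01'-style little-endian byte pair to an integer."""
    lo, hi = (int(b, 16) for b in byte_pair.split(","))
    return lo | (hi << 8)

print(decode_le16("90,01"))  # 400 - the modified TDC limit (amps)
print(decode_le16("2C,01"))  # 300 - the stock TDC limit (amps)
```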


----------



## Ne01 OnnA

bill1971 said:


> my vega 56 with custom watercooling,flashed to vega 64 liquid bios,score 17500 points in 3d mark,is it good?what do you think?
> https://www.3dmark.com/3dm/25294782


You have a weak Combined score IMO (it should be ~7k, or 35-40 FPS).

My Fiji gets 30-32 FPS (Unlocked Full X, tMOD, Chiped & new Fox Mod)


----------



## bill1971

Ne01 OnnA said:


> You have Weak Combined score IMO (it should be ~7k or 35-40FPS)
> 
> My Fiji have 30-32FPS (Unlocked Full X, tMOD, Chiped & new Fox Mod)


What's the reason it's so weak?


----------



## Ne01 OnnA

bill1971 said:


> Whats the reason so weak?


Try to undervolt + OC (w/HBM) and +12-25% power.
The point is to not exceed the GPU's max voltage/wattage capabilities (every GPU is different).

Edit 1
1595/1000 at 1.1v w/1.1v HBM linear GPU w/HBM


----------



## rv8000

bill1971 said:


> Whats the reason so weak?


It's a threading issue with Ryzen; it doesn't have anything to do with your GPU.

http://www.overclock.net/forum/10-amd-cpus/1627430-tale-ryzen-firestrike-problems-ahead.html


----------



## seniorfallrisk

Ne01 OnnA said:


> Try, to Undevolt + OC (w/HBM) and +12-25% POW
> The point is to not exceed GPU V/tW Max capabilities (every GPU is diffrent)
> 
> 1595/1000 at 1.1v w/1v HBM


Just so you know, that 1V on your HBM just means that your core voltage will float between 1000mV and 1100mV. I've found more stability in matching both. In my case I've got 1592/945, as my Hynix refuses to go any higher with 1100mV on both voltages. It also seems to have reduced my VRM temps a little, but the VRM cooling and heat situation on the Strix cards is a joke.


----------



## Newbie2009

bill1971 said:


> my vega 56 with custom watercooling,flashed to vega 64 liquid bios,score 17500 points in 3d mark,is it good?what do you think?
> https://www.3dmark.com/3dm/25294782


Ignore the overall, all about the graphics score. Below for comparison, 1750 core.

https://www.3dmark.com/fs/14795369


----------



## bill1971

Newbie2009 said:


> Ignore the overall, all about the graphics score. Below for comparison, 1750 core.
> 
> https://www.3dmark.com/fs/14795369


You own a 64 liquid or You have flashed 64 liquid bios to a 56-64?


----------



## Nightrider84

*RX Vega Downclocking*

I just recently picked up an RX Vega 64 Strix, and with the current drivers it won't let me lock the P-states through Wattman or Overdrive. Should I be using an older driver to make them available, or is there something I'm missing? Really getting sick of this damn card downclocking and stuttering during games.


----------



## y0bailey

How many people are having long-term success with the reference 56 flashed to the 64 WC BIOS? I'm enjoying cold temps in my basement right now, and even with reference air cooling I think I can maintain cooling without too much noise.

Anyone tried it and had luck?


----------



## Newbie2009

bill1971 said:


> You own a 64 liquid or You have flashed 64 liquid bios to a 56-64?


AIR 64 with liquid bios custom loop.

On another note finally broke 26k graphics score on firestrike.


----------



## geriatricpollywog

bill1971 said:


> my vega 56 with custom watercooling,flashed to vega 64 liquid bios,score 17500 points in 3d mark,is it good?what do you think?
> https://www.3dmark.com/3dm/25294782


http://hwbot.org/benchmark/3dmark_-...Id=videocard_2879&cores=1#start=0#interval=20

Here are some results for comparison.


----------



## seniorfallrisk

I can once again confirm that AIB Vega cards (at least the Strix) *do not support 56->64 BIOS flashing properly*. I have nothing but issues on top of issues when I flash a 64 BIOS onto my Strix 56: no HBM or core values are stable, and they give me problems when gaming.

The only place where I saw better performance was in 3DMark when running my 64 BIOS flash. I couldn't run many games, and when I could, I often had insane crashing issues.


----------



## asus1889

I have a Vega Frontier Liquid here. The idle fan noise sucks: it runs at ~900 RPM minimum. My computer would be totally silent without this fan noise.

I know you can open the shroud and replace the fan, but you need a Torx screwdriver, which I don't have.

But I would first try another way if possible.

The biggest problem with the Vega Frontier is that there is no Wattman (which would probably allow fan control below 15%; Afterburner's and Trixx 6.5's fan settings have no effect) and no driver switch. I've tested driver version 17.8 from August '17 and two newer drivers, which you can find on the AMD website.


----------



## cg4200

Not sure what you mean by no Wattman?
I am using two Vega Frontier Editions together...
With 1 card: uninstall whatever driver you are using (sometimes I use DDU, but most times I don't). After the uninstall, get the new Radeon Pro 18.Q1 (the previous one worked the same, if you don't want the newest). During the install, select custom, and it will ask if you want multiple drivers - yes, you do. After you're done, restart; the switch-driver option will show 17.1-something and 18.1...
If you have two Vega FEs installed, the switch-driver option will not show up in the bottom left. The TRICK is: Step 1, install only 1 Vega FE card and follow the instructions above to get the game driver of your choice installed. Step 2, add the 2nd Vega FE card and turn on the computer (it might take a minute or two longer the first time). Step 3, open AMD Radeon Pro and select switch driver; if 18.1 doesn't show up, don't worry, it sometimes takes one more step - install 17.1 or 17.2, restart the computer, select AMD Pro again, and the 18.1 gaming driver should be there.
I use Wattman running 2 Vega Frontiers together... hope that helps.


----------



## Grummpy

I can't figure out why my computer just shuts down in some games.
In other games I don't have any problems pushing high power usage.
Others push low power usage and it shuts down.
It's like someone pulled the plug out of the wall. It doesn't give me any event in Event Viewer because there is no time to write one.
It isn't my power supply - that is brand new, and I have seen it happen while pulling less than 400 watts at the wall.
It's driving me crazy and I have nowhere to turn.


----------



## Newbie2009

Grummpy said:


> I cant figure out why my computer just shuts down on some games
> other games i dont have any problems pushing high power usage with no problem.
> others pushing low power usage and it shuts down.
> its like someone pulled the plug out the wall it dont give me any event in event viewer because there is no time to write one.
> It isnt my power supply that is bran new and i have seen it happen wile pulling less than 400 watt at wall.
> driving me crazy i have no where to turn.


Is it a new build? Maybe something touching something that shouldn't?


----------



## asus1889

Wattman isn't there in the 18.Q1 driver; several reinstallations with DDU didn't help. Concerning the fan problem: I oiled the fan with machine oil (non-mineral oil) and it didn't help. Then I cut the fan wires behind the soldering and insulated them separately. Some users speculated that the card would not start anymore with no fan connected, because a protection circuit would disable the card to prevent overheating. So I went with this method at least.

As you can see, this speculation is nonsense.

Replacing the fan with a 10-Euro, 1200 RPM max, 12 cm Akasa fan reduces the fan noise dramatically without increasing the temperature, which you might expect when swapping a 2800 RPM fan for a 1200 RPM one.

Shame on AMD. What a .... fan on such an expensive card, and no PWM socket outside the shroud for easier replacement.


----------



## SavantStrike

seniorfallrisk said:


> I can once again confirm that AIB Vega (atleast Strix) *do not support 56->64 bios flashing properly*. I have nothing but issues ontop of issues when I flash a 64 bios onto my Strix 56. No HBM or Core values are stable and give me issues for my issues when gaming.
> 
> The only place where I saw better performance was in 3DMark when running my 64 bios flash. I couldn't run many games, and I often had insane crashing issues if I could run it.


The 56 and 64 probably use different HBM2 on these later production models.


----------



## Skinnered

Grummpy said:


> I cant figure out why my computer just shuts down on some games
> other games i dont have any problems pushing high power usage with no problem.
> others pushing low power usage and it shuts down.
> its like someone pulled the plug out the wall it dont give me any event in event viewer because there is no time to write one.
> It isnt my power supply that is bran new and i have seen it happen wile pulling less than 400 watt at wall.
> driving me crazy i have no where to turn.


Same here. I changed connectors and made sure the rails weren't shared, but no dice. Running two Vegas in CF.
I can reproduce it on demand in GTA IV (ENB) after a certain point.


----------



## geriatricpollywog

I set a new personal best Firestrike graphics score of 27448.

I notice a lot of the scores in HW Bot have the message "Benchmark tessellation load modified by AMD Catalyst driver, result invalid." Are they cheating?


----------



## VicsPC

0451 said:


> I set a new personal best Firestrike graphics score of 27448.
> 
> I notice a lot of the scores in HW Bot have the message "Benchmark tessellation load modified by AMD Catalyst driver, result invalid." Are they cheating?


No, they just use AMD-optimized tessellation settings in Adrenalin. I do the same, so it doesn't matter. A lot of software is optimized for Nvidia and uses excessive tessellation, so this is AMD's way of countering.


----------



## Spacebug

VicsPC said:


> No they just use amd optimized tessellation settings in Adrenalin, i do the same so it doesn't matter. A lot of software is optimized for nvidia and use excessive tessellation so this is AMDs way of countering.


Perhaps, but maybe not, I think...
For HWBot it is allowed to disable tessellation in the AMD driver for some benchmarks.
Many moons ago, Firestrike couldn't detect whether tessellation was turned off or not, which gave higher scores.
Because it was possible back then, HWBot still allows disabled tessellation for AMD cards.

For Nvidia users, modifying level-of-detail settings is allowed for HWBot rankings.

It's not allowed for the "official" Firestrike rankings though...

At least that's what I've heard...


----------



## Blameless

It's true that there are titles that use excessive tessellation, far beyond that which would actually benefit IQ, often only to make hardware with lower geometry performance look worse than it otherwise would. However, removing the bias on a test that's supposed to be biased is still cheating. For better or worse, certain tessellation factors are mandated by Firestrike, which is why altering them produces invalid scores.

HWbot can set different rules, obviously, and what constitutes cheating or not is relative to the rules in question.


----------



## prosen

*Vega Frontier*

Hey guys, Vega frontier is currently the cheapest high end video card in my area, how does it stack up to Vega 64 in gaming with current drivers?


----------



## Call

*Should I try the Morpheus II on my Vega FE?*

So I have a Vega Frontier, and have been impressed so far. It's just that sometimes it gets too hot... even undervolted to 1100mV on P6/P7 for 1602MHz (with +50% power, of course).

So I was thinking of installing a Morpheus II on it. My build is air cooled by 200 CFM Delta industrial 120mm fans, and the stock blower on my Vega FE can't hold any OC well even at max RPM. With the open Morpheus II I feel like it would be much, much better considering the Delta fans. (Yes, it can get loud, haha.)

I saw this: https://imgur.com/gallery/rTLha 
Should I go for it? Or am I just not overclocking correctly or something? (I use wattman)

Thanks guys!


----------



## plywood99

Grummpy said:


> I cant figure out why my computer just shuts down on some games
> other games i dont have any problems pushing high power usage with no problem.
> others pushing low power usage and it shuts down.
> its like someone pulled the plug out the wall it dont give me any event in event viewer because there is no time to write one.
> It isnt my power supply that is bran new and i have seen it happen wile pulling less than 400 watt at wall.
> driving me crazy i have no where to turn.


This happened to me when I first got my Vega 64 LC. Problem ended up being the power cables from the psu to the card. I replaced the cables and all is well.


----------



## asus1889

My Vega Frontier Liquid doesn't go above 1360-1400 MHz GPU clock at 100% GPU utilization. Why?

The GPU temp is around 42°C.

Has anyone flashed a Frontier with an RX Vega 64 BIOS in order to get access to Wattman?


----------



## seniorfallrisk

asus1889 said:


> My Vega Frontier liquid doesnt go above 1360 - 1400 MHz gpu clock with 100 % gpu utilization. Why ?
> 
> The gpu temp is arround 42 °C.
> 
> Who has flashed a frontier with a rx vega 64 bios in order to get access of wattman ?


If you ever read or googled anything, you would know that you cannot flash RX Vega BIOSes to an FE. You're running into power limits, probably because you never bothered to figure out why. Undervolt your card or use a PowerPlay table to expand your possibilities... At least try using the available knowledge before asking questions.


----------



## Daedar

Use the switch mode and select the gaming drivers; both Pro and gaming drivers can be installed at the same time. I have the air-cooled Frontier card with mem at 1075 MHz, GPU at 1590, +50 power, fan at 4800 RPM. I've only owned it for 2 weeks, so I haven't had the chance to play with it and lower the voltages a bit.


----------



## diggiddi

So what's the best Non Ref Vega? The Strix or Nitro?


----------



## kondziowy

Don't know about the PCB, but for cooling the best by far is the Nitro LE because of its vapor chamber.
The standard Nitro and Pulse don't have reviews yet, or I just didn't see any.


----------



## diggiddi

What about overclocking?


----------



## prosen

Can anyone give me some insight on current gaming performance/stability of Vega frontier edition? 

Also:

-How does it stack up with Rx vega 64? 
-Does the extra 8gb memory make any difference in games?

Any info would be much appreciated. Thanks.


----------



## TrixX

diggiddi said:


> So what's the best Non Ref Vega? The Strix or Nitro?


Honestly, neither. Just wait for the Vega refresh later this year. Vega has unfortunately been dropped by AMD like a hot coal. That doesn't stop me enjoying mine, but it hasn't got the expected level of support.


----------



## Grummpy

> Vega has been dropped by AMD like a hot coal

Don't be stupid.


----------



## diabetes

The Vega refresh this year will be a pipe cleaner for GlobalFoundries' 7nm process and only a HPC compute card without display outputs (MI-series).


----------



## microchidism

Hey guys, in general how has gaming support for RX Vega been? I figure a high percentage of AMD cards are tied up in mining; I'm not sure if that has reduced AMD's support for them in games.

I have a 1080 but have been thinking of putting it up for trade/sale for a Vega 64.


----------



## TrixX

Grummpy said:


> Vega has been dropped by AMD like a hot coal
> dont be stupid.


I'm not being stupid. For instance I didn't break a Vega by mis-mounting a custom cooling solution on it.

The support for Vega gen 1 has been dropped like a hot coal, many games are still having issues with support, especially older games or games running on an older game engine requiring DX9 for instance. Hopefully the support will grow as we get the Vega refresh and the drivers will get sorted out a bit for better all round support.

Overall for new games and very heavily played games, Vega has had good support, so the PUBG's and GTA V's of the gaming world are fine, but more niche games like Stellaris and iRacing certainly don't get the support at the same level and therefore lose out heavily to Nvidia in those areas. Anything DX10 or older plays better on older hardware than Vega, my R9 290 even outperforms it in many of the DX9 games I have.

So it's a mixed bag really. I'm hoping the refresh will unlock some more performance and the drivers will get expanded support. I'm not exactly keen to get an Nvidia currently.


----------



## spacemonkey99

RX 56 reference here. I was hoping to get an opinion on aftermarket cooling: are the performance gains on the 56 worth it?
And if so, AIO vs. VGA cooler, such as the Morpheus 2?

Thank you


----------



## surfinchina

TrixX said:


> I'm not being stupid. For instance I didn't break a Vega by mis-mounting a custom cooling solution on it.
> 
> The support for Vega gen 1 has been dropped like a hot coal, many games are still having issues with support, especially older games or games running on an older game engine requiring DX9 for instance. Hopefully the support will grow as we get the Vega refresh and the drivers will get sorted out a bit for better all round support.
> 
> Overall for new games and very heavily played games, Vega has had good support, so the PUBG's and GTA V's of the gaming world are fine, but more niche games like Stellaris and iRacing certainly don't get the support at the same level and therefore lose out heavily to Nvidia in those areas. Anything DX10 or older plays better on older hardware than Vega, my R9 290 even outperforms it in many of the DX9 games I have.
> 
> So it's a mixed bag really. I'm hoping the refresh will unlock some more performance and the drivers will get expanded support. I'm not exactly keen to get an Nvidia currently.


So far as I can figure out, the Vega was only ever supposed to be a workstation type crossover. They tacked on some gaming capabilities with the driver and some 3rd parties sold it in a pretty dodgy (saying it was good for gaming) way. I use mine not for games - only for rendering and working around in a model and it's great. Unless I spring for a Quadro at triple the price it can't be beat.
Support for the Vega and any next gens will always be aimed at people like me, in my opinion, so don't hold your breath over getting better gaming support.


----------



## TrixX

surfinchina said:


> So far as I can figure out, the Vega was only ever supposed to be a workstation type crossover. They tacked on some gaming capabilities with the driver and some 3rd parties sold it in a pretty dodgy (saying it was good for gaming) way. I use mine not for games - only for rendering and working around in a model and it's great. Unless I spring for a Quadro at triple the price it can't be beat.
> Support for the Vega and any next gens will always be aimed at people like me, in my opinion, so don't hold your breath over getting better gaming support.


Don't worry I'm not, Navi is likely to be the one that focuses more on gaming, so that's what I'm looking at for the future. My Vega is fine for what I use it for, however a couple of instances it runs into issues. The driver support needs work to overcome those and it's whether AMD can see the value in the cost of making those drivers support that. Hopefully they will, but I'm not counting on it


----------



## SavantStrike

spacemonkey99 said:


> Rx 56 reference here. Was hoping to get an opinion on aftermarket cooling. Are the performance gains on 56 worth it?
> And if so, aio vs vga cooler...such as morpheus 2.
> 
> Thank you


The Morpheus is comically large (3 slots) yet doesn't do a good job cooling the VRMs. HBM is fragile, so I personally wouldn't strap some big abomination onto my card.

A custom loop would be the best solution, but if that's not an option, Alphacool makes the Eisbaer, which is a block and a pump/rad combo with quick disconnects. It's not terribly expensive and doesn't have the mixed-metal issues that plague AIOs.


----------



## Call

SavantStrike said:


> The Morpheus is comically large (3 slots) yet doesn't do a good job cooling VRMs. HBM is fragile, so I wouldn't go strapping some big abomination on my card personally.
> 
> A custom loop would be the best solution, but if that's not an option, alpha cool makes the eisbear which is a block and a pump/rad combo with quick disconnects. It's not terribly expensive and doesn't have the mixed metal issues that plague AIOs.


That will work on a Vega Frontier? 
https://www.alphacool.com/shop/neue-produkte/20227/alphacool-eisbaer-280-cpu-black
Is that the one you mean?

I've also thought about getting the morpheus II, but been afraid of it putting too much stress on it structurally.


----------



## SavantStrike

Call said:


> That will work on a Vega Frontier?
> https://www.alphacool.com/shop/neue-produkte/20227/alphacool-eisbaer-280-cpu-black
> Is that the one you mean?
> 
> I've also thought about getting the morpheus II, but been afraid of it putting too much stress on it structurally.


I'll post when I get home. The Alphacool design is a modular pre-filled radiator with a block/pump combo that sits on one of their full-cover blocks.

I stated the wrong product line: you want the Eiswolf. It comes in a 120 and a 240 version for the RX Vega (and will fit V56, V64, and Vega FE reference designs).

The Alphacool costs more but also performs better, and has a price in line with . It's upgradeable too: you can swap the block on the card for a new one very easily and keep the rest of the system.


----------



## elox

SavantStrike said:


> The Morpheus is comically large (3 slots) yet doesn't do a good job cooling VRMs. HBM is fragile, so I wouldn't go strapping some big abomination on my card personally.
> 
> A custom loop would be the best solution, but if that's not an option, alpha cool makes the eisbear which is a block and a pump/rad combo with quick disconnects. It's not terribly expensive and doesn't have the mixed metal issues that plague AIOs.


I never saw either of the two VRM sensors go higher than 92 degrees with the Morpheus 2.


----------



## Agostinho

Hello guys,

I need your help. I've had my RX Vega 64 Limited Edition for about 6 months. About 1 month ago it started to make an electrical noise with the CPU under load. Is it normal? Or do I have to send it in for warranty?

I leave below a video with the noise.







Best regards

Sent from my LG-H870 via Tapatalk


----------



## prosen

Sounds like coil whine, difficult to tell though. All of this is me assuming you mean Gpu not Cpu. Coil whine is fine, my gtx 680 has been whining like a spoiled brat since I bought it 6 years ago.


----------



## os2wiz

SavantStrike said:


> I'll post when I get home. The alpha cool design is a modular pre-filled radiator with a block/pump combo that sits on one of their full cover blocks.
> 
> I stated the wrong product line. You want the eiswolf. It comes in a 120 and a 240 version for the RX Vega (and will fit V56, V64, and Vega FE reference designs).
> 
> The alpha cool costs more but also performs better, and has a price in line with . It's upgradeable too, you can swap the block on the card with a new one very easily and keep the rest of the system.


The Eiswolf GPX Pro 240 is overkill; the 120 version is more than adequate cooling for the GPU.


----------



## TrixX

os2wiz said:


> The Eiswolf GPX Pro 240 is overkill. the 120 version is more than adequate cooling for the gpu.


The 240 will allow lower fan speeds and have a higher wattage dissipation than the 120 which seems to be designed for the Nvidia range of cards. So it's good up to 300W for the 120, but as we know quite well the Vega is a hot chip and 300W is easy with a bit of tweaking if going for max clocks and performance. So the 240 should cover that base nicely. Either will be an improvement over the stock blower.


----------



## VicsPC

os2wiz said:


> The Eiswolf GPX Pro 240 is overkill. the 120 version is more than adequate cooling for the gpu.


You're kidding, right? For decent temps a 120 only dissipates about 100W of TDP; a 240 is closer to 210-220W and a 360 is around 300-320W. A 240 is nowhere NEAR overkill - calling it that is a total joke. A 360 and a 240 on my 1700 and Vega 64 keep my GPU temps around 40°C and my CPU around 47°C max. That's with around 340W from the CPU and GPU alone, not including heat from the VRMs and the pump itself. A 240 is definitely not overkill; in fact it's just right.
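As a back-of-the-envelope check on those figures: at roughly 100W per 120mm of radiator (the rule of thumb quoted above - an assumption, not measured data), a 340W CPU+GPU load needs at least 480mm of radiator, which a 360+240 combination (600mm) covers with headroom.

```python
import math

# Rule-of-thumb dissipation per 120 mm of radiator at moderate fan speeds.
# This figure is an assumption taken from the post above, not measured data;
# thick radiators and high-RPM fans dissipate considerably more.
WATTS_PER_120MM = 100

def radiator_mm_needed(heat_watts: float) -> int:
    """Minimum total radiator length, rounded up to whole 120 mm sections."""
    return 120 * math.ceil(heat_watts / WATTS_PER_120MM)

print(radiator_mm_needed(340))  # CPU + GPU load from the post -> 480
```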


----------



## Sufferage

VicsPC said:


> You're kidding, right? For decent temps a 120 only dissipates about 100W of TDP; a 240 is closer to 210-220W and a 360 is around 300-320W. A 240 is nowhere NEAR overkill for this card. A 360 and a 240 on my 1700 and Vega 64 keep my GPU around 40°C and my CPU around 47°C max, and that's with around 340W from the CPU and GPU alone, not counting heat from the VRMs and the pump itself. A 240 is definitely not overkill; in fact it's just right.



Absolutely correct. In fact, even the standard 240 wasn't quite good enough for me; I exchanged it for a NexXxos Monsta 240, which is way better...


----------



## VicsPC

Sufferage said:


> Absolutely correct. In fact, even the standard 240 wasn't quite good enough for me; I exchanged it for a NexXxos Monsta 240, which is way better...


Of course radiator thickness, static pressure and the pump are going to make a difference, but I find a 120 is the absolute minimum for a GPU, especially a Vega. Most people seem to get around 50-60°C on the 64 LC, while mine uncapped probably hits around 40°C with HBM at 43°C; depending on the game, sometimes it won't reach that at all.


----------



## axos

Hi guys.
This is my first post here.
To be brief: I have a Vega 56 Strix with Samsung memory. I have read on the net that all 56 Strix cards come with Hynix memory.
When I tried to flash it to the Strix 64 BIOS it wouldn't let me. Any help here?
EDIT: The interesting part is that when I googled the BIOS version it looks like a 64 BIOS, but it shows the shader count of a Vega 56.


----------



## SavantStrike

elox said:


> Never saw one of two VRM sensors higher then 92 degree with morpheus 2.


92°C isn't what I would call stellar. An aftermarket solution should do better, especially when it takes up three slots.


----------



## seniorfallrisk

axos said:


> Hi guys.
> This is my first post here.
> To be brief: I have a Vega 56 Strix with Samsung memory. I have read on the net that all 56 Strix cards come with Hynix memory.
> When I tried to flash it to the Strix 64 BIOS it wouldn't let me. Any help here?
> EDIT: The interesting part is that when I googled the BIOS version it looks like a 64 BIOS, but it shows the shader count of a Vega 56.


My first Strix 56 also had Samsung memory according to the BIOS, but behaved entirely like Hynix HBM, just with better performance. I felt like I was having a lot of issues with the card and got a replacement, which has Hynix. You need to force flash the Vega 64 BIOS, but be warned: no matter what I've done, I *always had stability issues*.

On both my "Samsung" and Hynix Strix 56s, I met nothing but issues with a flashed vBIOS. The funny thing is, all AIB Vega 56 cards are supposed to have Hynix HBM, as the Samsung HBM seems to be reserved for the Vega 64 cards.


----------



## Gdourado

I haven't followed Vega since launch.
So I am wondering, with current drivers and some time since launch, how is the performance of Vega 56 now? And how does it stack up against Nvidia at both 1080p and 1440p in recent late-2017 and 2018 games?

Cheers!


----------



## Call

VicsPC said:


> Of course radiator thickness, static pressure and the pump are going to make a difference, but I find a 120 is the absolute minimum for a GPU, especially a Vega. Most people seem to get around 50-60°C on the 64 LC, while mine uncapped probably hits around 40°C with HBM at 43°C; depending on the game, sometimes it won't reach that at all.



I have a Vega Frontier Air; I've been thinking about getting either the eiswolf 240, or getting the 360 rad and the GPU block. Like this:
https://www.alphacool.com/shop/radi...eisbaer-ready-st30-full-copper-360mm-radiator
https://www.alphacool.com/shop/new-...swolf-gpx-pro-ati-rx-vega-m01-incl.-backplate

(or just going with the 240)
https://www.alphacool.com/shop/new-...ool-eiswolf-240-gpx-pro-ati-rx-vega-m01-black

What do you think? Thanks!


----------



## SavantStrike

Call said:


> I have a Vega Frontier Air; I've been thinking about getting either the eiswolf 240, or getting the 360 rad and the GPU block. Like this:
> https://www.alphacool.com/shop/radi...eisbaer-ready-st30-full-copper-360mm-radiator
> https://www.alphacool.com/shop/new-...swolf-gpx-pro-ati-rx-vega-m01-incl.-backplate
> 
> (or just going with the 240)
> https://www.alphacool.com/shop/new-...ool-eiswolf-240-gpx-pro-ati-rx-vega-m01-black
> 
> What do you think? Thanks!


The 240 should be sufficient unless you do some crazy things with power play tables. That said, a 360mm rad is probably the most convenient size on the planet if you repurpose it later.

If you go 360, you might be able to do a full cover block of your choice plus a kit and get the CPU too.


----------



## axos

seniorfallrisk said:


> My first Strix 56 also had Samsung memory according to the BIOS, but behaved entirely like Hynix HBM, just with better performance. I felt like I was having a lot of issues with the card and got a replacement, which has Hynix. You need to force flash the Vega 64 BIOS, but be warned: no matter what I've done, I *always had stability issues*.
> 
> On both my "Samsung" and Hynix Strix 56s, I met nothing but issues with a flashed vBIOS. The funny thing is, all AIB Vega 56 cards are supposed to have Hynix HBM, as the Samsung HBM seems to be reserved for the Vega 64 cards.


Thank you for the reply. How do you "force flash"? I was flashing from Windows because I couldn't make a bootable MS-DOS USB. What kind of stability issues did you have? It looks like they totally messed up the Vega line. HBM temp is sometimes lower and sometimes higher than the GPU. Wattman doesn't follow HBM temps, so the HBM often overheats if there is not enough load on the GPU, etc. :thinking:


----------



## Call

SavantStrike said:


> The 240 should be sufficient unless you do some crazy things with power play tables. That said, a 360mm rad is probably the most convenient size on the planet if you repurpose it later.
> 
> If you go 360, you might be able to do a full cover block of your choice plus a kit and get the CPU too.


There are other full cover blocks, like EK? I'm up for whatever will best cool the Vega Frontier monster, haha.

What other options are you talking about? Thanks!!


----------



## SavantStrike

Call said:


> There's other full cover blocks; like EK? I'm up for whatever will be best to cool the vega frontier monster, haha.
> 
> What other options are you talking about? Thanks!!


Oh yeah, there are a ton of full cover blocks available. The Eiswolf is just one of the few kits, and was a suggestion as a step above a normal CLC.

Custom loop, I would go for a watercool heatkiller block if you're okay with replacing the back plate, or an XSPC razor or Phanteks glacier if you want to keep the sweet FE back plate. If you like the acrylic look then bitspower on the high end or Barrow on the low end are solid options. All are within a degree or two of one another (but a few degrees cooler than the alphacool). EK does make a block, but it's not as nice as it could be for the money.


----------



## Vincendre

Hey, sorry if someone already asked, but I'm wondering if using the Enzotech copper heatsinks on the Vega VRM and chokes is safe?
I'm asking since I saw someone do something similar in the AnandTech Vega builders thread (page 5), but I read somewhere on reddit that it might short circuit the card in some cases.

Since all the heatsinks will be covered by thermal tape, my future Morpheus mod should be fine, shouldn't it?
I hope I'm in the right topic for this. Thanks!


----------



## Call

SavantStrike said:


> Oh yeah, there are a ton of full cover blocks available. The eiswolf is just one of the only kits, and was a suggestion as a step above a normal CLC.
> 
> Custom loop, I would go for a watercool heatkiller block if you're okay with replacing the back plate, or an XSPC razor or Phanteks glacier if you want to keep the sweet FE back plate. If you like the acrylic look then bitspower on the high end or Barrow on the low end are solid options. All are within a degree or two of one another (but a few degrees cooler than the alphacool). EK does make a block, but it's not as nice as it could be for the money.


Oh okay thanks!

My case doesn't have a window, so aesthetics aren't really an issue. What would be the best performance/price? I was leaning towards the Eiswolf because it seemed more straightforward and easy to set up, and since I have a beefy air cooler on the CPU I don't want to go full loop with CPU+GPU.


----------



## SavantStrike

Call said:


> Oh okay thanks!
> 
> My case doesn't have a window, so aesthetics aren't really an issue. What would be the best performance/price? I guess I was leaning towards the eiswolf because it seemed more straightforward and easy to setup. And since I have a beefy air-cooler on the CPU and don't want to go full loop with CPU+GPU.


The Eiswolf is the easiest - nothing to fill or bleed, and no reservoir needed. It's got a decent price too.

Next easiest (and slightly higher performance) is a Barrow block added to a kit that includes just a pump, a reservoir, a radiator, some soft tubing and a couple of fittings. This is more work but should run a few degrees cooler and is more upgradeable. Cost-wise you could have people on this forum telling you to burn a ton of cash, but it's doable for 250-300 USD.


----------



## Call

SavantStrike said:


> The Eiswolf is the easiest - nothing to fill or bleed, and no reservoir needed. It's got a decent price too.
> 
> Next easiest (and slightly higher performance) is a Barrow block added to a kit that includes just a pump, a reservoir, a radiator, some soft tubing and a couple of fittings. This is more work but should run a few degrees cooler and is more upgradeable. Cost-wise you could have people on this forum telling you to burn a ton of cash, but it's doable for 250-300 USD.


Oh okay. I guess I'll just go with the Eiswolf. I know I asked this earlier, but should I go for the 360 or 240? I have very high-CFM industrial 120mm fans and don't really care about sound; would the 360 work better? Also, is the integrated pump on the Eiswolf Vega block suited to driving the 360?

Thanks again for all your help!


----------



## Spacebug

If you have the space for it, always go for the largest radiator(s) possible.
Larger surface area means better cooling performance, and it can be run with lower-RPM fans for less noise while still cooling decently.

Generally speaking...
I don't know anything about the pump in the Eiswolf, but I don't think it would be so weak that it can't cope with the little extra restriction the 360 rad adds compared to the 240.


----------



## SavantStrike

Call said:


> Oh okay. I guess I'll just go with the eiswolf. I know I ask this earlier, but should I go for the 360 or 240? I have very high cfm industrial 120mm fans, and don't really care about sound; would the 360 work better? And also, is the integrated pump on the eiswolf vega block suited to use the 360?
> 
> Thanks again for all your help!


The 360 will be better, and can always be repurposed. If you're okay with the added cost, it's not a bad way to go.


----------



## kondziowy

Sapphire pulse review popped up: https://www.computerbase.de/2018-03/sapphire-radeon-rx-vega-56-pulse-test/

Very nice card. Even Sapphire's budget 2-fan design can run cooler than, or at least on par with, other partners' 3-fan designs. Temperatures are really nice.


----------



## Call

SavantStrike said:


> The 360 will be better, and can always be repurposed. If you're okay with the added cost, its not a bad way to go.


Ohkay! Thanks again so much!!

What about the VRM and other temps? Are they good with the Eiswolf? Or do I need to go with a full block like you were mentioning earlier?

Thanks!


----------



## LeadbyFaith21

Does anyone on here play Sea of Thieves? I've had issues with the water textures disappearing on my Vega powered machine, but nothing on my wife's Fury pc or my laptop (with NVidia).


----------



## diggiddi

Are there any Strix users with the EKWB block or other water cooling? What clock speeds are you getting?


----------



## seniorfallrisk

axos said:


> Thank you for the reply. How do you "force flash"? I was flashing from Windows because I couldn't make a bootable MS-DOS USB. What kind of stability issues did you have? It looks like they totally messed up the Vega line. HBM temp is sometimes lower and sometimes higher than the GPU. Wattman doesn't follow HBM temps, so the HBM often overheats if there is not enough load on the GPU, etc. :thinking:


Force flashing is simply done by running the flashing software from the command line. You can google how to do that; it's pretty simple. Just be careful not to mess up.
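For reference, the command-line route usually means running ATIFlash directly. The sketch below only builds the commonly cited invocation (`-p` to program an adapter, `-f` to force past the subsystem-ID check); flag behaviour varies between ATIFlash versions, so verify against your copy's help output, and treat the file name here as a placeholder.

```python
import subprocess

def build_flash_command(bios_path, adapter=0):
    """Build the commonly cited ATIFlash force-flash command.

    -p <adapter> programs the given adapter, -f skips the subsystem-ID
    check. Verify these flags against your ATIFlash version's help
    output before running anything.
    """
    return ["atiflash", "-p", str(adapter), bios_path, "-f"]

if __name__ == "__main__":
    cmd = build_flash_command("vega64.rom")  # placeholder file name
    print(" ".join(cmd))
    # Only run this once you are certain the BIOS file matches your card:
    # subprocess.run(cmd, check=True)
```

The actual flash call is left commented out on purpose; a bad flash can brick the card.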


----------



## sinnedone

Quick question for you guys in here. 

When it comes to gaming performance, does the Frontier match the Vega 64?

I haven't kept up with the news, but I remember that back at release this wasn't the case. Have drivers matured to the point that the Vega 64 and the 16GB Frontier Edition cards perform the same when gaming?


----------



## Spacewide

When putting 99% load on my Gigabyte RX 56 (64 BIOS) with a waterblock, it only reaches 1173MHz on core and 800MHz on memory.

My settings are custom set to 1600 on core and 960 on memory, though.

Anyone know why? :<


----------



## Brightmist

Not sure if it's been said here before, but I was getting image corruption leading into system crashes, or just general monitor initialization failure/no signal on boot, when my Asus MG278Q monitor was paired with my GB RX Vega 64 OC.

After testing parts in different systems, I pinpointed the problem to a compatibility issue between the GPU, the DP cable and the monitor.

The exact image I get on my screen when the problem happens and the system crashes is attached (the picture belongs to another Vega user on the AMD community site).

If you come across this issue, just try a different DP cable or use HDMI instead; that should clear it right up.

Let me also note that the same DP cable works fine with my GTX 970, so this is probably a Vega-specific issue.


----------



## Naeem

Brightmist said:


> Not sure if it's been said here before, but I was getting image corruption leading into system crashes, or just general monitor initialization failure/no signal on boot, when my Asus MG278Q monitor was paired with my GB RX Vega 64 OC.
> 
> After testing parts in different systems, I pinpointed the problem to a compatibility issue between the GPU, the DP cable and the monitor.
> 
> The exact image I get on my screen when the problem happens and the system crashes is attached (the picture belongs to another Vega user on the AMD community site).
> 
> If you come across this issue, just try a different DP cable or use HDMI instead; that should clear it right up.
> 
> Let me also note that the same DP cable works fine with my GTX 970, so this is probably a Vega-specific issue.



I am using a Vega 64 LC with the MG278Q and the DP cable that came with it; no issues for me here.


----------



## MacConcierge

https://www.techpowerup.com/gpuz/details/7dyn2

My GPU load is constantly fluctuating.

It's running latest AMD driver: 23.20.15033.5003 (Adrenalin 18.3.4) / Win7 64

System is Dual E5-2690 v2 + RX Vega 64

OS is Win 7 64, 64GB of RAM

In GPU-Z, the hotspot temp (78) is about 14 degrees higher than the GPU temp (63)

I'm running Folding@home and only getting 89,000 points, which is about 1/10 of what it should be getting.

Do I have the wrong settings, or do I have a dud of a card on my hands?


----------



## Butthurt Beluga

Hey guys, I'm really wanting to upgrade from my RX 580, and the only options I have currently are a Vega 56/64 or an AMD Fury X (I used to have two, but they both died; not wanting to repeat that).
I know prices are still crazy, but they're coming down. Is a Vega 56/64 a worthy upgrade from my card?

Basically I play R6: Siege @1440p, which is an extremely competitive game, so dropping down to 40 FPS on certain maps is definitely not something I'm accustomed to and would rather not be.

Any preferred AIB partners?
Expected OC range? (air/water)
56 or 64?
Any quirks/features/improvements I should be aware of?

Thanks in advance


----------



## SuperZan

For your needs you could get away with the 56, though I've been very pleased with my 64. Undervolt first and then push for the best clocks you can get away with. I'm using a reference card under water at the moment, but that was the general gist of things when I was stuck on the blower cooler. My card is a Sapphire and I'm usually happy with them. Their custom cooler designs for high-end cards are usually decent.


----------



## diggiddi

For a brand new card this thread is a ghost town


----------



## Aenra

This ******ed decision to push for custom-only PCBs is killing me.

Can anyone make me happy by telling me that I'm wrong, lol? That there's a custom-PCB Vega out there, today, that I can watercool?
Got this colleague I'm trying to help with his upcoming rig, and it looks like we'll be going NewGreedia; even when you don't want to, lol, the Green giant has its ways! Of the three available custom PCBs I can find, none are compatible.

(Reference Sapphire Vegas listed as """brand new""" on the likes of eBay don't count; I don't want him to hate me afterwards, lol)


----------



## diggiddi

EK makes a block for Strix


----------



## CDub07

Seeing as this thread has gotten pretty lengthy, I thought I would just ask. I'm looking to run at 1080p/1440p and want an Asus ROG Strix card. Will the Vega 64 for $799 or the GTX 1070 Ti for $679 be the better deal? I know there are better prices out there, but I really like the ROG Strix and the RGB lighting on the PCB. I really want an all-AMD system, but if the 1070 Ti is the better card I will go Nvidia. Oh, my current card is a 1060 6GB.


----------



## looncraz

CDub07 said:


> Seeing as this thread has gotten pretty lengthy, I thought I would just ask. I'm looking to run at 1080p/1440p and want an Asus ROG Strix card. Will the Vega 64 for $799 or the GTX 1070 Ti for $679 be the better deal? I know there are better prices out there, but I really like the ROG Strix and the RGB lighting on the PCB. I really want an all-AMD system, but if the 1070 Ti is the better card I will go Nvidia. Oh, my current card is a 1060 6GB.


That's really tough to answer, as what is worth $120 varies by person. The 64 can sometimes get close to the 1080 Ti, but usually falls around the 1080... though at 1440p, Vega does relatively better. It's almost always faster than the 1070 Ti, but only rarely by enough to care about (1440p and 4K are those times, though).

If you were looking into FreeSync or G-Sync then the answer is straightforward: Vega would offer better performance and more display options at the same-ish overall price (video card + monitor).

Likewise, if you do anything that uses GPU compute, the advantage usually resides with Vega at nearly a 2:1 ratio (but not always).

The difference in power consumption can be mitigated with a little undervolting - and that might actually increase Vega's performance.


----------



## CDub07

looncraz said:


> That's really tough to answer as what is worth $120 varies by person. The 64 can sometimes get close to the 1080ti, but usually falls around the 1080... though at 1440p, the Vega does relatively better. It's almost always faster than the 1070ti, but only very rarely is it enough to care about (1440p and 4k are those times, though).
> 
> If you were looking into FreeSync or GSync then the answer is straight-forward  Vega would offer better performance and more display options at the same-ish overall price (video card + monitor).
> 
> Likewise, if you do anything that uses GPU compute, the advantage usually resides with Vega in a nearly 2:1 ratio (but not always).
> 
> The difference in power consumption can be mitigated with a little undervolting - and that might actually increase performance of Vega.



I've seen a few videos on YouTube since posting this, and I was surprised how much the 1070 Ti really fights it out with the Vega 64. The only compute I can think of is maybe video encoding and maybe Photoshop acceleration. The monitor is the main reason. Like you said, a FreeSync monitor and Vega 64 would offset the savings of a 1070 Ti and a G-Sync monitor. I will keep watching prices to see if a drop happens again.


----------



## SavantStrike

CDub07 said:


> I seen a few videos on youtube since posting this and I was surprised how much the 1070ti really fights it out with the Vega 64. The only computing I can think of is maybe video encoding and maybe photoshop acceleration. The monitor is the main reason. Like you said a freesync monitor and Vega 64 would off set the saving of the 1070ti and a g sync monitor. I will keep watching prices to see if a drop happens again.


In DX11 titles...

Switch to DX12 or Vulkan and the V56 beats the 1070TI. It isn't until the 1080 TI that the NV architecture wins every time due to brute force.


----------



## Aenra

diggiddi said:


> EK makes a block for Strix



Thank you very much, I must have missed it. The first thing I did was go to their configurator :S
Will double check before passing on the good news.

* Yep, we're sorted it seems! Thanks again diggi, can't believe I missed that.


----------



## Mandarb

Hey guys, what's the consensus on safe max VRM temperatures for long-term use?

(Also, my owner info hasn't been changed to Morpheus II yet 😉 )


----------



## os2wiz

I have dual MSI Airboost RX Vega 56 cards. They only come with Hynix memory, and I am having difficulty figuring out a sweet spot for voltage and frequency. If anyone has had success specifically with this card, please send me your profile. I also note that CrossFire is not working well with these cards in the few benchmarks and games I have that support multiple GPUs. Any additional advice on this would also help. But first I need to get the voltages and frequencies optimized on these Hynix-memory cards. Thank you.


----------



## Leons

os2wiz said:


> I have dual MSI Airboost RX Vega 56 cards. They only come with the Hynix memory and I am having difficulty figuring out a sweet spot for voltage and frequency. If any one has had success specifically with this card please send me their profile. I also note that crossfire is not working well with these cards in those few benchmarks and games that I have that support multiple gpus. Any additional advice on this would also help. But first I need to get the voltage and frequencies optimized on these Hynix memory based cards. Thank you.


Hello.
Firstly, I have no experience with CrossFire, so I can't help in that regard, sorry.
I used your cards' BIOS on my reference 56 by Sapphire and it works well, because the VRamInfo table contains information for both Samsung and Hynix. It seems to me that the Hynix timings are looser (I'm not sure; I haven't investigated thoroughly, as my card is equipped with Samsung HBM2), but it remains to be seen what frequency you can reach.
Secondly, I would also try setting the voltage for the P6 and P7 GPU states to 1000mV.
If your chips are not bad you should be able to run them with these settings, and you could even set the target temp to 70°C. Do it all in Wattman, one card at a time, because the two will hardly behave the same way.
I hope this is a little help.
Regards.

Added:
To avoid bottlenecks due to TDP, set the Power Limit to +50%.


----------



## os2wiz

Call said:


> Ohkay! Thanks again so much!!
> 
> What about the VRM/whatever temps? Is it good with the eiswolf? Or do I need to go with a fullblock like you were mentioning earlier?
> 
> Thanks!


They are excellent with the Eiswolf. My VRMs never got over 41°C in the 4 months I had the Eiswolf installed. I just sold it, installed on my PowerColor Vega 56 reference card. I now have dual MSI Airboost Vega 56 cards in my build. They look like reference but are not: MSI moved the I/O to a different position so the fan can expel hot air out the back of the chassis more efficiently. My case is still very well ventilated and GPU temps are pretty good without water cooling.


----------



## milkbreak

I bought an RX Vega 56 on launch last year and flashed it to the 64 bios that was available at the time and haven't had any problems. Is there any benefit to upgrading that BIOS to a newer 64 BIOS for gaming? Using a Morpheus II cooler on it.


----------



## kondziowy

So these are my settings for Vega 64. Can anyone say if this is a good, bad or average chip? I run 980mV at 1632MHz (in games ~1550MHz).
Memory is Samsung, so 1080MHz is average I guess?


----------



## MacConcierge

Why can't my GPU load be stable?

RX Vega 64 on Air.

Running Folding@home

2 x 2690 v2 CPU, Win 7 64 with 64GB of RAM


----------



## Naeem

Anyone else having issues with HWiNFO64 where it turns off the GPU and then you have to power-cycle the PSU to get the display back on Vega?


----------



## cplifj

No issues with HWiNFO64 here. I left it all on standard settings.

(Vega 64 Liquid on Turbo averages over 1700MHz (up to 1750MHz) during gaming; no other special OC tricks, just letting the Vega 64 regulate itself.)


----------



## Brightmist

Naeem said:


> Anyone else having issues with HWiNFO64 where it turns off the GPU and then you have to power-cycle the PSU to get the display back on Vega?


That really feels like a DP cable issue to me, if you're still using that crap MG278Q cable.

That thing presents itself in the weirdest ways, with the DP handshake failing at random times for pretty much unknown reasons.


----------



## Jacobahalls

cplifj said:


> No issues with HWinfo64 here. left it all on standard settings.
> 
> (vega 64 liquid on turbo, does average over 1700MHz (up to 1750MHz) during gaming, no other special oc tricks, just letting vega64 regulate itself.


Is the Turbo Mode option not available for FE owners? I can't seem to find it.


----------



## Naeem

Brightmist said:


> That so feels like a DP cable issue to me if you're using that crap MG278Q cable still
> 
> That thing presents itself in the weirdest ways with DP handshake failing at random times for unknown reasons pretty much.




It's not the cable for me, as it works 24/7 without any issue, but it crashed whenever I ran HWiNFO64: the GPU just shuts itself off, the lights that show load go out, and I had to power reset to get display output back. I have a Vega 64 LC. I think it was an issue between the sensors and the HWiNFO64 software; I have reinstalled and haven't faced the same issue since.




Jacobahalls said:


> Is the Turbo Mode option not available for FE owners because I cant seem to find it?



Set the power target to +50% and it will boost up to your GPU's rated boost clock, which is around 1600MHz. The Vega 64 LC has the fastest out-of-the-box boost of all Vega cards at 1750MHz, but it stays around 1700-1730MHz in most games.


----------



## Trender

kondziowy said:


> So these are my settings for Vega 64. Can anyone say if this is good or bad chip or average? I run 980mv 1632MHz (in games ~1550MHz).
> Memory is Samsung so 1080MHz is average I guess?


Is it stable at those volts?
Also, does anyone know how to solve the Overwatch black screen when the HBM is clocked too high? (I guess I'll have to turn it down unless I watercool, but only OW gives problems.)


----------



## kondziowy

Trender said:


> is it stable at those volts? --


Yep, stable at full load in PUBG this week.


----------



## Chaoz

kondziowy said:


> So these are my settings for Vega 64. Can anyone say if this is good or bad chip or average? I run 980mv 1632MHz (in games ~1550MHz).
> Memory is Samsung so 1080MHz is average I guess?


Looks pretty good. These are my settings. Flashed the BIOS from the 64 Liquid Cooled version. Running custom loop and temps hardly go over 40°C.
Clocks hold 1700-1730MHz on core and 1050MHz on HBM in-game perfectly. Running at 75Hz with FreeSync on a 34" 1440p ultrawide.


----------



## kondziowy

Oh I wish I could do 1730MHz at 1V.
Not possible; I crash at P7 1702MHz 1050mV even in 3DMark. And I don't want to go higher on volts.
I guess that cooling pays off big time.


----------



## Chaoz

kondziowy said:


> Oh I wish I could do 1730MHz at 1V
> Not, possible, I crash at P7 1702MHz 1050mV even in 3Dmark. And I don't want to go higher on volts.
> I guess this cooling pays off big time.


Damn, that sucks. Yeah, my ref Vega is awesome. It probably helps that I flashed my GPU with the Liquid Cooled BIOS; it allows for better OC'ing, UV'ing and such.


----------



## jaug1337

Just got a Vega 56, with the reference cooler.

Not interested in buying a new cooler for it, as I am poor af now. However, I am interested in flashing the Vega 64 BIOS onto it and overclocking + undervolting it. Does that seem reasonable?


----------



## Leons

jaug1337 said:


> Just got a Vega 56. It is with a reference cooler.
> 
> Not interested in buying a new cooler for it.. as I am poor af now, however, I am interested in flashing the Vega 64 BIOS onto it, and overclock + undervolt it. Does that seem reasonable?



Hello.
I guess it can be useful.

http://www.overclock.net/forum/67-amd-ati/1639595-rx-vega-undervolting-efficiency-thread.html#post27009809


----------



## RatusNatus

https://github.com/ghosttr/VegaBiosReader

Who will test it just for me? 

Can it be edited from there?


----------



## Chaoz

jaug1337 said:


> Just got a Vega 56. It is with a reference cooler.
> 
> Not interested in buying a new cooler for it.. as I am poor af now, however, I am interested in flashing the Vega 64 BIOS onto it, and overclock + undervolt it. Does that seem reasonable?


I UV'ed mine and it performs great. I've got a custom loop, though, but you'd still be able to bring the temps down a bit by UV'ing under 1V.

Use ATIFlash v2.77 and download the correct BIOS for it.

Look in the list for the manufacturer and version you have.

You can find all the 64 BIOS' in this link:
https://www.techpowerup.com/vgabios...X+Vega+64&interface=&memType=&memSize=&since=

Download the file, load in the BIOS file and flash. That's it.


----------



## sega4ever

Has anyone here replaced the Vega 64 Liquid Cooled fan? The stock one made a kind of grinding noise at lower RPM, so I replaced it with a be quiet! Silent Wings 3 120mm high-speed PWM fan. MSI Afterburner shows the fan speed at 0% but the RPM at about 330. The fan percentage stays at 0% until I start a game, where it jumps up to 57% at 420 RPM. When playing something demanding, like the main menu of PUBG, the fan percentage is at 98% and 1440 RPM, which can't be right for a 2200 RPM fan. Is there a reason why the fan doesn't seem to scale properly with the fan %? It is pretty silent until it goes from 98% to 99%, when the RPM jumps from 1440 to 2200.
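Out of curiosity, the numbers in this post can be put against a naive linear duty-to-RPM model (assuming the reported duty should map straight onto the fan's 0-2200 RPM range). The mismatch points at the card's fan controller and the fan's own PWM response curve disagreeing, rather than a faulty fan; the figures below are just the ones reported here.

```python
# be quiet! Silent Wings 3 120 mm high-speed rated maximum, per the post.
FAN_MAX_RPM = 2200

def expected_rpm(duty_percent, max_rpm=FAN_MAX_RPM):
    """Naive linear duty -> RPM model. Real PWM fans have a minimum-speed
    floor and a non-linear response curve, so this is only a reference line."""
    return round(max_rpm * duty_percent / 100)

# Duty percentages and RPMs reported by MSI Afterburner in the post above.
reported = {0: 330, 57: 420, 98: 1440, 99: 2200}
for duty, rpm in reported.items():
    print(f"{duty:3d}% duty: reported {rpm} RPM, linear model says {expected_rpm(duty)} RPM")
```

At 98% duty the linear model predicts ~2156 RPM, so a reading of 1440 RPM suggests the card is driving a duty range the fan's controller doesn't follow linearly.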


----------



## Fatrod

sega4ever said:


> Has anyone here replaced the Vega 64 Liquid Cooled fan? The stock one made a kind of grinding noise at lower RPM, so I replaced it with a be quiet! Silent Wings 3 120mm high-speed PWM fan. MSI Afterburner shows the fan speed at 0% but the RPM at about 330. The fan percentage stays at 0% until I start a game, where it jumps up to 57% at 420 RPM. When playing something demanding, like the main menu of PUBG, the fan percentage is at 98% and 1440 RPM, which can't be right for a 2200 RPM fan. Is there a reason why the fan doesn't seem to scale properly with the fan %? It is pretty silent until it goes from 98% to 99%, when the RPM jumps from 1440 to 2200.


Yeah, I replaced mine with 2 x EK Vardars in push/pull.

Did you connect it to the PWM header on the card, or have you connected it to a sysfan header?


----------



## Ne01 OnnA

AMD at Work:
DDR5?
HBM2 for CPUs?
NAVI in PS5/XboX X2?
New Vega 2.0 in the 2nd half of 2018?
Multi Stacked GPU Chips on Infinity Fabric this year?

Make the Hype Great Again


----------



## HaveeAirs

Could anyone share their Wattman undervolt settings? Trying to get the best out of my ASUS Strix Vega 64. Just wanting to get a general idea of what people have been able to achieve.


----------



## sega4ever

Fatrod said:


> Yeh I replaced mine with an 2 x EK Vardars in push/pull.
> 
> Did you connect it to the PWM header on the card, or have you connected it to a sysfan header?


I opened up the card and connected it to the mini PWM header. I had to use a fan extension cable because the one on the Silent Wings 3 wasn't long enough. Do you think that is the problem?


----------



## Chaoz

HaveeAirs said:


> Could anyone share their Wattman undervolt settings? Trying to get the best out of my ASUS Strix Vega 64. Just wanting to get a general idea of what people have been able to achieve.


I don't really use Wattman; the same settings are doable there, but I find OverdriveNTool better.
Note: I flashed the LC BIOS onto my reference 64, so I don't know if you can achieve the same OC/UV.


----------



## diggiddi

So what's the difference in performance between the Frontier Edition and the 64, aside from the 16GB of HBM2? Does one overclock better, game better, or do better in compute?


----------



## OGkrook

You guys think it's worth buying Vega now, or should I just wait for the refresh?


----------



## geriatricpollywog

OGkrook said:


> you guys think its worth buying vega now or just wait for refresh?


AMD has been releasing legitimate upgrades at 2-3 year intervals and Vega is only 8 months old. It all depends what you have now, what games you want to play, and at what resolution.


----------



## sega4ever

HaveeAirs said:


> Could anyone share their Wattman undervolt settings? Trying to get the best out of my ASUS Strix Vega 64. Just wanting to get a general idea of what people have been able to achieve.


vega 64 lc edition

gpu
p6: 1667mhz @ 900mv
p7: 1692mhz @ 900mv

memory
p3: 1090mhz @ 901mv


----------



## Leons

sega4ever said:


> vega 64 lc edition
> 
> gpu
> p6: 1667mhz @ 900mv
> p7: 1692mhz @ 900mv
> 
> memory
> p3: 1090mhz @ 901mv


I'm pleased to see that even owners of the liquid version like to undervolt; Vega becomes very efficient this way.
What effective frequency do you get in game?
Please note that with these settings it is effectively like setting all three voltages to 906 mV (the nearest valid AMD VID is 906.25 mV).
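The 906.25 mV parenthetical above can be sanity-checked in a few lines. A minimal sketch, assuming (as is commonly reported for AMD's SVI2 regulators) that rail voltages snap to 6.25 mV steps and that a request is rounded up to the next valid step:

```python
import math

VID_STEP_MV = 6.25  # assumed SVI2 voltage step size

def nearest_valid_vid_mv(requested_mv: float) -> float:
    """Round a requested voltage up to the next valid 6.25 mV VID step."""
    return math.ceil(requested_mv / VID_STEP_MV) * VID_STEP_MV

# The settings quoted above: 900 mV core states and a 901 mV memory state
print(nearest_valid_vid_mv(900))  # 900.0  (already a multiple of 6.25)
print(nearest_valid_vid_mv(901))  # 906.25 (rounded up)
```

If the driver then applies the highest of the rail voltages as a floor, all three end up at 906.25 mV, matching the observation above.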


----------



## manhattan222

*Can't install newer drivers (18.3.4 or 18.4.1)*

Hello everyone,

I googled a lot about this but can't find anything. I'm posting here because I can't get PowerColor or AMD to respond to my e-mails. I even started a thread: http://www.overclock.net/forum/67-amd-ati/1677649-my-rx-vega-doesn-t-work-radeon-18-3-4-a.html

So, I got a PowerColor RX Vega 56 reference model, flashed the 64 BIOS on it and applied liquid metal for lower fan speeds. Those are the only mods I did. It's running at [email protected] 1V on the core and 1100MHz HBM2 @ 1V voltage floor.
Everything has always worked just fine (except the Blockchain drivers, with which I get artifacts in Windows when I install them, and the card is not initialized properly in Device Manager: error 43).

Up until Radeon Software 18.3.3 everything works just fine, but if I try to update the driver to 18.3.4 or 18.4.1, the screen goes black as it is supposed to and, when it comes back, I get the same artifacts in Windows as soon as the AMD display driver is installed, the same thing I get with the Blockchain drivers. When I restart the computer I get the Microsoft Basic Display Adapter goodness. I also tried the stock 56 BIOS; same thing.

Any ideas why this is happening? I have another Vega 56 from XFX and it works just fine with the Blockchain drivers and everything else I throw at it. The XFX V56 is on a P55 platform and working fine, and the PowerColor V56 is on my Sabertooth X58, so I guess the issue is not related to old platforms, since one works and the other does not. I tried swapping the cards and still the PowerColor gives me artifacts and the XFX works just fine.

Just to clarify, I'm not trying to mine, I'm trying to game lol. I just can't update to the 18.3.4 and 18.4.1 drivers. I mentioned the Blockchain Drivers because the behaviour is the same between it and the 18.3.4 and 18.4.1.

Cheers and thanks!


----------



## sega4ever

Leons said:


> I am pleased to see that even those who have a liquid version like the downvolt, Vega becomes very efficient in this way.
> What effective frequency do you get in game?
> Please note that with these settings it is actually like setting all three voltages to 906mV (valid AMD VID 906.25mV).


about 1520 mhz at the main menu in pubg.


----------



## Ne01 OnnA

OGkrook said:


> you guys think its worth buying vega now or just wait for refresh?


Consider a FreeSync monitor first, with FreeSync 2.


----------



## Robotmind

Currently for gaming:


P6 : 1667 @1025

P7 : 1697 @1050

MEM: 1200 @1050

Power + 180%

RX Vega 64 air w/EK waterblock custom loop v64lc bios


----------



## Heidi

So...in short...shall I just flash my Sapphires with corresponding WC vbios and be done with it? Or...?!

Sent from my SM-G935F using Tapatalk


----------



## Chaoz

Heidi said:


> So...in short...shall I just flash my Sapphires with corresponding WC vbios and be done with it? Or...?!
> 
> Sent from my SM-G935F using Tapatalk


Pretty much, if you have a custom loop. That's the main reason why I did the same thing.


----------



## rancor

Double post


----------



## rancor

Heidi said:


> So...in short...shall I just flash my Sapphires with corresponding WC vbios and be done with it? Or...?!
> 
> Sent from my SM-G935F using Tapatalk


You may need to lower the stock frequencies of the WC bios to ensure stability but it depends on how well your cards overclock.


----------



## MAMOLII

New GPU-Z 2.9.0! Changes in version 2.9.0: "Vega's SOC Clock and Hot Spot sensors are now disabled by default at request of AMD. You can enable them any time in settings."
W T F??


----------



## Heidi

rancor said:


> You may need to lower the stock frequencies of the WC bios to ensure stability but it depends on how well your cards overclock.


Thanks for the response... the card overclocks well, but I'm really looking forward to underclocking and undervolting, with the aim of reducing power usage and temperatures...
So far it seems that the WC version of the BIOS is able to control voltages, as mine gives no reaction whatsoever regardless of drivers or settings...


----------



## samoflan

..


----------



## Samoflange

MAMOLII said:


> new gpuz 2.9.0!! Changes in version 2.9.0:Vega's SOC Clock and Hot Spot sensors are now disabled by default at request of AMD. You can enable them any time in settings
> W T F ??


The temp readings are a lot more consistent/smooth now for the Vega cards at least.


----------



## VicsPC

Samoflange said:


> The temp readings are a lot more consistent/smooth now for the Vega cards at least.


It's because SOC temps and hotspot temps are a total joke; completely pointless in my opinion.


----------



## OGkrook

need these vega prices to go down.


----------



## OGkrook

geriatricpollywog said:


> AMD has been releasing legitimate upgrades at 2-3 year intervals and Vega is only 8 months old. It all depends what you have now, what games you want to play, and at what resolution.


Coming from an R9 290X, playing on a 1080p monitor and a 4K TV.


----------



## Butthurt Beluga

Just got myself an ASUS ROG Vega 64, now how do I join the cool kids club? 

Also, I noticed in WattMan the card is set to 1632MHz on the last state, but at stock (Turbo BIOS) and Turbo settings in WattMan, the card never breaches 1500MHz.
I undervolted to 1150mV and increased the PL to +50%, and it started running at ~1550MHz, which was a little better, but I can't get this thing to boost to even 1600MHz. Did I manage to get a dud, or is there some fine tuning to be done?
I changed the last state to 1700 and got an immediate crash once loaded into Time Spy. Increased to 1.2V and still an immediate crash running the Time Spy benchmark. Changed to 1650 @ 1.2V and still, immediate crashing in Time Spy.

I still have a massive performance boost over my RX 580 and even my Fury X; R6: Siege runs locked at [email protected], on almost all ultra settings.
But man, it would really suck to get another dud OCer. My Asus RX 580 wouldn't even do 15+MHz over stock even at 1.2V, my Fury X wouldn't do 1100MHz; the only thing that did OC well was this dinker R7 370 that did 1225 on the core stable from 925 stock lol.


----------



## kondziowy

Look at previous pages. Only liquid-cooled versions can go higher. 1150mV at P7 1632MHz is a waste of heat; I run 0.98V at 1632MHz on P7, and the memory at 0.98V 1080MHz.
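The "waste of heat" point can be ballparked with the usual CMOS approximation that dynamic power scales with voltage squared at a fixed clock (a rough model, not a measurement of any specific card):

```python
def relative_dynamic_power(v_new_mv: float, v_old_mv: float) -> float:
    """Approximate dynamic-power ratio: P scales roughly with V^2 at fixed clock."""
    return (v_new_mv / v_old_mv) ** 2

# Same 1632 MHz P7 state, 980 mV vs 1150 mV
ratio = relative_dynamic_power(980, 1150)
print(f"power ratio: {ratio:.2f}")
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power from the undervolt")
```

By this estimate the 0.98V setting dissipates roughly a quarter less dynamic power than 1.15V at the same clock, which is why undervolting pays off so visibly on Vega's power-limited boost behaviour.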


----------



## VicsPC

Butthurt Beluga said:


> Just got myself an ASUS ROG Vega 64, now how do I join the cool kids club?
> 
> Also, I noticed in WattMan the card is set to 1632MHz on the last state, but at stock (Turbo bios) and Turbo settings in Wattman, that card never breaches 1500MHz.
> I undervolted to 1150mV and increase PL to +50% and it started running at 1550MHz~ which was a little better but, I can't get this thing to boost to even 1600MHz, did I manage to get a dud or is there some fine tuning to be done about this?
> I changed the last state to 1700 and immediate crash once loaded into TimeSpy. Increased to 1.2V and still immediate crash running TimeSpy benchmark. Changed to 1650 @1.2V and still, immediate crashing once running TimeSpy.
> 
> I still have a massive performance boost over my RX 580 and even my Fury X, R6: Siege runs locked at [email protected], on almost all ultra settings.
> But man it would really suck to get another dud OCer, my Asus RX580 wouldn't even do 15+MHz from stock even at 1.2V, my Fury X wouldn't do 1100MHz, the only thing that did OC well was this dinker R7 370 that did 1225 on the core stable from 925 stock lol.


I used to get close to 180fps in Siege; since their last graphics update I get closer to 130. Massive drop. The MHz really depends on the game too, btw; I think MSAA/FXAA also has something to do with it. In theHunter I get around 1560MHz on completely stock settings, mind you, and in Mafia III I get closer to 1630MHz. Considering I haven't messed with the OC at all and I'm still on Balanced, I don't mind that one bit. Mind you, I am on a custom loop with a 360mm + 240mm rad.


----------



## Chaoz

Butthurt Beluga said:


> Just got myself an ASUS ROG Vega 64, now how do I join the cool kids club?
> 
> Also, I noticed in WattMan the card is set to 1632MHz on the last state, but at stock (Turbo bios) and Turbo settings in Wattman, that card never breaches 1500MHz.
> I undervolted to 1150mV and increase PL to +50% and it started running at 1550MHz~ which was a little better but, I can't get this thing to boost to even 1600MHz, did I manage to get a dud or is there some fine tuning to be done about this?
> I changed the last state to 1700 and immediate crash once loaded into TimeSpy. Increased to 1.2V and still immediate crash running TimeSpy benchmark. Changed to 1650 @1.2V and still, immediate crashing once running TimeSpy.
> 
> I still have a massive performance boost over my RX 580 and even my Fury X, R6: Siege runs locked at [email protected], on almost all ultra settings.
> But man it would really suck to get another dud OCer, my Asus RX580 wouldn't even do 15+MHz from stock even at 1.2V, my Fury X wouldn't do 1100MHz, the only thing that did OC well was this dinker R7 370 that did 1225 on the core stable from 925 stock lol.


Guess I got lucky with mine. I flashed the LC BIOS to my reference Vega 64; it runs flawlessly at 1750MHz with 1V on the core and 1100MHz with 950mV on the HBM, with +50% power limit.


----------



## kril89

So I seem to be having a weird issue where my Vega 64 Air (now under liquid) is acting strange. The memory keeps downclocking itself to 800MHz and never really maxing out the card, and I am now getting a bunch of dips into sub-60fps in all my games. Currently on 18.3.4, and I have the same problems with the 18.4.1 drivers. Could this be a Windows problem? I'm just kind of lost.


----------



## hyp36rmax

kril89 said:


> So I seem to be having a weird issue where my Vega 64 Air (now under liquid) is acting strange. The memory keeps downclocking itself to 800MHz and never really maxing out the card, and I am now getting a bunch of dips into sub-60fps in all my games. Currently on 18.3.4, and I have the same problems with the 18.4.1 drivers. Could this be a Windows problem? I'm just kind of lost.


Did you accidentally enable Chill?


----------



## gupsterg

Naeem said:


> Anyone else having issues with HWinfo64 where it turns off GPU and tahn you have to turn off and on PSU to get display back on Vega ?


By any chance are you using an older version of HWINFO that accesses I2C bus for GPU VRM info?



VicsPC said:


> Samoflange said:
> 
> 
> 
> The temp readings are a lot more consistent/smooth now for the Vega cards at least.
> 
> 
> 
> Its because SOC temps and hotspot temps are a total joke, completely pointless in my opinion.

Where do you see SOC temps?

And is the Hotspot throttle point a joke by AMD as well, then?


----------



## spyshagg

kondziowy said:


> Look at previous pages. Only liquid cooled versions can go higher. 1150mv at P7 1632MHz is a waste of heat, I run 0.98V 1632MHz at P7 and Memory 0.98V 1080MHz.


You can set those values, but I doubt the card will ever boost itself up to 1632MHz with only that voltage.


----------



## VicsPC

gupsterg said:


> By any chance are you using an older version of HWINFO that accesses I2C bus for GPU VRM info?
> 
> 
> 
> Where do you see SOC temps?
> 
> Hotspot must be joke throttle point as well by AMD?


Massive typo lol, I meant SOC clocks. It's weird, because some people hit a hotspot of like 105-115°C and have no issues, and some do. I don't ever monitor hotspot temps honestly; then again I am on water, but even on air it made no difference: I had no throttling even when I saw it hit 105°C.


----------



## gupsterg

Dunno.

All I know is that it is there as a limit in the VBIOS. Whether they have adjusted/overwritten the limit via the driver, I do not know. That can be done.

For example, when the RX 480 reference PCB launched there was that issue where it was biased to draw power from the PCI-E slot rather than the PCI-E plugs. The Stilt released a fix that changed the VRM registers through an i2c command applied by MSI AB. AMD released the same fix via the driver, which meant owners didn't need a VBIOS flash. It just happened without them knowing  .

I for one like seeing the Hotspot temp; I believe it is the memory interface temp, and I shall be resuming some testing soon. I also value what Mumak did by adding the SOC clock. Early drivers did not raise SOC past 1107MHz (as set in the VBIOS), so an HBM clock past ~1100MHz wasn't really gonna work. We modded the SOC clock via a registry mod, and you could gain performance if your HBM could take it. Very quickly a driver came out which automatically pushed SOC past 1107MHz when you set HBM higher  .

All in all, the authors that create these tools and update them to show things which are hidden are invaluable IMO.


----------



## VicsPC

gupsterg said:


> Dunno.
> 
> All I know it is there as a limit in VBIOS. If via driver they have adjusted/overwritten the limit I do not know. This can be done.
> 
> For example when the RX 480 reference PCB was launched there was that thing of how it had power bias to draw power from PCI-E slot than PCI-E plugs. The Stilt released as fix via changing the VRM registers through using i2c command applied by MSI AB. AMD released the same but via driver. This meant owners didn't need a VBIOS flash. It just happened without them knowing  .
> 
> I for one like seeing Hotspot temp, I believe it is the memory interface temp, I shall be resuming some testing soon. I also value what Mumak did by adding SOC clock. Again early driver did not raise SOC past 1107MHz (as set in VBIOS). So a HBM clock past ~1100MHz wasn't really gonna work. We modded SOC clock via reg mod and you could gain performance if your HBM could take it. Very quickly a driver came out which automatically pushed SOC past 1107MHz when you set HBM higher  .
> 
> All in all the authors that create these tools and update them to show things which are hidden are invaluable IMO .


I think the hotspot sensor is probably in between the VRMs or something. My hotspot temp seems to sit between the two VRM temps at all times.


----------



## gupsterg

It would not make sense for it to be there; you already have VRM temp sensors, and Vega is nothing special in that context compared to past GPUs.

Next we have the GPU/HBM temps, which we had on past GPUs too.

On Hawaii, The Stilt highlighted that VDDCI (i.e. AUX voltage in MSI AB, etc.) shouldn't be increased beyond a point, as:



> The quality of the memory controllers vary so some of them might need a slight increase in the supply voltage (VDDCI) even at or slightly below 1500MHz. Usually 20mV increase in the VDDCI is enough to stabilise it. On Hawaii the VDDCI should never be set higher than +50mV (= 1.050V) as the memory PHY / controller is the hottest part of the GPU already.


So now, thinking about moving forward, would it not make sense to be monitoring the memory PHY or something of that ilk? IIRC from my past testing, I see greater hotspot temps under memory load.

Now, in this PDF, on page 14, we see a reference to HBM I/F. So again, would it not make sense that as we move forward on HBM2 cards we have HBM and I/F temps?


----------



## STEvil

Hot spot is likely at the bottom of the core, closest to the PCB and furthest from the heatsink.


----------



## Trender

Robotmind said:


> Currently for gaming:
> 
> 
> P6 : 1667 @1025
> 
> P7 : 1697 @1050
> 
> MEM: 1200 @1050
> 
> Power + 180%
> 
> RX Vega 64 air w/EK waterblock custom loop v64lc bios


Why do you set your mem voltage so high? It's just the floor.


----------



## gupsterg

Trender said:


> why do you set ur mem voltage so high? its just the floor


Check OP here, section *Testing of PowerPlay registry mods* > *What is HBM voltage in WattMan/OverdriveNTool?*.


----------



## Trender

gupsterg said:


> Check OP here, section *Testing of PowerPlay registry mods* > *What is HBM voltage in WattMan/OverdriveNTool?*.


I tried to use PP tables, but it always locks my HBM to 800MHz... and yes, I applied the PP table and rebooted, and also clicked Custom > Apply in Wattman.


----------



## Dhoulmagus

Hi guys! A little late to the club but I got my vega 64 from a fellow user a few weeks back now.

A few life issues came up and kept me away from really playing with it, so today I'm finally fooling with "dialing it in". 

What would be a reasonable undervolt and gpu/mem speed to shoot for as a baseline? This is the reference air model. While I do have a nice bykski block waiting to go on, I can't afford the rest of a water loop quite yet, so I need it to run on air for a few months.


----------



## hyp36rmax

So I picked up my third VEGA 64 in Limited Edition trim for an NCase M1 build. 










Testing all the gear on my test bench with an ASUS X470-I ROG Strix and an AMD R7 2700X before mounting it all in the M1.


----------



## gupsterg

Trender said:


> I try to use PP tables but it always locks my HBM to 800 mhz ... and yes I used the PP and rebooted and also clicked custom > apply on wattman


I have had the same occurrence.

IIRC it happens when I lower the GPU DPM mV too far, i.e. when I make a higher state the same as the GPU DPM 2 VID. IIRC this type of setup breaks the association of GPU DPM VID with SOCCLK/MEMCLK.


----------



## THUMPer1

nm


----------



## cplifj

And no PlayReady for Vega… sounds very promising for their top-line product. *siiiiiiiiigh*


----------



## MacConcierge

Why can't my XFX RX Vega 64 have constant GPU Load?

Is there something wrong with my card? The GPU load swings wildly from 21% to 99%, and it's constantly up and down.

I'm running [email protected]

All my other cards can stay at 100% GPU load or near it.

http://gpuz.techpowerup.com/18/05/25/kbp.png


----------



## Heruur

*Recently Purchased VEGA FE for compute, light gaming on the side*

I'm getting constant 96C HBM2 temps, even with an undervolt on the GPU core; the fan is set to 55-60%. Are these normal temps for HBM2? Seems a tad high.


----------



## AlphaC

RX VEGA 56 NANO


----------



## Chaoz

Heruur said:


> Im getting constant 96C HBM2 temps, even with undervolt on gpu core; fan is set to 55-60% . Are these normal temps for HBM2? Seems a tad high.


That seems quite high, tbh. Even when I had the stock cooler on my 64, HBM temps never got that high. Crank that fan up and see what it does. Could be bad paste, too; there was quite a lot of stock paste on mine, tbh. Might be worth a check.


----------



## hyp36rmax

AlphaC said:


> https://youtu.be/Dp6FloTPOUg
> 
> 
> RX VEGA 56 NANO


I just bought one, along with an eGPU enclosure; waiting for it to arrive. Can't wait to play with this.


----------



## LtAldoRaine

Welcome.
First question: how do I add my profile to the Vega Owners Club?

I have an MSI Vega 56 reference version.
I built my own LC loop, all Barrow parts: a 360x120x45mm radiator, copper-plate CPU and GPU blocks, and a 17W pump running in PWM mode (1830 RPM). There are 3x 120mm fans on the radiator, one 120mm exhaust at the back, and one 120mm at the bottom (under the GPU VRM). The GPU is mounted vertically.
The CPU maxes out at 59C in the AIDA stress test; I used Thermal Grizzly Kryonaut TIM on it, so the CPU looks OK. But I can see small pieces of debris in the liquid, so I'll have to flush the loop. That's a minor problem, though, since CPU temps max out at 59C.

On the GPU I used TIM from Barrow, plus thermal pads, and the original MSI backplate. I tightened the screws to the end and then backed off half a turn (to allow for thermal expansion). I did not mount the cross bracket on the back of the GPU.
Under load in the Assassin's Creed Origins benchmark, at nominal Wattman settings, my temps are 59C on the GPU and 55C on the HBM at 167 watts and 99% load, but the HOTSPOT is 94C!!!
Please help me. Sorry, my English is poor.
I tried to send a photo but it keeps telling me BAD format! I tried GIF, TIFF, PNG, etc.; nothing works.


----------



## cplifj

Does anyone else see the memclock getting stuck at 945 from boot since the Crimson 18.6.1 driver?

From 18.5.1 on, the memclock tended to get stuck at 500 when watching YouTube vids. But with the new driver it just goes to the max of 945 from boot, never dropping back to what used to be an idle of 167MHz.

A nice added 15~20 watts of power usage at idle. Did AMD buy more stock in power companies and is trying to up their revenues this way?… NO **** SURPRISES ME ANYMORE THESE DAYS.


"the quircks of buying other peoples engineered **** , they can play with your socks as much as they want."



edit: just did some testing/fiddling and found out the following. Setting my two screens in Radeon display settings to what they should be (the 4K screen without Virtual Super Resolution, set in Crimson, and at 60Hz instead of 59Hz, set in the Win 10 settings) brings the idles back to what they used to be.
Now why this has changed I haven't got a clue; it did not happen before Crimson 18.6.1.


----------



## cplifj

Ah crap, a reboot brings it back to 945MHz. So tired of this, AMD. You call yourselves engineers?


----------



## Dhoulmagus

^^ Oh god not again. I haven't updated yet.

I lived with running 2017 drivers on my 280X until I finally upgraded to the Vega 64 last month, because any driver post-17.1 would cause crazy ISR latency and the card would constantly get locked in low-power-state clocks.

I haven't been to my rig to update yet so I should still be on 18.4.1 ... Will get back to you this evening


----------



## VicsPC

I'm on 18.5.1 and don't have that issue. Make sure to run the AMD cleanup utility, make sure Windows' automatic driver installation is set to OFF, then reinstall. I don't have any problems with my memory downclocking.

And btw, people: any time you have an issue with a driver, don't just complain about it on forums; send it in as an AMD bug report. Most people don't do that, so things don't get fixed.


----------



## CloudEffect

Hi guys,

I've been having issues with my new Vega 64. I have the Gigabyte model with the aftermarket cooler. I'm using an AIO cooler on it, which keeps the core at 52C, the HBM at 55C, and the hotspot at around 74C. My issue is getting the voltage to stick to what I set in Wattman.

My current settings are:

P6 state - 1547Mhz 1.060v
P7 state - 1632Mhz 1.085v
HBM - 1035Mhz .955v
Power Limit - +50%

When I play at 1440p, the proper voltage shows up on the core and it never goes over 1.085V according to GPU-Z. When I increase the resolution to 1800p or 2160p, it starts throttling the core clocks down severely, which makes sense since it's hitting the power limit of around 330W. But when I then look at the reported voltage, it reads just under 1.2V, way over what I set for the highest P7 state.

Why is it acting this way? Any ideas?

Thanks.
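For reference, the ~330 W figure follows directly from the power-limit slider: Wattman's +50% scales the card's base GPU power limit, which is 220 W on a reference Vega 64 (treat that base value as an assumption; Gigabyte's custom BIOS may set it differently). A minimal sketch of the arithmetic:

```python
def effective_power_limit_w(base_limit_w: float, slider_percent: float) -> float:
    """Wattman's power-limit slider scales the base GPU power limit."""
    return base_limit_w * (1 + slider_percent / 100)

print(effective_power_limit_w(220, 50))  # 330.0 W, the wall being hit at 4K
print(effective_power_limit_w(220, 0))   # 220.0 W at stock
```

So the severe downclocking at 1800p/2160p is the card enforcing exactly that 330 W ceiling once the higher resolution (and the elevated voltage) pushes it to the limit.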


----------



## dagget3450

Sorry I haven't been around to update this thread, guys. I've been super busy, but mainly the new Overclock.net site has been a mess; I was waiting as long as possible until they got it sorted. Not really digging this at all, but I'm hanging on to see what's next (the whole post-reply UI is nuts, I can't get files to upload, etc., etc.).

Anyway, hope to be back in full force soon.


----------



## dagget3450

CloudEffect said:


> Hi guys,
> 
> I've been having issues with my new Vega 64. I have the Gigabyte model with the aftermarket cooler. I'm using an AIO cooler on it and it keeps the core at 52 Celsius and the HBM at 55 Celsius and the hotspot at around 74 Celsius. My issue is with the voltage sticking to what I set it in Wattman.
> 
> My current settings are:
> 
> P6 state - 1547Mhz 1.060v
> P7 state - 1632Mhz 1.085v
> HBM - 1035Mhz .955v
> Power Limit - +50%
> 
> When I play at 1440p I will have the proper voltage show up on the core and it will never go over the 1.085v according to GPUZ. When I increase the resolution to 1800p or 2160p it will start throttling the core clocks down severely, which makes sense since it's hitting the power limit of around 330w. I will then look at the reported voltage again and it will be reading at just under 1.2v, which is way over what I set for the highest P7 state.
> 
> Why is it acting this way? Any ideas?
> 
> Thanks.


I want to say I noticed something like this on my FEs. When I tried to run WoW at 4K or beyond, I got really bad stuttering, and once I checked with an overlay, the GPU clocks were constantly bouncing. I didn't check voltage though, so it may not be the same issue. I am really getting aggravated with the newer drivers...


----------



## CloudEffect

dagget3450 said:


> I want to say i noticed something like this on my Fe's. When i tried to get WoW to work at 4k or beyond in resolution i get really bad stuttering, which once i checked with overlay. Gpu clocks are constantly bouncing. I didn't check voltage though so it may not be same issue. I am really getting aggravated with the newer drivers...


I figured out the issue. I have a 1440p screen, so sometimes I use Virtual Super Resolution (VSR) to run at 1800p or 2160p and downscale to 1440p to remove more aliasing. Apparently that overrides the voltage you set in Wattman: it runs near 1.2V by default and hits the power-limit wall very hard. Seems like a bug to me.

If I run Gears of War 4 and use their internal scaling options it'll work perfectly without any throttling issues at 4k.

I'm using driver 18.6.1 if anyone cares.


----------



## Dhoulmagus

cplifj said:


> ah crap, reboot brings it back to 945MHz. So tired of this AMD. Call yourselves engineers ?


I updated and I don't seem to be having this problem, but I shipped out my 4K screen to get replacements later, and right now I'm on three 1080p screens, so that may explain the difference. I had serious ISR lag and P-state issues with my 280X when a 4K monitor was in the mix.

My GPU core clock is hovering from 32-39MHz with a GPU-only power draw of 3-4 watts (according to GPU-Z).


----------



## cplifj

I have one iiyama bg2888ushu (UHD) connected on DP and one Medion Full HD on HDMI, but only with the new 18.6.1 does the memory clock sit at 945MHz (default max).

No settings will change that, and nothing else changed either; just the new Crimson driver. So I am sure it has to do with that driver version.

The memclock not downclocking already started with 18.5.1, but there it just got stuck at 500MHz while watching vids on YouTube.
Reboots fixed that, back to the normal 167MHz idle.

But now it's 945MHz from boot time on. (The strangest thing is that the driver now finally seems to install without any error reports or needing a restart; it just installs perfectly as it should, only the results still seem borked.)


----------



## Trender

CloudEffect said:


> I figured out the issue. I have a 1440p screen, so sometimes I will use the Virtual Super Resolution(VSR) to run it at either 1800p or 2160p and downscale it to 1440p to remove more of the aliasing. Apparently that overrides the voltage you set in Wattman and it will run near 1.2v by default and will hit the power limit wall very hard. Seems like a bug to me.
> 
> If I run Gears of War 4 and use their internal scaling options it'll work perfectly without any throttling issues at 4k.
> 
> I'm using driver 18.6.1 if anyone cares.


Have you tried 18.5? I've read that 18.6 has throttling problems.


----------



## faizreds

Is a 550W Seasonic Focus+ Gold enough for a Vega 64 or Vega 56?
My processor is a Ryzen 1600.


----------



## cg4200

faizreds said:


> Is 550 watt seasonic focus+ gold enough for a vega 64 or vega 56?
> My processor is Ryzen 1600.


I would not want to go any lower than 800 watts, just on sheer spikes. If you overclock, or play in summer, more heat = more draw on the power supply.
You also have to figure that not all Vegas undervolt well (some do, some don't), plus there's your Ryzen 1600, RAM, fans, stuff like that.


----------



## kondziowy

faizreds said:


> Is 550 watt seasonic focus+ gold enough for a vega 64 or vega 56?
> My processor is Ryzen 1600.


Knowing what I know about Vega now (I monitor power usage with a wall meter), I would not hesitate to run it on a 550W Seasonic Focus Gold, but... only in Power Save mode. With a normal gaming load on the CPU (~40% load) and a full load on the GPU you will draw ~300-320W from the wall (or about 350W up to 400W if undervolted and overclocked to ~1550MHz, but that depends on your chip).

That said, for a random person I would recommend a minimum of 750W or more, because if you don't change from Balanced mode (and sometimes after a restart it can switch back to Balanced) you can easily draw 400W, or 550W in Turbo mode (the power-waste mode) if CPU load is also high. Maybe more if you overvolt! And if I wanted to run at 500W power draw all the time, I would just buy a 1000W PSU to be in the highest efficiency range at 50% PSU load.

PS.
Actually... look at this: http://www.overclock.net/forum/67-a...x-vega-64-nitro-vs-devil-vs-strix-vs-ref.html
Over 60W difference for the same performance on different cards... so I guess every card will behave differently.
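One detail worth separating out in the wall-meter numbers above: a wall meter reads AC draw, which includes the PSU's conversion loss, while the 550 W rating refers to DC output. A quick sketch using the figures from the post, assuming roughly 90% efficiency for a Gold unit at these loads:

```python
def dc_load_w(wall_draw_w: float, efficiency: float = 0.90) -> float:
    """Convert wall (AC) draw to the approximate DC load the PSU rating refers to."""
    return wall_draw_w * efficiency

# Wall draws quoted above: Power Save gaming, Balanced worst case, Turbo worst case
for wall in (320, 400, 550):
    load = dc_load_w(wall)
    print(f"{wall} W at the wall -> ~{load:.0f} W DC load, {load / 550 * 100:.0f}% of a 550 W PSU")
```

So the Power Save scenario sits near the 50% sweet spot of a 550 W unit, while the Turbo-mode worst case pushes it close to its rated limit, consistent with the 750 W+ recommendation for anyone who leaves the card on Balanced or Turbo.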


----------



## Ne01 OnnA

Waiting for my Vega 64 XTX LC; I will tweak and play with my new toy this Friday.
Any suggestions?
What to tweak, etc.? (OverdriveNTool will be used.)

Also, I can't update my sig here anymore; some bug with 7 lines, etc.


----------



## MAMOLII

LtAldoRaine said:


> Welcome.
> First question: how do I add my profile to the Vega Owners Club?
> 
> I have an MSI Vega 56 reference version.
> I built my own LC loop, all Barrow parts: a 360x120x45mm radiator, copper-plate CPU and GPU blocks, and a 17W pump running in PWM mode (1830 RPM). There are 3x 120mm fans on the radiator, one 120mm exhaust at the back, and one 120mm at the bottom (under the GPU VRM). The GPU is mounted vertically.
> The CPU maxes out at 59C in the AIDA stress test; I used Thermal Grizzly Kryonaut TIM on it, so the CPU looks OK. But I can see small pieces of debris in the liquid, so I'll have to flush the loop. That's a minor problem, though, since CPU temps max out at 59C.
> 
> On the GPU I used TIM from Barrow, plus thermal pads, and the original MSI backplate. I tightened the screws and then backed off half a turn (to allow for thermal expansion). I did not mount the cross bracket on the back of the GPU.
> Under load in the Assassin's Creed Origins benchmark, at nominal Wattman settings, my temps are 59C on the GPU and 55C on the HBM at 167 watts and 99% load, but the HOTSPOT is 94C!!!
> Please help me. Sorry, my English is poor.
> I tried to send a photo but it keeps telling me BAD format! I tried GIF, TIFF, PNG, etc.; nothing works.


Use the cross bracket on the back of the GPU; mount it tight and drive the screws all the way in.


----------



## VicsPC

MAMOLII said:


> Use the cross bracket on the back of the GPU; mount it tight and drive the screws all the way in.


Depends on the water block and its instructions. I didn't use the cross bracket on my EKWB and I have better temps than most people, including hotspot temps.


----------



## Ne01 OnnA

Some advise:






More Videos for Vega64 XTX LC:

https://www.youtube.com/user/giantmonkey101/videos


----------



## MAMOLII

VicsPC said:


> Depends on the water block and its instructions. I didn't use the cross bracket on my EKWB and I have better temps than most people, including hotspot temps.


Probably something is wrong; the temps he has for this custom water setup are bad... it comes down to paste contact, or mounting pressure. I have a modded water-cooled Vega with an Arctic Freezer 120 AIO: without the backplate mount I had the GPU at 55C and the hotspot at 86C; with the backplate, GPU 48C and hotspot 71C.
He made a custom cooling setup with a water block, all Barrow parts as he said. That means he doesn't have instructions, it's not a brand-name block that comes well engineered from a good company like EKWB, and it's not mounted with 12 screws (or more) all over the card PCB.


----------



## VicsPC

MAMOLII said:


> Probably something is wrong; the temps he has for this custom water setup are bad... it comes down to paste contact, or mounting pressure. I have a modded water-cooled Vega with an Arctic Freezer 120 AIO: without the backplate mount I had the GPU at 55C and the hotspot at 86C; with the backplate, GPU 48C and hotspot 71C.
> He made a custom cooling setup with a water block, all Barrow parts as he said. That means he doesn't have instructions, it's not a brand-name block that comes well engineered from a good company like EKWB, and it's not mounted with 12 screws (or more) all over the card PCB.


I don't think the hotspot area has direct contact with the water block, but the block helps. I have a backplate on mine and I hit 43°C core / 46°C HBM and 53°C hotspot while playing The Crew 2. A demanding game, but nothing like theHunter or Wildlands. On my EKWB I don't use the cross mount because the instructions didn't say to. Considering my temps are already amazing (it's 30°C ambient here, btw), I don't think adding that cross mount would make any difference.


----------



## 984984

Hi guys,
I tried to overclock my Frontier liquid cooling edition, but I was unable to change anything via OverdriveNTool. Why is that? I am on the RX Vega drivers from April. It's a little odd, because on last year's drivers I was able to change it...
So does someone have a soft power play table for the Vega FE LC? Or is it possible to use the SPPT from the RX Vega 64 LC? I tried to use the table for the air-cooled Vega FE, but the fan on the water block acts weird: no matter what RPM is set in the table, it always stays around 600-1150 RPM...
Thanks for the help!


----------



## y01p0w3r3d

Anyone know what bit I need to remove the screws from the ASUS ROG Strix Vega 56? I'm replacing an old GPU, and of the 2 precision bit sets I have, none of the bits fit in the screw hole to unscrew them.


----------



## y01p0w3r3d

Forgot to mention the card is an ASUS ROG STRIX Radeon RX Vega 56 8GB HBM2 OC Edition. I'm adding an EKWB block and backplate to it for a friend's WC system.


----------



## ZealotKi11er

Anyone know why Vega 64 Liquid has an 8-Pin connector for the pump? I am trying to figure out what each cable is for.


----------



## diggiddi

Speak to EK for info on that, but I don't see instructions to remove that screw in the manual; you only remove the 6 screws shown in step 1.

https://www.ekwb.com/shop/EK-IM/EK-IM-3830046995650.pdf

BTW When you get done let us know how it overclocks under water


----------



## MAMOLII

VicsPC said:


> I don't think the hotspot area has direct contact with the water block, but the block helps. I have a backplate on mine and I hit 43°C core / 46°C HBM and 53°C hotspot while playing The Crew 2. A demanding game, but nothing like theHunter or Wildlands. On my EKWB I don't use the cross mount because the instructions didn't say to. Considering my temps are already amazing (it's 30°C ambient here, btw), I don't think adding that cross mount would make any difference.


Yes, if he (and I) had a full water block from EKWB, hotspot temps should be fine and we wouldn't have to use any bracket... but we don't have such a fine full block. So take off the EK and try a custom block that covers only the GPU; try with the bracket and without, and see... https://www.reddit.com/r/Amd/comments/7n9tp7/rx_vega_56_hotspot_hell_long_post_findings_inside/


----------



## y01p0w3r3d

diggiddi said:


> Speak to EK for info on that but I don't see instructions to remove that screw in manual you only remove 6 screws like it shows in step 1
> 
> https://www.ekwb.com/shop/EK-IM/EK-IM-3830046995650.pdf
> 
> BTW When you get done let us know how it overclocks under water


I will do that. I have the system he dropped off in my room... I found a teardown video for the Vega 64 Strix that has the same backplate layout as the Vega 56. It shows which screws to remove, and it pops right off.


----------



## VicsPC

MAMOLII said:


> yes if he... and I... had a full waterblock from ekwb hotspot temps should be fine and we dont have to use any bracket...but we dont have such a fine full waterblock... so take off the ek and try to put custom block that covers only the gpu... try with bracket and without and see... https://www.reddit.com/r/Amd/comments/7n9tp7/rx_vega_56_hotspot_hell_long_post_findings_inside/


I don't think it affects hotspot temps, but it's possible the sensor sits between the two VRMs, or maybe below the actual core by the resistors. It's still all speculation. A good way to test would be to direct cool air through a small tube over the card and see where it makes the reading drop under load. 

And why would I take off my EKWB? The whole reason I bought a reference unit is knowing EK (or anyone else) wouldn't make a block for AIB Vegas.


----------



## dagget3450

984984 said:


> Hi guys,
> I am tried to overclock my frontier liquid cooling edition but I am was unable to change anything via overdriventool. Why is that? I am on the rx vega drivers from april. It´s a little bit odd because on the drivers from last year, I was able to change it...
> So is there someone who has soft power play table for vega FE LC? Or is it possible to use sppt from rx vega 64 LC? I am tried to use it table for vega FE air cooling, but the cooling fan on water block acting weird. No matter what rpms is set in tables, it´s always stay on 600-1150 something rpms...
> Thanks for help!


As far as I am aware, AMD has removed Wattman access from Vega Frontier. I am not entirely sure, but I think the last driver supporting any settings changes was 17.Q4. AMD has dropped the ball on Vega Frontier, because originally they advertised "game mode" and pro mode. Now it's just Pro mode from what I can see. I have tried many things, but perhaps someone else has better insight.


----------



## Ne01 OnnA

*My own Beast ! Vega 64 XTX LC Limited Ed.*

I'm glad to have my own now 
All games @ Ultra+AA, no problem, on a 1440p 10-bit 74Hz FreeSync monitor (capped to 70FPS in all games)

For unknown reasons I can't update my profile... or add new rigs lol
==


----------



## Rootax

dagget3450 said:


> As far as I am aware, AMD has removed Wattman access from Vega Frontier. I am not entirely sure, but I think the last driver supporting any settings changes was 17.Q4. AMD has dropped the ball on Vega Frontier, because originally they advertised "game mode" and pro mode. Now it's just Pro mode from what I can see. I have tried many things, but perhaps someone else has better insight.


I've an FE, and Wattman is still working here. Gaming mode too. I've got 18.6.1 right now, and OverdriveNTool is still working great...

http://rootax.org/temp/vegafeok.jpg


I just have a reg mod to allow 50+ power limit.


----------



## dagget3450

Rootax said:


> I've a FE, and wattman is still working here. Gaming mode too. i've 18.6.1 right now, and overdriventtool is still working great...
> 
> http://rootax.org/temp/vegafeok.jpg
> 
> 
> I just have a reg mod to allow 50+ power limit.


Something isn't making sense, because OverdriveNTool doesn't work for me. I get errors about unsupported commands. I looked it up, and this was known for Vega Frontier; the last driver mentioned to work is 17.Q4. This is confusing for me because the Pro drivers and Radeon drivers are on different numbering schemes. Either way, I tried both versions of the latest driver with the same result. I also only get the Pro UI, but I know some of this was due to having 2x Vega FE in my system. There is some kind of issue with the driver and multiple Vegas. 

I am running Win10 Pro 1803, I believe....


----------



## Rootax

dagget3450 said:


> Something isn't making sense, because overdriventtool doesn't work for me. I get errors about unsupported commands. I looked it up and this was known for Vega frontier and last driver mentioned to work is 17.q4. This is confusing for me because of pro drivers vs Radeon drivers are on different number sets. Either way I tried both versions of latest driver and had same result. I also only get pro ui but I know some of this was due to having 2x Vega Fe in my system. There is some kind of issue with the driver and multiple Vegas.
> 
> I am running win10 pro 1803 I believe....


Ah yes, I believe gaming mode doesn't support multiple Vegas.

For the UI, it's normal to get the Pro interface if you don't (or can't) go into gaming mode, even with non-Pro drivers. I have the "gaming" interface because of the gaming mode.


----------



## cg4200

Bro, easy peasy fix...
Uninstall the driver in Windows; with 2 FEs it might say error or not complete, so uninstall again.
Turn off the computer. Very important: you can leave your second card installed, just disconnect its 2 PCIe power plugs at the power supply end, which is much easier.
Turn the computer on and install your drivers of choice; I would personally go with 18.5.1 or 18.5.2. After installing, I restart the computer, then turn it off, reconnect the PCIe plugs to the power supply, and turn the computer back on. On old drivers this is where you would have to reinstall drivers, and the option to switch between gaming and pro would appear; it will never appear if you load drivers with TWO FE cards installed. The new drivers rock: no need to switch back and forth. If you go from, say, mining to gaming, just stop your miner and game. Also, I run two Frontiers in multi-GPU gaming with water blocks at 1700/1050; crappy SD2 has no multi-GPU support.


----------



## Jass11

*Help atiflash 0FL01 on VEGA Pulse*

Hello, I have a big problem: I flashed my Sapphire Vega 56 Pulse with a Sapphire Vega 64 Nitro BIOS. It boots and the drivers are OK, but it crashes in games.

Now I'm trying to flash back to my saved Vega 56 Pulse BIOS, and I'm stuck on the atiflash message "0FL01 rom not erased".

I tried the GUI and an admin command prompt with atiflash.exe and atiwinflash.exe > no luck.
-unlockrom 0 > OK, "rom unlocked", but the 0FL01 atiflash problem persists.

I tried booting FreeDOS, and atiflash.exe gives the message "program cannot be run in DOS".
I tried an older atiflash, but in DOS it's not possible to flash a Vega card.

SSID mismatched

I need your help


----------



## Fediuld

Has anyone used the following waterblock with the Vega 64 Nitro?

https://www.aliexpress.com/item/Byk...on-RX-Vega-64-8GB-HBM2-11275/32868393119.html

Apparently it is designed for that specific card.


----------



## y01p0w3r3d

Well, I found more issues with that PC I built not too long ago... when I replaced the built-in TT LCS in the Level 10 GT with an all-EK 240mm kit with rubberized tubing, all went great, no issues. Drained/refilled every 6 months like before, without any issues with temps; nothing odd in the liquid, very clear.

What I did notice is that something happened that he didn't, or isn't, telling me, so I'm waiting for a callback before I go further. The board is a Crosshair V Formula-Z with an FX-8350. Loaded hardware monitoring and the idle temp was ~150°F, which it never was before and definitely wasn't when I built this loop originally. The PC shuts off at around 165°F; it maybe takes a few minutes for this to happen. Q-LED diagnostics halt at CPU. Monitoring did report the GPU temp: Vega 56 @ ~95°F, which is good for idle and under water. I didn't push any OC due to the CPU...


----------



## mtrai

Jass11 said:


> Hello, i have a Big Problem : i have flash my Sapphire VEGA 56 PULSE > Sapphire VEGA 64 Nitro, boot and drivers is ok but crash crash in game.
> 
> Now i try to flashback with my VEGA 56 Pulse bios saved and im stuck on atiflash message "0FL01 rom not erased".
> 
> I try with GUI and cmd admin with atiflash.exe and atiwinflash.exe > ko
> -unlockrom 0 > ok "rom unlocked" but problem persist witch atiflash 0FL01
> 
> I try boot with freedos and i have message for atiflash.exe "programme not launch in dos".
> I try older atiflash but in dos its no possible to flash VEGA card
> 
> SSID mismatched
> 
> I need your help


I have this same issue with the OC BIOS (the third BIOS switch position) on the PowerColor Red Devil Vega 64 I just got. Quite baffled. I was trying to go back to the stock OC BIOS. Any ideas?


----------



## Jass11

I tried today with atiflash_417 from a DOS boot: adapter not found :/


----------



## DiscoSubmarine

mtrai said:


> I have this same issue with my OC bios third bios switch on my Powercolor Red Devil Vega 64 I just got. Quite baffled. Was trying to go back to the Stock OC bios. Any ideas?


ATIFlash 2.77 doesn't work on Windows 10 version 1803 (the latest version); make sure you're using the latest version of ATIFlash, which supports Win 10 1803.

Also, the OC BIOS (rightmost switch position) on my Vega 56 Red Devil is write-protected (or at least I assume so, as I couldn't flash it, but I could flash the middle BIOS). It's probably the same on the 64; are you sure the OC BIOS is the one you flashed?

You might also need to run ATIFlash with -f if you're getting an SSID mismatch.
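For anyone following along, a typical recovery sequence with ATIFlash looks roughly like this. This is a sketch from memory, not a tested procedure: the adapter index 0 and the ROM filename are assumptions, so check your index with -i first and use your own backup file:

```shell
# List adapters and note the index of the Vega (assumed to be 0 below).
atiflash -i

# Back up whatever is currently on the chip before touching anything.
atiflash -s 0 backup.rom

# Unlock the ROM, then force-program past the SSID mismatch check.
atiflash -unlockrom 0
atiflash -f -p 0 vega56_pulse_stock.rom
```

Run from an elevated command prompt; the -f force flag is exactly what bypasses the "SSID mismatch" refusal discussed above, so double-check you are pointing at the right adapter and ROM before using it.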


----------



## mtrai

DiscoSubmarine said:


> ATIFlash 2.77 doesn't work on windows 10 version 1803 (latest version). make sure you're using the latest version of ATIFlash which supports win 10 1803.
> 
> also, the OC bios (rightmost switch position) on my Vega 56 Red Devil is write protected (or at least i assume so as i couldn't flash it, but i could flash the middle bios). it's probably the same on the 64, are you sure the OC bios is the one you flashed?
> 
> you might also need to run ATIFlash with -f if you're getting SSID mismatch.


I am using ATIFlash 2.84, which works in Win 10 and Win 7. I have used every command-line switch I could think of. It is like this BIOS is write-protected. I should have stated that I am getting the "could not erase rom" error message with the same error code. -unlockrom is a no-go as well. 

Yeah, the OC position is the one I flashed, but it now seems to be write-protected as you mentioned. It must be a BIOS thing, in that the original Red Devil OC BIOS is not protected but the PowerColor Vega 64 Liquid Cooled BIOS is write-protected. I did not actually notice until after I flashed it that the Red Devil V64 OC BIOS has higher TDP limits than the LC one. My OC BIOS is the rightmost switch... remember, I have 3 BIOS positions on my card... I guess I could use the middle one to get the original OC BIOS back on my card lol.


----------



## mtrai

Hey guys... just wanted to share something... I know this is a known thing... I finally got around to adding aux fans again after getting my new Vega 64, like I used to do on my two "red modded" RX 580s. Anyhow, I tried a different extra-fan placement, based on some thermal-imaging pics I had seen of the Vega 64; you know, the ones that show the hot spots. 

Well, I only placed one fan, pointed at the backplate about 2/3 down from where the heatsink bracket is. It dropped my temps 6 degrees Celsius at idle and under full load. Right now it is not pretty, as it is just an old CPU cooling fan I had lying around. Just thought I would share.

Here is a pic showing where I placed it. 



Spoiler


----------



## Worldwin

So I just got my V64 Nitro+ and I can't figure out how to get the frequencies working. I have the dynamic frequency set at 1560 for P7 and 1550 for P6, yet the actual frequency in Heaven is around 1490-1500MHz. Does anyone know why this is happening?


----------



## Ne01 OnnA

*Here my Light OC*

Light OC for More Demanding Games:

AC: Origins, a 20 min. session to measure temps & clocks.
Also I have added an additional fan to the front of the Liquid (now it is in push-pull configuration) ||>>>--||-->>>


----------



## hyp36rmax

Here's my VEGA 64 Limited Edition with an EK Waterblock


----------



## Maracus

Worldwin said:


> So i just got my V64 Nitro+ and I cant figure out how to get the frequencies working. I have dynamic frequency set at 1560 p7 and 1550 p6 and the actual frequency in Heaven is around 1490-1500mhz. Does anyone know why this is happening?


Yep, Vega is an interesting card. I have the ASUS Strix 56 and mine generally runs 40-50MHz lower than the P7 state of 1590MHz. If you're not too worried about power, just increase the power limit by 50%; that should allow you to boost higher.

I currently have it set at P6 1600/1035 and P7 1650/1050, which actually boosts to only 1600-1625MHz at 1.0V.

Sadly it has Hynix RAM, which struggles to reach 900MHz.


----------



## Ne01 OnnA

The P6-P7 clocks are not exact, and bouncing in MHz occurs? 
Here is why:

This behaviour is just a marker for Vega's boost algorithm.
If temps/PSU/other variables are in order, then you will end up near the max P7 MHz marker.

Here -> 20 minutes of AC: Origins gameplay in Alexandria (yup, a heavy workload at Ultra High settings with AA & ReShade)

I have an additional fan for the Vega, so now it's a push-pull config.

==


----------



## Ne01 OnnA

A new tweak add-on is incoming for our RadeonMOD registry utility:

Time saver <- yes, now you get the defaults the way you want them (I have >30 games with profiles in Adrenalin; no more time-consuming tweaking)

Chill-specific values for a target framerate (usually good for FreeSync gaming):

[HKEY_LOCAL_MACHINE\SOFTWARE\AMD\Chill]
"ChillLevelDefault"=dword:00000002
"MaxFramerateDefault"=dword:00000046
"MinFramerateDefault"=dword:00000040
"MaxFramerateRange"=dword:00000064
"MinFramerateRange"=dword:0000001e
"ProfileEnableDefault"=dword:00000000

CN response time:

[HKEY_LOCAL_MACHINE\SOFTWARE\AMD\CN]
"PreloadDelay"=dword:000000c8
"UnloadDelay"=dword:000000c8

==
dword:00000046 = 70
dword:00000040 = 64
dword:0000001e = 30

So we have the FreeSync values set to: 
Min. 30Hz/FPS
Max. 70Hz
Chill range: 64-70FPS

UPD. Working great with AC: O and other games.
Note: do not use Chill for first-person multiplayer shooters; you want a fixed max there! (I have 70Hz for BF1)
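The dword values in those registry keys are just hex; a small sketch (the helper name is mine, not part of RadeonMOD) to decode them:

```python
# Decode .reg-style hex dwords from the AMD Chill keys quoted above.
def dword(raw: str) -> int:
    """Parse a 'dword:00000046'-style string into an int (here, 70)."""
    return int(raw.split(":")[1], 16)

chill = {
    "ChillLevelDefault":   "dword:00000002",
    "MaxFramerateDefault": "dword:00000046",  # 70 FPS
    "MinFramerateDefault": "dword:00000040",  # 64 FPS
    "MaxFramerateRange":   "dword:00000064",  # 100 FPS
    "MinFramerateRange":   "dword:0000001e",  # 30 FPS
}
for name, raw in chill.items():
    print(f"{name} = {dword(raw)}")
```

The same conversion covers the CN keys too: the `PreloadDelay`/`UnloadDelay` value of `dword:000000c8` is 200.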


----------



## Worldwin

So, to note some changes between the Nitro+ LE and the SR, I believe there are three: the change from a vapour chamber to a standard heat plate, the removal of the third 8-pin and its associated components, and lastly two fewer VRM phases.


----------



## ZOONAMI

Worldwin said:


> So, to note some changes between the Nitro+ LE and the SR, I believe there are three: the change from a vapour chamber to a standard heat plate, the removal of the third 8-pin and its associated components, and lastly two fewer VRM phases.


Just picked up an SR. Initial OC attempts not going well. Timespy at stock not even getting close to 1630 advertised boost. Stays at more like 1500mhz. 

Even a modest +50 memory boost seems to be immediately crashing the driver and resetting stock values.

Adding 7% frequency boost does get it around 1600 and temps are fine.

At stock though it's doing a 7400 or so timespy score, which actually seems like a good score for a Vega 64.

Perhaps some memory OC would be more stable in games but I typically use 3d mark to test for stability. 

Also are clock rates generally higher in games than in 3dmark? Haven't had time for any gaming yet just put in the card last night.

Will do some more testing but I'd rather not have to RMA.

I also think it's possible this is PSU-related; my EVGA 850 Bronze is getting kind of old, and I previously had some power-related shutdown issues with my old 1070 SLI setup, though that seemingly resolved itself. I may try a 1000W Gold to see if it helps the clocks.


----------



## Worldwin

ZOONAMI said:


> Just picked up an SR. Initial OC attempts not going well. Timespy at stock not even getting close to 1630 advertised boost. Stays at more like 1500mhz.
> 
> Even a modest +50 memory boost seems to be immediately crashing the driver and resetting stock values.
> 
> Adding 7% frequency boost does get it around 1600 and temps are fine.
> 
> At stock though it's doing a 7400 or so timespy score, which actually seems like a good score for a Vega 64.
> 
> Perhaps some memory OC would be more stable in games but I typically use 3d mark to test for stability.
> 
> Also are clock rates generally higher in games than in 3dmark? Haven't had time for any gaming yet just put in the card last night.
> 
> Will do some more testing but I'd rather not have to RMA.
> 
> I also think it's possibly this is PSU related, evga 850 bronze is getting kind of old and I have previously had some power related shut down issues with my 1070 sli previous set up, but that seemingly resolved itself. I may try a 1000w gold to see if it helps the clocks.


As others will suggest, lower the voltage. You can probably lower P6/P7 by 100mV and get it to boost higher, since it will have more power and temperature headroom.


----------



## ZOONAMI

Worldwin said:


> ZOONAMI said:
> 
> 
> 
> Just picked up an SR. Initial OC attempts not going well. Timespy at stock not even getting close to 1630 advertised boost. Stays at more like 1500mhz.
> 
> Even a modest +50 memory boost seems to be immediately crashing the driver and resetting stock values.
> 
> Adding 7% frequency boost does get it around 1600 and temps are fine.
> 
> At stock though it's doing a 7400 or so timespy score, which actually seems like a good score for a Vega 64.
> 
> Perhaps some memory OC would be more stable in games but I typically use 3d mark to test for stability.
> 
> Also are clock rates generally higher in games than in 3dmark? Haven't had time for any gaming yet just put in the card last night.
> 
> Will do some more testing but I'd rather not have to RMA.
> 
> I also think it's possibly this is PSU related, evga 850 bronze is getting kind of old and I have previously had some power related shut down issues with my 1070 sli previous set up, but that seemingly resolved itself. I may try a 1000w gold to see if it helps the clocks.
> 
> 
> 
> As others will suggest lower the voltage.You can probably lower the P6/P7 by 100mV and get it to boost higher since it will have more power and temperature overhead.
Click to expand...

Is it normal to not boost what it is supposed to boost to at stock though?


----------



## Worldwin

ZOONAMI said:


> Is it normal to not boost what it is supposed to boost to at stock though?


Vega is unable to hit those advertised clocks. Running FC5 at stock on Balanced, I had an average frequency of around 1550MHz, well below the set 1620 and the advertised 1580. Since the Nitro+ is air-cooled, you should expect it to run around 40-50MHz lower than what you set as P6/P7.
I recommend you use OverdriveNTool and have it set to run as admin. If you right-click near the label at the top, you can access the SoftPowerPlay table and adjust the P0-P5 states, saving them in the registry. Much more convenient.
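The "drop P6/P7 by ~100mV" advice earlier in the thread can be pictured on a toy P-state table. The MHz/mV pairs below are typical reference-card ballpark values for illustration, not from any specific BIOS:

```python
# Toy Vega P-state table: state -> (clock MHz, voltage mV). Values are illustrative.
states = {"P6": (1537, 1150), "P7": (1632, 1200)}

def undervolt(table: dict, mv_drop: int) -> dict:
    """Lower each state's voltage by mv_drop; the clocks stay put, and the
    freed power/thermal headroom is what lets the boost algorithm hold
    actual clocks closer to the P7 target."""
    return {p: (mhz, mv - mv_drop) for p, (mhz, mv) in table.items()}

print(undervolt(states, 100))
# {'P6': (1537, 1050), 'P7': (1632, 1100)}
```

This is the whole trick: you are not raising the clock targets, you are lowering the voltage so the card stops throttling away from the targets it already has.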


----------



## ser_renely

Hi,

I just received a Powercolor Red Devil 56, with Samsung HBM. I have a couple of questions:

My main question is about flashing the Red Devil 64 BIOS to get more HBM voltage. I have read mixed reports on whether you can flash a non-reference card, and I haven't seen anyone confirm they have done it successfully. I tried, but WinFlash gives me the error "subsystem ID mismatch", so obviously something is not right. I have the card's BIOS switch in the middle position, on STD, and I have Samsung HBM memory. I got the 64 BIOS from TechPowerUp, and actually tried both Red Devil ones on there, in case the first one was for Hynix memory or something like that. Anyhow, it didn't work for me, and I didn't want to force it from DOS until I get confirmation that someone has done it successfully. Is this possible? 

My second question is how much memory voltage I should use for the HBM? I am currently at 950mV and 975MHz.


Thanks,
Ser


----------



## Ne01 OnnA

ser_renely said:


> Hi,
> 
> I just received a Powercolor Red Devil 56, with Samsung HBM. I have a couple of questions:
> 
> My main question was about flashing the BIOS to the Red Devil 64 to get more HBM voltage. I have read mixed reports if you can flash a non-reference card, and haven't seen anyone confirm that they have done it successfully. I tried but WinFlash gives me an error, "subsystem ID mismatch". So, Obviously something is not right. I have the cards BIOS switch in the middle position, on STD. I have Samsung HMB memory. I got the 64 BIOS from Techpower, and actually tried the two Red devil ones on there...in case the first one was for Hynix memory or something like that. Anyhow, it didn't work for me and I didn't want to force it with DOS until I get a confirmation that someone has done it successfully. Is this possible?
> 
> MY second question is about how much memory voltage I should use for HBM? I am currently at 950mV and 975MHz
> 
> 
> Thanks,
> Ser


Samsung HBM2 is a great overclocker.
Maybe you can go up to the 1000-1100 marker, then UV and set your P5-P7 accordingly.
Test, then play an actual game.

Greets.

PS. I'm now testing 1175 & 1200MHz HBM2.
So far, 1175 at 975mV (that's the Infinity Fabric voltage, not the HBM2 one; that is set to 1.356V).
Yes, 1175 and not a single artifact in FC4/5 and AC: O.

Here is my easy OC for long gaming sessions 
~50 min in AC: O doing quests:


----------



## ser_renely

Ne01 OnnA said:


> HBM2 made by Samsung are Great Overclocker.
> Maby you can go up to 1000-1100 marker, then UV and Set your P5-P7 acordingly.
> Test then Play some actual game.
> 
> Greets.
> 
> PS. Im now Testing 1175 & 1200MHz HBM2
> So Far 1175 at 975mV (It's Infinity Fabric Voltage, not HBM2 one this is set to 1.356v)
> Yes 1175 and no single Artifact in FC4/5 and AC: O
> 
> Here my Easy OC for long session Gaming
> ~50min in AC: O making Quests:



Thank you. 



I will be doing more testing/tweaking as time allows. Right now I am at 1100mV, +50% power limit, 1632MHz core @ 60C... I'm fairly new at this, so I'm being patient.



Do you know if it can be flashed with the 64 BIOS?


----------



## ZOONAMI

So Hynix memory sucks. Dammit. Should have gotten the Red Dragon, but it's just so ugly.


----------



## Ne01 OnnA

*New 3D mark sick score*

~28k ballpark (I'm limited by the PSU, and a 2700X will be better for sure, so on my Shadow of the Beast rig 28,500-29,000 pts in 3DMark is possible)
BTW I will not change anything else; the one upgrade ahead of me is 32GB of 3800-4133 CL18/CL19 RAM

Also I have made some new tweaks using OverdriveNTool (by Guru3D user #Tede).
Some additional UV for cool & quiet long gaming sessions (and this one beats my last 3DMark lol)

4018MHz CPU | 3082MHz CL15 RAM LLT

Shadow of the Beast by Vega XTX

-BTW FreeSync is not affecting the score, but it is affecting IQ-

UPD. Yes, I'm using the RX_VEGA_64_AIO_Soft_PP table loaded into OverdriveNTool (fans working OK with this setup)

-> http://hwbot.org/search/submissions


----------



## ZOONAMI

Alright, making some progress here. I am pretty sure I am PSU-limited; I may need to back off my CPU OC or get a better PSU. It doesn't like it if I push more than 20% additional power, even with an undervolt, so around 300W on an 850W Bronze with an 8700K at 5.1GHz. But I'm getting close to stable, holding around 1600MHz actual clocks and 1000MHz HBM.


----------



## mickeykool

Question, guys: if I enable Enhanced Sync in the global settings, do I still need to enable VSync in the game menu? I'm in the process of getting a FreeSync 2 monitor, so right now I'm using a standalone 144Hz monitor.


----------



## Trender

Ne01 OnnA said:


> ~28k ballpark (I'm limited by PSU & 2700X will be better for sure, so on my Shadow of The Beast 28.500-29.000 pts. in 3Dmark is possible)
> BTW I will not change any , one upgrade that is before me is 32GB 3800-4133 RAM CL18/CL19
> 
> Also i have made some new Tweak using OverdriveN Tool (by Guru3D user #Tede).
> Some additional UV for Cool&Quiet Long Gaming sessions (and this one beats my last 3Dmark lol)
> 
> 4018MHz CPU | 3082MHz CL15 RAM LLT
> 
> Shadow of The BeasT by Vega XTX
> 
> -BTW Freesync is not affecting score, but it's affecting IQ-
> 
> UPD. Yes i'm using Vega RX_VEGA_64_AIO_Soft_PP Loaded into OverdriveN Tool (Fans working OK with this setup)
> 
> -> http://hwbot.org/search/submissions



My gaming undervolt:
3600MHz CL16


----------



## Martin778

How hot does your Vega 64 LC run, guys? Mine goes up to 70C with ~25°C ambient... hot like fire. 
When I unlock the power limit, it soaks up almost 400W and reports a peak hotspot temp of 103-105°C, what the!


----------



## LicSqualo

*Undervolt your VGA*



Martin778 said:


> How hot does your Vega64 LC run guys? Mine up to 70c with ~25*C ambient...hot like fire.
> When I unlock the power limit, it soaks almost 400W and reports peak 103-105*C hotspot temp, what the!


Hi Martin, have you undervolted your card?
This is my OverdriveNTool setup.
No power limits, and a more aggressive fan setup. I never reach 100°C on the hotspot; 80°C at most.
My GPU speed is around 1600-1700MHz when playing games, depending on how much resource the game requires.
Today my ambient temp is 34°C, just to compare.

No site upload allowed today.



And this is my Firestrike run, with the max readings for GPU speed and hotspot temp in evidence (as you can see in GPU-Z).


----------



## Ne01 OnnA

@LicSqualo
UV a little more, to 1081-1100mV, and also give the HBM2/IF a max of 985mV at 1125MHz.
Then test again; you should easily hit >26k in GPU score.

25,500 up to >26k at 1720MHz/1150, +1% power, is doable on my setup with a 4018MHz Zen.


----------



## miklkit

Question: I have a chance to get an XFX RX Vega 56, model RX-VEGALDFF6.
Reviews are bad, saying it runs very hot. Should I keep looking?


----------



## diggiddi

miklkit said:


> Question: I have a chance to get an XFX RX Vega 56
> 
> 
> Model: RX-VEGALDFF6
> Reviews are bad saying it runs very hot. Should I keep looking?


WRT AIB cards, I'd say go Sapphire Nitro or bust.


----------



## bloot

I got a Nitro+ last week and this card keeps temps really cool

https://www.3dmark.com/fs/15842880


----------



## Martin778

LicSqualo said:


> Hi Martin, have you undervolted your VGA?
> This is my Overdrive NTool setup.
> NO power limits. More aggressive fan setup. And I never reach 100°C in hotspot T. At max 80°C.
> My GPU speed is around 1600-1700 MHz when play at games. Dependng on how much resource the game require.
> Today my T ambient is 34°C. Just to compare.
> 
> No site upload allowed today.
> 
> And this is my Firestrike run with max readings in evidence for GPU speed and Hotspot temp (as you can see in GPU-Z.



Thanks, I will take a look at it. I haven't undervolted it yet; I've noticed that increasing the power limit by 50% causes almost 400W of draw. Without touching anything it's <300W.
Apparently the Nvidia trick of "just unlock the power limit and let it fly" doesn't apply to Vega.


----------



## LicSqualo

Martin778 said:


> Thanks, will take a look at it. I haven't undervolted it yet; I've noticed that increasing the power limit by 50% causes almost 400W draw. Without touching anything it's <300W.
> Apparently the Nvidia trick of "just unlock the power limit and let it fly" doesn't apply to Vega.


 NVIDIA... 
On my side (AMD), I've noted (in winter) that this simple "rule" holds true only with a very low ambient temperature and a good case (optimal airflow).
Then I can simply push the power limit to the max as you described and get better results. Today, with 36°C ambient, I'm more conservative: 
I love my hardware 



Ne01 OnnA said:


> @LicSqualo
> UV a little more, to 1081-1100mV; also give HBM2/IF max 985mV at 1125MHz.
> Then test again, you should easily hit 26k+ in GPU score.
> 
> 25,500 up to >26k at 1720MHz/1150, 1% POW, is doable on my setup with Zen at 4018MHz.


Thank you for your suggestions. I will try some parameters later.
In my configuration, my limit of 1080MHz for HBM2 is due to testing in The Witcher 3 (which I play now and then), as it is more sensitive to graphics artifacts than other tests (3DMark, for example).
But I never go below my 1000mV setting (not the best, I know, but it was so simple to put in 1000 and search for my max speed... and my temperatures were good).


----------



## Newbie2009

Ne01 OnnA said:


> @LicSqualo
> UV a little more, to 1081-1100mV; also give HBM2/IF max 985mV at 1125MHz.
> Then test again, you should easily hit 26k+ in GPU score.
> 
> 25,500 up to >26k at 1720MHz/1150, 1% POW, is doable on my setup with Zen at 4018MHz.


My best graphics score with Vega is 26002, at 1750/1165.


----------



## majestynl

Martin778 said:


> Thanks, will take a look at it. I haven't undervolted it yet; I've noticed that increasing the power limit by 50% causes almost 400W draw. Without touching anything it's <300W.
> Apparently the Nvidia trick of "just unlock the power limit and let it fly" doesn't apply to Vega.


Upping the power limit definitely works with AMD Vega cards too. Maybe you have a temp bottleneck.

Mine is on a waterblock with the AIO BIOS, a SoftPowerPlayTable registry file, and 150% power.


----------



## Martin778

I am giving up on this card; even with an undervolt (266W max) I hit 90°C hotspot / 70°C core / 63°C liquid temp. Will probably trade it for the Nitro+.


----------



## mtrai

Just an FYI when testing your overclocks, undervolts, etc.: use Time Spy rather than Firestrike, as Time Spy is more GPU intensive. For example, I can run Firestrike all day long with no issues or artifacts; however, Time Spy will crash or freeze if the GPU is not stable.

Also, you will notice that Time Spy will actually push your core clocks higher than Firestrike and maintain higher overall clocks.

/edit Time Spy will also lock up pretty fast with unstable RAM overclocks... so you have to check that as well.


----------



## bloot

Time Spy Vega 64 Nitro+ loving this card so much 

https://www.3dmark.com/spy/3988064


----------



## Higgenbobber

I recently got a Vega 64 and finished my whole undervolting/overclocking process. It used to idle at ~950mV and go up to 1075mV under load when playing games (which is what I want). However, now whenever my card is idle it sits at a static 1100mV, which is silly because when I launch a game it will actually downvolt to 1075mV, my preferred gaming voltage.

I can't figure out why it no longer downvolts to the 850-950mV range during idle. I believe it has to do with my recent installation of the ASUS GPU Tweak II tool, because it coincided with this problem. I've completely uninstalled ASUS GPU Tweak II but I still get this problem.

It's a strange problem, because if I boot my computer on the default balanced Wattman settings, it will actually be in the 950mV range during idle. But as soon as I change the voltage to ANYTHING different in Wattman, the core voltage becomes 1100mV and stays there until I put the GPU under load. And if I revert back to the balanced settings, it won't go back to the 950mV idle downvolt until I restart.

This is really bothering me, and any help would be greatly appreciated.


----------



## ManofGod1000

Higgenbobber said:


> I recently got a Vega 64 and finished my whole undervolting/overclocking process. It used to idle at ~950mV and go up to 1075mV under load when playing games (which is what I want). However, now whenever my card is idle it sits at a static 1100mV, which is silly because when I launch a game it will actually downvolt to 1075mV, my preferred gaming voltage.
> 
> I can't figure out why it no longer downvolts to the 850-950mV range during idle. I believe it has to do with my recent installation of the ASUS GPU Tweak II tool, because it coincided with this problem. I've completely uninstalled ASUS GPU Tweak II but I still get this problem.
> 
> It's a strange problem, because if I boot my computer on the default balanced Wattman settings, it will actually be in the 950mV range during idle. But as soon as I change the voltage to ANYTHING different in Wattman, the core voltage becomes 1100mV and stays there until I put the GPU under load. And if I revert back to the balanced settings, it won't go back to the 950mV idle downvolt until I restart.
> 
> This is really bothering me, and any help would be greatly appreciated.


System restore to before the Asus software install.


----------



## Higgenbobber

Well, I don't know how, but it's fixed. I had tried system restores and DDU reinstallations of drivers; none of that worked.

But for some reason, I ran a Time Spy stress test, and by the time it finished my core voltage was at the expected values, hovering around 762mV. I didn't even restart the computer or anything. Everything seems to be fine now. No idea why it took a stress test to fix it.


----------



## Newbie2009

I miss having Crossfire.


----------



## sinnedone

Newbie2009 said:


> I miss having Crossfire.


Is it a money situation or are you saying that you can't crossfire these specific cards for some reason?


----------



## Newbie2009

sinnedone said:


> Is it a money situation or are you saying that you can't crossfire these specific cards for some reason?


Just Vega being a power hog. The first card blew up my old 1200W PSU; I bought an 860W replacement, which isn't really enough to Crossfire 2 Vegas.
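A rough sketch of the headroom arithmetic behind that; the per-GPU and rest-of-system wattages below are illustrative assumptions, not measurements:

```python
# Back-of-envelope PSU headroom: two power-modded Vegas can transiently
# pull ~400 W each, so an 860 W unit leaves little margin for the rest
# of the system. All numbers here are illustrative assumptions.

def psu_headroom_w(psu_w, gpu_w_each, n_gpus, rest_of_system_w=150):
    """Watts left over after the GPUs and the rest of the system."""
    return psu_w - (n_gpus * gpu_w_each + rest_of_system_w)

print(psu_headroom_w(860, 400, 2))  # -90: over budget before any derating
```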


----------



## THUMPer1

I want to buy a Vega. It looks like a few MSI v56 blowers are the cheapest, but I hear those are trash. What's bad about them? Cooling sucks? HBM I think is hynix, so no flashing. Anything else?


----------



## Stefy

Newbie2009 said:


> Just Vega being a power hog. The first card blew up my old 1200W PSU; I bought an 860W replacement, which isn't really enough to Crossfire 2 Vegas.


Idk what you're doing on your computer if you manage to blow up a 1200W PSU with only one GPU. It sure as heck ain't the card's fault, that's all I know.



THUMPer1 said:


> I want to buy a Vega. It looks like a few MSI v56 blowers are the cheapest, but I hear those are trash. What's bad about them? Cooling sucks? HBM I think is hynix, so no flashing. Anything else?


The cooling is trash. That's about it, really.


----------



## Newbie2009

Stefy said:


> Idk what you're doing on your computer if you manage to blow up a 1200W PSU with only one GPU. It sure as heck ain't the card's fault, that's all I know.
> 
> The cooling is trash. That's about it, really.


Yeah it was an old psu


----------



## Kyozon

Hello fellow Vega owners.

What are the max GPU core clocks you have been seeing with Frontier Editions? 


Thanks in advance.


----------



## STEvil

1400/945 so far, just fan maxed out. Room temps are about 30c at the moment.

OverdriveNTool is broken in 18.7.1, and there's no way to enable Wattman for Radeon Pro that I know of; also, I've only been playing with it for the last 2 hours...

edit - this is while mining, though. Gaming runs up to 1600.


----------



## Conenubi701

Hey guys, so we still can't BIOS edit on custom cards like the ROG Strix right?

I know on the reference models we can't run an unsigned BIOS, but I can't find any real answers for the Strix Vega 64.


----------



## Ne01 OnnA

Just a quick test with 18.7.1

Drivers Test: OK!

==


----------



## AngryLobster

Anyone here with a recently purchased Strix Vega 64? What is your bios version and can you upload it?

The only one I can find is back from when the card launched in September which matches mine.

This is the most ghetto card I have ever used. Changing anything in Wattman sends the card haywire; when I try to undervolt, it just locks core/HBM to 1300/800 until I reset.

My reference Vega does not act like this.


----------



## Maracus

AngryLobster said:


> Anyone here with a recently purchased Strix Vega 64? What is your bios version and can you upload it?
> 
> The only one I can find is back from when the card launched in September which matches mine.
> 
> This is the most ghetto card I have ever used. Changing anything in Wattman sends the card haywire; when I try to undervolt, it just locks core/HBM to 1300/800 until I reset.
> 
> My reference Vega does not act like this.


Strix Vega 56 here. I don't use Wattman; try using OverdriveNTool, it's much better to work with.


----------



## AngryLobster

Yeah just tried OverdriveNTool and same result. This makes absolutely no sense. As soon as I adjust P6/P7 voltage, the card just drops to 1262/800 and gets stuck @ 1.05v (even at idle) until I reboot.

I don't understand why it's doing this when all my other Vega cards work fine with wattman.

Anyone out there with a Strix Vega 64 who has successfully undervolted? I feel like this is a BIOS problem, or maybe I'm the first person on earth to attempt undervolting it. The secondary BIOS, which is supposed to be clocked lower with a different fan profile, is identical to the original BIOS.


----------



## Kasaris

AngryLobster said:


> Yeah just tried OverdriveNTool and same result. This makes absolutely no sense. As soon as I adjust P6/P7 voltage, the card just drops to 1262/800 and gets stuck @ 1.05v (even at idle) until I reboot.
> 
> 
> 
> I don't understand why it's doing this when all my other Vega cards work fine with wattman.
> 
> 
> 
> Anyone out there with a Strix Vega 64 who has successfully undervolted? I feel like this is a BIOS problem, or maybe I'm the first person on earth to attempt undervolting it. The secondary BIOS, which is supposed to be clocked lower with a different fan profile, is identical to the original BIOS.




I have a Strix RX Vega 64. Currently have it at +50% power and undervolted to 1000mV. I have the HBM OC'd to 1050MHz. I left the P6/P7 clocks at stock settings.


----------



## Worldwin

How are your HBM temps? I find mine around 12°C above my core and about equal to my hotspot. My die is unmolded, and that seems to be a factor.
On my Nitro+ V64: core 60°C, mem 72°C, and hotspot 72°C. I have it undervolted to 1550MHz/0.943V with HBM at 1050MHz. In FC5 it's around 1500MHz in game, and temps change depending on ambient.
My memory temperatures just feel really high relative to the core, and so close to the hotspot.


----------



## AngryLobster

Kasaris said:


> I have a Strix RX Vega 64. Currently have it at +50% power and undervolted to 1000mV. I have the HBM OC'd to 1050MHz. I left the P6/P7 clocks at stock settings.


85 DDUs in safe mode later, I got it working. I have no idea what the problem was, but I have it @ 1462/1025 with ~170W GPU power, so I'm satisfied.


----------



## STEvil

Ne01 OnnA said:


> Just a quick test with 18.7.1
> 
> Drivers Test: OK!
> 
> ==


How do you guys get NTool working with 18.7.1? It doesn't work for me unless I use the registry edit part, and that only works for HBM clocks for me.

edit

I see, I'm using the wrong version... will play with 0.2.7b4 tomorrow.

edit 2

nope, still getting errors.


----------



## Ne01 OnnA

STEvil said:


> How do you guys get NTool working with 18.7.1? It doesn't work for me unless I use the registry edit part, and that only works for HBM clocks for me.
> 
> edit
> 
> I see, I'm using the wrong version... will play with 0.2.7b4 tomorrow.
> 
> edit 2
> 
> nope, still getting errors.


ask Tede
-> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/


----------



## OsmiumOC

Woohoo, finally I did the right thing and got an RX Vega card! 

I actually sold my 1080 Ti to switch to a Vega 64 (I don't play at 4K, so the 1080 Ti was overkill), and with the overhead I stepped up to Threadripper. Thanks to the miners, I sold that 1080 Ti for more money than I paid new back in April 2017. 

I bought a reference PCB with a free crappy air cooler on top and replaced that with a waterblock. Then I flashed the LE BIOS on it, and dear god, what have I unleashed. For a baseline I left the voltage at the stock 1.25 and just raised the PL a modest +25%. The result was a cool and bored Vega 64 in Firestrike and Heaven benchmarks. Sipping 300W, the core (~1720MHz, no OC yet) was capping out at 47°C, the hotspot temp peaked at 65°C, and the HBM (1015MHz) at 48°C. Ambient temp was 24°C. 
I tried undervolting, but my card does not like that at all. Tried 1.2 instead of 1.25, since even with a 300W PL it rarely went up to 1.2V, but that is an instant crash. 

Any experience on how safe the stock voltage is with the full +50% power limit? Temps seem fine to me, and I think I can give it that extra power, but what is the Vega electron immigration policy on crowded borders? I come from the dark green side, where they already freak out if you want to try and overvolt to 1.2V...


----------



## AngryLobster

I did the same thing (1080 Ti to Vega) since Freesync/Gsync makes them basically indistinguishable in most titles. There are still some games where Vega just performs like ass in comparison, like AC: Origins and Rise of the Tomb Raider.


----------



## OsmiumOC

Overclocking Vega is a bit tricky; it has its own mind. 

Does anyone else have trouble getting the core clock to where you actually set it? Mine seems to ignore what I put in Wattman, or it tries to behave and then goes on a rampage. For example, I use the stock 1750MHz and the stock voltage of 1.25V. In the Valley benchmark it runs for a minute while holding 1740-1760MHz. Then GPU-Z shows a sudden spike to over 1800MHz mid scene (not between scene changes), with a correlating voltage spike from 1.19V to 1.23V, and it results in a crash. 

Now I think maybe the stock settings are not stable. This card was never binned to run the LE BIOS, so I set it to 1600MHz, still 1.25V for stability. Result: 1620-1640MHz, passes one more scene than before, and again crashes mid scene. The GPU-Z log shows AGAIN a sudden spike up to 1800MHz (200MHz out of nowhere), and the voltage spikes this time from 1.11V to 1.23V. I have no idea why it does that. Could it be an issue with the lower power states? But why would that show up under a full-load bench? 

Tested that with HBM at 1050, 1025, and the stock 950. All the same result. I don't think the HBM is unstable.


----------



## Conenubi701

AngryLobster said:


> Anyone here with a recently purchased Strix Vega 64? What is your bios version and can you upload it?
> 
> The only one I can find is back from when the card launched in September which matches mine.
> 
> This is the most ghetto card I have ever used. Changing anything in Wattman sends the card haywire; when I try to undervolt, it just locks core/HBM to 1300/800 until I reset.
> 
> My reference Vega does not act like this.


What vBios do you have? I recently purchased a Strix and it's been running fine.


----------



## DiscoSubmarine

OsmiumOC said:


> Overclocking Vega is a bit tricky; it has its own mind.
> 
> Does anyone else have trouble getting the core clock to where you actually set it? Mine seems to ignore what I put in Wattman, or it tries to behave and then goes on a rampage. For example, I use the stock 1750MHz and the stock voltage of 1.25V. In the Valley benchmark it runs for a minute while holding 1740-1760MHz. Then GPU-Z shows a sudden spike to over 1800MHz mid scene (not between scene changes), with a correlating voltage spike from 1.19V to 1.23V, and it results in a crash.
> 
> Now I think maybe the stock settings are not stable. This card was never binned to run the LE BIOS, so I set it to 1600MHz, still 1.25V for stability. Result: 1620-1640MHz, passes one more scene than before, and again crashes mid scene. The GPU-Z log shows AGAIN a sudden spike up to 1800MHz (200MHz out of nowhere), and the voltage spikes this time from 1.11V to 1.23V. I have no idea why it does that. Could it be an issue with the lower power states? But why would that show up under a full-load bench?
> 
> Tested that with HBM at 1050, 1025, and the stock 950. All the same result. I don't think the HBM is unstable.


boosting past max frequency is quite odd. typically (air cooled) vega cards don't even want to boost all the way to p7. maybe it's a quirk with the LE bios?

what happens if you lower the voltage?
try something like 1577mhz/1050mv and 1632mhz/1100mv for p6/p7 respectively. it's a mild undervolt that my card is happy to run, tack on 50mv if you have stability problems.

if that works then maybe it just doesn't like having 1.25v shoved into it. who knows if the reference pcb was designed to allow that.

i've also heard of glitches that cause the card to (falsely) report 1800+ mhz clock speeds, so you could have driver issues as well.
also, i believe the LE bios just shuts the card off if you hit 75C as a safety measure so check your thermals (and make sure you have cooling on the VRMs).

it's probably worth mentioning that the stock bios will get you 330 watts with maxed out power limit, and even more with power table mods, so it may not even be worth the hassle of flashing the LE bios unless you really want to run 1.25v.
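the bump-50mv-until-stable loop above is just bookkeeping, so here's a minimal python sketch of it, using the p6/p7 numbers from this post. the 1200mv cap is a made-up safety assumption, and nothing here touches real hardware:

```python
# Bookkeeping for the suggested undervolt procedure: start at a mild
# P6/P7 undervolt and raise every P-state voltage by 50 mV after a
# crash. Starting values mirror the post; the cap is an assumption.

BASELINE = {"P6": (1577, 1050), "P7": (1632, 1100)}  # (MHz, mV)

def bump_voltages(curve, step_mv=50, cap_mv=1200):
    """Return a new curve with every P-state voltage raised one step."""
    return {p: (mhz, min(mv + step_mv, cap_mv)) for p, (mhz, mv) in curve.items()}

attempt = bump_voltages(BASELINE)  # pretend the first run crashed
print(attempt)  # {'P6': (1577, 1100), 'P7': (1632, 1150)}
```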


----------



## Ne01 OnnA

Here's my 1560MHz score at a 'whopping' 122tW.
And a second one for the default 1717MHz at 160tW.

For reference:
(Guru3D All GPUs Shoot-out -> http://www.guru3d.com/articles_pages/asrock_phantom_gaming_x_radeon_rx580_8g_oc_review,36.html)

UPD.
Today I installed the new 18.7.1 WHQL from July 19 (better OC and more stable; a real WHQL).
(Yes, ATI/AMD updated the site drivers. I'm recommending you first install 18.6.1, then upgrade to the new WHQL, if you want to preserve your settings etc.)

==


----------



## bloot

Isn't GPU Chip Power in HWiNFO the total wattage measurement for Vega?


----------



## Ne01 OnnA

When measured correctly, it should sum things up:
board ~2-5tW, then core + memory, so you'll end up with a total lower than the reported tW.

That's for the case when the software can't catch power spikes.
Typical draw is a lot lower.
That's why I have my own math: board + core + HBM, plus 4 up to 12tW for LC.
This is also a more spike-proof technique.
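The board + core + HBM math above is simple enough to sketch in a few lines. The sample wattages below are made up for illustration; only the 4-12tW LC margin comes from the post:

```python
# Sum the per-rail sensor readings and pad for transient spikes,
# per the "Board + Core + HBM, plus 4 up to 12 W for LC" rule above.
# The sample numbers are made up for illustration.

def total_board_power_w(board_w, core_w, hbm_w, lc_margin_w=8):
    """Estimated whole-card draw; lc_margin_w covers transient spikes."""
    return board_w + core_w + hbm_w + lc_margin_w

print(total_board_power_w(4, 150, 20))  # 182
```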


----------



## Kasaris

Worldwin said:


> How are your HBM temps? I find mine around 12°C above my core and about equal to my hotspot. My die is unmolded, and that seems to be a factor.
> On my Nitro+ V64: core 60°C, mem 72°C, and hotspot 72°C. I have it undervolted to 1550MHz/0.943V with HBM at 1050MHz. In FC5 it's around 1500MHz in game, and temps change depending on ambient.
> My memory temperatures just feel really high relative to the core, and so close to the hotspot.


I will have to take a look when I am home from work later today. I know my GPU usually peaks around 64°C when I am in Sniper Elite 4, since the GPU stays at 100% load in game and clocks at 1530MHz. In games like WoW or Fallout 4 it is typically lower, since the GPU is almost never at full load (WoW hovers around 50%-60% GPU usage and clocks around 1050-1350ish). The ambient in my computer room is typically around 20-22°C.


----------



## Kasaris

OsmiumOC said:


> Woohoo, finally I did the right thing and got an RX Vega card!
> 
> I actually sold my 1080 Ti to switch to a Vega 64 (I don't play at 4K, so the 1080 Ti was overkill), and with the overhead I stepped up to Threadripper. Thanks to the miners, I sold that 1080 Ti for more money than I paid new back in April 2017.
> 
> I bought a reference PCB with a free crappy air cooler on top and replaced that with a waterblock. Then I flashed the LE BIOS on it, and dear god, what have I unleashed. For a baseline I left the voltage at the stock 1.25 and just raised the PL a modest +25%. The result was a cool and bored Vega 64 in Firestrike and Heaven benchmarks. Sipping 300W, the core (~1720MHz, no OC yet) was capping out at 47°C, the hotspot temp peaked at 65°C, and the HBM (1015MHz) at 48°C. Ambient temp was 24°C.
> I tried undervolting, but my card does not like that at all. Tried 1.2 instead of 1.25, since even with a 300W PL it rarely went up to 1.2V, but that is an instant crash.
> 
> Any experience on how safe the stock voltage is with the full +50% power limit? Temps seem fine to me, and I think I can give it that extra power, but what is the Vega electron immigration policy on crowded borders? I come from the dark green side, where they already freak out if you want to try and overvolt to 1.2V...





AngryLobster said:


> I did the same thing (1080 Ti to Vega) since Freesync/Gsync makes them basically indistinguishable in most titles. There are still some games where Vega just performs like ass in comparison, like AC: Origins and Rise of the Tomb Raider.


I did the same thing and ended up going from a 1080 Ti to an RX Vega 64 as well when I built my Ryzen system. Like you said, when using FreeSync/G-Sync they are pretty indistinguishable, since you are locked to the max refresh rate of the monitor, which in the 1080 Ti's case was 100Hz since it was driving an Acer Predator X34. My Vega, on the other hand, is on an Acer XZ321QU 32" 144Hz 1440p FreeSync monitor, so I see higher peak frame rates anyway, and the avg and min frame rates are still over 60FPS in any of the titles I play at 1440p.


----------



## OsmiumOC

DiscoSubmarine said:


> boosting past max frequency is quite odd. typically (air cooled) vega cards don't even want to boost all the way to p7. maybe it's a quirk with the LE bios?
> 
> what happens if you lower the voltage?
> try something like 1577mhz/1050mv and 1632mhz/1100mv for p6/p7 respectively. it's a mild undervolt that my card is happy to run, tack on 50mv if you have stability problems.
> 
> if that works then maybe it just doesn't like having 1.25v shoved into it. who knows if the reference pcb was designed to allow that.
> 
> i've also heard of glitches that cause the card to (falsely) report 1800+ mhz clock speeds, so you could have driver issues as well.
> also, i believe the LE bios just shuts the card off if you hit 75C as a safety measure so check your thermals (and make sure you have cooling on the VRMs).
> 
> it's probably worth mentioning that the stock bios will get you 330 watts with maxed out power limit, and even more with power table mods, so it may not even be worth the hassle of flashing the LE bios unless you really want to run 1.25v.


I'll try that now, thank you for the numbers. But there is definitely something very wrong with the lower power states. If I set it to the balanced profile, the driver crashes and resets when playing a YouTube video -.-
If I set it to the turbo profile instead, it runs stable even in all my benchmarks and jumps from 1650 to 1690MHz, which makes sense with what you said about it not boosting up to P7. Temps are fine, I think: with 320W+ reported power draw, the core reaches 44°C, HBM 50°C, and the hotspot sometimes peaks at 70°C. VRM and VR SOC report 63°C/66°C. 

Before I try your settings I'll update to 18.7.1 though; I was on 18.6.1.

EDIT: Installing the new one does not seem to work. It somehow told me it was unable to install due to an unexpected error, claims that 18.7.1 is running now though, and tells me this:
Failed to download the package, File not avilable on the server. 
Download Failed, please check internet connection. 

My connection is definitely fine. Did they take down 18.7.1? Is there something wrong with it maybe? I'll try to download it manually and re-install it. 


I'm impressed at how you all can undervolt your cards that much, but I tried 1.2V before and that was an instant crash at 1600MHz. I think I just have a not-that-well-performing card. :/





Ne01 OnnA said:


> Here's my 1560MHz score at a 'whopping' 122tW.
> And a second one for the default 1717MHz at 160tW.
> 
> For reference:
> (Guru3D All GPUs Shoot-out -> http://www.guru3d.com/articles_pages/asrock_phantom_gaming_x_radeon_rx580_8g_oc_review,36.html)
> 
> UPD.
> Today I installed the new 18.7.1 WHQL from July 19 (better OC and more stable; a real WHQL).
> (Yes, ATI/AMD updated the site drivers. I'm recommending you first install 18.6.1, then upgrade to the new WHQL, if you want to preserve your settings etc.)
> 
> ==


Holy moly, congrats on that. I'm stunned that your Vega card can run at 1V and above 1500MHz. I don't think mine will do that at 1.1V.


----------



## Ne01 OnnA

Try it.
Also use OverdriveNTool and update it with the PP.Reg file.
You'll find it in the BIOS section.
Set WattMan to Custom, then OC via OverdriveNTool only.

Greets and happy UV.


----------



## Kasaris

Worldwin said:


> How are your HBM temps? I find mine around 12°C above my core and about equal to my hotspot. My die is unmolded, and that seems to be a factor.
> On my Nitro+ V64: core 60°C, mem 72°C, and hotspot 72°C. I have it undervolted to 1550MHz/0.943V with HBM at 1050MHz. In FC5 it's around 1500MHz in game, and temps change depending on ambient.
> My memory temperatures just feel really high relative to the core, and so close to the hotspot.


These were my temps after playing FC5 for about 45-60 min with an ambient room temp of 21.5°C:

GPU Core Clock [MHz] - 1530
GPU Memory Clock [MHz] - 1050
SOC Clock [MHz] - 1107
GPU Temperature [°C] - 65
GPU Temperature (Hot Spot) [°C] - 76
HBM Temperature [°C] - 69
VR SOC Temperature [°C] - 82
VR Mem Temperature [°C] - 73
Fan Speed (RPM) [RPM] - 1642
Fan Speed (%) [%] - 50
GPU Load [%] - 100
GPU only Power Draw - 187w
Memory Used [MB] - 5082
VDDC [V] - 0.9563


----------



## mtrai

OsmiumOC said:


> Overclocking Vega is a bit tricky; it has its own mind.
> 
> Does anyone else have trouble getting the core clock to where you actually set it? Mine seems to ignore what I put in Wattman, or it tries to behave and then goes on a rampage. For example, I use the stock 1750MHz and the stock voltage of 1.25V. In the Valley benchmark it runs for a minute while holding 1740-1760MHz. Then GPU-Z shows a sudden spike to over 1800MHz mid scene (not between scene changes), with a correlating voltage spike from 1.19V to 1.23V, and it results in a crash.
> 
> Now I think maybe the stock settings are not stable. This card was never binned to run the LE BIOS, so I set it to 1600MHz, still 1.25V for stability. Result: 1620-1640MHz, passes one more scene than before, and again crashes mid scene. The GPU-Z log shows AGAIN a sudden spike up to 1800MHz (200MHz out of nowhere), and the voltage spikes this time from 1.11V to 1.23V. I have no idea why it does that. Could it be an issue with the lower power states? But why would that show up under a full-load bench?
> 
> Tested that with HBM at 1050, 1025, and the stock 950. All the same result. I don't think the HBM is unstable.


Mine did the same thing at first, when I flashed the PowerColor LC BIOS to my Red Devil Vega 64 at the stock settings for the liquid BIOS. It would spike to 1800+ and then the benchmark and GPU driver would crash.

The first thing I found out was that the core voltage for the water-cooled cards is too high for air... you do not need 1.25.

The second thing was that it was causing thermal shutdowns... one fix I had to do was redo the paste. Just doing that brought my temps down to acceptable levels.

Third, since I am not worried about power usage but best performance, I actually added 2 more fans to the side of my GPU to help with cooling, and it does wonders for temps, which allows it to boost higher. The fans are on a fan controller, so I can adjust them as I want.

Not sure which card you got, but it would be helpful to know.


----------



## Ne01 OnnA

RX Vega 64 in HDR-Benchmark-Duell

https://www.computerbase.de/2018-07/hdr-benchmarks-amd-radeon-nvidia-geforce/2/

https://www.reddit.com/r/Amd/comments/90fign/hdr_benchmarks_show_lower_performance_impact_on/


----------



## Monsicek

Hello guys,

I have bought myself an upgrade from an MSI RX470 to the custom-model Gigabyte Vega 56 OC.
Link: https://www.gigabyte.com/us/Graphics-Card/GV-RXVEGA56GAMING-OC-8GD#kf

Sadly, after a few minutes in Windows, it either flat out freezes or crashes to a black screen with the fans at 100%. It has crashed in every possible scenario, including benchmarks, browsing, gaming, and sitting idle (while writing this). After a reset, B2 is shown as the motherboard's failed-boot code (on both boards). I need to cut power to actually make it turn on again; otherwise it will not boot.

What I have tried so far:
- clean driver installation 18.7.1, 17.12.2, 17.11.3, 17.11.2, 
- use different 8 pin cable and different connector on PSU
- try all 3 profiles, fiddle with power profile
- flashed BIOS from F2 to F5
- overvolt (+25mV) lower P states and underclock (-50MHz) higher P states with OverdriveNTool custom power profile
- run it together with MSI RX470 as main or 2nd card
- fresh Windows installation


Spec:

GPU: Gigabyte Vega 56 OC custom; previously MSI RX470 4G
CPU: R7 1700 stock
Motherboard: ASRock X370 Professional Gaming; also tried a GB X370 Gaming 5 (both with updated BIOS)
RAM: 2*16GB RAM Gskills B-Die
PSU: PC Power and Cooling 750W gold
Operating System & Version: Windows 10 64bit
GPU Drivers: 18.7.1
Chipset Drivers: 18.7.1, also tried with none
Background Applications: None


Any idea how to resolve this painful issue?


Thank you in advance.


----------



## OsmiumOC

mtrai said:


> Mine did the same thing at first, when I flashed the PowerColor LC BIOS to my Red Devil Vega 64 at the stock settings for the liquid BIOS. It would spike to 1800+ and then the benchmark and GPU driver would crash.
> 
> The first thing I found out was that the core voltage for the water-cooled cards is too high for air... you do not need 1.25.
> 
> The second thing was that it was causing thermal shutdowns... one fix I had to do was redo the paste. Just doing that brought my temps down to acceptable levels.
> 
> Third, since I am not worried about power usage but best performance, I actually added 2 more fans to the side of my GPU to help with cooling, and it does wonders for temps, which allows it to boost higher. The fans are on a fan controller, so I can adjust them as I want.
> 
> Not sure which card you got, but it would be helpful to know.


Thank you for the reply; well, my card is the MSI Air Boost OC. I bought it since it was the only available reference PCB for a waterblock atm. I am running it under water now.
In regards to your first input, and like someone else suggested, I lowered the voltage.

With the memory (actually core baseline) voltage up to 990mV, P6 1577 at 1125mV, and P7 1587 at 1130mV, it finally behaves and runs stable. I completely maxed the power target now, since it doesn't matter under water, to remove any limit on that end.
My guess is that the extra baseline voltage helped, since I experienced a crash even when switching YouTube to fullscreen. I think this caused the card to spike up to 950mV at the P5 frequency, and that was not enough, since with the LC BIOS P5 is already 1550MHz!

Second, I'm fairly confident that I applied my TIM correctly. I was extra cautious to get more than just enough on there, contacting everything, slightly spreading over all 4 edges without introducing any air bubbles. And judging from the sensors it's fine. Core and HBM did stay below 55°C, VRM ~65°C. The hotspot was the highest temp, peaking at 72°C under maximum power target and maximum voltage. I don't have any airflow directly on the back of the card, which I heard can help reduce that temp. But I think it's fine; users on air-cooled cards have reported this temp as high as 90°C+, and its Tj-max was stated as 125°C. No idea if that applies to the LC BIOS though :/





Monsicek said:


> Hello guys,
> 
> I have bought myself upgrade from MSI RX470 to custom model GB Vega 56 OC.
> Link: https://www.gigabyte.com/us/Graphics-Card/GV-RXVEGA56GAMING-OC-8GD#kf
> 
> Sadly after few minutes in Windows, it either flat out freezes or crashes to black screen and fans go 100%. It crashed in every possible scenario including benchmark, browsing, gaming and idle scenario writing. After reset B2 is shown as MoBo fail boot code (on both of them). Need to cut down power to actually make it turn on again, otherwise it will not boot.
> 
> [...]
> Any idea how to resolve this painful issue?
> Thank you in advance.


I had exactly the same trouble with my Vega 64 from MSI as I tested it before putting it under water. It was randomly crashing, no matter if windows desktop use or benchmark or ingame. It ran fine except for those hard crashes, where I needed to cut power too in order to get it to boot.

The problem was that I only used a single cable with 2x 8-pin in a Y-config. Vega seems to hate that, since it needs a lot of power; I guess it trips some safety cutoff, maybe in the PSU, since the current on that single cable can spike too high.

As soon as I used 2 separate cables to the PSU (as you should, I think, since this card can pull 400W), it was all fine.
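A back-of-the-envelope check shows why a single Y-split cable struggles. The numbers here are my assumptions, not measurements: 75W drawn through the PCIe slot, the rest through the 8-pin connectors on a 12V rail:

```python
# Rough power-budget sketch for a Vega pulling ~400 W total.
# Assumptions (not from the thread): 75 W via the PCIe slot,
# the remainder split across the PSU cables feeding the 8-pins.

def amps_per_cable(total_w, slot_w=75, cables=1, volts=12.0):
    """Current each PSU cable must carry for the connector load."""
    connector_w = total_w - slot_w
    return connector_w / cables / volts

single = amps_per_cable(400, cables=1)  # one Y-split cable carries everything
dual = amps_per_cable(400, cables=2)    # two separate cables share the load

print(f"single cable: {single:.1f} A, dual cables: {dual:.1f} A")
```

Roughly 27A down one cable versus ~13.5A per cable when split, which would be consistent with a PSU-side protection tripping on the single-cable setup.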


----------



## Roboyto

OsmiumOC said:


> Overclocking Vega is a bit tricky; it has a mind of its own.
> 
> Anyone else has trouble with getting the core clock to where you actually set it? Mine seems to ignore what I put in wattman, or it tries to behave and then goes on a rampage. For example, I use stock 1750MHz and stock voltage 1.25V. In Valley-benchmark it runs for a minute, while holding 1740-1760MHz. Then GPU-z shows a sudden spike to over 1800MHz mid scene (not between scene changes) with a correlating voltage spike from 1.19V to 1.23V and it results in a crash.
> 
> Now I think maybe stock settings are not stable. This card was never binned to run the LE BIOS, so I set it to 1600MHz, still 1.25V for stability. Result: 1620-1640MHz; it passes one more scene than before and again crashes mid-scene. The GPU-Z log AGAIN shows a sudden spike up to 1800MHz (200MHz out of nowhere) and a voltage spike, this time from 1.11V to 1.23V. I have no idea why it does that. Could it be an issue with the lower power states? But why would that show up under a full-load bench?
> 
> Tested that with HBM at 1050, 1025, and stock 950. All the same result. I don´t think HBM is unstable.



Haven't been on here in a while, but I have had my Vega 64 under water since launch. Overboost has been a common problem from the start, and it's probably more problematic when you're running extremely low temps...the card just wants to fly lol.

I still have the stock air BIOS on my card, so I don't get 200 MHz overboost spikes, but when pushing the card to its limits, all it takes is a 30-40 MHz spike to cause issues. I have found the best way to control the overboost is the P7 voltage/power %. I get my best performance and benchmarks with P7 voltage at or under 1150. Generally, what I noted in some of my OC spreadsheets was that the actual/effective max clock speed would be ~10-20 MHz under whatever P7 was set to. As long as the voltage/power wasn't too high, the benches would complete while holding a slightly fluctuating core clock. Increasing voltage/power without altering clock speeds can and will cause an overboost/crash, though.
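As an illustrative sketch only (the 10-20 MHz offset is just the rule of thumb from my spreadsheets, not anything the driver guarantees), the relationship looks like:

```python
# Rule-of-thumb sketch: effective sustained clock vs. the P7 target.
# The 10-20 MHz offset is an empirical observation, not a driver spec.

def expected_effective_clock(p7_mhz, offset_range=(10, 20)):
    """Return the (low, high) MHz band the core tends to settle in."""
    lo, hi = offset_range
    return (p7_mhz - hi, p7_mhz - lo)

print(expected_effective_clock(1722))  # band for a 1722 MHz P7 target
```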

Perhaps you could try running the stock BIOS and see if you still get the very large overboost spikes?

You should try pushing the HBM a little further. There was a driver version/update sometime last year that allowed most people to push past 1100 without much of an issue. My HBM was absolutely maxed at 1100..driver update and I can game/bench most anything at 1150-1185.


I haven't done any benching in a while but I can give you some examples for settings I had used previously in Super Position 4K:

I'm running R7 1700 @ 4.0 w/ 8GB 3200 RAM


Core: 1627/1722	
P6/P7: 1095/1145	
HBM: 1185	
HBM V: 1095	
Power: 50	
Score: 7170

And Firestrike:

Core: 1627/1722	
P6/P7: 1095/1145	
HBM: 1175	
HBM V: 1095	
Power: 50	
Score: 18737
Graphics: 24967
Physics: 20772

In these benching situations, the core clock would fluctuate between the ~1690s and the low 1700s. Just a tad more voltage, though, and the overboost would be like 1762.

You may find overclocking and benching to be an arduous task, as minor alterations can throw everything off...and don't assume settings for 'A' will work when benching 'B'.

When not pushing the card to the brink for benching purposes it has been lovely. Some undervolting with a minor overclock, combined with my FreeSync ultra-wide has been very enjoyable.


----------



## ZOONAMI

Ne01 OnnA said:


> Try /forum/images/smilies/wink.gif
> Also Use OverdriveN Tool and update it with PP.Reg file.
> Find it in BIOS section.
> Set WattMan as Custom the OC via OverdriveN only /forum/images/smilies/biggrin.gif
> 
> Greets and happy UV.


What is this pp reg file???


----------



## sega4ever

Ne01 OnnA said:


> Here my 1560MHz score at 'wooping' 122tW
> And second one for Default 1717MHz at 160tW
> 
> For reference:
> (Guru3D All GPUs Shoot out -> http://www.guru3d.com/articles_pages/asrock_phantom_gaming_x_radeon_rx580_8g_oc_review,36.html)
> 
> UPD.
> Today i have Installed New 18.7.1 WHQL from July 19 (Better OC and More Stable  Real WQHL)
> (Yes ATI/AMD Updated Site drivers, im recomending to First Install 18.6.1 then Upgrade to New WHQL, if you want to preserve Your Settings etc.)
> 
> ==


are these stable for games or only benching?


----------



## Ne01 OnnA

ZOONAMI said:


> What is this pp reg file???


This -> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html

And IMO it's enough, we don't need BIOS Flash anymore 

==


----------



## ZOONAMI

Ne01 OnnA said:


> This -> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html
> 
> And IMO it's enough, we don't need BIOS Flash anymore
> 
> ==


Which file should I use? The AIO one? I have a Sapphire Nitro+ Vega 64 SR.

How do I upgrade overdriven tool with the file?


----------



## Ne01 OnnA

Right-click on the top bar, then you have the PP Table Editor + Settings (if needed);
then load the correct PP table for it and done.
Ask @gupsterg which one will be right for your AIB Vega.


----------



## snoball

Recently picked up a V64 and getting some really odd behavior. Anyone have experience with V64 Limited Edition (Sapphire) running so hot it instantly throttles?
I've tried custom configs in Global Wattman and the default setting profile. Even allowing this thing to run max fan with undervolt isn't helping.


----------



## ZOONAMI

Ne01 OnnA said:


> Right click on Top Bar then You have PP Table Editior + Settings (if needed)
> then Upload correct PP Table for it and Done.
> Ask @gupsterg about which one will be good for your AIB Vega


Ah shoot, I think doing this was a bad idea. The Sapphire Nitro is running worse now with the regular 64 file; clocks are in the 1500s now, and I had it holding above 1600.

The AIO file has a max target temp limit of 70, which seems like it would lead to more throttling than the 85 limit the sapphire had before.

How can I get the card back to stock now?

Anyone have a file for a Sapphire Nitro+ SR?


----------



## nolive721

Hello, seeking advice here.

I had a cheap 1080p FreeSync monitor (refresh rate 48 to 75Hz) coupled with an RX 480 two years ago when the card launched.

I upgraded to a triple-monitor setup last year and had to move to an NVIDIA 1080 card to drive games properly, but of course I lost the FreeSync feature.

Now that Vega 64 prices in Japan have become almost reasonable, I would like to give AMD a chance again.

I think FreeSync will not be activated in the triple-monitor setup, since the two other monitors aren't equipped with it, but my question is: when I play at high res/ultra settings on the single center monitor, is there really a benefit to a high-end card like the Vega? My understanding is that I will pull FPS far beyond the 75Hz max refresh rate of my monitor, so it won't really matter.

It's not a deal breaker, but I just want to hear some expert opinions.

Also, I am leaning towards buying the Nitro+ non-LE, so could you guys point me to the best Wattman guide out there to pull the best from the card? My reference point is my current FTW 1080 Hybrid, which holds 2136MHz core / 12,000MHz memory at around 50°C in heavy gaming or benchmarks, so I hope to get close to that performance, knowing that temperature-wise, and possibly noise-wise, I will have to compromise.

Thanks so much.


----------



## Ne01 OnnA

ZOONAMI said:


> Ah shoot, I think doing this was a bad idea. Sapphire Nitro is running worse now with the regular 64 file, clocks in 1500s now and I had it holding above 1600.
> 
> The AIO file has a max target temp limit of 70, which seems like it would lead to more throttling than the 85 limit the sapphire had before.
> 
> How can I get the card back to stock now?
> 
> Anyone have a file for a Sapphire Nitro+ SR?


In Regedit, go to the \0000 key and delete "PP_PhmSoftPowerPlayTable":

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"PP_PhmSoftPowerPlayTable"

Then restart, or do it in OverdriveNTool; same thing.
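For reference, the same deletion can be expressed as a .reg file you merge by double-clicking (the trailing minus is the standard .reg syntax for removing a value). Note the 0000 adapter index is taken from the post above; on multi-GPU systems it may be 0001 etc., so verify the index before merging:

```reg
Windows Registry Editor Version 5.00

; Remove the soft PowerPlay table override so the card
; falls back to the values baked into its BIOS.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"PP_PhmSoftPowerPlayTable"=-
```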


----------



## STEvil

Are there any tools that work with Radeon Pro?


----------



## ZealotKi11er

Is there a way to edit the Vega 64 Liquid BIOS? I need to lower the clock speeds a bit, as my card is not stable at 1750MHz.


----------



## Newbie2009

ZealotKi11er said:


> Is there a way to edit the Vega 64 Liquid BIOS? I need to lower the clock speeds a bit, as my card is not stable at 1750MHz.


Have you tried lowering the voltage a bit? My air card (flashed with Liquid bios) preferred less volts at those clocks, bizarre I know.


----------



## Ne01 OnnA

Monster at Work


----------



## Chaoz

OsmiumOC said:


> I'm impressed by how much you all can undervolt your cards, but I tried 1.2V before and that was an instant crash at 1600MHz. I think I just have a not-so-well-performing card. :/
> 
> Holy moly, congrats on that. I'm stunned that your Vega card can run at 1V and above 1500MHz. I don't think mine will do that at 1.1V.



That's nothing, Imho. These are the settings I run daily and never crashes.










I did flash my ref 64 with the LC BIOS.


----------



## Newbie2009

Chaoz said:


> That's nothing, Imho. These are the settings I run daily and never crashes.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did flash my ref 64 with the LC BIOS.


What clock does it hit @ 1750 ?


----------



## VicsPC

Newbie2009 said:


> What clock does it hit @ 1750 ?


Guessing depends on the application. Mine on stock clocks on water hits 1640 without problems in some games, ETS2 tends to do that in the cities quite easily.


----------



## Newbie2009

VicsPC said:


> Guessing depends on the application. Mine on stock clocks on water hits 1640 without problems in some games, ETS2 tends to do that in the cities quite easily.


And what volts are you running?

Mine, for example, at a 1750MHz core target tops out at 1710MHz in 90% of games, and around 1720MHz in the other 10%, at 1180mV.

Higher volts won't make it clock higher, and lower volts, say 1100mV, will land in the mid-1600s.


----------



## VicsPC

Newbie2009 said:


> And what volts are you running?
> 
> Mine for example, 1750mhz core in 90% games tops out 1710mhz, other 10% around 1720mhz, 1180mv.
> 
> Higher volts won't make it clock higher and lower volts, say 1100mv will be mid 1600mhz.


Whatever the factory clocks and voltages are, that's it; I've not messed with it lol. It hits its stated clocks, so it's not a worry for me. At 1.156V in Siege I hit about 1500MHz; if I turn on MSAA it goes to around 1600 or so.


----------



## Newbie2009

VicsPC said:


> Whatever factory clocks and voltages are that's it, I've not messed with it lol. It hits it's stated clocks so its not a worry for me. At 1.156v in Siege i hit about 1500mhz, if i turn on MSAA it goes to around 1600 or so.


Ah, I see. My original point was: what is the point of having low volts at a 1750 target if the card only hits 1600 because of the voltage? You might as well set a 1650 target.

If you are on the LC BIOS you can probably lower the voltage a bit from stock. I found mine performs better at 1180mV than 1250mV, as it hits the power limit less often and so doesn't clock down.


----------



## Chaoz

Newbie2009 said:


> What clock does it hit @ 1750 ?


It mostly depends on the game; usually 1680, but in more demanding games like Frostpunk it hits 1750 constantly.


----------



## Fediuld

Guys, I have a V64 Nitro+ with the 2x 8-pin connectors, and I have watercooled the card.

However, I somehow cannot make OverdriveNTool bypass the Wattman settings. What am I doing wrong?


----------



## Ne01 OnnA

Fediuld said:


> Guys, I have a V64 Nitro+ with the 2 8-pin connectors. Also I have watercooled the card.
> 
> However somehow cannot make the OverdriveNTTool to bypass the Wattman settings. What I am doing wrong?


Always: First -> Activate Custom in WattMan -> Then use OC Tool


----------



## Fediuld

Ne01 OnnA said:


> Always: First -> Activate Custom in WattMan -> Then use OC Tool


Cheers


----------



## diggiddi

Fediuld said:


> Guys, I have a V64 Nitro+ with the 2 8-pin connectors. Also I have watercooled the card.
> 
> However somehow cannot make the OverdriveNTTool to bypass the Wattman settings. What I am doing wrong?


Which block are you using?


----------



## AngryLobster

Jesus Christ this Strix Vega 64 is such a piece of garbage. Every other day it will refuse my UV settings and instead sit at 265w with fans blazing. If I didn't have a Freesync monitor I'd have dumped this garbage a long time ago. I can swap to a reference card and it obeys my command through wattman (mostly) but even then I am forced to install Afterburner to control the fan properly.

Here come 12 DDU wipes again in order to get it to cooperate. I wish the AMD experience was better because IMO it's a damn mess compared to my 1080 Ti.


----------



## Fediuld

AngryLobster said:


> Jesus Christ this Strix Vega 64 is such a piece of garbage. Every other day it will refuse my UV settings and instead sit at 265w with fans blazing. If I didn't have a Freesync monitor I'd have dumped this garbage a long time ago. I can swap to a reference card and it obeys my command through wattman (mostly) but even then I am forced to install Afterburner to control the fan properly.
> 
> Here come 12 DDU wipes again in order to get it to cooperate. I wish the AMD experience was better because IMO it's a damn mess compared to my 1080 Ti.


If you use the stock cooler, especially on warm days, I would advise connecting an Enermax Magma or something similar blowing air directly up at the card, using the 4-pin connectors the card provides. I did that and never had issues overclocking the card to 1747, just by brute force.

Also, given my experience watercooling the card, I would advise replacing the supplied thermal paste with something good like Kryonaut, or even better Conductonaut.

The thermal paste on my Nitro was breaking apart, hard as cement, and I had only had the card 20 days; it was from the new batch with the 2x 8-pin connectors.
Here is a pic of what I saw. Bear in mind that with the stock cooler and paste, the HBM temps were 20°C over the GPU core, and the GPU hotspot was hitting 107°C all the time (+40°C over core).

Using liquid metal with the stock cooler dropped the temps by 1-3°C for the HBM and by around 20°C for the GPU hotspot. But Kryonaut will also do the job if you spread it properly.


----------



## Fediuld

diggiddi said:


> Which block are you using?


Bykski is making a waterblock for the Nitro+:

https://www.aliexpress.com/item/Byk...on-RX-Vega-64-8GB-HBM2-11275/32868393119.html

The block is amazing, and I have it attached to the Predator 360 using the EK fittings from a pre-filled block I previously had on a GTX 1080 Armor.
If you have done it before it's an easy process, as the manual is in Chinese and useless.
Word of advice:

a) Get a 1.5mm strip of Thermal Grizzly Minus Pad 8 and cut it lengthways to cover the VRMs on the two sides. Don't use the silicone stuff supplied with the waterblock.
b) Make sure you connect IN and OUT properly. It won't work the other way around.
c) Remove the springs from 4 of the small screws (use a very fine knife carefully to remove them). Use the X crosspiece Sapphire provides to attach it. You need to put the 4 springs under the corners and then fit the LONG screws.
d) Because there aren't enough long screws, use 4 for the X crosspiece, 3 on the L where the VRMs are (roughly), and 1 at the center (near the ports) next to the waterblock.
If you can find somewhere to buy more of the long screws (I am terrible at guessing their size), please let me know so I can buy some.
e) Don't use the short screws with the springs to attach the waterblock near the GPU. They do not push it down hard enough, and the GPU hotspot will go to 107°C and throttle the card, at least with liquid metal.

Here is the block.


----------



## OsmiumOC

Chaoz said:


> That's nothing, Imho. These are the settings I run daily and never crashes.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did flash my ref 64 with the LC BIOS.



Well, then I have again managed to pull a negative record in the silicon lottery. Not the first time; my 1800X was not able to reach 4GHz, not even at 1.48V...
I can rule out the PSU, since it's 2 different builds, one with a Corsair and one with an EVGA PSU.

I spent a good part of my spare time in the last few days trying to learn what my card wants from me, but all I can see so far is that it wants its voltage. 1550MHz and 1130mV in Wattman = crash. In GPU-Z the crash usually appears at 1540MHz, and it reads out ~1.08V. If I try even lower voltage just for fun, I get crashes with driver resets even when opening YouTube videos.

When I feed it 1.2V it gets up to 1670MHz, but anything above will crash sooner or later. GPU-Z reads ~1.18V at those clocks.

Funny thing: if I don't touch anything on the clocks or voltage, leave it at 1.25V, and only crank up the power target +50%, it runs stable with clocks around 1730MHz.

Another test I did was demanding impossible things from Wattman: 1820MHz at 1.20V, power target maxed. It ran a benchmark like this for 3 minutes, but the sensor log was completely broken. It read "253304832.0 °C" as the VRmem temp and claimed it ran at 1780MHz at 1.18V with an average power draw of 140W...

I miss analog measuring and analog dials and analog everything; at least I could trust that.


----------



## Maracus

AngryLobster said:


> Jesus Christ this Strix Vega 64 is such a piece of garbage. Every other day it will refuse my UV settings and instead sit at 265w with fans blazing. If I didn't have a Freesync monitor I'd have dumped this garbage a long time ago. I can swap to a reference card and it obeys my command through wattman (mostly) but even then I am forced to install Afterburner to control the fan properly.
> 
> Here come 12 DDU wipes again in order to get it to cooperate. I wish the AMD experience was better because IMO it's a damn mess compared to my 1080 Ti.


I dunno dude, I have the 56 Strix, and only every now and then does it need a reset when I UV it. Resuming from sleep, or even alt-tabbing enough, can mess it up. Are you using Wattman or OverdriveNTool?


----------



## Ne01 OnnA

OverdriveN Tool
0.2.6 (16.05.2018) is up.
-> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/

Quick download -> https://drive.google.com/open?id=1vuRhyV0FXK02QA1a5FmHymMC8uRCNkOc

Here is New Beta 0.2.7 v.4 (Best IMO so far)
-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1

Tede gives me this, here -> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/page-19#post-5558335


----------



## dagget3450

Anyone crazy enough to be running 2 Vegas in CrossFire on the stock air coolers? I am having massive throttling in a game; I suspect it's not heat, but I suppose I'll try another game to see if I can recreate it.


----------



## BeetleatWar1977

dagget3450 said:


> Anyone crazy enough to be running 2 Vegas in crossfire on stock air coolers? I am having massive throttling in a game, j suspect it's not heat but I suppose I'll try another game to see if I can recreate it.


Some tests have been conducted in a German forum: https://www.forum-3dcenter.org/vbulletin/showthread.php?t=589610


----------



## RaV[666]

So guys, I tried to find this info, but either it doesn't exist or it's just too obscure for me to find.
I have a Sapphire Nitro+ V56 now. The card is nice, but it has ****ty Hynix memory, and I also read that you can't really flash it with a 64 BIOS because that one expects Samsung memory and the card won't even POST.
MVDDC, the real HBM2 voltage, is 1.25V, and the card is doing a ****ty 860MHz. :-/
So, in essence, is there any way to increase MVDDC to the V64 value?
I had the reference model before, and its memory did 1025 with the V64 BIOS.


----------



## DiscoSubmarine

RaV[666] said:


> So guys, i tried to find the info but either there isnt one or its just too obscure for me to find it.
> I have sapphire nitro+ V56 now, card is nice, but has ****ty hynix memory, also i read that u cant really flash it with 64 bios because that one has samsung and it wont even post.
> mvddc so the real hbm2 voltage is 1.25v, and the card is doing a ****ty 860mhz :-/ .
> So , in essence, is there any way to increase mvddc to a v64 value ?
> I had reference model before, and the mem was doing 1025 with v64 bios


Sadly afaik there is no way without flashing bios.
Bios modding is still impossible, though i hear there is a hack to force load a modded bios on linux.
So unless you're on linux and willing to do some nonsense you're probably out of luck.


----------



## bill1971

I bought a Sapphire Vega 56 with a BIOS switch. The card only works in one position of the switch; if I change the position, my PC doesn't boot. It happened after I tried to flash a BIOS: when the PC restarted it took a long time, so I powered off, and now the PC doesn't boot with the switch in the second position. I want to ask: isn't the BIOS flash the same as on the reference card? What went wrong?


----------



## RaV[666]

@BILL Either the BIOS got corrupted, or you tried to flash a BIOS that's incompatible with your card. I'm guessing you have a Nitro+ or a Pulse card and you tried flashing a V64 BIOS; they have different memory and possibly a different PCB.

As for my memory voltage question, that's a real bummer. :-/ I've read about someone pushing 1.45V through their HBM2, which means they somehow increased it, because the max on the V64 BIOS is 1.35V. But I don't know how they managed that. :/ Maybe there will be a soft mod for this.


----------



## bill1971

RaV[666] said:


> @BILL Either the bios got corrupted or you tried to flash a bios thats incompatible with your card.Im guessing you have a nitro+ or a pulse card and you tried flashing V64 bios, and they have different memory and posiibly pcb.
> 
> As for my memory voltage question, thats a real bummer :-/ , ive read about someone pushing 1.45v through their hbm2 and that means he somehow increased it, because max on the v64 bios is 1.35v. But i dont know how he managed to do that :/ .Maybe there will be some soft mod for this.


Yes, I have the Nitro. The BIOS got corrupted when the PC restarted after the flash, because it got stuck rebooting... On other reference cards, when I had a failed BIOS flash I could switch to the correct BIOS and the problem was solved; isn't it the same for the Nitro?


----------



## bill1971

Are there power tables for the Vega 56 stock BIOS?


----------



## OsmiumOC

bill1971 said:


> I buy a vega 56 sapphire, with bios switch, the card work only in one position of the switch, if i change the position of the switch my pc don't boot, It happened after I tryed to flash a bios but when pc restart It took along time so i powered off and now pc don't boot in second position of the switch. I want to say isnt like the reference card the bios flash? What went rong?


So if it doesn't boot in one position of the BIOS switch, I guess that was the BIOS you tried to flash, and it went wrong. Did you save your BIOS file before flashing? If so, boot with the working BIOS, flip the switch, and flash the non-working position. If it works, you have both positions back and can try again with another BIOS. Don't flash over the only working position now; if that goes wrong, you get into more trouble.


Something completely different: is there a way to overvolt Vega by modding the BIOS? Or do I have to resort to hard-modding the card like buildzoid showed? Temps are fine on my end, and I wonder where a little more voltage would get me, especially on the HBM:


----------



## bill1971

OsmiumOC said:


> So if in one position of the bios switch it doesnt boot, I guess that was the bios you tried to flash and it went wrong. Did you save your bios file before flashing? If so boot with the working bios, flip the switch and flash the non-working position. If it works, you got your 2 positions back and can try again with another bios. Don´t flash over the only working position now, if that goes wrong you get into more trouble.
> 
> 
> Something completly different, is there a way to overvolt Vega by modding the bios? Or do I have to resort to hardmodding the card like buildzoid showed? Temps are fine from my end, and I wonder where a little more voltage would get me, especially on the HBM:


When I flip the switch, do I have to restart or just continue the process? Because if I restart, the PC doesn't boot when the switch is on the bad BIOS.


----------



## DiscoSubmarine

bill1971 said:


> When I flip the switch do i have to restart or to contineous the proceed, because if i restart pc don't boot when switch is to false bios.


Boot on the working BIOS, flip the switch, then flash the card. You don't need to reboot after flipping the switch; the BIOS that gets flashed is the one the switch is set to at the time of flashing, not the one you booted from.


----------



## dagget3450

Anyone here have VRMark by chance who can run the Cyan Room benchmark and post results? It's a DX12 benchmark that AMD actually does decently in, yet Futuremark won't even host an active HOF/leaderboard for it so we can compare. Funny how that works; but if it's half-baked DX12 like Time Spy, or DX11, a.k.a. Nvidia's bread and butter, then by all means.

I realize VR plus Vega probably counts for .000001% of .000001% of users. However, in the early days of the bench, an AMD 480 beat even a 980 Ti, a Fury beat a 1070, etc...

It's supposed to be a full-on DX12 benchmark. I have been out of the game for a while, so maybe I missed something...

Cyan Room was released in Nov 2017, and we have this for reference:

https://www.3dmark.com/hall-of-fame-2/vr+benchmark+desktop+score+cyan+room+preset/version+1.0

Nothing...

However, benchmarks done back then showed:
https://www.overclock3d.net/reviews/software/vrmark_cyan_room_dx12_benchmark_-_amd_vs_nvidia/2

There are many others out there, and you can run it on a regular monitor as well; no VR headset needed. VRMark is definitely not worth 20 dollars, though (maybe during a Steam sale if it's a few bucks), especially when you can't even compare Cyan Room results.


-----------------------------------------------------------------
EDIT found a custom search option, anyways here are my results so far

2x Vega FE air, slight OC
12864
https://www.3dmark.com/vrm/27856253?









top x2 1080ti spot is currently 13696
https://www.3dmark.com/search#/?mod...gpuName=NVIDIA GeForce GTX 1080 Ti&gpuCount=0

top vega 64 and Fe
https://www.3dmark.com/search#/?mod...e=AMD Radeon Vega Frontier Edition&gpuCount=0

https://www.3dmark.com/search#/?mod...uName=AMD Radeon RX Vega 64 Liquid&gpuCount=0


----------



## bloot

https://www.3dmark.com/vrm/27856584

This is my previous GTX [email protected]/12000 https://www.3dmark.com/vrpcr/4520


----------



## Ne01 OnnA

-> https://www.3dmark.com/compare/vrpcr/15811/vrpcr/13407/vrpcr/5514


----------



## bill1971

bill1971 said:


> I buy a vega 56 sapphire, with bios switch, the card work only in one position of the switch, if i change the position of the switch my pc don't boot, It happened after I tryed to flash a bios but when pc restart It took along time so i powered off and now pc don't boot in second position of the switch. I want to say isnt like the reference card the bios flash? What went rong?


My problem was solved by formatting and flashing the BIOS again. GPU-Z now shows that my Sapphire Nitro Vega 56 has Samsung memory. Is there another way to see what memory the card has?


----------



## Ne01 OnnA

AIDA64?


----------



## wefornes

Hello, I am going to buy a new GPU; I just sold my GTX 1070. I have a 2560x1080 144Hz FreeSync monitor. I tried an RX 580, but it wasn't powerful enough to keep FPS near 100, so I was thinking of getting a Vega 56 or 64. I am not a professional gamer, but I like medium/high quality details for gaming.

The ones I can get in my country are:

nitro+ 56 (2x 8-pin version, non-LE) (24k)
strix vega 56 (22k)
strix vega 64 (25k)
gigabyte gaming oc 8g vega 56 (21k)
msi air boost vega 56 (23k)

For reference, 29k = US$999.

The Gigabyte is the cheapest and the Strix 64 the most expensive.
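Converting the list prices at the quoted 29k = US$999 reference (my arithmetic, assuming a simple linear conversion with no fees):

```python
# Convert local prices to approximate USD using the quoted rate:
# 29,000 local = US$999 (assumption: linear conversion, no fees).
RATE_USD_PER_LOCAL = 999 / 29000

prices = {
    "Nitro+ 56": 24000,
    "Strix Vega 56": 22000,
    "Strix Vega 64": 25000,
    "Gigabyte Gaming OC 8G Vega 56": 21000,
    "MSI Air Boost Vega 56": 23000,
}

for card, local in prices.items():
    print(f"{card}: ~${local * RATE_USD_PER_LOCAL:.0f}")
```

So the spread runs from roughly $723 (Gigabyte 56) up to roughly $861 (Strix 64).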

I will be waiting for your opinions.

Best regards


----------



## Formula383

I would go with the Gigabyte Vega 56, and I would also recommend putting it on water ASAP. I would guess 1650 is within reach, maybe more if you can get a high-power BIOS loaded on the 56. That said, it's not really required; just turn down the MSAA and let her rip.


----------






## Ne01 OnnA

wefornes said:


> hello, i am going to buy a new gpu, i just sell my gtx 1070, i have a freesync monitor 2560x1080p 144hz, i tried a rx 580 but i was not enough power for keeping fps near 100. so i was thinking getting a vega 56 or 64, i am not a profesional gamer but i like medium / high quality details for gaming.
> 
> list of the ones that i can get on my country are:
> 
> nitro + 56 (2x8 pins version or no LE) (24k)
> strix vega 56 (22k)
> strix vega 64 (25k)
> gigabyte gaming oc 8g vega 56 (21k)
> msi air boost vega 56 (23k)
> 
> for reference 29k = 999U$S
> 
> the gigabyte is the chepaest and the strix 64 the more expensive one.
> 
> i will be waiting for your opinions.
> 
> best regards


Try to pick a V64 instead of a 56.
If you can, go for PowerColor or Sapphire parts.

Asus is also good, but not the best in the pack.
Here for reference -> https://www.overclock.net/forum/67-...x-vega-64-nitro-vs-devil-vs-strix-vs-ref.html

Or buy a normal reference card and mod it with a Morpheus.

Hope this helps:


----------






## wefornes

The Vega 56 Nitro is very close in price to the Asus Vega 64; do you think it's worth it? In the future I will buy a waterblock for the GPU. Regards.


----------



## DiscoSubmarine

wefornes said:


> vega 56 nitro is very close to asus vega 64, do you think i worth it? ina future i will buy a water block for the gpu. saludos


if you're going for max performance and cost isn't an issue, then sure.
otherwise if you're on a budget the only reason to get 64 over a 56 is to avoid the danger of getting hynix memory IMO.
as long as you get samsung memory you can close the gap to a 64 quite a bit with overclocking and a 64 bios. even overclock vs overclock.

this is a (my) fire strike benchmark with a powercolor vega 56 red devil on air with a 64 bios (that's why 3dmark thinks it's a 64): http://www.3dmark.com/fs/15995160
here's a comparison with a similar system with similar clocks but with a vega 64: https://www.3dmark.com/compare/fs/16028377/fs/15995160

IMO it's pretty marginal, but you be the judge.


----------



## chris89

Anyone have an AMD Radeon RX Vega 8 mobile GPU? I'm wondering what they can clock to and how that Ryzen 5 2500U performs.

Thx


----------



## x86overclock

AngryLobster said:


> Jesus Christ this Strix Vega 64 is such a piece of garbage. Every other day it will refuse my UV settings and instead sit at 265w with fans blazing. If I didn't have a Freesync monitor I'd have dumped this garbage a long time ago. I can swap to a reference card and it obeys my command through wattman (mostly) but even then I am forced to install Afterburner to control the fan properly.
> 
> Here come 12 DDU wipes again in order to get it to cooperate. I wish the AMD experience was better because IMO it's a damn mess compared to my 1080 Ti.


I have had no issues with my XFX RX Vega 64 Liquid Edition, but I could not undervolt my Gigabyte Aorus 1080 for the life of me, even when using a modded BIOS. Use Sapphire TriXX instead of MSI Afterburner; it is better optimized for Radeon.


----------



## Dhoulmagus

New optional Radeon drivers crapped my desktop out, looks like the driver was getting blocked by Win10. 

Didn't have time to look into it nor really care since I barely use windows anymore, I just rolled back. Just a warning X_X


----------



## VicsPC

Serious_Don said:


> New optional Radeon drivers crapped my desktop out, looks like the driver was getting blocked by Win10.
> 
> Didn't have time to look into it nor really care since I barely use windows anymore, I just rolled back. Just a warning X_X


Is that 18.8.1 or a different one? I may have to try that one as I'm reviewing Madden 19.


----------



## Fediuld

Serious_Don said:


> New optional Radeon drivers crapped my desktop out, looks like the driver was getting blocked by Win10.
> 
> Didn't have time to look into it nor really care since I barely use windows anymore, I just rolled back. Just a warning X_X


Installed them without issue or warning on Windows 10. 
From what I have observed, they seem to have fixed the 2D overlay flickering with Freesync, and World of Tanks not rendering tree trunks properly on 18.7.1.


----------



## Dhoulmagus

VicsPC said:


> Is that 18.8.1 or a different one? I may have to try that one as I'm reviewing Madden 19.


Yeah it was 18.8.1



Fediuld said:


> Installed them without issue nor warning on Windows 10.
> On the things I have observed, seems they fixed 2D overlay flickering with Freesync, and World of Tanks not rendering tree trunks properly with 18.7.1.


Hmm, I'll give it a try again today. I was getting a warning that the drivers were non-WHQL and "possibly a virus" when I checked in Device Manager. I'm not very familiar with Windows 10, but I imagine there must be a way to tell it to use non-certified drivers.

edit: Just tried again updating right through the radeon software to 18.8.1


> Windows cannot verify the digital signature for the drivers required for this device. A recent hardware or software change might have installed a file that is signed incorrectly or damaged, or that might be malicious software from an unknown source. (Code 52)


Double edit: Disabling driver enforcement in advanced startup options got them working. I guess optional drivers aren't yet certified. Makes sense


----------



## Dolk

Hey guys, I have a unique issue that I'm trying to work on and I was wondering if anyone else has had an issue. 

Setup:

Crosshair VI with 1800x
Vega 64 Limited Edition (primary) (current drivers 18.8.1)
GTX750ti (Secondary for decoding streaming media, no display connections or likewise) Connected to PCH PCIe ports, not CPU PCIe.(Windows Update drivers, never forced installed any Nvidia drivers)
16GB RAM
1200W PSU
Main monitor: LG Ultrawide 2K 29"
Sec Monitor: LG 1080p 29"
Custom loop on GPU and CPU. 

Regardless of the Nvidia GTX750ti being present in my system or not, my second monitor will have screen lag with any application while my main monitor is actively running a heavily GPU loaded application. So the situation is when I have a game as my active window on my main monitor, and Notepad++/Firefox/any app displaying on my second monitor and is not the focused window, all applications will have a lag/buffer issue. 

Running Youtube on second monitor while playing a game? horrible lag issues. 
Scrolling through a txt file in notepad++ while a game is active? Horrible lag. 

The moment I switch and make the app on my second monitor the focused window, then no issues are observed. Even games running on my main monitor will not have any issues or loss of FPS. 

So what's going on here? Driver scrub doesn't solve anything, and OC on or off doesn't matter. Need some pointers on where to look on solving this issue.


----------



## dagget3450

Dolk said:


> Hey guys, I have a unique issue that I'm trying to work on and I was wondering if anyone else has had an issue.
> 
> Setup:
> 
> Crosshair VI with 1800x
> Vega 64 Limited Edition (primary) (current drivers 18.8.1)
> GTX750ti (Secondary for decoding streaming media, no display connections or likewise) Connected to PCH PCIe ports, not CPU PCIe.(Windows Update drivers, never forced installed any Nvidia drivers)
> 16GB RAM
> 1200W PSU
> Main monitor: LG Ultrawide 2K 29"
> Sec Monitor: LG 1080p 29"
> Custom loop on GPU and CPU.
> 
> Regardless of the Nvidia GTX750ti being present in my system or not, my second monitor will have screen lag with any application while my main monitor is actively running a heavily GPU loaded application. So the situation is when I have a game as my active window on my main monitor, and Notepad++/Firefox/any app displaying on my second monitor and is not the focused window, all applications will have a lag/buffer issue.
> 
> Running Youtube on second monitor while playing a game? horrible lag issues.
> Scrolling through a txt file in notepad++ while a game is active? Horrible lag.
> 
> The moment I switch and make the app on my second monitor the focused window, then no issues are observed. Even games running on my main monitor will not have any issues or loss of FPS.
> 
> So what's going on here? Driver scrub doesn't solve anything, and OC on or off doesn't matter. Need some pointers on where to look on solving this issue.


I wonder if it's related to the known issues listed in the newest driver?

Edit: any reason you have the second gpu? You can use multiple monitors on Vega with an adapter.



Cursor or system lag may be observed on some system configurations when two or more displays are connected and one display is powered off.


----------



## Dolk

dagget3450 said:


> I wonder if it related to known issues listed in the newest driver?
> 
> Edit: any reason you have the second gpu? You can use multiple monitors on Vega with an adapter.
> 
> 
> 
> Cursor or system lag may be observed on some system configurations when two or more displays are connected and one display is powered off.


Like I stated, I use the Nvidia card to do stream decoding for me. FFmpeg and similar tools can usually only use CUDA for the decoding process. I've had this setup for a long time and I had no issues before. 

I forgot to mention this issue started to happen when I began to play around with a second Vega 64. I used to have a stock Gigabyte V64, then bought the LE V64 and tested CrossFire. I think I noticed this issue start to occur around that time, so all within a few weeks. 

This issue could be the one described in the driver notes, but I don't see how I could have gone from no issue to issue like that. Especially since I've tried the following drivers:

18.5.1
18.7.1
18.8.1

I'm going to try out some old 17.x drivers, but I'm not hoping for much.

EDIT
17.12.2 doesn't help either.


----------



## Ne01 OnnA

*>28k GFX Score in Firestrike*

28k GFX Score (w/ Tessellation of course) 

-> CPU ~4016MHz 
-> RAM ~3080MHz CL14-15-15-15-34-54 1T w/GD
-> Vega goes up to ~1757-1762MHz (Set to 1790MHz in Tool)

======


----------



## THUMPer1

The Sapphire Vega 56 Pulse uses a Nano PCB. Any info on this card?


----------



## Trender

Guys, any help with my Vega 64 reference? After a bit of time playing (it's hot and I need 3500 RPM for my Vega in summer), my fan RPM goes back to the default 2400 RPM and the card starts downclocking, and only a restart fixes it so the fans speed up again :/ Tried DDU but it's the same.
Also tried Overdrive but it won't speed up the fans either; idk if I used Overdrive properly, but I think I did.


----------



## Ne01 OnnA

Trender said:


> Guys any help with my Vega 64 reference, after a bit of time playing (its hot and I need 3500 rpm for my vega in summer) playing, my fans rpm gets back to default 2400 rpm and it starts downclocking and only restart fix it to fans speed up again :/ tried with DDU but its the same.
> Also tried with overdrive but it wont speed up fans either idk if I used overdrive properly tho but I think I did


It's extremely hot here in Europe right now. 
Don't worry about this; try to play some less demanding games or add some air into the loop. 
An additional fan is needed.


----------



## Trender

Ne01 OnnA said:


> It's extremely Hot (Here in Europe)
> Don't worry about this, try to play some Less demanding games or add some Air into Loop
> Additional Fan is needed


Yeah, I'm from Spain so you can imagine lol. I mean at 3500 RPM I can hold the temps at about 83°, but the problem is, idk why (bugged drivers or something), after a while playing the fans go back down to 2400 RPM even if I set 5000 RPM in Wattman.


----------



## Ne01 OnnA

Trender said:


> Yeah im from Spain so you can imagine lol. I mean at 3500 rpm I can hold the temps at about 83º but the problem is idk why bugged drivers or something after a time playing the fans gets back down to 2400 rpm even if I put 5000 rpm in wattman


Poland greets Spain, bratan! 
As I said, grab an additional fan from the drawer and plug it in (cooling the back of the GPU) for some better hotspot temps.

Also, I need to rearrange my internal flow (the LC rad is too close to the MB VRM, ~55 deg lol).
I have space in the lower part of the case, just a notch behind the PSU.


----------



## Dolk

Trender said:


> Yeah im from Spain so you can imagine lol. I mean at 3500 rpm I can hold the temps at about 83º but the problem is idk why bugged drivers or something after a time playing the fans gets back down to 2400 rpm even if I put 5000 rpm in wattman


Are you setting min = max for the RPM? I believe the Vegas can override to some degree if they have to throttle for power or continuous extreme temperatures.


----------



## Dolk

Dolk said:


> Like I stated I use the Nvidia card to do streaming decoding for me. FFMPEG and similar tools can only use CUDA cores at most times for the decoding process. I've had this setup for a long time and I have had no issues before.
> 
> I forgot to mention this issue started to happen when I began to play around with a second Vega 64. I used to have a Gigabyte stock V64, than bought the LE V64 and tested crossfire. I think I noticed this issue to occur around this time, so all within a few weeks.
> 
> This issue could be the one described in the driver notes, but I don't see how I could have gone from no issue to issue like that. Especially since I've tried the following drivers:
> 
> 18.5.1
> 18.7.1
> 18.8.1
> 
> I'm going to try out some old 17.x drivers but I'm not hoping much.
> 
> EDIT
> 17.12.2 doesn't help either.




If anyone does have this issue, my fix was an OS re-install. Nothing else would fix it I found.


----------



## Chaoz

Trender said:


> Guys any help with my Vega 64 reference, after a bit of time playing (its hot and I need 3500 rpm for my vega in summer) playing, my fans rpm gets back to default 2400 rpm and it starts downclocking and only restart fix it to fans speed up again :/ tried with DDU but its the same.
> Also tried with overdrive but it wont speed up fans either idk if I used overdrive properly tho but I think I did


Ref cards have very ****ty cooling. My ref 64 skyrocketed to 85°C right when I got it (waterblocks weren't released yet, so I had to test it with the ref cooler). I waterblocked it immediately when they came out; temps dropped to 45°C with the waterblock and the BIOS flashed to the LC BIOS.
The best option in your case would be the Alphacool 120mm GPU AIO.


----------



## Trender

Yeah, I can hold it at about 83° at 3500 RPM, but the problem is it gets bugged now *** after a while of playing the fans won't go above 2400 even though it shows 3500 in Wattman and Overdrive. It's getting annoying.


----------



## diggiddi

Guys, is there a "massive leap" between the liquid-cooled and the Nitro+? The overboost is 1580 on the Nitro and 1700s for the LC.


----------



## Chaoz

Trender said:


> Yeah I can hold it at 83-83º, at 3500 rpm, but the problem it gets bugged now *** after a time of playing the fans wont go more than 2400 even tho it shows 3500 on wattman and overdrive its getting annoying


Cuz it's throttling, probably. The AIO is the best option to lower your temps quite a bit; you can even flash your BIOS to the LC version if you want a higher clock speed, unless you wanna go full custom loop.


----------



## Trender

Chaoz said:


> Cuz it's throttling, probably. The AIO is the best option to lower your temps quite a bit, yyou can even flash your BIOS to the LC version, if you want to get a higher clockspeed, unless you wanna go full custom loop.


Yeah, it's throttling because the fans won't go up and I don't know why. They go to 3500 RPM and it works fine, but after a while they go back to the default 2400 RPM, I don't know why.


----------



## 113802

Trender said:


> Yeah its throttling cuz the fans won't go up and I don't know why, they go to 3500 rpm and it works fine but after a time they go back to default 2400 rpm i dont know why


Sick of the AIO that's on the card. It's horrible compared to other AIO designs (Alphacool Eiswolf). I'm thinking about creating my own custom loop for Vega, but it may be cheaper to buy an RTX 2080 and sell my Vega 64.

Waiting for the Futuremark Ray Tracing benchmark to see how well Vega performs before jumping back to the green side.


----------



## dagget3450

WannaBeOCer said:


> Sick of the AIO that's on the card. It's horrible compared to other AIO designs(Alphacool Eiswolf). I am thinking about creating my own custom loop for Vega but it may be cheaper to buy a RTX 2080 and sell my Vega 64.
> 
> Waiting for the Futuremark Ray Tracing benchmark to see how well Vega performs before jumping back to the green side.



I wouldn't use a Futuremark benchmark for anything myself. Maybe I need a tinfoil hat, but they have done too much shady stuff in the past when it comes to AMD/Nvidia.

Look at the VR benchmarks for example... Cyan Room does rather well on AMD, and yet they don't allow any logging/HOF results.


----------



## bloot

Anybody else noticed W10 installs Nvidia PhysX? I did a fresh W10 install and found it out. Why? Does it even work?


----------



## VicsPC

bloot said:


> Anybody else noticed W10 installs Nvidia PhysX? I did a fresh W10 install and found it out. Why? Does it even work?


Some games want it installed by default; it's quite annoying. For me I think it was Mafia II that installed it. It doesn't work though, since we don't have Nvidia cards, and it doesn't hurt performance either, so it is what it is.


----------



## bloot

VicsPC said:


> Some games want it installed by default, it's quite annoying. For me i think it was Mafia II that installed it, doesn't work though since we don't have Nvidia cards and doesn't hurt performance either so it is what it is.


It must have been GTA V I guess, but I installed some other games too so it could be any other one :S


----------



## Call

*Hey guys! Sorry to bother...*

Hey guys! Sorry to bother... I'm having a weird "issue" that I don't think I should be having, or I may just be too used to 144 fps or something.
So in the game Black Desert Online, they recently released a graphics overhaul which improves the lighting/SSAO/materials/etc., and it looks amazing. Anyway, I'm having an issue where it "stutters" or feels choppy despite never dipping below 80 fps (the screenie shows 68 due to being in town; I'm referring to when I'm killing mobs away from town and other players, which is much less taxing). 
I noticed that RivaTuner displays some consistent "spikes" while playing on the new graphics, whereas on the regular High settings that I've always played on (getting 144 everywhere) the spikes or whatever aren't present (second image).

Is this due to my OC or something not being stable? I don't feel like I'm pushing it, and CPU/GPU aren't anywhere near heat thresholds (both sit <50c at all times)
8700k
Vega Frontier (custom waterloop)

Note: I've disabled the OC on the RAM, for troubleshooting and the stutter is still present.

Any help is greatly appreciated! Thanks guys!!

Picture one:
https://cdn.discordapp.com/attachme...3/482040197186453515/2018-08-22_695675484.JPG

Picture two:
https://cdn.discordapp.com/attachme...3/482040794384039957/2018-08-22_695821956.JPG


----------



## DiscoSubmarine

Call said:


> Hey guys! Sorry to bother... I'm having a weird "issue" that I don't think I should be having, or I may just be too used to 144fps or something.
> So in the game Black Desert Online, they recently released a graphics overhaul which improves the lighting/SSAO/materials/etc. and it looks amazing. Anyway, I'm having an issue where it "stutters" or feels choppy despite never dipping below 80fps (the screenie shows 68 due to being in town, I'm referring to when I'm killing mobs away from town and other players which is much less taxing).
> I noticed that riva tuner displays some consistent "spikes" while playing on the new graphics, where as on the regular High settings that I've always played on (getting 144 everywhere) the spikes or whatever aren't present (second image).
> 
> Is this due to my OC or something not being stable? I don't feel like I'm pushing it, and CPU/GPU aren't anywhere near heat thresholds (both sit <50c at all times)
> 8700k
> Vega Frontier (custom waterloop)
> 
> Note: I've disabled the OC on the RAM, for troubleshooting and the stutter is still present.
> 
> Any help is greatly appreciated! Thanks guys!!
> 
> Picture one:
> https://cdn.discordapp.com/attachme...3/482040197186453515/2018-08-22_695675484.JPG
> 
> Picture two:
> https://cdn.discordapp.com/attachme...3/482040794384039957/2018-08-22_695821956.JPG


are you using freesync? and if so, do you have multiple monitors? recent AMD drivers are buggy with multi-monitor setups and freesync (and multi-monitor in general) so try disabling freesync if you have it enabled.
i recall having stuttering issues with freesync when below ~75 fps (on 144hz monitor), but at high fps it would be fine. hasn't happened to me in a while, but if that sounds like what you're experiencing it's probably the freesync bug.


----------



## Call

DiscoSubmarine said:


> are you using freesync? and if so, do you have multiple monitors? recent AMD drivers are buggy with multi-monitor setups and freesync (and multi-monitor in general) so try disabling freesync if you have it enabled.
> i recall having stuttering issues with freesync when below ~75 fps (on 144hz monitor), but at high fps it would be fine. hasn't happened to me in a while, but if that sounds like what you're experiencing it's probably the freesync bug.


FreeSync is on, I don't have my second monitor connected because it's only 60hz and does weird things when I want to watch a youtube video or something on it while playing the game.

I'll try turning FreeSync off when I get home; thanks! ^_^


----------



## DiscoSubmarine

Call said:


> FreeSync is on, I don't have my second monitor connected because it's only 60hz and does weird things when I want to watch a youtube video or something on it while playing the game.
> 
> I'll try turning FreeSync off when I get home; thanks! ^_^


i just checked and it seems this was actually fixed in 18.2.3, explains why i haven't noticed it in a while. oops.
still worth a shot i guess, especially if you're on an older driver.


----------



## Call

DiscoSubmarine said:


> i just checked and it seems this was actually fixed in 18.2.3, explains why i haven't noticed it in a while. oops.
> still worth a shot i guess, especially if you're on an older driver.


I'm on 18.4.1 I think... Ohwell I'll try it regardless, thanks again!


----------



## Call

DiscoSubmarine said:


> i just checked and it seems this was actually fixed in 18.2.3, explains why i haven't noticed it in a while. oops.
> still worth a shot i guess, especially if you're on an older driver.


So I disabled FreeSync last night and did some testing, but it didn't seem to make a difference at all... I was still getting the weird, consistent, "latency spikes" as seen in the first screenshot I posted on the RivaTuner overlay.

Does anyone have other ideas on this? Thanks!


----------



## diggiddi

What is your CPU load during the game?
Also try these settings:
https://www.youtube.com/watch?v=Xz5kTs8rDfk


----------



## Worldwin

diggiddi said:


> What is your cpu load during game?
> Also try these settings
> https://www.youtube.com/watch?v=Xz5kTs8rDfk


If you watch that "guide" you would realize it's pretty crap. The changes they made were setting max tessellation to 0x (making everything look flat) and setting the priority for games to high in regedit. If you have a Vega GPU this guide is useless. Not to mention during the video there are extremely distracting popups to the sides asking you to subscribe.


----------



## VicsPC

Worldwin said:


> If you watch that "guide" you would realize its pretty crap. The changes they made was set max tessellation to 0X(making everything look flat) and priority for games in regedit to high. If you have a vega GPU this guide if useless. Not to mention during the video there are extremely distracting popups to the sides asking to subscribe.


Yeah, I was about to think the same thing. Who the hell turns off tessellation completely? I know it's still on in some games, but he overrides that. I've had no issues in any game; if there's a problem with a game then it's in-game. E.g. Madden 19 past 60 fps runs horrendously.


----------



## diggiddi

Oh did it? I just changed the regedit settings which seemed to work


----------



## Ne01 OnnA

*Tweaks shortcut*

Make a file Tweak.reg and copy/paste everything below, then just merge it (if you want, you can export the original values to a file as a backup).
Also make sure the names are OK (Games, not G ame s).

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{261a2439-f5b1-4d5d-b5a9-2a6a20295c10}]
"EnableDHCP"=dword:00000001
"TcpAckFrequency"=dword:00000001
"TCPNoDelay"=dword:00000001
"TcpDelAckTicks"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile]
"NetworkThrottlingIndex"=dword:ffffffff
"SystemResponsiveness"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\Tasks]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\Tasks\Games]
"Affinity"=dword:00000000
"Background Only"="False"
"Clock Rate"=dword:00002710
"GPU Priority"=dword:00000008
"Priority"=dword:00000006
"Scheduling Category"="High"
"SFIO Priority"="Normal"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\Tasks\Playback]
"Affinity"=dword:00000000
"Background Only"="False"
"BackgroundPriority"=dword:00000004
"Clock Rate"=dword:00002710
"GPU Priority"=dword:00000008
"Priority"=dword:00000003
"Scheduling Category"="Normal"
"SFIO Priority"="Normal"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\Tasks\Pro Audio]
"Affinity"=dword:00000000
"Background Only"="False"
"Clock Rate"=dword:00002710
"GPU Priority"=dword:00000008
"Priority"=dword:00000001
"Scheduling Category"="High"
"SFIO Priority"="Normal"
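For reference, the `dword` values in the .reg text above are hexadecimal; here is a quick sketch (Python, purely for illustration) that decodes the less obvious ones into decimal:

```python
# Decode the hex dword values used in the tweak above (hex -> decimal).
values = {
    "NetworkThrottlingIndex": "ffffffff",  # all-ones value, commonly cited as disabling network throttling
    "SystemResponsiveness": "00000000",    # CPU share reserved for background tasks
    "Clock Rate": "00002710",
    "GPU Priority": "00000008",
    "Priority": "00000006",
}
decoded = {name: int(hexval, 16) for name, hexval in values.items()}
for name, val in decoded.items():
    print(f"{name} = {val}")  # e.g. Clock Rate = 10000
```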


----------



## Ne01 OnnA

*New driver, Beta Adrenalin Edition 18.8.2-Aug20*

I will quote our Guru3D driver guy, JonasBeckman! (Enjoy the highlights)

18.8.2 for Strange Brigade™ with Vulkan™
Dated August 20th.
(18.8.1 re-release is dated August 17th.)

-> https://www.amd.com/en/support/kb/release-notes/rn-rad-win-strangebrigade

==
Driver version is up from 18.30.01.01 to 18.30.11.01
or 24.20.13001.1010 to 24.20.13011.1004

Detailed Info by Jonas -> https://forums.guru3d.com/threads/a...ownload-discussion.422362/page-6#post-5577808

And then Crossfire profiles and these flags found in the next section.

Quake Champions
PlayerUnknown's Battlegrounds
Far Cry 5
Sea of Thieves
Kingdom Come Deliverance
Need for Speed Payback

So nothing added but some profiles got updated.

DXX comes after that.

Notch Builder

This one was updated.

Ni Shui Han
Call Of Duty Black Ops4

These two were added.

==

Vulkan drivers are newer.

9.2.10.49 instead of the 9.2.10.45 found in 18.8.1, although the runtime in the installer is 1.1.77.0 instead of 1.1.81.0. It's also available for download separately now 

(1.1.82.1)
https://vulkan.lunarg.com/sdk/home#windows


----------



## serave

Sorry for posting yet another tech-support question.

Has anyone ever experienced failing to boot into Windows with the RX cards?

I swapped my V56 back from the Morpheus II to the stock reference cooler and now it won't boot into Windows.

It simply goes to no signal after the mobo POST and ramps the fan up to 100%.

I remember wiggling one of the small transistors a wee bit while putting the stock cooler back on (I agree it was a bit rushed, as it took me around 15 minutes to finish it all; should've paid more attention, soz).

I recall flashing the card with the 64 Air BIOS, but I'm pretty sure I reverted it back to the stock BIOS. 

Uninstalled the driver, Afterburner etc. just in case it's my overclock doing something funny; still no result.

The said transistor is as follows.

Any help/replies would be appreciated. Thanks!


----------



## THUMPer1

serave said:


> Sorry for yet posting another tech-support question.
> 
> anyone have ever experienced failing to boot into windows with the RX cards?
> 
> I changed my V56 with Morpheus II back to the stock reference cooler and now it wont boot into windows.
> 
> It simply goes no signal after the mobo POST the and ramping up the fan to 100%.
> 
> i remember wiggling one of the small transistor a wee bit while putting the stock cooler back on (i agree its a bit rushed as it took me around 15 minutes to finish it all, shouldve been paid more attention soz)
> 
> i recall flashing the card with 64 Air Bios but im pretty sure i reverted it back to the stock BIOS.
> 
> uninstalled the driver, afterburner etc just incase its my overclock doing something funny, still no result.
> 
> the said transistor is as follows.
> 
> Any help/replies would be appreciated. thanks!


Have you tried to just re-seat the card?


----------



## diggiddi

Can anyone here with Crossfire and Project CARS/2 run 4K benches? Thanks.


----------



## Call

diggiddi said:


> What is your cpu load during game?
> Also try these settings
> https://www.youtube.com/watch?v=Xz5kTs8rDfk


I'm not sure if it was these settings that fixed it, since I reinstalled the latest driver at the same time, but it's really smooth now! 

Thanks guys!! (=


----------



## colorfuel

Hello guys.

So I recently got a VEGA64 Ref. for a decent price (375€) and I have to say, the cooler is way better than I thought it would be.

I've been able to UV the card to run 1500MHz (P6-1536/980mV, P7-1595/1015mV) on load and [email protected] (1100 seems to produce artifacts in the Tomb Raider DOX, which I'm running in the background to test my settings). I'm using a custom SoftPowerPlay set.

At this point, GPU chip power hovers around 180-182W. 

I've tried to get it further down, but that would cause crashes unless I also go down with the Mhz. And I'd like to keep 1500Mhz, since that results in better FPS than stock settings with less Power draw. So it seems I found my sweetspot. 

For all the bad press that these cards have gotten, I'm quite pleased. I have like double the performance of my 290X Tri-X OC, with about the same power draw. That seems like a serious improvement. (Of course the Nvidias fare better here, but still.) 


Now to my questions:

1. What does 180W "GPU Chip Power" in HWiNFO, or "GPU only Power Draw" in GPU-Z, actually mean? How much do I need to add to know the real power draw, to be able to compare it with reviews online?

I searched for quite some time in several forums, but I couldn't find a conclusive answer.


----------



## VicsPC

colorfuel said:


> Hello guys.
> 
> So I recently got a VEGA64 Ref. for a decent price (375€) and I have to say, the cooler is way better than I thought it would be.
> 
> I've been able to UV the card to run 1500Mhz (P6-1536/980mv P7-1595/1015mv) on load and [email protected] (1100 seems to produce Artifacts in the Tomb Raider DOX, which I'm running in the background to test my settings.) I'm using a custom SoftPowerPlay set.
> 
> At this point, GPU chip power hovers around 180-182W.
> 
> I've tried to get it further down, but that would cause crashes unless I also go down with the Mhz. And I'd like to keep 1500Mhz, since that results in better FPS than stock settings with less Power draw. So it seems I found my sweetspot.
> 
> For all the bad press that these cards have gotten, I'm quite pleased. I have like double the performance of my 290X Tri-X OC, with about the same power draw. That seems like a serious improvement. (Of course the Nvidias fare better here, but still.)
> 
> 
> Now to my questions:
> 
> 1. What does 180W GPU Chip Power in HWinfo or GPU only Power Draw in GPU-z mean? How much do I need to add to know the real power draw, to be able to compare it with reviews online?
> 
> I searched for quite some time in several forums, but I couldnt find a conlcusive answer.


Real power draw would depend on the efficiency of your power supply; no way for us to calculate it exactly. If it's undervolted you'll be using less than in reviews anyway, but there is no way for you to measure GPU power usage only; wall draw would give you the whole system. I think HWiNFO uses an algorithm based on amps and voltage to give you wattage. If both of those are correct, then the wattage will be correct.
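A rough back-of-the-envelope sketch of that reasoning follows; the 150 W rest-of-system draw, 60 W board overhead (VRM losses, fans, HBM, misc.), and 90% PSU efficiency figures are placeholder assumptions for illustration, not measured values:

```python
# Estimate AC wall draw from the "GPU chip power" sensor reading.
# board_overhead_w, rest_of_system_w and psu_efficiency are assumed
# illustration values, not measurements.
def wall_draw(chip_power_w, rest_of_system_w=150.0,
              board_overhead_w=60.0, psu_efficiency=0.90):
    dc_load = chip_power_w + board_overhead_w + rest_of_system_w  # total DC watts
    return dc_load / psu_efficiency  # AC watts pulled from the wall

print(round(wall_draw(180), 1))  # 180 W chip power -> 433.3 W at the wall
```

With these assumptions, a 180 W chip-power reading would mean roughly 430 W at the wall for the whole system; only a wall meter can give the real number.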


----------



## Ne01 OnnA

Yup, when the UV is in place you won't exceed a real 150-220 W from the wall (remember that the average is much lower than the max momentary spikes).

Here is my tweaked measurement on a ~Platinum-class PSU (XFX is made by Seasonic ) when gaming in ME: Andromeda (played a 1h mission today on Havarl).

As you can see, the GPU was at ~1700MHz with UV; max was ~200 W, but the average over the 1h was only 94 W (GPU+HBM2 summed).
Vega is quite a performer and it has better power draw than my Fiji had (also UV, mostly 1020/560). 

ME: A, same planet, High settings tweaked (Ultra on Fiji is a no-go): I had ~240 W and an average of ~170 W!
So Vega is ~2x better than Fiji (compared High vs Ultra AA lol)

===


----------



## THUMPer1

Is MSI Afterburner still a viable option for overclocking and undervolting Vega 64? I really like RTSS overlay, only reason I ask.


----------



## THUMPer1

These settings get me around 1730 Mhz.
If I go to 1180 mV I will get a hard crash. Am I missing something? Or is this pretty much it for this V64 Liquid?


----------



## THUMPer1

V64 Liquid.
These setting get me to 1730 Mhz or so in game. If I go to 1180 mV I will get a crash. Is this as good as it gets for this chip? Am I missing anything?


----------



## colorfuel

Thanks for the replies guys! I'd rep if I could, but I can't, so I shan't.

My be quiet! Straight Power E9 CM 80+ Gold is rated at 90% efficiency and higher (according to tests).


Anyway, 1700MHz @ 200W is really nice @Ne01 OnnA! I can't expect mine to go that far on such low volts, but this is pretty good as well.


----------



## Ne01 OnnA

THUMPer1 said:


> V64 Liquid.
> These setting get me to 1730 Mhz or so in game. If I go to 1180 mV I will get a crash. Is this as good as it gets for this chip? Am I missing anything?


I have no such problems with BF1 (I'm using a 70FPS cap with FreeSync).
If you need more stable gaming, just tweak Chill to e.g. 70-100FPS and try again.
Also, for 1750MHz and above I need a hefty 1.131V! (Tested in 3DMark when I beat my older [18.7.1] score, 27,700 to 28,092.)

-> http://hwbot.org/submission/3916844_


----------



## ZealotKi11er

Is this score any good? 

https://www.3dmark.com/fs/16234236


----------



## Ne01 OnnA

ZealotKi11er said:


> Is this score any good?
> 
> https://www.3dmark.com/fs/16234236


IMO you can do better  try to give the HBM2 a little more love... 1150? 1175?
But it's fine by all means  
(anything 26k+ is a good score; 8.5k in Time Spy is also good)


----------



## ZealotKi11er

Ne01 OnnA said:


> IMO you can do better  try to give HBM_2 little more Love... 1150? 1175?
> But it's fine by all means
> (any 26+ is an good score, TimeSpy 8.5k is also good one)


HBM2 is already at 1200. My problem is the core: it can't do over 1700MHz. I tried 1725MHz but got the same score. It is a very loaded system, though, in which I have installed a lot of different GPUs.


----------



## THUMPer1

Does only doing the registry hack to get more than 50% power actually help with anything?


----------



## DiscoSubmarine

THUMPer1 said:


> Does only doing the registry hack to get more than 50% power actually help with anything?


What do you mean "only"? As in raising the power limit with no overclocking? Then no.
50% is more than enough at stock clocks. The hack is useful if you're pulling more than 330 watts (the max on a reference BIOS with a +50% power limit).


----------



## diggiddi

OK, since no one bit on the 4K request, can I get a 2560x1080 Project CARS / Project CARS 2 bench? There are free demos on Steam if you don't have the games. Thanks!


----------



## Ne01 OnnA

Sub to giantmonkey101 and Matt B 
They have coverage of almost any game with a Vega 64 + proper UV.


----------



## Ne01 OnnA

*Editorial about Vega (1 month of Gaming with FreeSync)*

My thoughts about Vega 64 XTX (and Vega uArch as a whole)
Same stuff on our Thread in Guru3d

I would like to share my opinion about the Vega XTX.
Gaming is just excellent: FPS and the 0.01% lows are very good, and thanks to FreeSync you don't even notice when dips happen  (down to 47-55FPS in heavy games)
The Vega 64 XTX is also in a class of its own: not at 1080Ti FE / Titan Xp performance levels (min. 10% behind at 1440p), but quite a bit ahead of AIB 1080s.
The situation changes in the XTX's favor when gaming in HDR, though: with 10-bit 1440p in mind, the XTX is almost at 1080Ti FE levels.

So the performance table looks like this:
1080Ti
Vega XTX
Vega 64
1080
Vega 56
1070Ti
1070
RX580 8GB
....

Also, if you want to jump on a Vega 64 GPU, don't forget a good FreeSync monitor (the best is the Samsung 27" 1440p 144Hz HDR FreeSync 2, or a similar-quality IPS LED; 10-bit is good to have)
IMO you will get sufficient performance with a V64, and thanks to FreeSync you'll get the smoothest possible gaming experience for less than 900-1000€ (3 games included until December)

Note no. 2

If power draw or efficiency is your concern, Vega can also work for you.
Forget the spike readings on bench sites, those "275W" figures, lol (if they were based on average readings through gameplay, you'd see a very efficient gaming GPU)
And if you game with some UV + FreeSync and Chill on, you are as green as can be.
You can easily end up with 35-80W on average (GPU+HBM2 only; 35W is Forza MS7, 78W is Deus Ex & AC: O)

===
Below: HWiNFO64 log measurements

Forza, 1h of Forzathon last week, driver 18.8.1
&
Deus Ex MkD, Prague first area, completing some quests

==


----------



## Paul17041993

Has there been any talk here about 'VegaFreeze'? the SoC hardware bug that causes black/grey/colour hangs from a PCIe root complex collapse?
Or at least, who here uses a 4k display with a secondary display, with the card under water or with a liquid edition?


----------



## Chaoz

THUMPer1 said:


> Is MSI Afterburner still a viable option for overclocking and undervolting Vega 64? I really like RTSS overlay, only reason I ask.


I find OverdriveNTool to work best. It's easier to edit the P6 and P7 states with matching voltages. That, or Wattman, where you can export and import a profile and edit the same settings as well.

I used OverdriveNTool to find a stable undervolt and overclock setting then switched to Wattman to export a profile.

Got my 64 with LC BIOS flash running on 1750MHz with 1V on core and 1100MHz with 950mV on HBM with +50% power.


----------



## THUMPer1

What are people using to replace the thermal pads with? What brand?


----------



## dslives

Recently binged on cheap used GPUs. These two 64s turned up today. Awaiting a few more parts before I can put them in my loop.

Still outstanding:

1) EKWB FC block
2) ... backplate
3) more coolant
4) better PSU (due any day now)











Sent from my iPhone using Tapatalk Pro


----------



## majestynl

Chaoz said:


> I find OverDriveNTool to work best. Easier to edit the voltage and P6 and P7-states with matching voltage. That or Wattman where you can export and import the profile and edit the same settings aswell.
> 
> I used OverdriveNTool to find a stable undervolt and overclock setting then switched to Wattman to export a profile.
> 
> Got my 64 with LC BIOS flash running on 1750MHz with 1V on core and 1100MHz with 950mV on HBM with +50% power.


1750mhz @1v ?? That's really low voltage...


----------



## Drake87

Paul17041993 said:


> Has there been any talk here about 'VegaFreeze'? the SoC hardware bug that causes black/grey/colour hangs from a PCIe root complex collapse?
> Or at least, who here uses a 4k display with a secondary display, with the card under water or with a liquid edition?


Is that what causes that? I have a 4k primary with a 1600p secondary display. It doesn't do it often, but I'm pretty sure I've experienced it.


----------



## Ne01 OnnA

majestynl said:


> 1750mhz @1v ?? That's really low voltage...


P6 is 1V
P7 is 1.1V

Also don't forget that the AI boost works across the whole P0 to P7 range 
If needed it will run 982mV at 1620MHz, and so on.
All we can do is show the AI the 'brackets' for mV & MHz

-> 
1. Here is my xGaming Profile (for DeusEx MkD & AC: O)
2. 40min of HoMM Heroes VII DX10 1440p Ultra + UE3 ini MOD 
==


----------



## LazarusIV

PowerColor Red Devil RX Vega 64

Hey all, just bought an open-box version from Newegg, plus got a 10% off promo code. All in all, $490 out the door for this card. Was really hoping the Sapphire Nitro that's been on sale would come back in stock, but it hasn't. Saw this card and decided to jump on it, since I've got a buddy who wants to buy my R9 Fury. Anyone else have this card? Should I re-paste and re-pad it right off the bat?


----------



## Exposal

So is it not possible to get OCs with Wattman? Just trying to get a decent undervolt/OC and it's not going well.

Using a Vega 56 flashed to Vega 64. It has Samsung HBM and is cooled by a custom loop with an EKWB block.

Anyone got a safe baseline I should start tinkering from? Can't seem to find anything that isn't 6-8 months old.


----------



## Ne01 OnnA

LazarusIV said:


> PowerColor Red Devil RX Vega 64
> 
> Hey all, just bought an open box version from Newegg plus got a 10% off promo code. All in all, $490 out the door for this card. Was really hoping the Sapphire Nitro that's been on sale to come back in stock but it hasn't. Saw this card and decided to jump on it since I've got a buddy who wants to buy my R9 Fury. Anyone else have this card? Should I re-paste and re-pad this right off the bat?


IMO you don't need to do that, just make sure it has good airflow overall in the case 

===



Exposal said:


> So is it not possible to get OCs with Wattman? Just trying to get a decent undervolt/oc and it's not going well
> 
> Using a Vega 56 flash to vega 64. Has samsung HBM and being cooled by custom loop with EKWB block.
> 
> Anyone got a safe baseline I should start with to start tinkering with? Can't seem to find anything that isn't 6-8 months old


You can edit it via Notepad++ (just save your custom OC setup in WattMan, edit the file, then load it when needed).
You can also keep multiple OC profiles for different games (go into the game's tab -> WattMan and load a profile)


----------



## Maracus

THUMPer1 said:


> What are people using to replace the thermal pads with? What brand?


I have used Fujipoly in the past



Exposal said:


> So is it not possible to get OCs with Wattman? Just trying to get a decent undervolt/oc and it's not going well
> 
> Using a Vega 56 flash to vega 64. Has samsung HBM and being cooled by custom loop with EKWB block.
> 
> Anyone got a safe baseline I should start with to start tinkering with? Can't seem to find anything that isn't 6-8 months old


OverdriveNTool or Wattman. I'm using Wattman at the moment; for some reason OverdriveNTool isn't applying settings on my new Ryzen system (I might DDU the graphics driver later and try again).

I don't have Samsung memory on my Vega 56, but I have been able to undervolt/overclock P7 to 1650MHz/1050mV; it shows 1.0V in HWiNFO64.


----------



## Exposal

Ne01 OnnA said:


> IMO you don't need to do that, just make sure it has some good Air flow overall in Case
> 
> ===
> 
> 
> 
> You can do it via Notepad++ (just save your OC Custom Setup in WattMan) edit then just Load when needed.
> Also you can have many OC profiles for many games (go into game Tab -> WattMan and load Profile)


I'd like to just have one profile: set it and forget it, lol. I just can't seem to find a good starting OC/undervolt. I want around a 23k Fire Strike score, the more the better, but not at the expense of a ton of volts.


----------



## Spacebug

majestynl said:


> 1750mhz @1v ?? That's really low voltage...


Yes, if that is roughly 1750MHz maintained under load. 
Or I might just have a really trash chip; seems I always lose that lottery. 

My V64 needs around 1.25V load voltage (after droop) to maintain around 1750MHz :/


----------



## THUMPer1

Spacebug said:


> Yes, if that is roughly 1750mhz maintained during load.
> Or I might just have got a really trash chip, seems i always loose that lottery.
> 
> My v64 needs around 1.25v load voltage (after droop) to maintain around 1750mhz :/


My V64 Liquid has made me settle for 1740MHz/1190mV at +25% power. In BF1 the average clock speed is around 1720-1730; I never get anywhere close to 1750. But it plays great. I have to turn all the game settings up to stay under 144Hz, within my FreeSync range.


----------



## 113802

THUMPer1 said:


> My v64 liquid has made me settle for 1140mhz/1190mv 25% power. In BF1 average clock speed is around 1720-1730. I never get anywhere close to 1750. But it plays great, I have to turn all game settings up to stay under 144hz within my freesync range.


Removed the garbage AIO cooler and installed my EK waterblock Saturday.


----------



## THUMPer1

Nice! I change hardware and cases too much to justify all the extra water cooling stuff. Maybe one day.


----------



## Chaoz

majestynl said:


> 1750mhz @1v ?? That's really low voltage...


Yeah, it is. In most games it boosts to 1750MHz, but in some less demanding games it only boosts to 1680MHz. Nevertheless, still a solid clock.
Only possible cuz I flashed the LC BIOS to it. Here's the OverdriveNTool screenshot:












Spacebug said:


> Yes, if that is roughly 1750mhz maintained during load.
> Or I might just have got a really trash chip, seems i always loose that lottery.
> 
> My v64 needs around 1.25v load voltage (after droop) to maintain around 1750mhz :/


It maintains 1750MHz in plenty of the more demanding games with fully maxed-out settings.
In some games like Tekken 7 it stays at 1680MHz, cuz I'm using FreeSync at 75Hz.

Temps are awesome too: it stays at 42-45°C (hotspot around 50°C) even during several-hour gaming sessions. The CPU stays at 55°C-ish.
A full custom loop with 360 and 480 rads helps a lot, tho.

Must be lucky with my GPU.


----------



## 113802

Chaoz said:


> Yeah, it is. Most games it boosts to 1750MHz, but some lesser demanding games it only boosts to 1680MHz. Never the less still a solid clock.
> Only possible cuz I flashed the LC BIOS to it. Here's the OverdriveNTool screenshot:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It maintains 1750MHz in plenty of the more demanding games with fully maxed out settings.
> Some games like Tekken 7 and such it stays at 1680MHz, cuz I'm using FreeSync with 75Hz.
> 
> Temps are awesome too, stays at 42-45°C (hotspot around 50°C) even during several hour gaming sessions. CPU stays at 55°C-ish.
> Full custom loop with 360 and 480 rad helps a lot, tho.
> 
> Must be lucky with my GPU .


I'm jealous, I can only run mine at 1750MHz @ 1175mV, which has all games running between 1700-1740MHz


----------



## Chaoz

WannaBeOCer said:


> I'm jealous, I can only run mine at 1750MHz @ 1175mV, which has all games running between 1700-1740MHz


Damn, that sucks. You still have good clocks, tho.


----------



## 113802

Chaoz said:


> Damn, that sucks. You still have good clocks, tho.


I have the highest GPU FireStrike score on HWBot. 

Curious what yours gets when always running at 1750Mhz


----------



## Chaoz

WannaBeOCer said:


> Chaoz said:
> 
> 
> 
> Damn, that sucks. You still have good clocks, tho.
> 
> 
> 
> I have the highest GPU FireStrike score on HWBot.
> 
> Curious what yours gets when always running at 1750Mhz

My GPU FS score with clocks on 1750MHz Core and 1050MHz HBM is 23635. Haven't really benched it that much.


----------



## Exposal

Can a Vega 56 be flashed with the 64 Liquid BIOS? I currently have it flashed with the air-cooled 64 BIOS. Would the liquid BIOS be worth flashing? The card is a waterblocked XFX Vega 56 reference with Samsung HBM.


----------



## Chaoz

Exposal said:


> Can vega 56 be flashed with 64 liquid bios? I currently have it flashed with air 64. Would it be worth flashing the liquid bios? The card is a waterblocked XFX vega 56 reference with samsung hbm


Should work. A lot of people did it when they got their 56. They did say it didn't perform like a full 64 LC because of the lower power draw (275W compared to 300W); it landed somewhere between a 56 and a 64, more like a "58" or "62". But hey, a free performance upgrade can't be bad. That was just a week after release, though, dunno if it's still like that.

I flashed my 64 ref with the LC BIOS and now I can get 1750MHz at 1V stable; before it was only 1546MHz. FPS increased and temps dropped: 85°C on the ref cooler, down to 45°C with the waterblock.


----------



## THUMPer1

Chaoz said:


> My GPU FS score with clocks on 1750MHz Core and 1050MHz HBM is 23635. Haven't really benched it that much.


Yeah, those clocks at that voltage are probably the best I've ever seen. You won the lottery.


----------



## Exposal

Chaoz said:


> Should work. A lot of people did it when they got their 56. They did say that it didn't perform as a full 64 LC cuz of the lesser powerdraw 275W compared to 300W, but was more like a 58 or 62, but hey, free performance upgrade can't be bad, so inbetween 56 and 64, this was just a week after their release. Dunno if it still is like that.
> 
> I flashed my 64 ref with the LC BIOS and now I can get 1750MHz on 1v stable, before it was only 1546MHz, fps increased and temps dropped. Ref cooler temps were 85°C waterblock temps dropped to 45°C.


Could you post your overclocks and voltages for p6/p7 and HBM?

Thanks!


----------



## Ne01 OnnA

28k+, great score bratan' 
I'm waiting for faster RAM for my setup (3466 CL14 or better, 32GB, soon-ish)


----------



## LazarusIV

Ne01 OnnA said:


> IMO you don't need to do that, just make sure it has some good Air flow overall in Case



Well, I've got that taken care of: a Define S with 3 140mm intake fans in front, and the 2 next-lowest PCIe slot covers removed to let air out 

Can't wait to get this card and put it through its paces!


----------



## majestynl

Ne01 OnnA said:


> P6 is 1mV
> P7 is 1.1mV
> 
> Also don't forget that ATI AI Booster is working in all P0 to P7 spectrum
> If needed it will go 982mV at 1620MHz and so on.
> All we can do is Show AI the 'Brackets' for mV & MHz
> 
> ->
> 1. Here is my xGaming Profile (for DeusEx MkD & AC: O)
> 2. 40min of HoMM Heroes VII DX10 1440p Ultra + UE3 ini MOD
> ==


Yeah, I see! Nice voltages! Mine needs 1210mV for P7 @ 1732MHz, otherwise it crashes in some games! Higher P7 clocks need even more!!!
I do get great scores in FS, but dunno why it needs that much voltage. I can leave P6 at 1000mV, but P7 is acting strange!

With P7 @ 1732MHz I'm getting ~1700-1715MHz in games,
with P7 @ 1752MHz I'm getting ~1710-1730MHz in games!


I'm using an LC BIOS (WC) with custom PP tables and 150% power! I suspect there is something wrong somewhere....
Need to spend some time on it (again), but I'll probably start with changing the TIM and thermal pads, cause temps are not fantastic for water cooling: ~50-52°C at full load!



WannaBeOCer said:


> Removed the garbage AIO cooler and installed my EK waterblock Saturday.


Nice!!



WannaBeOCer said:


> I have the highest GPU FireStrike score on HWBot.
> 
> Curious what yours gets when always running at 1750Mhz


Great score !!!


----------



## Exposal

Wattman is confusing the hell out of me. With P6 set to 1650MHz/1000mV and P7 to 1750MHz/1050mV, the clocks only go to around 1670MHz, and temps only reach around 37-38°C.

Set to auto, it boosts to 1816MHz but uses 1.250V and 350W.


----------



## 113802

Exposal said:


> Wattman is confusing the hell out of me, setting p6 1650mhz/1000mv and p7 1750/1050mv and the clocks only go to around 1670mhz temps are only getting to around 37-38c
> 
> Set it to auto do it's thing and its boosting to 1816 but using 1.250v and 350w


What card do you have? Reference or non-reference design? Crazy how all of you are able to run P7 way below 1100mV.


----------



## Exposal

WannaBeOCer said:


> What card do you have? Reference on non-reference designs? Crazy how all of you are able to run p7 way below 1100mv


Reference Vega 56 flashed with the 64 water BIOS. It has Samsung HBM.


----------



## 113802

Exposal said:


> Reference vega 56 flashed with 64 water. has samsung hbm


When it boosts to 1816MHz on the core, does it crash? Crazy, lol. I bought a reference Vega 64 LC and it can only go down to 1175mV; anything lower and it crashes.


----------



## Chaoz

THUMPer1 said:


> Chaoz said:
> 
> 
> 
> My GPU FS score with clocks on 1750MHz Core and 1050MHz HBM is 23635. Haven't really benched it that much.
> 
> 
> 
> Yeah those clocks and voltage are probably the best ive ever seen. You won the lottery.

At least I won something good.

Here's a pic of what it usually clocks to. Had HBM at 945MHz, cuz No Man's Sky acts weird at 1100MHz.













Exposal said:


> Chaoz said:
> 
> 
> 
> Should work. A lot of people did it when they got their 56. They did say that it didn't perform as a full 64 LC cuz of the lesser powerdraw 275W compared to 300W, but was more like a 58 or 62, but hey, free performance upgrade can't be bad, so inbetween 56 and 64, this was just a week after their release. Dunno if it still is like that.
> 
> I flashed my 64 ref with the LC BIOS and now I can get 1750MHz on 1v stable, before it was only 1546MHz, fps increased and temps dropped. Ref cooler temps were 85°C waterblock temps dropped to 45°C.
> 
> 
> 
> Could you post your overclocks and voltages for p6/p7 and HBM?
> 
> Thanks!

Here are mine:


----------



## Exposal

WannaBeOCer said:


> When boosting to 1816Mhz on the core does it crash? Crazy lol I bought a Reference Vega 64 LC and it can only go down to 1175mv anything lower it crashes.


It stayed stable, but the memory clock was at 945 I think.


----------



## Exposal

Chaoz said:


> Atleast I won something good .
> 
> Here's a pic of what it usually clocks to. Had HBM on 945MHz, cuz No Man's Sky acts weird with 1100MHz.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here are mine:



Should I focus more on core clock or memory speed? Doesn't seem like my card can touch yours, lol.


----------



## 113802

Exposal said:


> Should I focus more on core clock or memory speed? doesn't seem my card can touch yours lol


Memory. What type of cooling are you using? If custom water, you can easily keep it at 1100MHz; if not, monitor the hotspot temp, and 1050MHz should be possible. 

EK's testing: https://www.ekwb.com/blog/can-water-block-really-boost-gpu-performance/

I'm running P7 at 1750MHz at 1175mV with +50% power, which uses 310-315W, with my HBM @ 1105MHz. That has it running at 1700-1730MHz sustained in every single game, but I'm using a waterblock, so it doesn't thermal throttle.

Edit: Which bios are you using? I noticed my card has 016.001.001.000.008709


----------



## Exposal

Chaoz said:


> Atleast I won something good .
> 
> Here's a pic of what it usually clocks to. Had HBM on 945MHz, cuz No Man's Sky acts weird with 1100MHz.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here are mine:



Copied your profile into my OverdriveNTool and I don't crash, but my core clock doesn't really go above ~1575.


----------



## Exposal

WannaBeOCer said:


> Memory, what type of cooling are you using? If custom water you can easily be able to keep it at 1100Mhz if not monitor the hot spot temp 1050Mhz should be possible.
> 
> EK's testing: https://www.ekwb.com/blog/can-water-block-really-boost-gpu-performance/
> 
> I'm running at p7 1750Mhz at 1175mv +50% power which is using between 310-315w with my HBM @ 1105Mhz which has it running at 1700-1730Mhz sustained in every single game but I'm using a waterblock so it doesn't thermal throttle.
> 
> Edit: Which bios are you using? I noticed my card has 016.001.001.000.008709


Grabbed this one since it had the highest number; I figured it was the newest:

016.001.001.000.008774


https://www.techpowerup.com/vgabios/195319/amd-rxvega64-8176-170811


----------



## Chaoz

Exposal said:


> Copied your profile into my overdriven tool and I don't crash but my core clock doesn't really go above like 1575


Did you flash the LC BIOS to yours? Cuz with the stock BIOS mine didn't reach any higher than 1560MHz.
The LC BIOS allowed me to clock it higher with lower voltage.




WannaBeOCer said:


> Edit: Which bios are you using? I noticed my card has 016.001.001.000.008709


I'm using the same BIOS version


----------



## Exposal

Chaoz said:


> Did you flash the LC BIOS to yours? Cuz with the stock BIOS mine didn't reach any higher than 1560MHz.
> LC BIOS allowed me to clock it higher with lower voltage.
> 
> 
> 
> 
> I'm using the same BIOS version



I have the LC BIOS and I have the settings set identical to yours with the same voltages, but it won't go to the clocks I have set. If I up the voltages, though, the clocks will go up.


----------



## Chaoz

Exposal said:


> I have the LC bios and I have the settings set identical to yours with the same voltages but it won't go to the clocks that I have set. But if i up the voltages the clocks will go up


Also +50% Power target?

You won't have it clock to 1750MHz in every game; only the more demanding games will boost to 1750MHz. Most of the time it clocks around 1680MHz. But power usage is a lot lower than stock, so I don't mind.


----------



## Exposal

Chaoz said:


> Also +50% Power target?
> 
> You won't have it clock to 1750MHz in every game. Only the more demanding games it will boost to 1750MHz. Most of the time it clocks at around 1680MHz. But power usage is a lot lower than stock, I don't mind.


What game would be a good test to get it to 1750 or 1680? Fire Strike only gets it to 1569.

Yup, +50%.


----------



## Exposal

Chaoz said:


> Also +50% Power target?
> 
> You won't have it clock to 1750MHz in every game. Only the more demanding games it will boost to 1750MHz. Most of the time it clocks at around 1680MHz. But power usage is a lot lower than stock, I don't mind.


These are the results of one Fire Strike run with your settings:


----------



## ht_addict

Posted my results for Strange Brigade in this thread. See link.

https://www.overclock.net/forum/226...vulkan-multi-gpu-support-strange-brigade.html


----------



## Fatrod

WannaBeOCer said:


> I have the highest GPU FireStrike score on HWBot.
> 
> Curious what yours gets when always running at 1750Mhz


Very interesting. I have a few scores over 27k, but yours is the highest 64 LC score I have seen.

https://www.3dmark.com/fs/15693762

I had to go 1800+ on core and 1175 on HBM frequency to get that score.

I wonder if it's because of your larger rad, or if you're getting some benefit from your faster CPU.


----------



## Ne01 OnnA

I have 28k also (the GPU needs only ~1760MHz actual clocks)

For 27,934 it needs only 1.081V at a 1742MHz clock, with HBM2 at 1200MHz.
So the Vega LC is a very efficient GPU after all.
Add to this excellent HDR support (SDR and HDR give around the same FPS) and we have a clear winner in the perf/price/quality segment for enthusiasts  (1k€ for a really nice setup: V64 with 3 games + QLED)

-> https://www.samsung.com/us/computin...ming-monitor-with-quantum-dot-lc27hg70qqnxza/

We can easily wait for MicroLED or OLED monitors with HDR & FreeSync with our Vegas, for sure!
At 1440p 120/144Hz with min. 70FPS we can play games and have fun. Easy.
==
==


----------



## Chaoz

Exposal said:


> Chaoz said:
> 
> 
> 
> Also +50% Power target?
> 
> You won't have it clock to 1750MHz in every game. Only the more demanding games it will boost to 1750MHz. Most of the time it clocks at around 1680MHz. But power usage is a lot lower than stock, I don't mind.
> 
> 
> 
> What game would be a good test to get it to go to 1750 or 1680. Firestrike is only getting to 1569.
> 
> Yup +50%

Battlefield 1 would be a good game to test it with. It always clocks to 1750 for me at 1440 UW.
And definitely Frostpunk as well; it's such a heavy game that it even maxes out my 8GB of VRAM.


----------



## 113802

Fatrod said:


> Very interesting. I have a few scores over 27k but yours is the highest 64LC score I have seen.
> 
> https://www.3dmark.com/fs/15693762
> 
> I had to go to 1800+ on core an 1175 on HBM frequency to get that score.
> 
> I wonder if it is because of your larger rad or if you are getting some benefit from your faster CPU.


Did you undervolt the core? If not, you won't reach 28k, since the core frequency jumps around. My Vega 64 @ 1175mV sustains 1750MHz when running 3DMark, while games are between 1700-1740MHz.


----------



## Exposal

It seems like I can either have 1100 HBM and ~1580 core, or 945 HBM and 1750 core, but I can't seem to get a high OC on HBM and core at the same time.


----------



## egrest

Is the Unigine Heaven benchmark acting weird for anyone else here? I'm on an air-cooled reference Vega 64, and Heaven crashes the card when I undervolt to 1000mV for P6 and P7. Unigine Superposition runs fine at these settings, though.


----------



## sinnedone

Any downsides to a reference Frontier 16GB vs a reference Vega 64 8GB? It will primarily be used for gaming and would be under water.😀


----------






## 113802

egrest said:


> Is Unigine Heaven benchmark acting weird for anyone here? I am on a air-cooled reference Vega64, and Heaven crashes the card when I under-volt the card at 1000 mV for P6 and P7. Unigine Superposition runs fine under these settings though.


Superposition isn't as stressful. 1000mV is extremely low for P6 and P7. I can run at 1000mV through loading screens and such, but for actual 24/7 gaming, 1175mV is the lowest I can do on P7.


----------



## Exposal

Ne01 OnnA said:


> I have 28k also (GPU needs only ~1760MHz actuall clocks)
> 
> For 27,934 needs only 1.081v with 1742MHz clock w/HBM2 at 1200MHz
> So Vega LC is very efficient GPU after all.
> Add to this HDR excellent support (No matter SDR/HDR give around same FPS) and we have a clear winner in Perf/Price/Quality segment for enthusiasts  (1k€ for really nice setup V64 with 3 Games + QLED)
> 
> -> https://www.samsung.com/us/computin...ming-monitor-with-quantum-dot-lc27hg70qqnxza/
> 
> We can easily wait for MicroLED or OLED Monitors with HDR & FreeSync with our Vegas for sure !
> 1440p 120/144Hz with min. 70FPS we can Play games and have fun -Easy-.
> ==


Do you use custom powerplay settings in the registry?


----------



## egrest

WannaBeOCer said:


> Superposition isn't as stressful. 1000mV is extremely low for p6 and p7. I can run at 1000mV on loading screens along with other things but when actually gaming 24/7 1175mV is the lowest i can do on p7


Thanks. It is stable now with the increased voltages. I was hoping to save power with the decreased voltages, but it seems that I was too optimistic.


----------



## Ne01 OnnA

Exposal said:


> Do you use custom powerplay settings in the registry?


Yup, you got my setup a few pages back (1, 2).
Lowered P4 & P5 significantly, 943 & 975mV; more on the screenshot.

+++
Ahh, one sec 
It's been tested in 3DMark & AC: O, BF1 & Deus Ex MkD DX12 
All P-states are fully stable.

OverdriveNTool 0.2.7 Beta4 with the PP_states addon:
-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1

Hellm has created SoftPowerPlayTable key files:
https://www.overclock.net/attachments/49572

Here:

----------



## Exposal

Ne01 OnnA said:


> Yup, you got my setup few pages back (1,2)
> Lowered P4 & P5 significantly 943 & 975mV more on screnie.
> 
> +++
> Ahh it's one sec
> It's been tested in 3Dmark & AC: O, BF1 & DeusEx MkD DX12
> All P-states are fully stable.
> 
> Here:



When I tried editing my tables, it said not found. Do I have to add a registry entry?


----------



## Ne01 OnnA

Exposal said:


> When i tried editing my tables it says not found, do i have to add a registry entry?


OverdriveNTool 0.2.7 Beta4 for PP_states addon:
-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1


----------



## ht_addict

I'm new on the Vega front with a couple of ASUS Strix Vega 64s, watercooled with EKWB blocks. Could someone point me toward the undervolting people are doing to control temps and squeeze out a bit more performance? Greatly appreciated.


----------



## ht_addict

Played around with OverdriveNTool and played some Far Cry 5 for an hour. Temps pretty good??


----------



## VicsPC

ht_addict said:


> Played around with OverdriveNtool and played some FarCry5 for an hour. Temps pretty good??


Not too bad. From those temps I'm assuming you have one rad for your GPU/CPU, correct?


----------



## mtrai

Deleted...was being dumb this morning, guess I am needing more coffee.


----------



## miklkit

Guess I will be joining this club in a few days. Just ordered a Sapphire Nitro RX Vega 64.


----------



## Ne01 OnnA

miklkit said:


> Guess I will be joining this club in a few days. Just ordered a Sapphire Nitro RX Vega 64.


Grats Bratan' and welcome 
FreeSync?


----------



## ht_addict

VicsPC said:


> Not too bad, from those temps I'm assuming you have one rad for your gpu/cpu correct?


Both Vegas on a 360 (bottom of case) and a 140 at the rear


----------



## Fatrod

WannaBeOCer said:


> Did you undervolt the core? If not you won't reach 28k since the core frequency is jumping around. My Vega 64 @ 1175mv sustains 1750 when running 3DMark, while games are between 1700-1740Mhz


Not when I'm trying for high scores...that's when I up the voltage and the frequency. So you're saying you think it will go higher with lower clocks and an undervolt?

I could understand that for an air card but not the LC. I've got the temp target at 55.


----------



## 113802

Fatrod said:


> Not when I'm trying for high scores...that's when I up the voltage and the frequency. So you're saying you think it will go higher with lower clocks and an undervolt?
> 
> I could understand that for an air card but not the LC. I've got the temp target at 55.


The Vega card is power limited. It can only draw 375W, and when you overvolt it, it exceeds 375W and throttles down the core frequency. That's why when I undervolt my card it runs at 1750MHz throughout the entire benchmark, producing a higher score.
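A back-of-the-envelope sketch of why this works: dynamic power scales roughly with V²·f, so under a fixed board power cap, dropping the core voltage raises the frequency the cap can sustain. The constant `k` below is invented purely for illustration (picked so ~1.2V lands near the 1750MHz / 375W figures in this thread); it is not a measured Vega parameter.

```python
# Illustration only: dynamic power ~= k * V^2 * f, so a fixed power cap
# sustains a higher frequency at a lower core voltage. k is an arbitrary
# constant chosen so 1.2 V lands near 1750 MHz at 375 W -- it is NOT a
# measured Vega parameter.

def sustained_freq_mhz(power_cap_w, v_core, k=0.149):
    """Highest frequency (MHz) a power cap allows at a given core voltage.
    k has units of W / (V^2 * MHz) and is illustrative only."""
    return power_cap_w / (k * v_core ** 2)

cap = 375.0  # the board power limit mentioned above, in watts
for mv in (1250, 1200, 1175, 1100):
    v = mv / 1000.0
    print(f"{mv} mV -> ~{sustained_freq_mhz(cap, v):.0f} MHz sustainable")
```

Real silicon also leaks and throttles on temperature, so treat this as a sketch of the trend, not a predictor.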


----------



## sinnedone

sinnedone said:


> Any downsides to going reference frontier 16gb vs reference Vega 64 8gb? Will primarily be used for gaming and would be under water.😀


Anyone have any insight on this?

I know I read that there is no CrossFire support for the Frontier Edition, but that's the only downside I've seen. Performance and overclocking ability should be the same, right?


----------



## miklkit

Ne01 OnnA said:


> Grats Bratan' and welcome
> FreeSync?



I hope so. I have an off brand 27" 1440P 144hz Freesync monitor....on paper. In practice with a Fury it is a 1440P 60hz non Freesync monitor. Changing cables hasn't helped but maybe the Vega will. If not, then a monitor is the next new thing.


----------



## THUMPer1

Does the BIOS Switch on the LC Vega 64 do anything? I assume they are the same BIOS.


----------



## Chaoz

THUMPer1 said:


> Does the BIOS Switch on the LC Vega 64 do anything? I assume they are the same BIOS.


The second BIOS is a low power BIOS. The standard one is the one you can play with and flash; the second cannot be flashed.


----------



## THUMPer1

Chaoz said:


> Second BIOS is a low power BIOS. Standard one is the one you can play with and flash, second cannot.


Is the switch to the left (towards the backplate) the default? Or the switch to the right (towards the power cables)? They both seem the same to me on the LC


----------



## Ne01 OnnA

THUMPer1 said:


> Switch to left (back plate) is default? Or Switch to Right (power cables) is default? They both seem the same for me on the LC


Open the GPU-Z Advanced tab.
There you have it (this is mine):
==


----------



## Chaoz

THUMPer1 said:


> Switch to left (back plate) is default? Or Switch to Right (power cables) is default? They both seem the same for me on the LC


Left is the High Power, flashable BIOS, right is the Low Power BIOS.


----------



## JackCY

Left right, it's all relative to how you hold the card 

Use references such as the display outputs or the case mounting side, not left/right.


----------



## Chaoz

JackCY said:


> Left right, it's all relative to how you hold the card
> 
> Use references such display outputs = case mounting side not left/right.


When the card is slotted into your PCIe slot. 
Btw, the standard setting out of the box is the high power BIOS; the other setting is the low power BIOS. So if you haven't touched it, you'd know which side to flick it to.

Plus he did use references; that's why I said left, which is towards the I/O.


> Switch to left (back plate) is default? Or Switch to Right (power cables) is default? They both seem the same for me on the LC


Holy cow, all this for a stupid toggle switch, that nobody uses.


----------



## Delijohn

Soooo I guess i'm in the wrong position....


----------



## Chaoz

Delijohn said:


> Soooo I guess i'm in the wrong position....


Seems so, flick it the other way. Mine's still on the High Power BIOS (untouched) and these are the settings:


----------



## Delijohn

Chaoz said:


> Seems so, flick it the other way.


I'm stupid enough to have it enclosed in an Eiswolf GPX-Pro #facepalm. Since it's the second time I've replaced it, I'm too worn out and short on time to open it again for that.. fact is I'm still not fully stable or at good temperatures, so I'm checking what else I can do. I've never fully enjoyed this card and it's a pity...


----------



## Chaoz

Delijohn said:


> Chaoz said:
> 
> 
> 
> Seems so, flick it the other way.
> 
> 
> 
> I'm stupid enough to have it enclosed in an Eiswolf GPX-Pro #facepalm since it's the second time I replaced it, I'm so bored and out of time to open it again for that.. fact is I'm still not fully stable or with good temperatures, so i'm checking what else I can do. I've never fully enjoyed this card and it's a pitty...
Click to expand...

Unscrew the backplate a bit. Maybe you can reach it with a pin or needle or something similar, there must be a small space between the backplate and block.

I've been enjoying my Vega 64 LC the minute I waterblocked it and fiddled around with undervolting, it's been running stable at 1750MHz with 1v on core and 1100MHz with 950mV on HBM, temps never really go over 45°C on full load even with the LC BIOS flash. Bigass custom loop does help with temps, got a 360 and 480 for only a 5820K with VRM's, PLX and chipset cooling and Vega 64 LC.

Undervolting is the best option for a Vega, you get better temps and better performance.

Running solid on a 34" 1440p 75Hz FreeSync monitor now and I love it. Next upgrade is the Samsung VA 100Hz or LG IPS 120Hz FreeSync monitor.


----------



## Trender

Delijohn said:


> Soooo I guess i'm in the wrong position....


Don't think so? I'm also using the towards-the-I/O-shield switch position and it's just like yours
https://www.overclock.net/forum/attachment.php?attachmentid=217176&thumb=1
AFAIK they run the liquid BIOS, that's why they have 264W


----------



## Ne01 OnnA

Here POW for Vega 64 & 56

==


----------



## ITAngel

Can you void your warranty by flashing your FE Air to FE LC bios?


----------



## VicsPC

ITAngel said:


> Can you void your warranty by flashing your FE Air to FE LC bios?


I would assume so, yea. I think that would be easy enough for them to tell. Even overclocking technically voids your warranty, and so does removing the cooler with some manufacturers, so I would assume so for sure.


----------



## ITAngel

VicsPC said:


> I would assume so yea. I think that would be something easy enough for them to tell. Even overclocking technically voids your warranty and so does removing the cooler for some manufacturers so I would assume that yea for sure.


Cool good to know.


----------



## hyp36rmax

We finally have a waterblock for short PCB VEGA's!! 

https://www.alphacool.com/detail/index/sArticle/23655


----------



## ht_addict

Can the liquid version Bios be flashed onto the Asus Strix Vega 64 OC? I have them under water blocks.


----------



## Ne01 OnnA

Shadow of the Tomb Raider Test
Press Adrenalin 18.9.1 will be used in an article update later in the week 

https://www.computerbase.de/2018-09...-sottr-mit-directx-11-vs-directx-12-2560-1440

Side note:
On my XTX i will set Chill @ 64-70FPS

UPD.
https://www.amd.com/en/support/kb/release-notes/rn-rad-win-18-9-1


----------



## Chaoz

Trender said:


> Don't think so? I'm also using the towards-the-I/O-shield switch position and it's just like yours
> https://www.overclock.net/forum/attachment.php?attachmentid=217176&thumb=1
> AFAIK they run the liquid BIOS, that's why they have 264W


He flashed his 64 to the LC version, from what I can understand from the post above.





ITAngel said:


> Can you void your warranty by flashing your FE Air to FE LC bios?


Not really, cuz you can always flash it back; that's the best feature of having 2 BIOSes.
If one fails, just switch to the other and flash it back. No one would be any the wiser.



ht_addict said:


> Can the liquid version Bios be flashed onto the Asus Strix Vega 64 OC? I have them under water blocks.


Nope, only reference versions can be flashed with the LC BIOS. Unless Asus releases an LC version of the Strix, then yes; otherwise, no!


----------



## diggiddi

sinnedone said:


> Any downsides to going reference frontier 16gb vs reference Vega 64 8gb? Will primarily be used for gaming and would be under water.😀





sinnedone said:


> Anyone have any insight on this?
> 
> I know I read that there is no crossfire support for the frontier edition, but that's the only downside I've seen. Performance should be the same and overclock ability right?


I think the drivers are different (Radeon Pro driver) and the overclockability might be less than the consumer 8GB version's, but someone else can correct me on that


----------



## egrest

Hey guys, my PC black-screened and then restarted just now. Anyone know what might be causing this? I am on the 18.8.2 drivers. Should I roll back?


----------



## Fatrod

Question for anyone with an EK block on the LC card - how do you control the pump, and can you use the GPU's PWM header for the radiator fans?


----------



## Ne01 OnnA

egrest said:


> Hey guys. my PC black-screened and then restarted just now. Anyone knows what might be causing this? I am on the 18.8.2 drivers. Should I roll back?


Install, 18.9.1 Pre-WHQL


----------



## Ne01 OnnA

The FH4 demo is available @ the MS Store

==


----------



## miklkit

I installed my new Sapphire Nitro Vega 64 last night. The Fury ran best on the 18.6.1 drivers so I installed the Vega version of them. It black screened, there is no Radeon control panel, and there is some stuttering while gaming. So, what are the most reliable drivers for this Vega 64?


That said, Afterburner kept the fan profile from the Fury. So far temps are in the mid to low 50s and frame rates are a little better but nothing to brag about. I found the power switch and it was on the right (towards the front of the case) position. Tried that then flipped it to the left (rear) position. Temps and fps both went up very slightly. So far, so good!


----------



## Ipak

For me any driver newer than 18.5.1 results in some problems: high memory clocks at idle and weird screen flicker with FreeSync.


----------



## Chaoz

I'm still on 18.7.1, it works best for my undervolt and overclock. Had 18.8.1 and it crashed constantly, this was a while back, tho. Haven't tried it recently.




Fatrod said:


> Question for anyone with an EK block on the LC card - how do you control the pump, and can you use the GPU's PWM header for the radiator fans?


You could do that if you have an adapter to the 4-pin on the pcb. 
I just use a Corsair Commander Mini to control my 2 pumps and 7 fans. Works best and is easiest in my case. Or just use your mobo's AIO- or Pump-header and control it through the BIOS.


----------



## VicsPC

I'm on 18.8.1 with zero issues. I did have a memory management BSOD yesterday while playing The Crew 2, but it's not something I've had before, and I've logged quite a lot of time in that game. I also have a FreeSync display and haven't noticed any weird flickering or any driver crashes at all. Then again, I have not touched my card's factory settings, so that might be why. It's on a water loop, that's about it.


----------



## miklkit

After Macroshaft updated win10 to 1803 performance on the 18.4.1 drivers plummeted with the Fury. The 18.5.1 drivers kill all Unity based games, so that is out. The 6,7,and 8 series drivers caused increasing amounts of stutter and buzzing so I stayed with the least bad 18.6.1 drivers. 



Last night with the 18.6.1 drivers the Vega had some stuttering and buzzing along with all of its memory being used. Today with the 18.9.1 drivers there was some initial stuttering and buzzing but it went away and in well over an hour of gaming it never came back. Only 5 gb of video ram got used too.


The gpu hit 1567mhz, the gpu thermal diode hit 56C, the gpu HBM temp hit 59C, and the gpu hot spot hit 71C.


----------



## Ne01 OnnA

Forza Horizon 4 – Benchmark Guide

-> https://drive.google.com/file/d/1T1UOs06a4RNww7xmpHjM4s8H67bCd3ey/view

==

My setup in 2160p 1732/1180 +1% POW


----------



## Fatrod

Chaoz said:


> You could do that if you have an adapter to the 4-pin on the pcb.
> I just use a Corsair Commander Mini to control my 2 pumps and 7 fans. Works best and is easiest in my case. Or just use your mobo's AIO- or Pump-header and control it through the BIOS.


Thanks dude



WannaBeOCer said:


> The Vega card is power limited. It can only receive 375w and when you overvolt it, exceeds 375w and throttles down the core frequency. That's why when I undervolt my card it runs at 1750Mhz throughout the entire benchmark causing a higher score.


I've not had any success like this. If I want to hit 27k I have to increase everything, voltage and frequency. Mine also does not seem to throttle...if I up the frequency too much I just get a hard shut down.

Has anyone got some verified Firestrike results of 27-28k they can link? Keen to see the full setup.


----------



## 113802

Fatrod said:


> Thanks dude
> 
> 
> 
> I've not had any success like this. If I want to hit 27k I have to increase everything, voltage and frequency. Mine also does not seem to throttle...if I up the frequency too much I just get a hard shut down.
> 
> Has anyone got some verified Firestrike results of 27-28k they can link? Keen to see the full setup.


My result, I've had my RX Vega 64 LC since release date: https://www.3dmark.com/fs/16277993

To hit 27.2k myself, all I have to do is undervolt the core to 1190mV and set +50% power with 1105MHz HBM


----------



## majestynl

Ipak said:


> For me any newer driver then 18.5.1 is resulting in some problems, high memory clocks in idle and wierd screen flicker with freesync.


I believe the high memory clocks in idle is fixed with current driver version!!


----------



## VicsPC

majestynl said:


> I believe the high memory clocks in idle is fixed with current driver version!!


Nope it isn't, but I haven't had the issue with the past 6 drivers I've used, so idk how people are having it. 

"Known Issues
Some AMD Ryzen™ Desktop Processors with Radeon™ Vega Graphics system configurations may experience a black screen during installation downgrade to a previous Radeon Software version. A recommended workaround is to perform a clean install during Radeon Software installation.
Radeon RX Vega Series graphics products may experience elevated memory clocks during system idle.
System configurations with 16+ CPU cores may experience a random system reboot during installation when upgrading Radeon Software from a version older than RSAE 18.8.1. A clean installation is recommended when performing this Radeon Software upgrade."


----------



## Fatrod

WannaBeOCer said:


> My result, I've had my RX Vega 64 LC since release date: https://www.3dmark.com/fs/16277993
> 
> To hit 27.2k myself all I have to do is undervolt core to 1190Mv and +50% power with 1105Mhz HBM


Damn man, golden sample you've got there


----------



## Kyozon

WannaBeOCer said:


> The Vega card is power limited. It can only receive 375w and when you overvolt it, exceeds 375w and throttles down the core frequency. That's why when I undervolt my card it runs at 1750Mhz throughout the entire benchmark causing a higher score.




That is an awesome result you got there. For some reason, my FE LC never seems to have been capable of exceeding 1692MHz. Anything higher than that results in a crash no matter which voltages or power limits I run.


The 64 LC itself is really something different; I wonder if I can find a 64 LC locally. We shall see.


----------



## 113802

Kyozon said:


> That is an awesome result you got there. For some reason, my FE LC never seem to have been capable of exceeding 1692Mhz. Anything higher than that would result in a Crash no matter which Voltages i am running or Power Limits.
> 
> 
> The 64 LC itself it is really something different, i wonder if it can find a 64 LC locally, we should see.


If you have the proper cooling just flash the 64 LC UEFI.

Edit: Just noticed you said FE LC, that's fine though 1690Mhz with 16GB of HBM2 memory.


----------



## 0verpowered

Secondhand Vega prices are now coming down to earth. Picked up a 64 for a little bit more than a 56. Is the Vega 56 still expensive because of the lower power draw? Couldn't a 64 be undervolted to almost the same levels?


----------



## miklkit

The built-in HDMI audio on this Vega 64 is interfering with the Creative Sound Blaster Z and is causing stuttering, crackling, and a buzzing sound. I disable it daily in Device Manager but it keeps coming back. Is there some way to permanently disable it?


----------



## VicsPC

miklkit said:


> The built in high definition hdmi sound on this Vega64 is interfering with the Creative Soundblaster Z and is causing stuttering, crackling, and a buzzing sound. I disable it daily in device manager but it keeps coming back. Is there some way to physically disable it?


I think you may be able to uninstall the HDMI audio from the AMD uninstaller; not positive, but I know that when you install the drivers with a custom install you can deselect it. If you can't find a way to uninstall it, I would fully uninstall the drivers and then reinstall without HDMI audio. You may also be able to go to Apps and Features (right-click the Start button), uninstall the AMD software, and see if you can uninstall individual features.


----------



## ht_addict

Used the Softpowerplay table setting a few pages back. Think I achieved some nice results.


----------



## Kyozon

WannaBeOCer said:


> If you have the proper cooling just flash the 64 LC UEFI.
> 
> Edit: Just noticed you said FE LC, that's fine though 1690Mhz with 16GB of HBM2 memory.



I am currently looking at the V64 for a gaming-centric rig. I have noticed some V64s going for much cheaper than the only 64 LC model available locally. Unfortunately, those models are reference design only. For a little bit more, maybe $35-50, there are some Strix cards, and $100 above the Strix cards there is the only actual 64 Liquid available.


Would you recommend the Strix V64 over the reference? Do you believe that with proper cooling and the 64 LC BIOS it will achieve similar clock speeds at similar voltages? 


Thanks!


----------



## Kyozon

ht_addict said:


> Used the Softpowerplay table setting a few pages back. Think I achieved some nice results.


Hello, which Vega is that? 64 LC?


----------



## lexer

I just got a Vega 56 Strix OC for the price of a GTX 1060 in a promotion. I found a little problem with the card: the VRMs heat up a lot! Since it's under warranty I can't take the card apart, but there are 2 screws that add pressure between the VRMs, the thermal pad, and the heatsink. Those screws have a "neck" that limits the amount of pressure, so I replaced them with normal ones, and by adding more pressure I dropped 12-15°C, from 102-105°C down to 90°C; with undervolting, +50 TDP, and a custom fan curve, down to 74°C. 
I'm really happy with the card!


Sorry for my broken English


----------



## ht_addict

Kyozon said:


> ht_addict said:
> 
> 
> 
> Used the Softpowerplay table setting a few pages back. Think I achieved some nice results.
> 
> 
> 
> Hello, which Vega is that? 64 LC?
Click to expand...

Asus Strix Vega 64 OC Edition. Have them cooled by EKWB blocks.


----------



## miklkit

VicsPC said:


> I think you may be able to uninstall the HDMI audio from AMD uninstaller, not positive but i know that when you install the drivers if you do custom install you can unselect that. If you can't find a way to uninstall it i would fully uninstall the drivers and then reinstall without HDMI audio. You may be able to go under apps and features if you right click the start button then uninstall amd software and see if you can uninstall individual features.



All I can uninstall is the drivers, but because they are actually Microsoft drivers, Win10 immediately reinstalls them. All I can do is disable them in Device Manager. That, plus disabling the High Definition Audio controllers under System Devices, is everything I can do, but they always seem to re-enable themselves, which causes a conflict that, besides sounding ugly, just kills frame rates. 



I was hoping there was a way to physically disable the device.


----------



## diggiddi

Kyozon said:


> I am currently looking at the V64 for a Gaming Centric rig. I have noticed that there are some V64 going for much cheaper than the only available 64 LC model locally. Unfortunately enough, those models are Reference Design only. For a little bit more, maybe $35~50, there are some Strix Cards. From the Strix cards, $100 more and then there is the only actual 64 Liquid available.
> 
> 
> Would you recommend the STRIX V64 over the Reference? Do you believe with Proper Cooling and 64 LC it will be able to achieve similar Clock Speeds under Similar Voltages?
> 
> 
> Thanks!


I'd go reference before Strix any day. Of the 3 options you have, I'd say
LC > ref > Strix


----------



## VicsPC

miklkit said:


> All I can uninstall is the drivers but because they are actually microsoft drivers win10 immediately re-installs them. All I can do is disable them in device manager. That plus disabling the high definition device managers in system devices is all I can do, but they always seem to be re-enabling themselves which causes a conflict which besides sounding ugly just kills frame rates.
> 
> 
> 
> I was hoping there was a way to physically disable the device.


Under Control Panel open System, then Advanced System Settings, then the Hardware tab, and make sure Device Installation Settings is set to No. Then uninstall the AMD drivers with either DDU or the AMD uninstaller and reinstall them, but do a custom install so you can skip the HDMI audio drivers. Not sure why Windows keeps installing them, but if that doesn't work, maybe a registry tweak.
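For the "registry tweak" route, one option is the device-installation restriction policy, which tells Windows not to (re)install a device by hardware ID. This is a sketch only: the hardware ID below is a placeholder, so copy the real one from Device Manager (the HDMI audio device > Details > Hardware Ids) before trying anything like it, and export a registry backup first.

```reg
Windows Registry Editor Version 5.00

; Sketch only: the DeviceInstall restriction policy denies installation of
; devices whose hardware ID matches an entry below. The ID here is a
; PLACEHOLDER -- replace it with the real one from Device Manager.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DeviceInstall\Restrictions]
"DenyDeviceIDs"=dword:00000001
"DenyDeviceIDsRetroactive"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DeviceInstall\Restrictions\DenyDeviceIDs]
; placeholder hardware ID -- substitute your card's actual HDMI audio ID
"1"="HDAUDIO\\FUNC_01&VEN_XXXX"
```

Deleting the same keys undoes the block.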


----------



## 113802

ht_addict said:


> Used the Softpowerplay table setting a few pages back. Think I achieved some nice results.


Nice! I am curious what you are using for the rads? Those temps seem very high.


----------



## ht_addict

Running an EKWB 360 and 140. Run the Fire Strike Ultra stress test and post your temps. I just ran the Far Cry 5 benchmark at 4K Ultra and I get the temps you posted with your benchmark.


----------



## 113802

ht_addict said:


> Running EKWB 360 and 140. Run the Firestrike Ultra stress test and post your temps. I just ran Far Cry 5 Benchmark at 4K Ultra resolution and I get temps that you post with your benchmark.


Shadow of the Tomb Raider is more stressful than the FireStrike Ultra stress test. Are you overvolting or undervolting your cards? Mines at 1175mv on the core and uses between 270-320w depending on the game.


----------



## Conenubi701

ht_addict said:


> Used the Softpowerplay table setting a few pages back. Think I achieved some nice results.


Do you mind linking the tables?


----------



## VicsPC

WannaBeOCer said:


> Shadow of the Tomb Raider is more stressful than the FireStrike Ultra stress test. Are you overvolting or undervolting your cards? Mines at 1175mv on the core and uses between 270-320w depending on the game.


Most games are more stressful than Fire Strike Ultra lol. I stopped using Fire Strike to test stability, it's just pointless; I mean hell, Euro Truck Simulator 2 pushes my core clock higher than Rainbow Six Siege does. My stock Vega hits 1636MHz without an issue when I play ETS2, but only about 1540 playing Siege.


----------



## ht_addict

WannaBeOCer said:


> Shadow of the Tomb Raider is more stressful than the FireStrike Ultra stress test. Are you overvolting or undervolting your cards? Mines at 1175mv on the core and uses between 270-320w depending on the game.


I'm undervolting to 1150mV on the core for both cards. Haven't really sat down and played with the P-states, just slapped in some numbers I found on the net and tested. If it works, great.


----------



## miklkit

VicsPC said:


> Under control panel open up system then advanced system settings, then hardware tab and make sure device installation settings is set to no. Then uninstall amd drivers with either ddu or amd uninstaller then reinstall amd drivers but do custom and you should be able to not install hdmi audio drivers. Not sure why windows keeps installing em but if it doesnt work maybe a registry tweak.



Been there, done that, but I checked anyway. It is still set to No. But I did find that the HD audio controller was enabled again just from rebooting, which means I have to disable it every morning. Even though it says the HD audio drivers are disabled, they are not, and they interfere with system performance massively.


All I have ever used is Afterburner and it doesn't seem to want to let me play with voltages, so all I have done so far is bump the power limit to +50%. This has resulted in modestly better fps and a 20°C jump in temps. The hot spot hit 91°C last night! There has got to be a better way.


----------



## VicsPC

miklkit said:


> Been there done that, but checked anyway. It is still set to no. But I did find that the Hi def audio controller was enabled again just from rebooting. This means I have to disable it every morning. Even though it says the hi def audio drivers are disabled, they are not and they interfere with system performance massively.
> 
> 
> All I have ever used is Afterburner and it doesn't seem to want to let me play with voltages, so all I have done so far is bump the power limit to +50. This has resulted in modestly better fps and a 20C jump in temps. The hot spot hit 91C last night! There has got to be a better way.


Ah yea, that's a shame then; that was the only thing I could think of. The only other option would maybe be a registry tweak, but I'd have no idea where to start with that. Good luck though, let us know what happens.


----------



## Conenubi701

Anyone have a link to the softpowerplay editor? or a tutorial? I'm used to Fury X BIOS flashing/modding but can't find a way to do it for my water cooled Strix 64


----------



## ht_addict

lexer said:


> I just got a VEGA 56 STRIX OC for the price of a GTX 1060 in a promotion. I found a little problem with the card the VRM heat a lot !, since it has warranty i can't take apart the card. But there is 2 screws that add pressure between the vrm, the thermal pad and heatsink. those screw have a "neck" that limit the amount of pressure, so i replaced those screws with normal one and by adding more pressure i lower 15-12ºC from 105-102ºc to 90ºc and with undervolting + 50 TDP + custom fan curve to 74ºc.
> I'm really happy with the card !
> 
> 
> Sorry for my broken English


Surely those screws are there to prevent excess pressure that could damage your VRMs. You took a risk, and luckily you didn't damage anything.


----------



## majestynl

VicsPC said:


> Nope it isn't but i haven't had the issue with the past 6 drivers I've had so idk how people are having it.
> 
> "Known Issues
> Some AMD Ryzen™ Desktop Processors with Radeon™ Vega Graphics system configurations may experience a black screen during installation downgrade to a previous Radeon Software version. A recommended workaround is to perform a clean install during Radeon Software installation.
> Radeon RX Vega Series graphics products may experience elevated memory clocks during system idle.
> System configurations with 16+ CPU cores may experience a random system reboot during installation when upgrading Radeon Software from a version older than RSAE 18.8.1. A clean installation is recommended when performing this Radeon Software upgrade."


Aah, mixed up the fixed vs known issues. LOL... my bad!!




Conenubi701 said:


> Anyone have a link to the softpowerplay editor? or a tutorial? I'm used to Fury X BIOS flashing/modding but can't find a way to do it for my water cooled Strix 64


See OP bottom:

https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html


----------



## 113802

Here's some gameplay of Shadow of the Tomb Raider with my RX Vega 64 sustaining 1750Mhz+


----------



## mtrai

Kyozon said:


> Hello, which Vega is that? 64 LC?


Hey hey... as we discussed elsewhere, you will be happy with a Vega 64 for what you want to use it for. Keep in mind each soft PowerPlay table is actually unique to each card; while you can use someone else's as a template, you are still gonna have to tune it to your own GPU, so it might or might not save you time. You're aware of my issue of my GPU actually boosting higher than what I set the core to.

I will help you with tuning your PowerPlay table if you would like.

And yes, I would be willing to swap all my GPUs for your 2 FEs lol.


----------



## Jumbik

I have a question for the owners of the Vega 64. I have a Sapphire Nitro+ edition and I have undervolted it and added +50% on the power limit. Yesterday I had a hard lock during Witcher 3 gameplay: the screen froze with the last game image on it, the music kept going, started to buzz for a moment, then went on normally again; the screen eventually turned black and the monitor showed a no-signal error. I noticed that the whole system locked up, as for example my G15 keyboard's screen froze too, so it wasn't only the GFX that went down.

My question is: how do you recognise whether a crash/lock/BSOD etc. is related to the undervolting or to something else? I would like to know if the problem happened because of my tampering or if it was, for example, a driver issue.

When the computer booted up again, the profile I was using was not loaded and I had to apply it again. I have raised the voltage a bit and played the game for an hour again without any problem.

I've had this problem once in the past two weeks, but it's also the first time I've played Witcher 3 for an extended period. Before that I was playing Doom for a few days without any issues on the undervolting profile.

I have GPU-Z running in the background and I have noticed that occasionally the power draw sensor spikes to "unreal" numbers; for example it fluctuates between 200-240W and then jumps to something like 1500W for a second. I do not know if this is a read error or if the card really is trying to fry itself.

I've read that people have similar problems due to a bad PSU. I have an 850W Seasonic unit with a Titanium rating, so I suppose I should be safe in this regard. The card is connected with two separate cables.


----------



## Worldwin

Jumbik said:


> I have a question to the owners of Vega 64. I have a Sapphire Nitro+ edition and I have under volted it and added +50% on powerlimit. Yesterday I had a hard lock during a Witcher 3 gameplay, the screen froze with the last game image on, music was still going on then started to buzz for a moment but then went on normally again, the screen eventually turned black and monitor showed an error about no signal. I noticed that the whole system locked up as for example my G15 keyboard screen froze too, so not only GFX went down.
> 
> My question is, how do you recognise that the crash/lock/bsod etc. is related to the under volting or something else? I would like to know if the problem happened because of my tampering or if it was for example a driver issue.
> 
> When the computer booted up again the profile I was using was not loaded and I had to apply it again. I have raised the voltage a bit and played the game for an hour again without any problem.
> 
> I've had this problem for the first time in past two weeks, but it's the first time I was playing Witcher 3 for extended period of time. Before I was playing Doom for few days without any issues with the under volting profile on.
> 
> I have GPUZ running in the background and I have noticed that occasionally the Power draw sensor spikes a lot to "unreal" numbers, for example it fluctuates between 200-240W and then jumps to something like 1500W for a second. I do not know if this is an error in readings, or if the card really is trying to fry itself.
> 
> I've read that people have similar problems due to bad PSU, I have a 850W Seasonic unit with Titanium rating so I suppose I should be safe in this regard. The card is connected with two separate cables.


When it hard-locks like that, I believe it to be a problem with the GPU core: either the frequency is too high or the voltage is too low. There are outlier cases (for me it's FF15), but changing the voltage should fix it. Readings of 1500W are most likely sensor errors. I mean, I've had a sensor tell me the die temperature exceeded the surface of the sun. Can I reasonably believe that?
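For what it's worth, those 1500W readings are usually single-sample telemetry glitches rather than real draw. If you log sensors to CSV (GPU-Z can do this), a quick sanity check is to flag samples that jump far above the rolling median of their neighbours. A minimal sketch in Python; the column name "power" is a placeholder, match it to whatever your log actually uses:

```python
from statistics import median

def find_spikes(rows, key, window=9, factor=3.0):
    """Return (index, value) pairs where a reading exceeds `factor`
    times the median of the samples surrounding it."""
    values = [float(r[key]) for r in rows]
    spikes = []
    for i, v in enumerate(values):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        local_median = median(values[lo:hi])
        if local_median > 0 and v > factor * local_median:
            spikes.append((i, v))
    return spikes

# Synthetic example: steady ~220 W with one bogus 1500 W sample.
log = [{"power": str(w)} for w in [218, 224, 221, 1500, 219, 223, 220]]
print(find_spikes(log, "power"))  # only the 1500 W sample is flagged: [(3, 1500.0)]
```

If elevated readings persist across many consecutive samples that points at the card or PSU; isolated one-sample jumps are almost certainly sensor noise.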


----------



## 113802

Shadow of the Tomb Raider @ 1750/1105MHz sustained at 1440p max with SMAA4x.

https://youtu.be/7I8SxpajvLo


----------



## Jumbik

Worldwin said:


> When it hardlocks like that I believe it to be a problem with the GPU core. Either frequency too high or voltage too low. There are outlier cases (for me its FF15), but changing the voltage should fix it. Readings of 1500w are errors with the sensor most likely. I mean i have had sensor tell me the temperature of the die exceed the surface of the sun. Can I reasonably believe that?


Thank you very much. I will test the stability with a bit higher volts then.


----------



## Kyozon

Hello friends!


I recently came across this crazy waterblock from Aqua Computer: the Vega Kryographics. I have been contemplating moving to a full loop for quite a while, and I believe I have now finally decided.

Do we have any users here who have purchased this block? I see many users currently rocking the EKWB ones, but not this one.

I wonder if there is any major cooling disparity between the two; if not, I should definitely go with the Kryographics.


https://www.techpowerup.com/236606/aqua-computer-intros-kryographics-vega-water-blocks


Thanks.


----------



## Chaoz

Kyozon said:


> Hello friends!
> 
> 
> I have recently came across this crazy Waterblock from AquaComputer. Vega Kryographics. I have been contemplating moving to a Full Loop for quite a while, i believe that now i have finally decided.
> 
> Do we have any uses here of which has purchased this Block? I see many users of which are currently rocking the EKWB Ones, but not this one.
> 
> I wonder if there is any major cooling disparity between the both of them, and if not, i should definitely go with the Kryographics.
> 
> 
> https://www.techpowerup.com/236606/aqua-computer-intros-kryographics-vega-water-blocks
> 
> 
> Thanks.



I find the EK blocks the best looking ones, imho. I've had my Nickel Acetal block for a good year now and I love it.
I went all black because I don't like the plexi look, plus plexi blocks can have overflow in the middle since there is no O-ring there, which I dislike.
Plus they come with a single-slot bracket, and you can also do an LED mod on the RADEON logo.

The AC Kryographics block looks unfinished; you can see the LED cable instead of it being covered up.


----------



## Kyozon

Chaoz said:


> I find the EK blocks the best looking ones, Imho. I've had my Nickel Acetal block for a good year now and I love it.
> Went all black, cuz don't like the plexi looks, plus it can have overflow in the middle, cuz there is no O-ring there. Which I dislike.
> Plus they come with a single slot bracket and you can also do a LEDmod to the RADEON logo.
> 
> The AC Kryographics block looks unfinished, you can see the led cable. Instead of covering it up.



Thank you. Yes, I kind of noticed the LED cable right there; not a huge fan of that.


I am also sort of going for a more "industrial" looking system, which I believe the Acetal would suit perfectly. Planning to use EK ZMT soft tubing.


Are there any major differences between the block that you have chosen and the plexi one? What pump and rad combo are you using at the moment?


Thanks.


----------



## THUMPer1

What thickness for thermal pads? 1mm or 1.5mm?


----------



## Chaoz

Kyozon said:


> Thank you. Yes i kind of noticed the LED Cable right there, not a huge fan of that.
> 
> 
> I am also sort of trying to do a more "industrial" looking System, which i believe the Acetal would look perfect due to that. Planning to use EK ZMT Soft Tubing.
> 
> 
> Are there any major differences between the block that you have choose and the Plexi one? What Pump and Rad Combo are you using at the moment?
> 
> 
> Thanks.


Yeah, that naked cable bothered me as well when I saw it in the pics.

Acetal Nickel and Plexi Nickel are very similar; one is just see-through. I don't really like the plexi look, it didn't fit my theme. I went with a more industrial look as well, as you can see in the pics in my sig.

Plus the Acetal Nickel blocks are very high quality. I had an Acetal Nickel block from EKWB on my 1070 SC as well, which is why I stuck with them. EK was the first to release a waterblock for the Vegas, and seeing as I was one of the first to own a Vega, I pre-ordered the block.

I'm running dual EKWB D5 pumps in a serial setup, with an EKWB SE360 rad in the top and an EKWB PE480 rad in the bottom.


----------



## AmcieK

Hello. For a few days I've had a Vega 56 Nitro+ and have started undervolting and a little OC.
950 on the HBM2. Any chance to go higher, or is my Hynix memory just bad?
https://imgur.com/hca1BSA

Out of the box >
https://imgur.com/eB3zNOc
changed powerplay table >
https://imgur.com/C2z9y1r
stable settings for now >
https://imgur.com/FIJkLTy

Is this OK or not, and what should I change?
And a last question: after a reboot, does Wattman always go back to stock settings?


----------



## tsamolotoff

Is it possible to change TDC through PT, or is it locked like HBM voltage?


AmcieK said:


> Out of the box >
> https://imgur.com/eB3zNOc
> change powerplay table >


1. Undervolting is pointless on the Nitro; you have enough thermal headroom to run at proper voltages.
2. Using a PPTable always got me a worse graphics score in Fire Strike compared to a 'manual' OC with the stock PL through the Overdrive tool.


----------



## VicsPC

Chaoz said:


> Yeah, that naked cable bothered me aswell when I saw it in the pics.
> 
> Acetal Nickel and Plexi Nickel are very similar, one is just see-through. I don't really like the plexi look, didn't fit into my theme. I went with a more Industrial look aswell, as you can see in the pics in my sig.
> 
> Plus the Acetal Nickel blocks are very high quality. Had a Acetal Nickel block aswell from EKWB when I had my 1070 SC, that's why I stuck with them. As EK was the first to release a waterblock for the Vega's, seeing as I was one of the first to own a Vega, so pre-ordered the block.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm running dual EKWB D5 pumps in serial setup with an EKWB SE360 rad in top and a EKWB PE480 rad in the bottom.


Yea, I still think this one is much sexier, sorry lol. It sits vertically in my case with the plexi right in front of it.


----------



## 113802

AmcieK said:


> Hello . For few days i have Vega 56 nitro + starting UV and little oc but .
> 950 hmb2 . any chance to go higher or my hynix stuff is shi..t:E
> https://imgur.com/hca1BSA
> 
> Out of the box >
> https://imgur.com/eB3zNOc
> change powerplay table >
> https://imgur.com/C2z9y1r
> for now stable setings>
> https://imgur.com/FIJkLTy
> 
> Its ok or it's **** , and what change .
> And last question after rebot always watman back to stock settings ?


Undervolt to save a ton of power and reduce heat. If you just bought it recently and can't overclock the HBM2 above 950, exchange it!


----------



## diggiddi

Yeah, I thought the undervolting was to prevent you hitting the card's power limit more so than the thermal limit, since Vega is power limited more than it is thermally.
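That point can be put in rough numbers. To first order, dynamic power scales with frequency times voltage squared, so an undervolt at the same clock buys a disproportionate power reduction, which is exactly why it keeps you off the power limit. A back-of-the-envelope sketch (the 295W / 1.200V figures below are illustrative, not measured):

```python
def scaled_power(p_old, f_old, f_new, v_old, v_new):
    """First-order dynamic power estimate: P is proportional to f * V^2.
    Ignores static leakage and power management, so treat it as a rough guide."""
    return p_old * (f_new / f_old) * (v_new / v_old) ** 2

# Illustrative: a card drawing 295 W at 1632 MHz / 1.200 V,
# undervolted to 1.075 V at the same clock.
print(round(scaled_power(295, 1632, 1632, 1.200, 1.075)))  # ~237 W
```

So a ~10% undervolt alone recovers roughly 20% of the power budget, headroom the card can then spend holding its boost clocks instead of throttling.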


----------



## cg4200

WannaBeOCer said:


> Undervolt to save a ton of power and reduce heat. If you just bought it recently and can't overclock the HBM2 above 950 exchange it!


Yeah, I would take that advice... if you can exchange it. Lame HBM OC; all my reference Vegas do at least 1050 stable.

I undervolt mine as well; they're not all the same. 1175mV, 1750 core, 1200 HBM. Some can undervolt more, but mine is stable as a rock.


----------



## Ne01 OnnA

AmcieK said:


> Hello . For few days i have Vega 56 nitro +
> for now stable setings>
> https://imgur.com/FIJkLTy
> 
> Its ok or it's **** , and what change .
> And last question after rebot always watman back to stock settings ?


IMO it's OK as it is now; the UV is working great, and 1028mV is a good stable bet.
You need to do some testing and also make some other profiles.

For gaming I have more than 4 profiles:
1732MHz for AC:O
1700MHz for The Division DX12
952MHz (yup, Legend of Grimrock)
1640MHz for HoMM VII etc.

Do some testing in different games, and save your .ini after editing.

So far your GPU looks OK to me.


----------



## Exposal

So my Vega 56 flashed to 64, under water, is an odd one. The HBM runs 1180MHz at 925mV, but it requires a ton of volts to get the core clock up.


----------



## JackCY

Probably because a part of what you tried to unlock was disabled in software for a reason, rather than being cut off in hardware.


----------



## miklkit

Has anyone else tried the 18.9.2 drivers yet? With a little tinkering I'm seeing low temps and high frame rates. I'm really high on this and it's even legal!


----------



## majestynl

miklkit said:


> Has anyone else tried the 18.9.2 drivers yet? With a little tinkering I'm seeing low temps and high frame rates. I'm really high on this and it's even legal!


Temps are equal over here, but I have it under water. Agreed though, I saw some better FPS in a game I tested.

They managed to increase performance in a few games, so those improvements are probably having a good effect on other titles too...


----------



## Ne01 OnnA

So everyone has better FPS (not only me )
Good


----------



## miklkit

Yes, no, maybe. I'm trying to get wattman working and it keeps resetting itself to defaults. This means performance ranges from slightly better than the Fury to WOW! Does anyone know how to make the settings stick or am I going to have to load a saved profile every day?


----------



## ZealotKi11er

miklkit said:


> Yes, no, maybe. I'm trying to get wattman working and it keeps resetting itself to defaults. This means performance ranges from slightly better than the Fury to WOW! Does anyone know how to make the settings stick or am I going to have to load a saved profile every day?


For me OverdriveNTool seems to keep the settings. I think you can make a startup file to load the settings, in case you have problems and want to be 100% sure the OC is set.

Also, for people with an FE card: how can I install the gaming drivers that are available for Vega 56/64? In the driver switch tab, the drivers there are 2-3 weeks older.
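On the startup-file idea: OverdriveNTool can apply a saved profile from the command line, so a small script registered as a logon task in Task Scheduler will re-apply the OC after every reboot or driver reset. A sketch only; the install path, the profile name "MyVegaUV", and the exact `-p` switch syntax are placeholders, so check them against the readme of your OverdriveNTool version:

```python
# Sketch: re-apply a saved OverdriveNTool profile from a script that Windows
# Task Scheduler runs at logon. The install path, profile name, and the
# "-p<gpu><profile>" switch format are placeholders; check your version's readme.
import subprocess

ODNT = r"C:\Tools\OverdriveNTool.exe"  # placeholder install path

def build_command(gpu_index: int, profile: str) -> list:
    """Build the OverdriveNTool command line for applying `profile`."""
    return [ODNT, f"-p{gpu_index}{profile}"]

def apply_profile(gpu_index: int, profile: str) -> None:
    """Run the tool; raises CalledProcessError if it exits non-zero."""
    subprocess.run(build_command(gpu_index, profile), check=True)

# At logon you'd call, e.g.: apply_profile(0, "MyVegaUV")
```

Registering it is a one-time `schtasks /Create /TN "VegaOC" /TR "..." /SC ONLOGON /RL HIGHEST` from an admin prompt, with the task pointed at this script.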


----------



## Fediuld

WannaBeOCer said:


> Shadow of the Tomb Raider @ 1750/1105Mhz sustained at 1440p max with SMAA4x.
> 
> https://youtu.be/7I8SxpajvLo



It would be helpful for the community if you posted your Wattman settings, please.


----------



## VicsPC

miklkit said:


> Yes, no, maybe. I'm trying to get wattman working and it keeps resetting itself to defaults. This means performance ranges from slightly better than the Fury to WOW! Does anyone know how to make the settings stick or am I going to have to load a saved profile every day?


I think I'm having an issue with FRTC sticking too. I read online somewhere that disabling fast startup may solve it; you might try that yourself, it could be the cause.


----------



## ZealotKi11er

Fediuld said:


> It will be helpful for the community to post your Wattman settings please


For me it does not. My Vega 64 in this game with the new driver does not go over 1680MHz, even with voltage, power, and temp headroom.


----------



## 113802

ZealotKi11er said:


> Fediuld said:
> 
> 
> 
> It will be helpful for the community to post your Wattman settings please /forum/images/smilies/smile.gif
> 
> 
> 
> For me it does not. My Vega 64 in this game with new driver does not go over 1680MHz even with voltage and power and temp head room.

1% on the core, 50% power, 1200mV, and 1105MHz HBM2, using the 264W UEFI on my LC.

Are you using the LC 264W UEFI or the 220W UEFI?


----------



## miklkit

Well I gave up on wattman and went back to Afterburner because it just plain works. Clocks aren't very high, under 1600, but temps are well down, plus it saves the settings.


----------



## ProZack39

Powercolor Vega 56 Red Devil: I have 3 cards, same distributor, all Samsung memory. Oddly enough, 2 have the same BIOS and 1 is different. HERE IS THE PROBLEM!! Every time I apply the soft powerplay tables to a card, I get an error 42 in Windows.
Can someone tell me what's causing the error? Is there a soft powerplay table that will work? My goal is to lower the power so they use around 120-150W.


----------



## Chaoz

VicsPC said:


> Chaoz said:
> 
> 
> 
> Yeah, that naked cable bothered me aswell when I saw it in the pics.
> 
> Acetal Nickel and Plexi Nickel are very similar, one is just see-through. I don't really like the plexi look, didn't fit into my theme. I went with a more Industrial look aswell, as you can see in the pics in my sig.
> 
> Plus the Acetal Nickel blocks are very high quality. Had a Acetal Nickel block aswell from EKWB when I had my 1070 SC, that's why I stuck with them. As EK was the first to release a waterblock for the Vega's, seeing as I was one of the first to own a Vega, so pre-ordered the block.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm running dual EKWB D5 pumps in serial setup with an EKWB SE360 rad in top and a EKWB PE480 rad in the bottom.
> 
> 
> 
> Yea i still think this is much sexier sorry lol. It sits vertically in my case with the plexi right in front of it.

To each their own. I can't mount my GPU vertically, and I'm not gonna destroy my $600 case to show off a see-through block with overflow in the middle.


----------



## VicsPC

Chaoz said:


> To each their own. Can't mount my GPU vertically. Not gonna destroy my $600 case to show a see-through block flow and overflow in the middle.


Yea, I don't have any overflow, and my case came that way. Unfortunately my Klipsch speaker sits next to my PC so the card's not visible, but I don't have any overflow issues in the middle.


----------



## Chaoz

VicsPC said:


> Chaoz said:
> 
> 
> 
> To each their own. Can't mount my GPU vertically. Not gonna destroy my $600 case to show a see-through block flow and overflow in the middle.
> 
> 
> 
> Yea i don't have any overflow and my case came that way. Unfortunately my Klipsch speaker sits next to my pc so its not visible but I don't have any overflow issues in the middle.

Guess you're lucky that you don't have overflow in the middle. It happens frequently; that's the main reason I don't get plexi blocks.

Yeah, my Hexgear R80 didn't come with a vertical bracket, and seeing as everything is OOS on their site, I can't order a spare back PCIe bracket in case I screw up or want to re-sell the case.


----------



## VicsPC

Chaoz said:


> Guess you're lucky that you don't have overflow in the middle. Happens frequently. That's the main reason I don't get plexi blocks.
> 
> Yeah, my Hexgear R80 didn't come with a vertical bracket and seeing as everything is OOS on their site, I can't re-order a new back PCIe bracket, in case I screw up or want to re-sell the case.


Yea, I also only use distilled water with Mayhems copper additive. Last time I looked at the card was probably 2 weeks ago while moving it around, and I didn't see any overflow in the middle, so maybe it's been fixed since people had the issue. I don't remember seeing it recently, but I do know a few people were having that problem 2 years or so ago.


----------



## Chaoz

VicsPC said:


> Chaoz said:
> 
> 
> 
> Guess you're lucky that you don't have overflow in the middle. Happens frequently. That's the main reason I don't get plexi blocks.
> 
> Yeah, my Hexgear R80 didn't come with a vertical bracket and seeing as everything is OOS on their site, I can't re-order a new back PCIe bracket, in case I screw up or want to re-sell the case.
> 
> 
> 
> Yea i also only use distilled water with mayhems copper additive. Last time i looked at the card was probably 2 weeks ago moving it around and didn't see any overflow in the middle so maybe it's been fixed since people have had the issue. I don't remember seeing it recently but i do know 2 years ago or so a few people we've having that problem.

Distilled isn't that visible, so maybe you do have overflow but just can't see it?

It can't be fixed unless EKWB adds an O-ring in the middle. I've seen it happen on my mate's Strix 1080 plexi block, and that PC isn't even a year old.

He used Navy Blue Cryofuel from EK.


----------



## ITAngel

That is true I totally forgot I have dual bios on my FE. lol Thanks Chaoz!


----------



## THUMPer1

For anyone wondering, 1mm thermal pads are what you should use on the Vega 64 LC. Probably the same on the reference cooler. Ask me how I know....


----------



## sinnedone

Overflow, as in coolant/liquid in-between the acrylic and metal block outside of the machined passages?


----------



## Chaoz

ITAngel said:


> That is true I totally forgot I have dual bios on my FE. lol Thanks Chaoz!


Lol, no worries, I guess 😉!




sinnedone said:


> Overflow, as in coolant/liquid in-between the acrylic and metal block outside of the machined passages?


Yes; instead of following the path, it overflows in the middle just before it goes over the jetplate.


----------



## sinnedone

Yeah, all GPU blocks do that. The only thing is you can see it with acrylic and pastel-type coolants. It doesn't make a performance difference; it's just aesthetics.


----------



## miklkit

How does one get a Vega 64 to run at its rated 1630MHz? Every time I close a game because it's only getting 20fps, Afterburner shows it was not running at 1630MHz but at only 600-700MHz! Cranking up the power limit greatly increases heat with only a very small increase in frame rates. Something is throttling it down, but I can't find what it is.

Wattman is useless, as it does not save anything, and the voltage control in Afterburner is locked.


----------



## VicsPC

miklkit said:


> How does one get a Vega64 to run at its rated 1630mhz? Every time I close a game because its only getting 20fps Afterburner shows it was not running at 1630mhz but instead at only 600-700mhz! Cranking up the power limit greatly increases heat with only a very small increase in frame rates. Something is throttling it down but I can't find what it is.
> 
> 
> 
> Wattman is useless as it does not save anything and the voltage control in Afterburner is locked.


Mine runs at 1630MHz just fine. HOWEVER, I have found that Afterburner seems to be glitching lately. I can open up AB, run Siege, and then all stats except fps and frametime will not work. If I alt-tab and roll my mouse over the AB icon in the bottom right, AB will just close, but RivaTuner keeps running. I tried the newest beta version and it made no difference.

I don't think many games I own actually run at 1630MHz, capped or uncapped. I think Siege hits 1560MHz or so uncapped; ETS2 in some areas, even capped at 73fps (FreeSync), hits 1636MHz or so after the new update. You shouldn't be getting 600-700MHz though, so something is not right there.


----------



## miklkit

After that rant I tried other games, and it runs OK, although it never comes close to 1630MHz in any of them. It generally runs around 1400MHz in most games and sees 100% use along with good frame rates. It's just the one game I'm obsessing over that is running poorly.

I normally just use AB for monitoring, the OSD, and frame rate limits. It works perfectly for that, but I tried to get away from it with Wattman anyway, and that failed.


----------



## majestynl

miklkit said:


> How does one get a Vega64 to run at its rated 1630mhz? Every time I close a game because its only getting 20fps Afterburner shows it was not running at 1630mhz but instead at only 600-700mhz! Cranking up the power limit greatly increases heat with only a very small increase in frame rates. Something is throttling it down but I can't find what it is.
> 
> Wattman is useless as it does not save anything and the voltage control in Afterburner is locked.


Something is wrong there. Can you show us an HWiNFO screenshot (showing the GPU tab) after you've run a bench or game while HWiNFO was monitoring the sensors?



THUMPer1 said:


> For anyone wondering 1mm thermal pads are what you should use on the Vega 64 LC. Probably the same on the reference cooler. Ask me how I know....


I'm using 1mm and 0.5mm pads on my block, as stated in the manual from EK. I replaced the stock EK pads with Thermal Grizzly pads in combination with Kryonaut TIM. Very good results: -10C on thermals.


----------



## miklkit

Here are 2, one dated 9-16-2018 and the other dated 9-28-2018. The game is Subnautica.


----------



## Fediuld

miklkit said:


> Here are 2, one dated 9-16-2018 and the other dated 9-28-2018. The game is Subnautica.


Uninstall MSI AB; it is not needed with AMD cards.

Use the following Wattman settings.

https://i.imgur.com/vOEf8DR.png

Let me know how it goes.


----------



## miklkit

So I tried those settings. Got slightly lower temps and slightly higher fps.

Rebooted, and Wattman saved the GPU clocks and fan settings but lost the memory and power limit. So I reset them, and we shall see if they hold. The OSD does not work at all, so I used AB for that and the graphs, with everything else turned off.


----------



## bloot

Fediuld said:


> Uninstall MSI AB is not needed with AMD cards.
> 
> Use the following Wattman Settings.
> 
> https://i.imgur.com/vOEf8DR.png
> 
> Let me know how it goes.


I'd like to use Wattman, but there's one thing keeping me away: when I set my OC, the voltage at idle is 0.762V, but after restarting the system it idles at 0.9V.

Am I the only one experiencing this? That's why I keep using Afterburner; if it weren't for this bug I would use Wattman.


----------



## miklkit

It does that to me too, but temps are OK so I ignore it.


It is getting better with every reboot. Now only the OSD and fan curve don't work, but temps are OK anyway. I got diagonal black lines on the screen, so I backed the RAM off to 1070, and all is well so far. I also saw 17 fps..............


----------



## bloot

My card is 2-3 degrees higher and also uses a bit more wattage. I don't know if AMD is aware of this so they could try to fix it.


----------



## miklkit

I found the problem with the low fps in that game. It's sound related. I had to disable the macroshaft sound drivers in Device Manager and then in windoze to stop them from interfering with, well... everything.


This morning Wattman has saved... the power limit. Everything else has gone back to default.


----------



## Ne01 OnnA

miklkit said:


> I found the problem with low fps in that game. It's sound related. I had to disable the macroshaft sound drivers in device manager and then in windoze to get them from interfering with well...everything.
> 
> 
> 
> This morning Wattman has saved....the power limit. Everything else has gone back to default.


For fast usage, you can save the WattMan cfg file and then edit it in Notepad++.
I'm using only WattMan + OverdriveNTool 2.7 beta4 with the RX_VEGA_64_AIO soft PP table.

I'm also using Auto fan in WattMan (OK) and Manual (also OK), depending on the game.
Some games 'like' Auto more than Manual (The Division, I'm smiling at ya).
==
Edited like this:
4 main profiles for different gaming scenarios.
I also have additional profiles for Forza.


----------



## ManofGod1000

miklkit said:


> I found the problem with low fps in that game. It's sound related. I had to disable the macroshaft sound drivers in device manager and then in windoze to get them from interfering with well...everything.
> 
> 
> 
> This morning Wattman has saved....the power limit. Everything else has gone back to default.


Sorry, but what you said makes absolutely no sense. You had to disable your sound drivers completely or the game would not play correctly? Except that now you have no sound, so the game is not playing correctly anyway? Microsoft does not create sound drivers, so that makes no sense.


----------



## TrixX

FYI I use OverDriveNTool and never have these issues with Wattman...


----------



## miklkit

ManofGod1000 said:


> Sorry, but what you said makes absolutely no sense. I had to disable my sound drivers completely or the game would not play correctly? Except that now I have no sound so the game is not playing correctly anyways? Microsoft does not create sound drivers so that makes no sense.



Ok this is gonna get long.........


I bought my first Creative sound card in 1991 and except for the years when I was using the much better Aureal Vortex (Creative put Aureal out of business via lawsuits and lawyer fees) that is all I've used. Motherboard sound isn't good enough. Even today.


So the onboard sound is disabled in the BIOS. I'm not totally sure about Nvidia, but it does look like both Nvidia and AMD use the same audio device on their video cards, and those use Macroshaft drivers. Those drivers conflict with the Creative drivers, causing bad or no sound. I have been dealing with this for years and finally got it mostly resolved.


Disabling those drivers in Device Manager does not work, as on the next reboot they are re-enabled by Win10. The only way I know of to disable them and keep them disabled is to click the little speaker icon in the task bar, go into that Win95-style control panel, and disable them there. Then go into Device Manager and disable them there too. That ends the graphics stuttering, mouse hanging, and overall system lag caused by that driver conflict.


I've only used Afterburner before, this is my first attempt with Wattman, and I had never heard of that other one before. I saved the profiles and they seem to reload OK so far in Wattman.


----------



## dslives

Has anyone had much luck with OC’ing in crossfire? I have a pair of sapphire vega 64s under EK blocks. I’m struggling to get the cores beyond stock. HBM2 goes beyond 1050, but performance seems to plateau at 1025MHz.

Under load the pair run at 1620+MHz without issue. But if I crank up even a little, stability hits the deck.

I’ve tried with the Liquid BIOS, but same story. All that does is apply a power tune table anyway...

I haven't put too much time into tuning the individual GPUs; that's my next port of call. But I'm curious to hear others' experiences. Either way, there's enough juice here to do anything I need in the near term, so it's not urgent that I crank them up.


----------



## sinnedone

For some reason crossfire overclocks are always lower than stable single-card overclocks. It happens with Nvidia cards too.

You can be rock stable on a card and know its limits, test another card to find out its limits, but together they will not be stable at those same clocks.

Just make sure your power supply is up to the task.


----------



## Kyozon

Hello friends.

I have been reading around, and some Vega 64 owners claim that reaching 1750MHz is only possible when paired with a custom loop and temps around 40-50C.

For those of you who achieved this kind of clock speed, is that accurate? Or is it possible to achieve on air as well?

Thanks!


----------



## dslives

sinnedone said:


> For some reason crossfire overclocks are always less than stable single card overclocks. It happens with Nvidia cards too.
> 
> You can be rock stable on a card and know it's limits, test another card to find out it's limits but together they will not be stable at those same clocks.
> 
> Just make sure your power supply is up to the task.




I'm running a Seasonic 1300W Prime Platinum, which should be just fine. It replaced an AX860 that was good but was cutting it too fine to run both of these at load (although I never had any issues with crossfired 290Xs).

I'll find some time to tune them individually. I suspect a lot of it relates to the stability of the components used for crossfire over the PCIe bus. I don't know enough about this just now; I need to do some research.




----------



## sinnedone

The only way you're going to stabilize those overclocks again is by adding more voltage.

I've had crossfire and tri-fire on different generations of AMD graphics cards, and it always seemed to be anywhere from 10 to 50MHz less stable overclocked when crossfired.

I haven't researched it in a couple of years, but from what I found it seemed to be a common occurrence even with Nvidia.

If you do find anything that helps, please post; it would be very good info to have. 😀


----------



## poisson21

No problem on my part overclocking my crossfire config (2 MSI RX Vega 64s), just like you with EK blocks and the LC BIOS, but with a Mo-Ra3 420 to cool them and an AX1500i to power them.
I had to set the second card a little lower than the first because it seems to be the better of the two, and I ended up overshooting it in some games.
The first is set at P7 1712MHz/1200mV, with HBM at 1100MHz and a floor voltage of 1200mV.
The second is set at P7 1652MHz/1200mV with the same HBM setting.
I can go up to 1200MHz on the HBM but don't see any real improvement.
In most games that enable crossfire I can see the two cards go over 1700MHz on a regular basis, but around 1760MHz is the limit at which it crashes.


----------



## dslives

So, I had a play around this evening. My one obvious conclusion is that there is some clock speed at which I will get a crash. That seems to be around 1730MHz sustained, but it's hard to tell; occasionally I see them up at 1770MHz, but under no load.

The main issue I have is how to cap that upper clock speed. The only real lever seems to be the power limit (the P-state clock speeds definitely don't act as a cap).

Ideally, I would set things up so that I could run up to whatever that max clock is with maximum power available, such that I get the most out of the cards. Unfortunately I can't find a way to do that.

Instead I have to balance volts against the power limit with a bit of frequency meddling.

Right now I'm running with P7 at 1682MHz @ 1175mV, and HBM2 P3 at 1025MHz with the floor at 1150mV. Seems to be stable, although that demands more thorough testing.

See below: both cards are above 1670MHz under load, temps stable (fans not kicked in yet). HBM2 is stable at 1025MHz.

Does anyone know an exact way to cap the core frequency? Perhaps I am looking at this in a bit of an old-school way.

(Apologies for the sketchy screen grab; I didn't have logging on, so I couldn't make a nice chart.)
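On the question of capping the core clock exactly: I don't know of a clean way to do it in Wattman on Windows, but for reference, the Linux amdgpu driver exposes precisely this control through sysfs: switch the card to manual DPM, then restrict which core P-states it may use. A sketch assuming the card is `card0` with eight sclk states P0-P7 (the card index and state count vary per system, and writing these files needs root):

```python
# Sketch: cap a Vega's core clock on Linux via the amdgpu sysfs interface.
# Assumes the GPU is card0 with sclk states P0..P7; adjust for your system.
from pathlib import Path

DEVICE = Path("/sys/class/drm/card0/device")  # placeholder card index

def allowed_states(max_state: int) -> str:
    """Space-separated list of DPM sclk state indices 0..max_state,
    in the format pp_dpm_sclk expects."""
    return " ".join(str(i) for i in range(max_state + 1))

def cap_core_clock(max_state: int) -> None:
    """Take manual DPM control, then restrict the usable sclk states,
    which hard-caps the core below its top P-state. Needs root."""
    (DEVICE / "power_dpm_force_performance_level").write_text("manual")
    (DEVICE / "pp_dpm_sclk").write_text(allowed_states(max_state))

# e.g. cap_core_clock(6) on a Vega with states P0-P7 disallows P7 entirely.
```

You can `cat pp_dpm_sclk` first to see each state's actual frequency before deciding where to cap. Nothing equivalent is exposed in the Windows driver, which is why everyone ends up juggling the power limit instead.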


----------



## 113802

dslives said:


> Has anyone had much luck with OC’ing in crossfire? I have a pair of sapphire vega 64s under EK blocks. I’m struggling to get the cores beyond stock. HBM2 goes beyond 1050, but performance seems to plateau at 1025MHz.
> 
> Under load the pair run at 1620+MHz without issue. But if I crank up even a little, stability hits the deck.
> 
> I’ve tried with the Liquid BIOS, but same story. All that does is apply a power tune table anyway...
> 
> I haven’t put too much time into tuning the individual GPUs, that’s my next port. But curious to hear others experiences. Either way, there’s enough juice here to do anything I need in the near term, so not urgent that I can crank them up.
> 
> Sent from my iPhone using Tapatalk Pro


Did you increase them both to use +50% power and have you undervolted? Are you using the 220w UEFI or the 264w UEFI? You can check by going to the "advanced tab" in GPU-Z. Vega does not overclock when overvolting the core. That only works when overclocking the memory.

Edit: Seems like you answered most of my questions with your last post. That is also my issue with my Vega; I can't seem to find a way to stop the frequency from skyrocketing. I am sick of GPU boost. At least with my GTX 780 Ti/GTX 980 Ti, people released GPU-boost-disabled BIOSes.


----------



## dslives

WannaBeOCer said:


> Did you increase them both to use +50% power and have you undervolted? Are you using the 220w UEFI or the 264w UEFI? You can check by going to the "advanced tab" in GPU-Z. Vega does not overclock when overvolting the core. That only works when overclocking the memory.
> 
> 
> 
> Edit: Seems like you answered most of my questions with your last post. That is also my issue with my Vega, I can't seem to find a way to cap the frequency from sky rocketing. I am sick of GPU boost. At least with my GTX 780 Ti/GTX 980 Ti people released GPU boost disabled bios.




Actually I only increased them to 40%. If I set it to 50% with reasonable P7 clock settings, one of the two goes nuts and I get a crash; that happens almost immediately with any kind of overclock applied, and after a while even at stock.

If I increase voltage, power goes up and so clocks come down. Conversely if I decrease voltage (within reason) power goes down so clocks can go up.

I find them to be unstable below 1130mv at 1600+MHz. I can get away with 1150mv, but I have 1175mv in P7 for confidence.

I’m wondering how the BIOS interpolates the frequency and voltage settings. Perhaps, if it’s linear, then having a flat P5-P7 range might prove useful for capping the upper end.

For now I’m running the 220W UEFI, only because the 264W UEFI clock table sends my cards into a hole the minute I try to do anything. I don’t think upper power reach is my issue TBH; it’s more to do with clock stability. The cards are happily drawing 290W each, peaking way higher.


Sent from my iPhone using Tapatalk Pro


----------



## Chaoz

Kyozon said:


> Hello friends.
> 
> I have been reading around and some Vega64 owners claims that tapping into 1750Mhz is only possible when paired with a Custom Loop, and Temps around 40-50C.
> 
> For those of you who achieved this kind of Clock Speed, is that accurate? Or is it possible to achieve it on Air as well?
> 
> Thanks!


Well, kinda. The only way to OC to 1750MHz, while UV'ed as well, is to flash the BIOS of the Liquid Cooled version onto it.

I did it with mine and it runs perfectly at 1750MHz and 1v on core, and 1050MHz and 950mV on mem.

You can only flash a ref card with another BIOS; AIB cards can't, cuz there are no LC versions of them, so no higher-clocked BIOS exists for them other than their own. Vegas are weird when it comes to OC'ing. They don't like it as much as other GPUs, cuz of the HBM memory bandwidth. So other air-cooled cards will struggle.

Running my GPU at those clocks, temps go up to around 40°C, depending on the game and ambient temp. Most games aren't load-heavy enough for it to actually clock to 1750MHz, and it usually stays at 1630-1680MHz, like Need For Speed Payback. BF1 will clock it to 1750MHz when fully maxed out, except for Motion Blur (hate motion blur a lot, so don't use it).


----------



## 113802

Chaoz said:


> Well, kinda. The only way to OC to 1750MHz, while UV'ed aswell, is when you flash the BIOS of the Liquid Cooled version on it.
> 
> I did it with mine and runs perfectly on 1750MHz and 1v on core and 1050MHz and 950mV on Mem.
> 
> You can only flash a ref card with another BIOS, AiB's can't cuz they don't have any LC versions available, so no higher clocked BIOS is made for it other than their own. Vega's are weird when it comes to OC'ing. They don't like it as much as other GPU's, cuz of the HBM memory bandwidth. So other aircooled cards will struggle.
> 
> Running my GPU on those clocks, temps go up to around 40°C, depending on the game and ambient temp. Most games aren't load heavy enough for it to actually clock to 1750MHz and usually only stays at 1630-1680MHz, like Need For Speed Payback. BF1 will clock it to 1750MHz when fully maxed out, except for Motion Blur (hate motion blur a lot, so don't use it).


I don't understand how you are running at 1v; 1175mV already spikes my GPU to 1770MHz+ depending on the game and causes my computer to lock up. I had to go back to 1190mV to be 100% stable in every game, with 950mV on mem @ 1105MHz.


----------



## colorfuel

Hi everyone,


After replacing my thermal paste a few times, it seems I managed to screw up the retention bracket screws a bit. They are becoming more and more difficult to screw in. 

But since I know nothing about screws, I dont know what to buy to replace them.

Maybe someone knows what type of screws I need to buy to replace them? 

Thanks in advance for the help.


----------



## LazarusIV

Quick question, does the Radeon Overlay not work? I can't for the life of me get it to show up so I'm either crazy (possible) or it's just not working. If it's not working, what are you using to monitor metrics while running games? Thanks!


----------



## Chaoz

WannaBeOCer said:


> Chaoz said:
> 
> 
> 
> Well, kinda. The only way to OC to 1750MHz, while UV'ed aswell, is when you flash the BIOS of the Liquid Cooled version on it.
> 
> I did it with mine and runs perfectly on 1750MHz and 1v on core and 1050MHz and 950mV on Mem.
> 
> You can only flash a ref card with another BIOS, AiB's can't cuz they don't have any LC versions available, so no higher clocked BIOS is made for it other than their own. Vega's are weird when it comes to OC'ing. They don't like it as much as other GPU's, cuz of the HBM memory bandwidth. So other aircooled cards will struggle.
> 
> Running my GPU on those clocks, temps go up to around 40°C, depending on the game and ambient temp. Most games aren't load heavy enough for it to actually clock to 1750MHz and usually only stays at 1630-1680MHz, like Need For Speed Payback. BF1 will clock it to 1750MHz when fully maxed out, except for Motion Blur (hate motion blur a lot, so don't use it).
> 
> 
> 
> I don't understand how you are running at 1v, 1175Mv already spikes my GPU to 1770Mhz+ depending on the game and causes my computer to lockup. I had to go back to 1190Mv for a 100% stable in every game with 950Mv on mem @ 1105Mhz.
Click to expand...

No clue either. It doesn't run at 1750MHz all the time; only more demanding games make it go to 1750MHz, but it runs stable, tho.

It's an LC-flashed GPU, so it runs at a stock 1750MHz boost on the core; I just lowered the voltage and set the power to +50%. That's basically it.

Must be a golden chip?


----------



## 113802

Chaoz said:


> No clue either. It doesn't run on 1750MHz all the time, more demanding games only make it go to 1750MHz, but it runs stable, tho.
> 
> It's an LC flashed GPU, so it runs stock on 1750MHz boost on core, I just lowered the voltage and set the power to +50%. That's basically it.
> 
> Must be a golden chip?


I'm using an RX Vega 64 LC on which I replaced the crappy AIO with an EK waterblock. I can get demanding games to run at 1750-1760MHz, but anything that's not demanding spikes up to 1780+ and causes my machine to lock up.


----------



## Chaoz

WannaBeOCer said:


> Chaoz said:
> 
> 
> 
> No clue either. It doesn't run on 1750MHz all the time, more demanding games only make it go to 1750MHz, but it runs stable, tho.
> 
> It's an LC flashed GPU, so it runs stock on 1750MHz boost on core, I just lowered the voltage and set the power to +50%. That's basically it.
> 
> Must be a golden chip?
> 
> 
> 
> I'm using a RX Vega 64 LC that I replaced the crappy AIO with an Ek waterblock. I can get demanding games to run at 1750-1760Mhz but anything that's not demanding spikes up to 1780+ and causes my machine lockup.
Click to expand...

Anything less demanding causes it to run at 1680MHz; anything else, 1750MHz, and it stays there. It doesn't crash, tho.


----------



## miklkit

LazarusIV said:


> Quick question, does the Radeon Overlay not work? I can't for the life of me get it to show up so I'm either crazy (possible) or it's just not working. If it's not working, what are you using to monitor metrics while running games? Thanks!



The OSD in Wattman didn't work for me either so I'm still using Afterburner for that. I have a Sapphire Vega 64 so am learning how to use Trixx for everything else.


----------



## LazarusIV

miklkit said:


> The OSD in Wattman didn't work for me either so I'm still using Afterburner for that. I have a Sapphire Vega 64 so am learning how to use Trixx for everything else.


Well that's irritating... I guess I'll have to see if I can get HWInfo64 to put something on my screen then.


----------



## Maracus

LazarusIV said:


> Well that's irritating... I guess I'll have to see if I can get HWInfo64 to put something on my screen then.


I use the Afterburner OSD with readouts from HWiNFO64 for adding hotspot/VRM temps/voltage etc.; pretty easy to set up.


----------



## dslives

WannaBeOCer said:


> I'm using a RX Vega 64 LC that I replaced the crappy AIO with an Ek waterblock. I can get demanding games to run at 1750-1760Mhz but anything that's not demanding spikes up to 1780+ and causes my machine lockup.




Which vendor is your card from? I’m using Sapphire and I get the exact same problem: boost to 1800+MHz and then crash immediately. Interesting that you have this with your stock BIOS (I assume).

I was considering trying the XTX and Gigabyte reference BIOSes to see if they’re any different.



Sent from my iPhone using Tapatalk Pro


----------






## mtrai

dslives said:


> Which vendor is your card from? I’m using sapphire and I get the exact same problem. Boost to 1800+MHz and then crash immediately. Interesting that you have this with your stock bios (I assume)
> 
> I was considering trying XTX and Gigabyte reference bioses to see if they’re any different.
> 
> 
> 
> Sent from my iPhone using Tapatalk Pro





WannaBeOCer said:


> I'm using a RX Vega 64 LC that I replaced the crappy AIO with an Ek waterblock. I can get demanding games to run at 1750-1760Mhz but anything that's not demanding spikes up to 1780+ and causes my machine lockup.


I have this with my PowerColor Red Devil Vega 64 as well. Sometimes it thinks it can boost to 1800+ and of course some form of crash follows, from driver crash to benchmark freeze to total system lockup. I am on the PowerColor LC BIOS on air; however, it happens on the stock BIOS as well.


----------



## THUMPer1

For me if GPU speed spikes and it locks up it's because the overclock/underclock is not stable. I just add a little more voltage until it's stable and stops spiking.


----------



## 113802

dslives said:


> Which vendor is your card from? I’m using sapphire and I get the exact same problem. Boost to 1800+MHz and then crash immediately. Interesting that you have this with your stock bios (I assume)
> 
> I was considering trying XTX and Gigabyte reference bioses to see if they’re any different.
> 
> 
> 
> Sent from my iPhone using Tapatalk Pro


I have a Reference Gigabyte XTX card with the stock UEFI. I bought a Gigabyte card at launch since it comes with a three year warranty.



THUMPer1 said:


> For me if GPU speed spikes and it locks up it's because the overclock/underclock is not stable. I just add a little more voltage until it's stable and stops spiking.



You are just causing the card to power throttle. The issue with these dumb cards is their crappy GPU boost. They should have made core overclocking a target speed instead of what it is now.


----------



## DtEW

Hey any Vega Frontier Edition users... anybody having any luck using any driverset with Photoshop CC w/OpenCL acceleration enabled?


----------



## Maracus

mtrai said:


> I have this with my PowerColor Red Devil Vega 64 as well. Sometimes it thinks it can boost to 1800+ and of course some form of crash...from driver crash to benchmark freeze...to total system lock up. I am on the Powercolor LC bio on air...however it happens on the other the stock bios as well.


That's really an odd issue if it's boosting over the P7 value. Even at P7 1650MHz/1050mV my Strix 56 won't even boost close to 1650MHz, even though it's staying within the TDP.


----------



## colorfuel

Is it possible that the HotSpot temp is related to the SOC speed as well?

In testing, I got 2-3°C higher HotSpot temps when my SOC speed was at 1200MHz compared to 1107MHz. Changing HBM speed from 1100MHz to 1110MHz does that, and HotSpot temps increase 2-3°C, while core and mem temps stay the same.

I tested with the UE4 Tomb Raider demo running in the background, changing HBM clocks in ONT and observing temps in HWiNFO.


At least for me on the reference cooler, I'll stay at 1100MHz HBM to avoid the SOC going to 1200MHz, thus saving a few °C on the HotSpot.


----------



## dslives

Maracus said:


> That's really an odd issue if its boosting over P7 value. Even at P7 1650mhz/1050mv my Strix 56 wont even boost close 1650mhz even thou its staying within the TDP



It’s not clear how the P-state clock speed is used. Mine for sure go way beyond P7. See below, 3rd and 4th columns are GPU clocks in a log file.













Sent from my iPhone using Tapatalk Pro


----------



## LazarusIV

Maracus said:


> I use Afterburner OSD with readouts from HWinfo64 for adding HoT spot/VRM Temps/voltage etc pretty easy to setup


Gotcha, I've done that before so I'll just set it up again! Are you running the latest HWInfo64 or no? I may just get the latest stable anyway, it's been quite a while since I've installed it. Also, which Vega do you have?

In response to the boosting / crashing, I've got the Vega 64 Red Devil and I have yet to have any crashes or anything... All I've done is up the Target Temp, increase mem clock to 1000, and increase power limit to +50%. I have yet to do some in-depth tweaking, no time!


----------



## 113802

LazarusIV said:


> Gotcha, I've done that before so I'll just set it up again! Are you running the latest HWInfo64 or no? I may just get the latest stable anyway, it's been quite a while since I've installed it. Also, which Vega do you have?
> 
> In response to the boosting / crashing, I've got the Vega 64 Red Devil and I have yet to have any crashes or anything... All I've done is up the Target Temp, increase mem clock to 1000, and increase power limit to +50%. I have yet to do some in-depth tweaking, no time!


Because you are power throttled. Undervolt and you'll see the frequency surpass the P7 state. When you add +50% to the stock 264W BIOS you are around 360-400W, causing it to throttle the speed. When I run at 1175mV my video card only uses 280-320W at 1760MHz sustained.
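As a quick sanity check on those numbers, the power-limit slider math works out like this. A minimal sketch only, assuming the slider simply scales the BIOS board-power limit; the 220W/264W base values are the reference UEFI limits discussed in this thread, and real cards also enforce separate TDC/current limits:

```python
# Hypothetical sketch: Wattman's power-limit slider as a percentage scale
# on the BIOS board-power limit. Real boards also have current limits.

def board_power(base_watts: float, power_limit_pct: float) -> float:
    """Effective power target for a given slider offset in percent."""
    return base_watts * (1 + power_limit_pct / 100)

print(board_power(264, 50))  # 396.0 W, i.e. the 360-400 W range mentioned above
print(board_power(220, 50))  # 330.0 W on the power-save UEFI
```

Which is why undervolting matters: at a lower voltage the card reaches its clock target well under that wattage ceiling instead of bouncing off it.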


----------



## Maracus

dslives said:


> It’s not clear how the P-state clock speed is used. Mine for sure go way beyond P7. See below, 3rd and 4th columns are GPU clocks in a log file.
> 
> Sent from my iPhone using Tapatalk Pro


Interesting, I will try to replicate that when I get home.



LazarusIV said:


> Gotcha, I've done that before so I'll just set it up again! Are you running the latest HWInfo64 or no? I may just get the latest stable anyway, it's been quite a while since I've installed it. Also, which Vega do you have?
> 
> In response to the boosting / crashing, I've got the Vega 64 Red Devil and I have yet to have any crashes or anything... All I've done is up the Target Temp, increase mem clock to 1000, and increase power limit to +50%. I have yet to do some in-depth tweaking, no time!


Pretty sure it's the latest version of HWiNFO64; when I load Afterburner it loads HWiNFO64. Currently have the ASUS Strix 56 (Hynix HBM2); kinda wish I'd just grabbed the 64, but this card performs pretty well undervolted at 1650MHz/1050mV. Think for my next card I will just wait till after Christmas to see what AMD has then, if anything.

Try undervolting your Red Devil see how it goes.


----------



## THUMPer1

I can't get my V64 LC to 1750MHz in benchmarks. Once it hits about 48°C it starts to level clocks off into the 1720 range. I can't get Fire Strike over 26k.


----------



## 113802

THUMPer1 said:


> I cant get my V64 LC to 1750mhz in benchmarks. Once it hit about 48c it start to level clocks in the 1720 range. I cant get fire strike over 26k.


Sounds more like a HBM2 speed issue rather than a core clock issue. Is your HBM2 set to 1100Mhz+?


----------



## dslives

Maracus said:


> Interesting I will try and replicate that if i can when i get home


Curious to see what you find. It’s relatively easy to force: set the power limit to +50% and P7 clocks low (maybe 1500MHz), and undervolt to give yourself as much power headroom as possible. I expect it would be clear as day even if your card doesn’t normally exhibit that behaviour with a stock power table: under low-load scenarios your core clock will go way beyond your P7 clock setting.

A synopsis of my understanding of the problem:

GPU core clock appears not to be governed by absolute frequency. At some frequency above stock P7 clocks it becomes unstable (transistors can't switch fast enough) and will cause a crash.

Instead, GPU core clock is governed by thermal and power envelopes. Ignoring thermals for now, the clocks will increase while in a high enough P-state and while there is power headroom, i.e. draw is less than the board power limit.

Under low load, the GPU isn’t working much and doesn’t draw much power even at relatively high frequencies. Consequently the core clock rockets until it becomes unstable.

This can be controlled for by increasing power draw, thus limiting headroom and therefore maximum core frequency. That can be done by increasing core voltage, or by reducing the power limit. But these are both a hack.

The problem now becomes that an artificially increased power draw limits core frequency under load, because under load the GPU uses more energy and exhausts its power limit faster. Consequently, under load it bounces off the power limit sooner and throttles.

Fundamentally, we aren’t able to extract the maximum efficient compute throughput from these cores because we have to control for instability when there is no/little load.

There are two ways in which I think this might have been done better:

1) place an absolute cap on core frequency to limit the GPU to a switching frequency it can cope with at each P-state; or
2) do a better job of selecting the P-state. If there is limited load, why do the cards rush all the way to P6/7? I’m not too familiar with that aspect.

Edit: I do wonder if this is why P6/7 stock volts are higher than they typically need to be, and why these cards appear to undervolt so well. It’s a shame AMD didn’t do something a bit more intelligent here. Here’s hoping there’s a BIOS update/hack that will let us exploit the higher capacity these GPUs can clearly provide. (Under load I’m stable at well over 1700MHz, if I can get there before I crash due to no load!!)
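The failure mode described above can be put into a toy model (all constants are invented for illustration; the real governor is far more complex): clocks climb until estimated draw meets the power limit, so a lightly loaded GPU overshoots the frequency it can actually sustain, and a hard frequency cap, option 1 above, would tame exactly that case:

```python
# Toy model of power-headroom-driven boost. All numbers are made up
# for illustration; this is not the actual Vega governor.

def boost_clock(load: float, power_limit_w: float = 264.0,
                w_per_mhz_full_load: float = 0.3,
                base_mhz: float = 850.0) -> float:
    """Clock at which estimated draw reaches the power limit.

    Draw per MHz scales with load, so at low load the limit is hit
    much later and the clock overshoots the stable range.
    """
    effective_w_per_mhz = w_per_mhz_full_load * max(load, 0.05)
    return base_mhz + power_limit_w / effective_w_per_mhz

def capped_boost_clock(load: float, cap_mhz: float = 1750.0) -> float:
    """Option 1 above: an absolute frequency cap on top of the power governor."""
    return min(boost_clock(load), cap_mhz)

heavy = boost_clock(1.0)   # 1730.0 MHz: throttles at the power limit under load
light = boost_clock(0.2)   # 5250.0 MHz: absurd overshoot -> instability/crash
assert capped_boost_clock(0.2) == 1750.0  # the cap tames the low-load case
```

The model reproduces the trade-off described above: any knob that lowers the light-load clock (more voltage, lower power limit) also lowers the clock the governor allows under heavy load.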



Sent from my iPhone using Tapatalk Pro


----------



## THUMPer1

WannaBeOCer said:


> Sounds more like a HBM2 speed issue rather than a core clock issue. Is your HBM2 set to 1100Mhz+?


Nope, 945.


----------



## 113802

THUMPer1 said:


> Nope, 945.


Then your score is fine; if you want 27k+ you'll need at least HBM2 @ 1100MHz.


----------



## LazarusIV

WannaBeOCer said:


> Because you are power throttled. Under volt and you'll see the frequency surpass the P7 state. When you add +50% to the stock 264w bios you are around 360-400w causing it to throttle the speed. When I run at 1175Mv my video card only uses 280-320w at 1760mhz sustained.


Ah ok gotcha, looks like I'll have quite a bit of tweaking and testing to do then. Thanks for the info!


----------



## THUMPer1

WannaBeOCer said:


> Than your score is fine, if you want 27k+ you'll need at least HBM2 @ 1100Mhz.


Good to know, I guess. Here is the latest info: P7 1750/1210mV, max boost was 1733. Can't do 1200mV, it will crash, but can do 1205. 1760 requires like 1235mV and doesn't net a better score. With HBM at 1050 I was able to crack 26k on the graphics score, though.


----------



## 113802

THUMPer1 said:


> Good to know I guess. Here is the latest info. P7 1750/1210mv Max boost was 1733. Cant do 1200mv it will crash but can do 1205. 1760 requires like 1235mv and doesnt net a better score. With HBM at 1050 I was able to crack 26k on the graphics score though.


https://www.3dmark.com/fs/16277993


----------



## Kyozon

Hi there!


From your experience, what has been the most effective way to apply thermal compound on the Vega 10 chip? Resin/resinless? Dot vs. spreading?


Thanks!


----------



## THUMPer1

Kyozon said:


> Hi there!
> 
> 
> From your experience, which one has been the most effective way to apply thermal compound on the Vega10 Chip - Resin/Resinless? Dot vs Spreading?
> 
> 
> Thanks!


I use Thermal Grizzly Kryonaut, with a dot on the GPU and small dots on the HBM. It makes good contact.


----------



## gamervivek

Is there an easy way to change the power limits in Wattman? I want to decrease the power usage in some games when I'm using the other card.


----------



## 113802

gamervivek said:


> Easy way to change the power limits in wattman? I wish to decrease the power usage in some games when I'm using the other card.


Lower the voltage on the P7 state by 10mV at a time, or set the card to the 220W UEFI.


----------



## Chaoz

Kyozon said:


> Hi there!
> 
> 
> From your experience, which one has been the most effective way to apply thermal compound on the Vega10 Chip - Resin/Resinless? Dot vs Spreading?
> 
> 
> Thanks!


I used the triple X method. So an X on both HBM stacks and core.


----------



## JackCY

Just spread the paste all over the die (as long as it's not conductive) rather than hoping you used enough and that mounting the cooler will spread it out to cover the die for you. More is better than less; funny face shapes don't matter a bit.


----------



## Chaoz

JackCY said:


> Just spread the paste all over the die as long as it's not conductive rather than hoping you used enough and the die will be covered by mounting a cooler spreading it out for you. More is better than less, funny faces shapes don't matter a bit.


The triple-X method is listed in the EKWB manual on how to mount your Vega waterblock. Funny shapes or not, it works great.
You don't have to over-apply the TIM; small X's will be plenty.


----------



## miklkit

I've been tinkering with this Sapphire Vega 64 and have it in a good place now. Wattman kept resetting itself so I went with Trixx instead. Does this look OK after an hour of gaming, or did I mess something up?


----------



## sinnedone

Anyone here added higher power limits through the registry while using the liquid bios? 

Wondering how much more you can add for a daily overclock


----------



## Fatrod

Chaoz said:


> No clue either. It doesn't run on 1750MHz all the time, more demanding games only make it go to 1750MHz, but it runs stable, tho.
> 
> It's an LC flashed GPU, so it runs stock on 1750MHz boost on core, I just lowered the voltage and set the power to +50%. That's basically it.
> 
> Must be a golden chip?


You've definitely got the golden chip bro.

I've got an LC card with push/pull config and can easily do 1800+ at 1.2v...

Still can't beat your graphics score.


----------



## gamervivek

WannaBeOCer said:


> Lower the voltage on the p7 state by 10Mv at a time or set the card to the 220w UEFI.


It's a Vega 56 and I've put it on the power save BIOS and -50% in Wattman, with state 0 set as the maximum state. I don't think it hits P7 or P6, but even in games where the other card is rendering, its power usage hovers in the 30-40W range.

I think this can be reduced even further if the Wattman limit goes down.


----------



## 113802

gamervivek said:


> It's a Vega56 and I've put it on power save bios and -50% in wattman with state0 set as the maximum state. I don't think it hits p7 or p6, but even in games where the other card is rendering, its power usage hovers in the 30-40W.
> 
> I think this can be reduced even further if the wattman limit goes down.


What's your goal? To run your card at 800MHz with HBM2 at 500MHz? Sounds like you should be using an iGPU if you want to run your GPU that low. At idle the card should only be using 8W.


----------



## gamervivek

WannaBeOCer said:


> What's your goal? To run your card at 800Mhz with HBM2 at 500Mhz? Sounds like you should be using a iGPU if you want to run your GPU that low. At idle the card should only be using 8w.


The goal is to make Vega consume close to its idle power *when the other card is rendering*.

I've gone from power save BIOS -> -50% power limit -> P0 as maximum state, with no FPS loss, while the power usage of the Vega has come down from 150W to 30-40W. I think it can go down further.


----------



## JasonMZW20

colorfuel said:


> Is it possible, that the HotSpot temp is related to the SOC speed aswell?
> 
> In testing, I got 2-3° higher HotSpot temps when my SOC speed was at 1200Mhz compared to 1107 Mhz. Changing HBM speed from 1100Mhz to 1110Mhz does that and Hotspot temps increase 2-3° C, while core and mem temps stay the same.
> 
> I tested with the UE4 - Tomb Raider demo in the background, changing HBM clocks in ONT and observing temps in HWinfo.
> 
> 
> At least to me on Reference cooler, I'll stay on 1100Mhz HBM, to avoid SOC going to 1200Mhz, thus gaining a few °C on the Hotspot.


You know, I think so. I was having issues with my thermal paste and my hotspot temps were hitting 108C after multiple repastes and SoC clock kept throttling down to 950-980MHz. Every time it did that, hotspot went down 10-15C, which I thought was interesting. It'd drop to about 94C or so, then try to ramp back up until it throttled again. The stuttering was horrible, so SoC clock does need to exceed memory clock.

So, hotspot could be related to SoC chip functions, and since HBM2 is connected to IF/SoC, it has to ramp up with memory speed too.


----------



## Wastedsunset

*Cannot maintain P7 state*

Build:
Ryzen 2700X, X470 Taichi Ultimate
BCLK 104, @4.5GHz (single core) / 4.1 (all cores) with XFR
Celsius S36 AIO
Corsair Vengeance LPX 3228MHz ~ 2x8GB
Vega Frontier Edition Air (thermal paste redone with Thermal Grizzly Kryonaut), CrossFire
Gamemax 1050W RGB Gold
Power limit +50%
HBM: 1050MHz @ 1000mV
P6: 1527 @ 1050
P7: 1602 @ 1100
Max temp 73~75°C, fan at 1200 RPM to 4200 RPM

It only goes up to P6 (90% of the time), never stays in the P7 state for long, and always fluctuates between P6 and P7.
When it switches from P7 to P6 I saw a sudden FPS drop to 41.
Shadow of the Tomb Raider, Ultra, 1080p (single GPU mode):
Min: 41~43
Max: 119
Avg: 89
95%: 73
This YouTuber managed pretty good scores in SOTTR.




What could I have done wrong?


----------



## By-Tor

Is anyone getting better results overclocking with one piece of software over another?

I have been using Wattman and undervolting my PowerColor Vega 64 with a liquid cooling BIOS, and wonder if there is something better.

ty


----------



## sinnedone

So I need a little help with overclocking.

I believe I've found my max stable overclocks/voltages and want to up the power limit some. (Vega 64 flashed to the liquid BIOS, waterblocked.)


1. Does it make sense to leave the HBM floor voltage at 950mV, or should I bring it up to match the P7 voltage?
2. vega64powerplaytable or OverdriveNTool? (They seem to do the same thing? Trying to up the power limit and lock in clocks/voltages outside of Wattman.)


----------



## miklkit

Question: Is there any way to prevent a Vega from throttling down while in a game? In almost all the games I have, it is almost overkill at 1440p and works very well with low temps, but in one it powers down in certain areas, which results in poor FPS. 20 FPS sucks. Is there any way to force it to run at high clocks and volts?


----------



## sinnedone

miklkit said:


> Question: Is there any way to prevent a Vega from throttling down while in a game? In almost all games I have it is almost overkill at 1440P and works very well with low temps, but in one it powers down in certain areas which results in poor fps. 20fps sucks. Is there any way to force it to run at high clocks and volts?



What are all your GPU Temperatures and power draw looking like?


----------



## miklkit

Please refer to post #6611. Since then, because it runs so well, I reset it and then hit "power". It still runs fine and even cooler.


----------



## sinnedone

Everything seems good from your attachment on that post. Temps in check, wattage not bad, no CPU constraints, and you mentioned 1440p ultra, so that should do the trick depending on the game.

Background processes maybe?

Try bumping up the power limit to see if it makes a difference, or maybe it's just an issue with that specific game?

EDIT: Maybe try setting P6 to the same clocks/voltage as P7?


----------



## miklkit

Yes, it is an issue with that specific game. When the frame rate plummets, so does the voltage used. I hope there is some way to keep frame rates up. This Vega 64 replaces a Fury. The Fury is only about 2/3rds as powerful as the Vega 64, but it always gives everything it's got, and in this game its frame rates only rarely drop into the 30s, while the Vega 64 drops into the 20s and teens and stays there.



Setting P-states? Wattman does not save the settings, so I do not use it and instead use Trixx.


----------



## colorfuel

@miklkit: I would give OverdriveNTool a shot, and disable Fast Boot for Windows if you haven't done that already.


----------



## miklkit

Just DLed it and have never enabled fastboot.


----------



## Fediuld

miklkit said:


> I've been tinkering with this Sapphire Vega64 and have it about where it is in a good place. Wattman kept resetting itself so I went with Trixx instead. Does this look ok after an hour of gaming or did I mess something up?


Seems you are on the low end.
Wattman is far superior to Trixx TBH. Have a look here for some pretty low power settings.

http://i.imgur.com/UcQIfgF.png

These are with the Nitro+.


----------



## miklkit

Wattman is nonfunctional. It doesn't save settings and also sets things to what it wants, not what I want. It runs very hot, and once it made my entire system shut down because of out-of-control thermals. Trixx does what I want and saves the settings.




Are there instructions for OverdriveNTool? I tried some settings, but when I click "apply" they revert to default.


----------



## Ne01 OnnA

miklkit said:


> Wattman is nonfunctional. It doesn't save settings and also sets things to what it wants, not what I want. It runs very hot and once it made my entire system shut down because of out of control thermals. Trixx does what I want and saves the settings.
> 
> Are there instructions for OverdriveNTool? I tried some settings but when I click "apply" they revert to default.


For OverdriveNTool 0.2.7 Beta 4 you need to download a soft PowerPlay table (I'm using this one - RX_VEGA_64_AIO_Soft_PP.reg), then edit P0-P7 and done!

Also go to Task Scheduler and tick "Run with highest privileges" for StartCN & StartDVR.
It's also good to have the Adrenalin settings run as admin.

Believe me, WattMan and Adrenalin are a really good piece of software.
====
Look:


----------



## sinnedone

Ne01 OnnA said:


> miklkit said:
> 
> 
> 
> Wattman is nonfunctional. It doesn't save settings and also sets things to what it wants, not what I want. It runs very hot and once it made my entire system shut down because of out of control thermals. Trixx does what I want and saves the settings.
> 
> Are there instructions for OverdriveNTool? I tried some settings but when I click "apply" they revert to default.
> 
> 
> 
> As for OverdriveNTool 0.2.7 Beta4 you need to Download PowerPlay Soft Table (i'm using this one - RX_VEGA_64_AIO_Soft_PP.reg) then Edit P0-P7 and Done !
> 
> Also Go to Task Sheduler & make StartCN & StartDVR Tick -> run with high privileges
> Also good to have Adrenalin Settings as run as Admin
> 
> Believe me WattMan and Adrenalin is a really good piece of Software.
> ====
> Look:

Can you also add a higher power limit with overdriventool?

Like say instead of maxing out at +50 going all the way up to +100?


----------



## Ne01 OnnA

sinnedone said:


> Can you also add a higher power limit with overdriventool?
> 
> Like say instead of maxing out at +50 going all the way up to +100?


I think You can, look:
Max Power is adjustable 

==


----------



## Whatisthisfor

I have an original AMD Vega 64 LC and modded it with the powerful new Noctua NF-A12x25 PWM fan. It gives a pleasant result; the only downside is that the color is awful. 

It works, and the Vega is now very quiet.


----------



## Fediuld

Ne01 OnnA said:


> I think You can, look:
> Max Power is adjustable
> 
> ==


Hmm.

Do you know if anyone has successfully flashed the Nitro+ with the LC BIOS and had it work?
I have a water-cooled Nitro+, yet I cannot get anything better than I had on air, let alone the settings you have with the LC.


----------



## Ne01 OnnA

Dunno, I have the original Vega XTX, so flashing is not needed for it. 
IMO, adjust your GPU BIOS via the PP reg to fit your needs, and that's it.
Sapphire has good hardware, so the performance is there for sure; all you need is a good undervolt with an overclock. 

OverdriveNTool 0.2.7 Beta4 for PP_states addon:
-> https://mega.nz/#!lQlCAIyT!X3g7rqqEltSSI2cu7M63rNbPJ7Kl6vfJCh36KHughI8
-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1

Hellm has created SoftPowerPlayTable key files:
like mine - RX_VEGA_64_AIO_Soft_PP

-> https://www.overclock.net/attachments/49572

More:
-> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html


----------



## sinnedone

Ne01 OnnA said:


> sinnedone said:
> 
> 
> 
> Can you also add a higher power limit with overdriventool?
> 
> Like say instead of maxing out at +50 going all the way up to +100?
> 
> 
> 
> I think You can, look:
> Max Power is adjustable
> 
> ==



Do you need to set wattman on custom, or just reset wattman then use overdriveNtool?


----------



## Ne01 OnnA

sinnedone said:


> Do you need to set wattman on custom, or just reset wattman then use overdriveNtool?


Keep WattMan on Custom if you're using your own settings.
IMO, Custom is the way to go.


----------



## sinnedone

Man the mobile site of this website is such crap.


----------



## miklkit

Spreadsheets? Github? Mining? I'm out.


----------



## Conenubi701

Ne01 OnnA said:


> As for OverdriveNTool 0.2.7 Beta4 you need to Download PowerPlay Soft Table (i'm using this one - RX_VEGA_64_AIO_Soft_PP.reg) then Edit P0-P7 and Done !
> 
> Also Go to Task Sheduler & make StartCN & StartDVR Tick -> run with high privileges
> Also good to have Adrenalin Settings as run as Admin
> 
> Believe me WattMan and Adrenalin is a really good piece of Software.
> ====
> Look:


I seem to be having an issue with OverdriveNTool PowerPlay soft tables and VSR. The whole PC hangs after loading into Windows if I boot with VSR enabled at a higher resolution. This started happening after updating to 18.10.1 and BIOS 1101 on my C7H. I'll try today or tomorrow to roll back and test it out.


----------



## Chaoz

Fediuld said:


> Hmm
> 
> Do you know if someone flashed succesfully the Nitro+ with the LC Bios and work?
> I have a watercooled Nitro, yet cannot get anything better than if I had it on air, let alone your settings you have with the LC.


Don't think it will work. The Nitro+ is a completely different card with different PCB and BIOS. The BIOS will only work with reference cards.


The Nitro+ 64 does seem to have a dual BIOS switch, so you can always try. If you fail just flip the switch and flash it back.


----------



## sinnedone

Ne01 OnnA said:


> Dunno, I have Original Vega XTX so Flashing is not needed for it
> IMO, Adjust Your GPU BIOS by PP.Reg to fit your need, and that's it.
> Sapphire have good HW -> Performance is there for sure, all you need is Good UV with OC
> 
> OverdriveNTool 0.2.7 Beta4 for PP_states addon:
> -> https://mega.nz/#!lQlCAIyT!X3g7rqqEltSSI2cu7M63rNbPJ7Kl6vfJCh36KHughI8
> -> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1
> 
> Hellm has created SoftPowerPlayTable key files:
> like mine - RX_VEGA_64_AIO_Soft_PP
> 
> -> https://www.overclock.net/attachments/49572
> 
> More:
> -> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html


Do you happen to know what changes were made in the "RX_VEGA_64_AIO_Soft_PP" powerplay table?

I found a post by Hellm that lists several PPTs. How would it compare to, say, "MorePowerVega64LC_142"?


----------



## Fediuld

Ne01 OnnA said:


> Dunno, I have Original Vega XTX so Flashing is not needed for it
> IMO, Adjust Your GPU BIOS by PP.Reg to fit your need, and that's it.
> Sapphire have good HW -> Performance is there for sure, all you need is Good UV with OC
> 
> OverdriveNTool 0.2.7 Beta4 for PP_states addon:
> -> https://mega.nz/#!lQlCAIyT!X3g7rqqEltSSI2cu7M63rNbPJ7Kl6vfJCh36KHughI8
> -> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1
> 
> Hellm has created SoftPowerPlayTable key files:
> like mine - RX_VEGA_64_AIO_Soft_PP
> 
> -> https://www.overclock.net/attachments/49572
> 
> More:
> -> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html



Unfortunately, SoftPowerPlayTables don't work with the Nitro+. 
The extra power does nothing; on the contrary, if you go over 50% it crashes outright. 
Even with undervolted settings that are 100% stable, the moment you raise it to 51% and the card tries to draw the power, it goes down.


----------



## Ne01 OnnA

sinnedone said:


> Do you happen to know what changes were made in the "RX_VEGA_64_AIO_Soft_PP" powerplay table?
> 
> I found a post by Hellm that lists several PPT and How would it compare to say "MorePowerVega64LC_142"?


No, but it would be good to ask @gupsterg.
I'm rather new to the Vega 64 (I've had it since summer, so about 2 months).



Fediuld said:


> Unfortunately SoftPowerPlayTables don't work with Nitro.
> The extra power does nothing, on the contrary if you go over 50% it crashes outright.
> Even on downvolted settings that work 100%, the moment you raise it to 51% and the card tries to consume the power it goes.


Here:
-> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html

===
I'm using my Vega XTX on the default 264W BIOS with my PP settings loaded through OverdriveNTool.
Everything is working smoothly.
Driver 18.10.1 WHQL, WDDM 2.5
Hero VI BIOS v1403
CPU mostly at 3.95-4.016GHz, 105 FSB
RAM at 3100MHz CL14-15-15 1T GD, 1.46V

I will change my RAM to 2x8GB 4133 CL19 1.35V tomorrow,  
so I will have VLLT at a minimum of 3600 CL14. It should boost my overall FPS by a minimum of 10 to 20! (I know in CS:GO it will be +100FPS, lol.)

That's all, everything is cool.
Ah, one thing: not every game "likes" the Adrenalin FPS cap. BF1 needs it off, with the cap set in user.cfg at 70.


----------



## naifm92

Hi everyone,


A few weeks ago I got my Sapphire Vega 64 Nitro+ and was thinking of flashing the Vega 64 LC BIOS to it. I tried using the latest version of ATIWinflash, running it as administrator, but every time it says "SubsystemIDs mismatch". I just don't know what's wrong; I tried on both BIOSes and it gives me the same message. What am I missing? Please help.


----------



## Ne01 OnnA

naifm92 said:


> hi everyone,
> 
> 
> a few weeks ago i got my Sapphire vega 64 nitro+ card, and i was thinking if i flashed vega 64 LC bios to it, so i tried using the latest version of ATIwinflash and run it as adminstrator but everytime it says "SubsystemIDs mismatch", i just don't know whats wrong,i tried on both bioses but it gives me the same massege what am i missing? plz help


Yo, maybe just try the PP power states. It's a no-fuss, no-flash solution if you want to run custom settings.
BTW, you don't need more than +25% power anyway.
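
For a feel of what the slider percentages mean in watts: the slider is a percentage on top of the BIOS power limit, so the same +25% unlocks different wattage on different BIOSes. A minimal sketch; the 220W/264W baselines are assumptions (the commonly cited ASIC power limits for the reference air and LC Vega 64 BIOSes):

```python
# Convert a Wattman/OverdriveNTool power-limit slider value into an
# approximate ASIC power cap. Baseline TDPs are assumptions: ~220 W for
# the reference air Vega 64 BIOS, ~264 W for the liquid (LC) BIOS.
BIOS_TDP_W = {"air_64": 220, "lc_64": 264}

def power_cap(bios: str, slider_pct: int) -> float:
    """ASIC power cap in watts for a given power-limit slider setting."""
    return BIOS_TDP_W[bios] * (1 + slider_pct / 100)

# +25% on the LC BIOS already allows ~330 W of ASIC power:
print(power_cap("lc_64", 25))   # 330.0
print(power_cap("lc_64", 50))   # 396.0
```

Note that this is ASIC power only, before VRM losses and board power, which is why wall draw reads higher.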


----------



## naifm92

Ne01 OnnA said:


> Yo, maby just try PP Power States -> It is no fuss/no flash great solution if You want to run Custom Settings.
> BTW You don't need more than +25% POW anyways



Yeah, that would be awesome if it JUST worked. :thumbsdown:


----------



## Parker

Is anyone here running a Sapphire Pulse Vega 56 (nano Vega PCB) with an aftermarket cooler, either air (Morpheus II) or liquid?


----------



## Ne01 OnnA

naifm92 said:


> yeah that would be awesome if it JUST worked


-> https://www.overclock.net/forum/67-amd-ati/1633446-preliminary-view-amd-vega-bios.html


----------



## mtrai

naifm92 said:


> hi everyone,
> 
> 
> a few weeks ago i got my Sapphire vega 64 nitro+ card, and i was thinking if i flashed vega 64 LC bios to it, so i tried using the latest version of ATIwinflash and run it as adminstrator but everytime it says "SubsystemIDs mismatch", i just don't know whats wrong,i tried on both bioses but it gives me the same massege what am i missing? plz help


You need to flash it using the admin command prompt method with the correct switches. I don't have all the info handy, as I just got back home from Hurricane Michael.
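
For reference, the command-prompt method usually looks like the sketch below. The -i/-s/-p/-f switches are as commonly documented for ATIFlash/AMDVBFlash; verify them against your own version's help output before touching anything, keep a backup, and understand that forcing past the SubsystemIDs check with -f is exactly the part that can brick a card:

```bat
:: Run from an elevated (administrator) command prompt.
:: 1) List adapters and note the index of the card to flash:
amdvbflash -i

:: 2) Back up the current BIOS from adapter 0 first:
amdvbflash -s 0 backup.rom

:: 3) Program the new BIOS; -f forces past the SubsystemIDs mismatch
::    check, so only proceed if you accept the risk.
amdvbflash -p 0 newbios.rom -f
```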


----------



## Offler

Parker said:


> Is anyone here running a Sapphire Pulse Vega 56 (nano Vega PCB) with an aftermarket cooler, either air (Morpheus II) or liquid?


Currently running a Fury X. Just yesterday I noticed the Sapphire Pulse Vega 56 with its small PCB and oversized cooler. It's actually quite a neat design, because the overhanging part allows air to pass through the cooler.

Today I found out about the PowerColor RX Vega 56 Nano:




Even though it's not the full core with 4096 shaders, it actually looks like a good way to get slightly fewer compute units and slightly less voltage in a small form factor. However, I would like to imagine it in the form factor of a Fury X, with liquid cooling attached.

I am not actually sure whether I want a card like this, mainly because of potential high-pitched noise, while the water-cooled variant has been out for over a year.


----------



## majestynl

Offler said:


> Currently running FuryX. Just yesterday I noticed Sapphire Pulse Vega 56 with its small PCB and oversized cooler. Actually quite neat design, because the overlapping part is allowing air to throught the cooler.
> 
> Today I found out about Powercolor RX 56 Nano:
> https://www.youtube.com/watch?v=Hm7rvPD5hPM
> 
> Even when its not full core with 4000 shaders, it actually looks like a good way to get slightly less compute units, slightly less voltage and get small factor. However... I would like to imagine it in a form factor of FuryX with the liquid cooling attached on to.
> 
> I am not actually sure whether i want card like this - mainly because of potential high-pitched noise, while watercooled variant is out for over one year.


I also ordered the Sapphire Pulse Vega 56 with the small PCB for another build. Got it delivered today; I'll install it later this week to see how it runs! 
Currently more than happy with the Vega 64 + LC BIOS and water cooling!


----------



## Fatrod

Has anyone had any success increasing the power limit on Vega 64 LC?

I'm interested to know if there would be any benefit in upgrading a custom loop and trying to push it further.


----------



## sinnedone

Depends how far your card will overclock. My Vega 64 flashed to liquid bios will max out at 1762mhz @1.25v. I've settled at 1752mhz at 1.22v with +80 power limit. 

It has helped keep those average clocks at about 1735mhz in most games. It only hits 1750 or slightly over in some benchmarks or odd game peaks.

Before upping the power limit the card would average about 1700. So not much of a difference but I'm trying to push as hard as I can.😁


----------



## Fatrod

sinnedone said:


> Depends how far your card will overclock. My Vega 64 flashed to liquid bios will max out at 1762mhz @1.25v. I've settled at 1752mhz at 1.22v with +80 power limit.
> 
> It has helped keep those average clocks at about 1735mhz in most games. It only hits 1750 or slightly over in some benchmarks or odd game peaks.
> 
> Before upping the power limit the card would average about 1700. So not much of a difference but I'm trying to push as hard as I can.😁


Well I'm kind of unclear if the extra power would make a difference. My card can already do 1800+ with 50% power slider.

But where does the voltage come into this as well? I assume that at some point increasing the voltage doesn't work until you increase the power slider further?

I haven't really tried to go over 1.25v yet.


----------



## diggiddi

Fatrod said:


> Well I'm kind of unclear if the extra power would make a difference. My card can already do 1800+ with 50% power slider.
> 
> But where does the voltage come into this as well? I assume that at some point increasing the voltage doesn't work until you increase the power slider further?
> 
> I haven't really tried to go over 1.25v yet.


Which card are you using?


----------



## By-Tor

sinnedone said:


> Can you also add a higher power limit with overdriventool?
> 
> Like say instead of maxing out at +50 going all the way up to +100?


I flashed my PowerColor Vega 64 with the liquid-cooling BIOS (and added an EK waterblock), and can adjust my power limit from -142/+142 using MorePowerVega64LC_142.


----------



## mtrai

By-Tor said:


> I flashed my Powercolor Vega 64 with a liquid cooling bios (added an EK waterblock) and can adjust my powerlimit from -142/+142 using MorePowerVega65LC_142..


Which PowerColor do you have, I mean the exact model? I want to get a waterblock for mine as well and want to make sure it will fit, since PowerColor has a couple of different Vega 64 Red Devil PCBs. I have the PowerColor Radeon RX Vega 64 DirectX 12 AXRX Vega 64 8GBHBM2-2D2H/OC.

My current settings on just the original Red Devil fans:


----------



## Fatrod

diggiddi said:


> Which card are you using?


The LC Card


----------



## By-Tor

mtrai said:


> Which Powercolor do you have I mean the exact model? As I want to get a waterblock for mine as well. I just want to make sure it will fit, since Powercolor has a couple of Vega 64 red devil pcbs. I have the PowerColor Radeon RX Vega 64 DirectX 12 AXRX Vega 64 8GBHBM2-2D2H/OC
> 
> My Current setting on just the Original Red Devil Fans.


I have a PowerColor Radeon RX VEGA 64 DirectX 12 AXRX VEGA 64 8GBHBM2-3DH 8GB 2048-Bit HB


----------



## bill1971

Any News, if i can flash my nitro vega 56 to 64?


----------



## 113802

Fatrod said:


> Well I'm kind of unclear if the extra power would make a difference. My card can already do 1800+ with 50% power slider.
> 
> But where does the voltage come into this as well? I assume that at some point increasing the voltage doesn't work until you increase the power slider further?
> 
> I haven't really tried to go over 1.25v yet.


The issue is that the card won't run at 1800+ even with the 50% power slider, because it's already using 365W and ends up throttling. I set my RX Vega 64 LC, which has an EK waterblock, to 1770MHz core / 1105MHz HBM2 at 1190mV, and I'm sitting at a comfy 280-320W at an actual 1750MHz in games.


----------



## By-Tor

I have been messing around with undervolting: 0.9V core, 1757MHz core, 1100MHz memory, pulling under 200 watts at full load. These are my 24/7 settings, running all games at Ultra/Very High.
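
That kind of drop is roughly what first-order CMOS dynamic power predicts, P proportional to f times V squared. A quick sanity check; the ~1677MHz / 1.20V / ~264W stock LC operating point is an assumption for illustration, and leakage is ignored:

```python
# First-order CMOS dynamic power model: P ~ P_ref * (f/f_ref) * (V/V_ref)^2.
# The reference operating point (Vega 64 LC at ~1677 MHz, 1.20 V, ~264 W
# ASIC power) is an assumption for illustration; leakage is ignored, so
# treat the result as a ballpark, not a measurement.
def scaled_power(p_ref_w, f_ref_mhz, v_ref, f_mhz, v):
    """Estimate power at a new clock/voltage from a known reference point."""
    return p_ref_w * (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

# An undervolt like the one above: 1757 MHz core at 0.90 V
estimate = scaled_power(264, 1677, 1.20, 1757, 0.90)
print(round(estimate))  # ~156 W, consistent with "under 200 watts"
```

The square on voltage is why undervolting pays off so much faster than dropping clocks.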


----------



## Fatrod

WannaBeOCer said:


> The issue is that the card won't run at 1800+ even with the 50% power slider because it's already using 365w and ends up throttling. I set my RX Vega 64 LC which has an EK Waterblock to 1770Mhz/1105Mhz with 1190v and I'm sitting at comfy 280-320w @ 1750Mhz actually in games.



I think you misunderstand - I can already do 1800 @ 50% slider just fine. I want to know how I can push it further by using the reg mod to remove the power limit.

Will I also have to increase the voltage beyond 1.25 for P7...or will it draw the extra power and boost further regardless?

With slider at 100% and p7 at 1800/1.25 it seemed to hit 1850 on a firestrike run.


----------



## majestynl

Fatrod said:


> I think you misunderstand - I can already do 1800 @ 50% slider just fine. I want to know how I can push it further by using the reg mod to remove the power limit.
> 
> Will I also have to increase the voltage beyond 1.25 for P7...or will it draw the extra power and boost further regardless?
> 
> With slider at 100% and p7 at 1800/1.25 it seemed to hit 1850 on a firestrike run.


Looks like you have a golden chip there. 

You can try the PP reg mod! I have one with 150% power and a 400W limit for you. I'm using it on my 64 with the LC BIOS!
If you are going to use it, just let us know your results.


----------



## Ipak

And I'm here, with a waterblock, black-screening with anything above the stock 1632MHz. Meh...

Probably something with the hotspot temp; it skyrockets above 100°C while the core is still under 55°C.


----------



## 113802

Fatrod said:


> I think you misunderstand - I can already do 1800 @ 50% slider just fine. I want to know how I can push it further by using the reg mod to remove the power limit.
> 
> Will I also have to increase the voltage beyond 1.25 for P7...or will it draw the extra power and boost further regardless?
> 
> With slider at 100% and p7 at 1800/1.25 it seemed to hit 1850 on a firestrike run.


Proof with a firestrike result?


----------



## By-Tor

Ipak said:


> and im here, with waterblock, blackscreening with anything above stock 1632 mhz meh...
> 
> probably smthing with hotspot temp, it skyrocket quickly above 100'C while core is still under 55


Not knowing your ambient temp, hitting near 55°C is a bit high with a waterblock mounted. Mine stays in the low-to-mid 30°C range when overclocked. You may want to remove and reinstall the block and see if that helps.


----------



## majestynl

WannaBeOCer said:


> Proof with a firestrike result?


Interested too 




By-Tor said:


> Not knowing your ambient temp hitting near 55c is a bit high with a waterblock mounted. Mine when overclocked stays in the mid to low 30c range. You may want to remove and reinstall the block and see if that helps..


Mine was also around 50-55°C before I changed from the EK thermal pads to Thermal Grizzly pads, with Kryonaut as paste. Now I'm in the low/mid 30s. My clocks didn't get better, though, just a bit more stable.

But his hotspot is quite high.


----------



## Naeem

Can any of you post a picture of the cooler used on the Vega LC? I want to see how it looks from the side that touches the GPU core, VRM, etc.


----------



## Fatrod

WannaBeOCer said:


> Proof with a firestrike result?


https://www.3dmark.com/fs/16841880

Not a particularly great result but I hadn't OC'd my CPU for that run.

I normally get something around this: https://www.3dmark.com/fs/16861002

I'm going to try and monitor a run with AMD Link and catch it in the act.


----------



## 113802

Naeem said:


> can any of you post picture of cooler used on vega lc i wanna see how it looks from other side that touches the gpu core and vrm etc


Here's how the LC AIO looks




Fatrod said:


> https://www.3dmark.com/fs/16841880
> 
> Not a particularly great result but I hadn't OC'd my CPU for that run.
> 
> I normally get something around this: https://www.3dmark.com/fs/16861002
> 
> I'm going to try and monitor a run with AMD Link and catch it in the act.


As I expected, that's not a sustained 1800MHz overclock. Your card is throttling throughout that entire benchmark. Here is my sustained 1750MHz run: https://www.3dmark.com/fs/16277993


You'll probably get a higher score by undervolting it to 1200mV at +50% with 1105MHz HBM2.


----------



## Fatrod

Here is another run: https://www.3dmark.com/3dm/29818339

However looking in Link I didn't see it go above 1750 once, so I'm not convinced the Firestrike reading is correct.

I also didn't see the power consumption go above 370w, so I'm not convinced the 100% power target does anything either.

Did you have to increase P6 state to get a sustained 1750?


----------



## 113802

Fatrod said:


> Here is another run: https://www.3dmark.com/3dm/29818339
> 
> However looking in Link I didn't see it go above 1750 once, so I'm not convinced the Firestrike reading is correct.
> 
> I also didn't see the power consumption go above 370w, so I'm not convinced the 100% power target does anything either.
> 
> Did you have to increase P6 state to get a sustained 1750?


Nope, I didn't touch the P6 state. Getting 1750MHz sustained in Fire Strike required an undervolt down to 1170mV on P7 with the core at 1770MHz. 

I suggest making sure you are on the 264W BIOS, undervolting P7 down to 1200mV with +50% power, and running the HBM2 at 1105MHz at stock for 24/7 use.


----------



## Fatrod

WannaBeOCer said:


> Nope I didn't touch the P6 state. To get 1750Mhz sustained in FireStrike it required an undervolt down to 1170Mv on P7 and the core at 1770Mhz.
> 
> I suggest making sure you are on the 264w bios. Undervolting P7 down to 1200Mv +50% power and running the HBM2 at 1105Mhz using stocking for 24/7 use.


Yeh I've got the LC card so LC BIOS.

I assumed that with the reg mod I could go over the power limit and it wouldn't throttle. Doesn't seem to work though.


----------



## Fatrod

Here's another one with 1850: https://www.3dmark.com/3dm/29822108

That was with the settings below.

Again, I didn't see it go anywhere near that with AMD Link. Maybe it's a reporting issue with the latest driver.


----------



## Ne01 OnnA

Here is my old 18.9.x run (I don't have 3DMark installed now; done with testing for a while, gaming )

The GPU hits ~1757MHz.
That was my first 28k score!

Pretty easy to accomplish on the new drivers, IMO.
Set GPU P7 at 1767-1790 and HBM2 to 1175-1200 and you can have ~29k on a recent driver.

==


----------



## Fatrod

Ne01 OnnA said:


> Here is my Old 18.9.x run (don't have 3Dmark installed now, done with the testing for a while, Gaming )
> 
> GPU hits ~1757MHz
> That was my first 28k score !
> 
> Pretty easy to acomplish on new drivers IMO
> Set GPU P7 at 1767-1790 and HBM2 to 1175-1200 and you can have ~29k on recent driver.
> 
> ==


Instant crash with those settings


----------



## java4ever

Hey,

I've got some serious trouble with my Vega FE (Air).

When I start to play Dreadnought, the card overheats and shuts down the system. I have to wait for a few minutes, switch off the power, switch it back on and then I can start the computer again.
The GPU was quite hot when I touched it.
According to the AMD driver, Temp was 77°C, power consumption 220W but fan speed only 2000RPM(?!) right before shutdown.

I will repaste the card with Thermal Grizzly Kryonaut on Tuesday...

I'm pretty sure it's not the PSU, because it's a Seasonic PRIME Titanium Modular 1000W and I'm using two cables for the GPU.
Rest of the setup: AMD Ryzen Threadripper 1950X @ ASUS Zenith Extreme, 4x 16GByte DDR4-3200 CL14

Do you have any further ideas on this?

//EDIT:
Card is not OCed or anything.


----------



## Drake87

java4ever said:


> Hey,
> 
> I've got some serious trouble with my Vega FE (Air).
> 
> When I start to play Dreadnought, the card overheats and shuts down the system. I have to wait for a few minutes, switch off the power, switch it back on and then I can start the computer again.
> The GPU was quite hot when I touched it.
> According to the AMD driver, Temp was 77°C, power consumption 220W but fan speed only 2000RPM(?!) right before shutdown.
> 
> I will repaste the card with Thermal Grizzly Kryonaut on Tuesday...
> 
> I'm pretty sure it's not the PSU, because it's a Seasonic PRIME Titanium Modular 1000W and I'm using two cables for the GPU.
> Rest of the setup: AMD Ryzen Threadripper 1950X @ ASUS Zenith Extreme, 4x 16GByte DDR4-3200 CL14
> 
> Do you have any further ideas on this?
> 
> //EDIT:
> Card is not OCed or anything.


How hot is the memory getting? I know with my vega 64 it can get quite toasty.


----------



## java4ever

At least 88°C.

With the Enterprise 18.Q3 driver, something is definitely broken beyond repair.
FurMark stress test, changing the fan setting from "Manual" to "Automatic": fan RPM drops to 700, instantly!
Even though HBM = 88°C and hotspot = 85°C.


----------



## ZealotKi11er

I am trying to undervolt my Vega 64 Air. When I change P6 and P7, the card uses the other P-states to stay within TDP. How can I fix this?


----------



## Worldwin

ZealotKi11er said:


> I am trying undervolt my vega 64 air. When i change P6 and P7 the card uses the other P states to stay within TDP. How can I fix this.


Increase the power limit? It seems like you are undervolting from the driver instead of using OverdriveNTool. You'll have fewer problems with the tool, as you can lower P0-P5 as well.
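
One reason touching only P6/P7 misbehaves: the P-state ladder generally needs to stay monotonic, with each higher state at a clock and voltage at least as high as the state below it, which is why tools that edit the whole P0-P7 range give fewer surprises. A small sketch of that sanity check; all clock/voltage numbers are made-up examples, not recommended settings:

```python
# Sanity-check an edited GPU core P-state ladder before applying it:
# clocks and voltages should be non-decreasing from P0 up to P7.
# The example values below are hypothetical, not tuned settings.
pstates = [  # (MHz, mV) for P0..P7
    (852, 800), (991, 850), (1138, 900), (1269, 950),
    (1312, 975), (1474, 1000), (1538, 1025), (1590, 1050),
]

def ladder_ok(states):
    """True if every adjacent pair of P-states is non-decreasing."""
    return all(
        lo_mhz <= hi_mhz and lo_mv <= hi_mv
        for (lo_mhz, lo_mv), (hi_mhz, hi_mv) in zip(states, states[1:])
    )

print(ladder_ok(pstates))                      # True
print(ladder_ok([(1600, 1000), (1500, 950)]))  # False: P1 below P0
```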


----------



## ZealotKi11er

Worldwin said:


> Increase the power limit? Seems like you are undervolting from driver instead of using OverdriveNTool. You'll have less problems with tool as you can lower P0-P5.


Actually, I used Radeon Settings and all is good. OverdriveNTool is where I was having problems. Now it's doing ~15xx MHz at ~0.91V and ~155W.


----------



## ht_addict

Thanks to the power table settings from OnnA that I found over on Guru3D, I made it to 45th on the Time Spy results for 1950X / Vega 64 x2, and 59th for just Vega 64 x2. I did bump up the HBM clock and set the power target to +10%.


----------



## Ne01 OnnA

ZealotKi11er said:


> I am trying undervolt my vega 64 air. When i change P6 and P7 the card uses the other P states to stay within TDP. How can I fix this.


Here, You need to Edit all P-states

-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1
-> https://www.overclock.net/attachments/49572

==


----------



## Ne01 OnnA

@ht_addict

Grats Bratan' -> Keep this score rockin'


----------



## miklkit

Can a BIOS go bad without being touched? All I have done BIOS-wise is move the switch from the right-hand (front of the case) position to the left-hand position (rear of the case), but the left-hand BIOS appears to be corrupted. Wattman will not hold its settings, Trixx shows its speed as anywhere from 1100MHz to 1680MHz, and Afterburner says it is a Generic Microsoft Video Device. 

After setting the fans to hit 100% @ 60°C, I got the clocks up to 1680MHz, +15 power, and -13 voltage in Trixx. This seemed stable and ran well until yesterday, when it started black-screening in games. Temperatures are not a problem, as they almost never go over 60°C and are usually around 55°C. 

After moving the switch back to the right-hand position, Afterburner says it is a Vega again. I have not gamed yet, so I don't know if it will continue to black-screen. RMA time?


----------



## Offler

miklkit said:


> Can a bios go bad without being touched? All I have done bios wise is move the switch from the right hand (front of the case) position to the left hand position (rear of the case) but the left hand bios appears to be corrupted. Wattman will not hold its settings, Trixx shows its speed as anywhere from 1100mhz to 1680mhz, and Afterburner says it is a Generic Microsoft Video Device.
> 
> 
> 
> After setting the fans to hit 100% @ 60C I got the clocks up to 1680mhz, +15 power, and -13voltage in Trixx. This seemed stable and ran well until yesterday when it started black screening in games. Temperatures are not a problem as they almost never go over 60C and are usually around 55C.
> 
> 
> 
> After moving the switch to the right hand position Afterburner says it is a Vega again. Have not gamed yet so don't know if it will continue to black screen. RMA time?


It's unlikely, yet possible, that the secondary BIOS was not loaded at the factory at all, or that it was erased by some action. Check it with GPU-Z and try to export it.


----------



## miklkit

I've been gaming on the #1 bios for hours with no black screens. Here is GPU-Z for bios #1 and bios #2.


EDIT: Possible false alarm? I've been gaming on #2 bios with no black screens. Could the switch have been giving a bad connection?
EDIT2: Nope. It took a lot longer but it just black screened again 5 hours later.
EDIT3: Both bios black screen.


----------



## greg1184

Anyone looking to get a RX Vega 64 check out this one:


https://www.newegg.com/Product/Product.aspx?Item=N82E16814202326


----------



## Bartouille

That's the one I got. Just started playing around with it, and it seems to do 1752MHz at 1200mV (it hovers around 1700MHz in Heaven). Is that any good?


----------



## Dhoulmagus

Bartouille said:


> That's the one I got. Just start playing around with it and it seems to do 1752MHz at 1200mV (hovers around 1700MHz in Heaven). Is that any good?


What are your settings currently? I've been having a joke of a time getting my PowerColor reference V64 past 1665MHz at stock volts.


----------



## Offler

miklkit said:


> I've been gaming on the #1 bios for hours with no black screens. Here is GPU-Z for bios #1 and bios #2.
> 
> 
> EDIT: Possible false alarm? I've been gaming on #2 bios with no black screens. Could the switch have been giving a bad connection?
> EDIT2: Nope. It took a lot longer but it just black screened again 5 hours later.
> EDIT3: Both bios black screen.


To see which one is currently in use, you have to power down the PC.


----------



## BeetleatWar1977

Serious_Don said:


> What are your settings currently? I've been having a joke of a time getting my powercolor reference v64 to get past 1665 at stock volts.


My V56 goes up to [email protected] voltages... it needs 1020mV for 1620MHz...


----------



## Bartouille

Serious_Don said:


> What are your settings currently? I've been having a joke of a time getting my powercolor reference v64 to get past 1665 at stock volts.


I was using +50% PL and 100% fan, with memory at stock and 1752MHz for P6/P7 at 1200mV. Unfortunately, I don't think this card is working at its full potential under Windows 7, and that might be why I'm getting these clocks. I'm only getting a 22k graphics score in Fire Strike and 4300 points in Superposition 1080p Extreme.


----------



## ozlay

Powercolor Red dragon vega56 is US $329.99 at newegg with promo code: EMCXEEPS2


----------



## miklkit

Offler said:


> to say which one is currently used - you have t power down the PC.



I powered down the puter before messing with that tiny switch, which I cannot see without a magnifying glass. The two BIOSes do show slightly different numbers, but both of them black-screen.

It's all moot now, as I have pulled the Vega 64, put the Fury back in, and contacted Sapphire. No more black screens with the Fury.

Question: it is past the return window for Newegg, but Sapphire says to return it to them. This doesn't sound good.

EDIT: I was wrong. The Seasonic PSU is dying: the fan is broken and it overheats.


----------



## orlfman

miklkit said:


> I powered down the puter before messing with that tiny switch, which I can not see without a magnifying glass. They do show slightly different numbers but both of them black screen.
> 
> 
> 
> It's all moot now as I have pulled the Vega64 and put the Fury back in and contacted Sapphire. No more black screens with the Fury.
> 
> 
> Question: It is past the return time for Newegg but Sapphire says to return it to them. This doesn't sound good.


Did you by chance wipe the old Fury drivers with DDU before installing the Vega 64? Generally, black screens are due to power supply issues, voltage issues, or a bad monitor cable. Though seeing how you described issues with the secondary BIOS on your Vega 64, something could actually be wrong with it from the factory. AMD themselves support dual BIOS on Vega: reference Vega ships with two BIOSes, a "quiet" version and a "normal" version, which is the default. The quiet BIOS reduces the power limit to use less power, lower clocks because of it, and therefore less fan speed, while the normal one runs normal power limits. Seeing how you have the Nitro+ 64, which has modified BIOSes from Sapphire, something could have gone wrong with the flashing, or (not saying it's you) user error, especially if you accidentally hit the switch without realizing it with the power on.

On the notion of returning it to the retailer: Sapphire shouldn't be saying that if they knew Newegg's return window is closed. And even IF you still had the 30-day window with Newegg, Sapphire can still accept your warranty claim regardless. More than likely they're under the impression, for whatever reason, that you still have the window, and in that case it would be quicker and easier to just get a new one from Newegg. Also, depending on how long it's been past the window, you could try talking to Newegg about it.


----------



## miklkit

Yes I did wipe the drivers as they are not compatible. Many times. 



It black screened with both BIOSes, and it also black screened with the Fury, which led me to start looking upstream. After giving the puter a thorough beating with a DataVac, the PSU started making a loud rhythmic clunking sound and then black screened while idling on the desktop. It's 4 years old, so it looks to be time for a new one.


I bought the Vega64 2 months ago so it is past the return window. The Sapphire site says that it should be returned to the retailer and not them, but they have not replied to my last message to them. At this point I'm going to replace the psu and see how it goes.


----------



## miklkit

I ordered a Seasonic 850 watt Gold PSU, and to keep things running until it arrives I installed a Seasonic 620 watt Bronze PSU and a new GPU cable that were lying around. The highest load from the wall is 555 watts and there are no more black screens, so it should be OK for a few days.


----------



## 113802

Here's my Vega 64 24/7 settings and my FireStrike result. https://www.3dmark.com/3dm/30041437?


----------



## Doubleyoupee

WannaBeOCer said:


> The issue is that the card won't run at 1800+ even with the 50% power slider because it's already using 365w and ends up throttling. I set my RX Vega 64 LC which has an EK Waterblock to 1770Mhz/1105Mhz with 1190v and I'm sitting at comfy 280-320w @ 1750Mhz actually in games.
> 
> https://www.youtube.com/watch?v=7I8SxpajvLo&t=27s


 Hi,

Can you do the same in Witcher 3 and make a video?

That game must be very easy on the GPU, because my air-cooled Vega uses more power than that at lower voltage and lower frequency (which can't be right).
E.g. a 1640MHz actual clock @ 1.115v in Far Cry 5 is ~275-300W. In Witcher 3 it's even 330W. This is with 1697/1050MHz @ 1.115v.


----------



## Doubleyoupee

The reason I made an account here, though, is the following: I have a Sapphire Nitro+ Vega 64; it's a great card and I'm pretty happy with it.
Firestrike after some tweaking: https://www.3dmark.com/fs/16428045
I used both OverdriveNtool and wattman. Ended up with wattman because OverdriveNtool wasn't applying my profiles properly at boot.

*I spent quite some time tweaking and learned a lot, but some things are still so buggy/broken/unclear to me:*

1) I can't get it to reach target clocks, even at super low clocks. I know Vega looks at a lot of factors; temp, voltage, load, fan speed, power usage etc. and I can imagine it throttling at 1750mhz.
However, even when I set 1500mhz as p7, it doesn't reach it. Even when it has a LOT of reserves in temperature, fan speed, power etc.. It doesn't matter if I use 1.1v or 1.2v. Why is this?

2) What is min/max and max in wattman? What does it even do?

3) Changing p6 does NOTHING for me. I can literally set it to 1500mhz or 1700mhz, and it doesn't affect my actual clock speed by 1mhz. 

4) I often see people saying increasing voltage increases clocks. But increasing voltage sometimes decreases clocks for me too, even though I'm no where near power limit. (e.g. at 1500mhz p7). 

How do you guys deal with all of this?

Example: 1700/1050mhz @ 1.1v
Power +50%
Fan max 2300rpm, target temp 70c. 

In Far Cry 5 it reaches ~1630MHz. This is actually what I'm aiming at (pretty much the advertised boost speed).
However, in Witcher 2 cutscenes it reaches 1670MHz and becomes unstable.
In Witcher 3, it only reaches 1600MHz, with only 1900rpm fan.

Maybe it's reaching the power limit in Witcher 3 (~330W)? I could lower the voltage with a 1600MHz actual clock, but then it will crash in other games.
A separate profile for each game?
A fixed frequency would be so much easier.


----------



## 113802

Doubleyoupee said:


> Hi,
> 
> Can you do the same in Witcher 3 and make a video?
> 
> That game must be very easy on the GPU, because my Vega air uses more power than that on less voltage and less frequency (which can't be right).
> E.g. 1640mhz actual clock @ 1.115v in far cry 5 is ~275-300W. In Witcher 3 it's even 330W. This is with 1697/1050mhz @1.115v.


Shockingly, I don't have Witcher 3. I have different profiles for games because of the annoying overboost you mentioned. It's ridiculous and frustrating that I can't force the card to just cap at the max boost speed.


----------



## Doubleyoupee

WannaBeOCer said:


> Shockingly I don't have Witcher 3, I have different profiles for games since the annoying boost like you mentioned overboost. It's ridiculous and frustrating that I can't force the card just to cap at the max boost speed.


I ended up at 1697/1050MHz @ 1.115v. I had to go to 1.115v instead of 1.1v to become stable at this frequency, even though it would probably be fine at 1.1v without the overboosting. Really annoying.

This was also staying below 300W. Then I tried Witcher 3... 330W (beyond even Time Spy or Firestrike). I could make a separate profile for it to try to get 1640MHz, but the wattage would be even crazier. I can't imagine how much it would throttle in Witcher 3 at stock settings.

Anyway, even separate profiles wouldn't fix it. Like I said, in Witcher 2 it's stable all the time until you reach a cutscene, which is capped fps and very low load. It jumps to 1670MHz or even higher and crashes.


----------



## Maracus

Doubleyoupee said:


> The reason I made an account here though, is the following: I have a Sapphire Nitro+ Vega 64, it's a great card and pretty happy with it.
> Firestrike after some tweaking: https://www.3dmark.com/fs/16428045
> I used both OverdriveNtool and wattman. Ended up with wattman because OverdriveNtool wasn't applying my profiles properly at boot.
> 
> *I spent quite some time tweaking and learned a lot, but some things are still so buggy/broken/unclear to me:*
> 
> 1) I can't get it to reach target clocks, even at super low clocks. I know Vega looks at a lot of factors; temp, voltage, load, fan speed, power usage etc. and I can imagine it throttling at 1750mhz.
> However, even when I set 1500mhz as p7, it doesn't reach it. Even when it has a LOT of reserves in temperature, fan speed, power etc.. It doesn't matter if I use 1.1v or 1.2v. Why is this?
> 
> 2) What is min/max and max in wattman? What does it even do?
> 
> 3) Changing p6 does NOTHING for me. I can literally set it to 1500mhz or 1700mhz, and it doesn't affect my actual clock speed by 1mhz.
> 
> 4) I often see people saying increasing voltage increases clocks. But increasing voltage sometimes decreases clocks for me too, even though I'm no where near power limit. (e.g. at 1500mhz p7).
> 
> How do you guys deal with all of this?
> 
> Example: 1700/1050mhz @ 1.1v
> Power +50%
> Fan max 2300rpm, target temp 70c.
> 
> In far cry 5 it reaches ~1630mhz. This is what I'm actually what I'm aiming at (pretty much advertised boost speed).
> However, in witcher 2 cutscenes, it reaches 1670mhz, and becomes unstable.
> In witcher 3, it only reaches 1600mhz, with only 1900rpm fan.
> 
> Maybe it's reaching power limit in Witcher 3 (~330W)? I could lower voltage with 1600mhz actual clock, but then it will crash in other games.
> Seperate profile for each game?
> Fixed frequency would be so much easier.


Someone else here had this "overshoot" you're talking about and I tried to replicate it on my Strix 56, but I didn't experience the same behavior. What overclocking tool are you using? The only program I use is OverdriveNTool; I did have an issue getting clocks to apply before, but I seem to have fixed it.


----------



## Doubleyoupee

Maracus said:


> Someone else here had this "overshoot" your talking about and I tried to replicate it on my Strix 56 but I didn't experience the same behavior. What overclocking tool are you using? Only program I use is OverdriveNTtool, I did have an issue before getting clocks to apply but seemed to have fixed it.


Wattman. But OverdriveNTool is the same; it uses the same values as Wattman.
It's very hard to detect. I mean, if I only played Far Cry 5 and Witcher 3 or countless other games, I would never have noticed; it happens only in certain scenarios with low load, like rendered cutscenes with fps caps.

This isn't really my main confusion, though. The others are the min/max setting, the P6 state being useless, and clock speed increasing with lower voltage.
E.g. I'm running 1697/1050 @ 1.115v. In Witcher 3 this results in around 1600MHz and 300-330W.
Now if I decrease the voltage to 1.1v, my core clock goes up to around 1620MHz. Why? I know the card can draw 360W; I'm guessing it's power limited at 300-330W anyway?

The algorithms are quite complicated.
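A toy model makes that behavior plausible: if dynamic power is roughly P ≈ k·V²·f, then under a fixed power cap the sustainable frequency rises as the voltage drops. This is only a sketch; the constant k is calibrated from one figure quoted above and all numbers are illustrative, not measured Vega telemetry.

```python
# Rough dynamic-power model: P ~ k * V^2 * f. Under a fixed power cap,
# the sustainable frequency therefore scales with 1 / V^2.
# Illustrative numbers only; k is fitted from one observed point.

def sustained_mhz(power_cap_w, volts, k):
    """Frequency (MHz) that a given power cap allows at a given voltage."""
    return power_cap_w / (k * volts ** 2)

# Calibrate k from ~1600 MHz observed at 1.115 V near a ~330 W cap.
k = 330 / (1.115 ** 2 * 1600)

print(round(sustained_mhz(330, 1.115, k)))  # 1600
print(round(sustained_mhz(330, 1.100, k)))  # 1644 -- lower V, higher clock
```

The model overstates the gain (the firmware juggles more inputs than voltage and frequency), but it points the same direction as the observed 1600 to ~1620MHz jump, consistent with the card being power limited rather than voltage limited in that title.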


----------



## vmlinuzz

My AirBoost Vega 56 x2 Firestrike result and Firestrike Ultra result under custom water, with a 150% power limit on both; both have crap Hynix HBM2. Also, anyone know what causes the low combined score? My 1800X is at 4GHz at 1.42v, so I'm out of CPU OC headroom.

https://www.3dmark.com/fs/16956497
https://www.3dmark.com/fs/16956265

The lower-clocking card has a molded die, and its HBM2 generally runs 4°C hotter than on the unmolded, better-clocking die. Interesting results; it could be the paste application, though both have very good coverage across the die and cold plate. The hot spots are also both the same.


----------



## Doubleyoupee

Also on the guru3d forums I see people talking about:

1) 18.10.x drivers improving Vega's algorithms regarding clock speed/voltage etc.

2) HBM2 timings "loosening" above 65c.


Is there any source to this?


----------



## Fatrod

Doubleyoupee said:


> Also on the guru3d forums I see people talking about:
> 
> 1) 18.10.x drivers improving Vega's algorithms regarding clock speed/voltage etc.
> 
> 2) HBM2 timings "loosening" above 65c.
> 
> 
> Is there any source to this?


Same thing has been mentioned on Reddit.

I tried to run auto voltage to test but my HBM overclock crashes instantly with auto voltage.


----------



## SpecChum

Doubleyoupee said:


> Also on the guru3d forums I see people talking about:
> 
> 1) 18.10.x drivers improving Vega's algorithms regarding clock speed/voltage etc.
> 
> 2) HBM2 timings "loosening" above 65c.
> 
> 
> Is there any source to this?


I've seen both in action here on my reference Vega 64.

It does seem to alter the voltage now so the clocks stay more consistently high; I get around 1600MHz at stock now, whereas it was about 1500 before, still at 330W (50% PL).

Although, even though I'm on a custom water loop, I usually run 915mV for 1483MHz at 180W.

In regards to 2, that's always happened. Before I watercooled, I'd lose 3 to 5 fps from 90-odd the instant it hit 65°C. It doesn't get anywhere near 65°C at all now though, so I can't say whether this has changed in recent drivers too.


----------



## seansplayin

So I have watercooled my Sapphire RX Vega 64 Nitro+ card and I want to flash the liquid BIOS, but since this card is not a reference design I get a subsystem ID mismatch error. On earlier AMD cards I would just use the -F flag and it usually worked just fine, but since this card is so different I'm afraid to try. Is my reluctance merited?
https://www.newegg.com/Product/Product.aspx?Item=N82E16814202321


----------



## SpecChum

seansplayin said:


> So I have water cooled my Sapphire Rx Vega 64 Nitro+ card, I want to flash the liquid bios but since this card is not a referenced design I get an error subsystem ID mismatch. In earlier AMD's I would just use the -F flag and it usually worked just fine but where this card is so different I'm afraid to try, is my reluctance merited?
> https://www.newegg.com/Product/Product.aspx?Item=N82E16814202321


I honestly have no idea, but I don't think there's anything in the LC BIOS that you can't set manually with OverdriveNTool.


----------



## Doubleyoupee

SpecChum said:


> I've seen both in action here on my reference Vega 64.
> 
> It does seems to alter the voltage now so the clocks stay more consistently higher, I get around 1600Mhz at stock now whereas it was about 1500 before still at 330W (50% PL).
> 
> Although, even though I'm custom watercooled I usually run 915mV for 1483Mhz at 180W.
> 
> in regards to 2, that's always happened, before I watercooled I'd lose 3 to 5 fps from 90 odd the instant it hit 65C. Doesn't reach anywhere near 65C at all now tho, so i can't say if this has changed in recent drivers too.


So I actually gave it a try myself in both Far Cry 5 and Witcher 3.
I restarted beforehand to account for the fast-startup bug (it also happens after sleeping my PC).
Anyway, I consistently saw +10MHz in both games with the same profile. I came from 18.8.2.

@Fatrod
Yeah, I saw the same. They said the auto-voltage was improved, but I can't see it in my case.
Turning on auto voltage increases the voltage to 1.1v+ from ~1.035v (after vdroop) on my undervolt, and this causes power usage to increase to 350W and therefore decreases my clocks by almost 50MHz.

I don't know where the "HBM2 memory timings" story comes from, though. Any insight would be appreciated.


----------



## Doubleyoupee

SpecChum said:


> I honestly have no idea, but I don't think there's anything on the LC BIOS that you can't set manually with OverdriveNTool



1.25V for the core?


----------



## SpecChum

Doubleyoupee said:


> 1.25V for the core?


You can set that in the app, not that I think you'll need it, mine can do LC clocks at 1180mV.


----------



## miklkit

I'm trying to dial in my Sapphire Nitro Vega 64 and would like some advice on voltages. The clocks are set to 1680 and 950 and there are no plans on changing them. 



I set an aggressive fan profile that has them hitting 100% @ 60C and temps are always under that. The hot spot hits 72-76C. 



I'm using Trixx and have the power limit at 20%. Reading in this thread suggests I should not go further. 



The GPU voltage is getting progressively lowered and is currently at -30mv. Assuming stock voltage is 1200mv, this should be 1170mv. How much lower can it go?



According to my UPS the highest current draw from the wall is around 550 watts for the system. When it is doing this frame rates are great.


I am having issues in 2 games. In one, when the fps drops into the 20-30fps range, the UPS shows 260-280 watts being pulled from the wall. The other game drops graphics into the DX7 range from DX12. For some reason, dropping the volts helps with the minimums. So how far can I go?


----------



## Maracus

miklkit said:


> I'm trying to dial in my Sapphire Nitro Vega 64 and would like some advice on voltages. The clocks are set to 1680 and 950 and there are no plans on changing them.
> 
> 
> 
> I set an aggressive fan profile that has them hitting 100% @ 60C and temps are always under that. The hot spot hits 72-76C.
> 
> 
> 
> I'm using Trixx and have the power limit at 20%. Reading in this thread suggests I should not go further.
> 
> 
> 
> The gpu voltage is getting progressively lowered and is currently ay -30mv. Assuming stock voltage is 1200 this should be 1170mv. How much lower can it go?
> 
> 
> 
> According to my UPS the highest current draw from the wall is around 550 watts for the system. When it is doing this frame rates are great.
> 
> 
> I am having issues in 2 games. In one when the fps drops into the 20-30fps range the UPS shows 260-280 watts being pulled from the wall. The other game drops graphics into the DX7 range from DX12. For some reason dropping the volts helps with the minimums. So how far can I go?


Just lower it by 10mv at a time until it hangs; some people have gone as low as 995mv to achieve similar clocks.
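That step-down routine can be written out as a loop, just to make the procedure explicit. `is_stable` here is a hypothetical placeholder for whatever stress test you trust (a benchmark loop, an hour of gaming); nothing in this sketch actually talks to the GPU or Wattman.

```python
# Walk the core voltage down in fixed steps until the stress test fails,
# then keep the last value that passed. Purely a procedural sketch.

def find_lowest_stable_mv(start_mv, step_mv, is_stable, floor_mv=900):
    """Return the lowest voltage (mV) that still passes is_stable()."""
    mv = start_mv
    while mv - step_mv >= floor_mv and is_stable(mv - step_mv):
        mv -= step_mv
    return mv

# Toy run: pretend the card hangs below 1000 mV.
print(find_lowest_stable_mv(1200, 10, lambda mv: mv >= 1000))  # 1000
```

In practice you would re-test the final value with a longer session, since a setting that survives a benchmark can still crash hours into a game, as several posts in this thread attest.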


----------



## miklkit

Thanks. It's currently at +25% on power and -70mv (1130mv) and running well. The main goal is to improve minimums. Oh, just installed and tested the 18.11.1 drivers and they make a big difference.


----------



## Doubleyoupee

miklkit said:


> I'm trying to dial in my Sapphire Nitro Vega 64 and would like some advice on voltages. The clocks are set to 1680 and 950 and there are no plans on changing them.
> 
> 
> 
> I set an aggressive fan profile that has them hitting 100% @ 60C and temps are always under that. The hot spot hits 72-76C.
> 
> 
> 
> I'm using Trixx and have the power limit at 20%. Reading in this thread suggests I should not go further.
> 
> 
> 
> The gpu voltage is getting progressively lowered and is currently ay -30mv. Assuming stock voltage is 1200 this should be 1170mv. How much lower can it go?
> 
> 
> 
> According to my UPS the highest current draw from the wall is around 550 watts for the system. When it is doing this frame rates are great.
> 
> 
> I am having issues in 2 games. In one when the fps drops into the 20-30fps range the UPS shows 260-280 watts being pulled from the wall. The other game drops graphics into the DX7 range from DX12. For some reason dropping the volts helps with the minimums. So how far can I go?


First of all, put your HBM2 at 1050MHz. That's basically free performance without any noticeable power or heat increase.

Second, for my Nitro the sweet spot is around 1100mv. It reaches around 1600-1670MHz in-game depending on the game, with much lower power usage than stock.
If your Nitro is similar to mine, 1680MHz P7 should be stable at 1100mv.


----------



## miklkit

1050? There is no way to adjust memory voltage in Trixx. Is that ok?


At -70 (1130mv) it is running cooler and better, so it will be going lower later today. Methinks I tried 1100 when I was still struggling with Wattman and got artifacts.


The power limit is now at 25% and no difference was noticed. Is raising it a good idea on air?


----------



## Doubleyoupee

miklkit said:


> 1050? There is no way to adjust memory voltage in Trixx. Is that ok?
> 
> 
> At -70 (1130) it is running cooler and better, so it will be going lower later today. Methinks I tried 1100 when I was still struggling with Wattman and it got artifacts.
> 
> 
> The power limit is now at 25% and no difference was noticed. Is raising it a good idea on air?


You cannot change memory voltage on Vega; it is fixed at 1.35v. The "memory voltage" setting in Wattman is something else; most say it's the minimum chipset voltage.
I haven't seen many Vegas get artifacts from the core; mostly they come from an HBM OC. On an unstable core it just crashes.

The sweet spot for my Vega Nitro 64 is 1115mv.


----------



## Maracus

miklkit said:


> 1050? There is no way to adjust memory voltage in Trixx. Is that ok?
> 
> 
> At -70 (1130) it is running cooler and better, so it will be going lower later today. Methinks I tried 1100 when I was still struggling with Wattman and it got artifacts.
> 
> 
> The power limit is now at 25% and no difference was noticed. Is raising it a good idea on air?


Start by finding the highest core clock and lowest voltage, then look at how much headroom you have left for a memory overclock. I have crappy Hynix memory on my Vega 56 which doesn't overclock well at all (880MHz); generally for gaming I just leave it at stock to keep the overall wattage and extra heat down.


----------



## miklkit

It is now at 1680 and 1050, -90 (1100mv), and it seems stable so far. Temps are good so far too, but I have not played any really stressful games yet. More testing needed.


----------



## ZealotKi11er

So I am trying to OC a Vega 64 Liquid. I just used Turbo Mode and I get 310-330W; stock is 264W. My problem is that I can't get the card to use more power. I set the fan to 100% and temps dropped 10°C, from 65°C to 55°C, and still no more power draw. In theory it should pull ~400W.

http://gpuz.techpowerup.com/18/11/11/s9f.png


At -100mV, it just drops power.

http://gpuz.techpowerup.com/18/11/11/9sa.png


----------



## 113802

ZealotKi11er said:


> So I am trying to OC Vega 64 Liquid and I just used Turbo Mode and I get 310-330W. Stock is 264W. My problem is that I cant get the card to use more power. I set the fan to 100% and temps dropped 10C from 65C to 55C and still no more power draw. In theory it should pull ~ 400W.
> 
> http://gpuz.techpowerup.com/18/11/11/s9f.png
> 
> 
> -100 mV, It just drops power.
> 
> http://gpuz.techpowerup.com/18/11/11/9sa.png


I'll test it to see if the latest driver changed power usage. When you undervolt does it run at a higher core frequency when gaming/benching?


----------



## ZealotKi11er

WannaBeOCer said:


> I'll test it to see if the latest driver changed power usage. When you undervolt does it run at a higher core frequency when gaming/benching?


Well, you can see the clocks in the images. The clocks do not change. I have the highest DPM set to 1630MHz and can only manage 1585MHz max. Lowering the voltage just brings power down.


----------



## 113802

ZealotKi11er said:


> Well you can see the clocks in the images. The clocks do not change. I have the highest DPM set to 1630MHz and only can manage 1585MHz max. Lowering voltage just bring power down.


Odd that it's not running at 1630MHz or above. It makes sense why it's not pulling near 400w, though, since 400w requires 1750MHz+. At 1680-1700MHz at the stock 1250mv my card pulls 365w. When I changed the highest DPM to 1647, I cap at 230w and it runs at 1642MHz all the time.

(That's when playing Destiny 2.)

I think I may rock this from now on. It runs at 1616-1620MHz all the time and only pulls 180-220w max, below the 265w limit of the UEFI, so I don't need to increase the power target.


----------



## Doubleyoupee

ZealotKi11er said:


> Well you can see the clocks in the images. The clocks do not change. I have the highest DPM set to 1630MHz and only can manage 1585MHz max. Lowering voltage just bring power down.


What is your P7? Put it at 1700MHz+; it should definitely go to 1630MHz+.


----------



## Ne01 OnnA

My last run in Destiny 2 (1692MHz @ 1.087v, 1160MHz HBM2 @ 975mV floor), FreeSync 70FPS w/ Chill at 65-70, no drops.
P6 = 1662MHz 1.050v
P7 = 1692MHz 1.087v

1700+MHz needs ~1.094v on my XTX

==
Second pic is my OC setup in OverdriveNTool


----------



## 113802

Ne01 OnnA said:


> My Last Run in Destiny2 (1692MHz @ 1.087v 1160HBM2 @ Floor 975v) FreeSync 70FPS w/Chill at 65-70 no Drops
> P6 = 1662MHz 1.050v
> P7 = 1692MHz 1.087v
> 
> 1700+MHz needs ~1.094mV on my XTX
> 
> ==
> Sec.Pic is my OC setup in OV.tool


Jealous of your HBM2 clock speed! I went up to 1682MHz on P7 with 1050mV. It runs at 1640MHz all the time using 220w max; I am on the 264w UEFI. With your settings my card only runs at 1660MHz and uses 250w.

P7 set to 1682MHz with 1050mV:

https://www.3dmark.com/3dm/30245326?


----------



## dmbrio

*RX VEGA 64 Red Devil*

Hello guys,

I have just purchased my Vega 64 Red Devil and I've been struggling to get past 25k in Firestrike (using the OC or Std BIOS switch; I did not try to flash any BIOSes, nor do I know if that's possible on an AIB card like mine).

My GPU never goes above 1610MHz in the test no matter what I do. I still have not tried OverdriveNTool; currently I'm using only Wattman.

I navigated a bit through the pages of this thread, but I couldn't find any consistent information from people who have the same AIB version as mine (Red Devil 64).

I have managed to UV my card to 1050mv P6 / 1100mv P7, but increasing the core clock above stock (1632MHz) has been a complete disappointment.

Lowering the core voltage further results in a crash. I was able to add +20MHz on P6 and P7 once, but the result did not improve.

On the HBM2 side I have managed to put it at 1140MHz / 980mv; I feel like I can lower the voltage here a little further.

My best result today: https://www.3dmark.com/fs/17030106

It looks much worse than the results I have been seeing here; people are able to get far better core clocks than I do and far better results in FS.

What can I do here?

Thanks in advance to anybody who is willing to assist.


----------



## 113802

dmbrio said:


> Hello guys,
> 
> I have just purchased my Vega 64 Red Devil and I've been struggling to get past 25k on Firestrike(using OC or Std Bios switch, did not try to flash any bioses nor know if it's possible on a AIB card like mine)
> 
> My GPU never goes above 1610 on the test no matter what I do, I still have not tried to use OverdriveNTool, currently using only Wattman.
> 
> I navigated a little bit through the pages of this thread but I couldn't find any consistent information of people who have the same AIB version of mine(Red Devil 64).
> 
> I have managed to UV my card to 1050 P6 / 1100 P7, but increasing the core clock above the stock(1632mhz) have been a complete disappointment.
> 
> Lowering the core voltage further results on crash, I was able to increase +20mhz on P6 and P7 once but the result did not improve.
> 
> On the HBM2 side I have managed to put it on 1140mhz / 980mv, I feel like I can lower the voltage here a little further.
> 
> My best result today: https://www.3dmark.com/fs/17030106
> 
> It looks way worse than the results I have been seeing here, people are able to get way better core clocks than I do and way better results in FS.
> 
> What can I do here?
> 
> Thanks in advance for anybody who is willing to assist.


Most of us are using the Vega LC with waterblocks, or have flashed our Vega 64s to the LC BIOS. Monitor your temperatures and frequency fluctuations.

For my card (sustained clock / board power / FireStrike score):
1750MHz+ uses 365w+ - 28100
1700-1730MHz uses 280-330w - 27400
1690MHz uses 255w - 26800
1640MHz uses 220w max - 26000
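Dividing those scores by the board power shows where the efficiency knee sits. The scores and wattages are taken from the post (the 1700-1730MHz / 280-330w row is taken at its midpoints); the division is the only thing added.

```python
# FireStrike points per watt at each reported operating point.
points = [  # (sustained MHz, watts, FireStrike score)
    (1750, 365, 28100),
    (1715, 305, 27400),  # midpoints of 1700-1730 MHz / 280-330 W
    (1690, 255, 26800),
    (1640, 220, 26000),
]
for mhz, watts, score in points:
    print(f"{mhz} MHz: {score / watts:.1f} points/W")
# 1750 MHz lands near ~77 points/W vs ~118 points/W at 1640 MHz
```

So the last ~110MHz costs about 66% more power for roughly 8% more score, which is why most people here settle near the 1640-1690MHz range for a 24/7 profile.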


----------



## Ne01 OnnA

^^ As I can see, you have very good clocks/voltage.
1630 at 1.1v is not bad for your silicon (but you're not done yet).

All you can do is trial and error (as we all do), there is no better method. + Fun

Do not exceed 1.125v-1.150v -> best is to find the sweet spot UV/OC etc.

Pro tip: it's always better to have more HBM2 OC than GPU.

Take care and have fun (share your endeavours with us).


----------



## dmbrio

WannaBeOCer said:


> Most of us are using the Vega LC with waterblocks or users who flashed their Vega 64 to the LC. Monitor your temperatures and frequency fluctuations.
> 
> For my card
> 1750Mhz+ sustained uses 365w+ 28100
> 1700-1730mhz sustained uses 280-330w. 27400 FireStrike
> 1690Mhz sustained uses 255w. 26800
> 1640Mhz sustained only uses 220w max. 26000


Do you know if it's possible to flash another BIOS to my card to try things out? I simply cannot add a single MHz on the core.

Is there any difference in results between OverdriveNTool and Wattman?


----------



## dmbrio

Ne01 OnnA said:


> ^^ As i can see you have very good clocks/v
> 1630 at 1.1v is not bad for your silicon (but You're not done yet ).
> 
> All you can do is try/error mode (as all we do) no better method + Fun
> 
> Do not exceed 1.125v-1.150v -> Best is to find sweet spot UV/OC etc.
> 
> Pro TIP: Always is better to have more HBM2 OC than GPU
> 
> Take care and have Fun (share with us Your endeavours)


First of all, thanks for your answer, mate.

I think my HBM is done on frequency; the only possibility is to lower the voltage further.

Do you think it's worth trying another driver? Currently I'm using the latest WHQL, 18.10.2.


----------



## 113802

Ne01 OnnA said:


> ^^ As i can see you have very good clocks/v
> 1630 at 1.1v is not bad for your silicon (but You're not done yet ).
> 
> All you can do is try/error mode (as all we do) no better method + Fun
> 
> Do not exceed 1.125v-1.150v -> Best is to find sweet spot UV/OC etc.
> 
> Pro TIP: Always is better to have more HBM2 OC than GPU
> 
> Take care and have Fun (share with us Your endeavours)


That's incorrect; it's better to have a higher GPU core frequency than memory. When reviewers first got their hands on Vega they weren't undervolting, they were overvolting, causing the cards to throttle. It never hurts to have a high HBM2 overclock, but core frequency is always superior to memory overclocking on any GPU. The question with Vega is whether the extra core frequency is worth the extra power/heat. That depends on the user.

1690/1105 - 26896: https://www.3dmark.com/3dm/30248528?
1640/1105 - 26307: https://www.3dmark.com/3dm/30248322?
1640/1145 - 26488 : https://www.3dmark.com/3dm/30248640?
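The three linked runs let you put a number on that claim: compare score gained per extra MHz of core against score gained per extra MHz of HBM2. The scores are from the post; only the subtraction is added.

```python
# Marginal FireStrike gain per MHz, from the three runs above.
base    = (1640, 1105, 26307)  # (core MHz, HBM2 MHz, score)
core_oc = (1690, 1105, 26896)  # +50 MHz core
hbm_oc  = (1640, 1145, 26488)  # +40 MHz HBM2

core_gain = (core_oc[2] - base[2]) / (core_oc[0] - base[0])
hbm_gain  = (hbm_oc[2] - base[2]) / (hbm_oc[1] - base[1])
print(f"core: {core_gain:.1f} pts/MHz, HBM2: {hbm_gain:.1f} pts/MHz")
# core comes out around 11.8 pts/MHz vs ~4.5 for HBM2 in this benchmark
```

Per MHz the core wins here, though as noted further down the thread, HBM2 MHz come almost free in power and heat, so both are worth taking.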


----------



## dmbrio

WannaBeOCer said:


> That's incorrect, it's better to have a higher GPU core frequency than memory. When reviewers first got their hands on Vega they weren't undervolting they were overvolting causing the cards to throttle. It never hurts to have a high HBM2 overclock but core frequency is always superior to overclocking memory on any GPU. The question about Vega is it worth the extra power/heat for the extra core frequency. That depends on the user.
> 
> 1690/1105 - 26896: https://www.3dmark.com/3dm/30248528?
> 1640/1105 - 26307: https://www.3dmark.com/3dm/30248322?
> 1640/1145 - 26488 : https://www.3dmark.com/3dm/30248640?


Do you think I should try lowering the HBM frequency in order to increase the core clock, or are they independent?

I'm kind of disappointed with my core =/, not sure what to do here.

Is it possible to flash some other BIOS onto my Devil?


----------



## 113802

dmbrio said:


> Do you think I should try lowering the HBM frequency in order to try to increase the core clock or are they independent?
> 
> I'm kinda disappointed with my core =/, not sure what to do here.
> 
> Is it possible to flash some bios into my Devil?


I keep my HBM2 at 1105MHz because that's stable without increasing voltage. Keep your HBM2 where it's currently set and focus on undervolting and sustaining a frequency. Don't compare your results with LC cards like mine.


----------



## dmbrio

WannaBeOCer said:


> I keep my HBM2 at 1105Mhz because that's stable without increasing voltage. Keep your HBM2 to what it's currently set and focus on undervolting and sustaining a frequency. Don't compare your results with LC cards like mine.


It looks stable at 1140MHz / 960mv. My disappointment is not reaching even 1632MHz (the default core clock for my card) in Firestrike; trying to add any MHz on the core results in a crash. Even compared to people here with air-cooled cards, my results look bad.


----------



## Doubleyoupee

Ne01 OnnA said:


> My Last Run in Destiny2 (1692MHz @ 1.087v 1160HBM2 @ Floor 975v) FreeSync 70FPS w/Chill at 65-70 no Drops
> P6 = 1662MHz 1.050v
> P7 = 1692MHz 1.087v
> 
> 1700+MHz needs ~1.094mV on my XTX
> 
> ==
> Sec.Pic is my OC setup in OV.tool


I don't understand your -5% power target. Do you have an edited bios?
I run the exact same config otherwise (~1700MHz, ~1100mV), but if I set a -5% power target my clocks drop to 1450MHz.




dmbrio said:


> Looks stable at 1140mhz / 960mv. My disappointment is not reaching even 1632(default core clock for my card) in Firestrike, trying to increase any MHz on the core is resulting in crash, even comparing to people here with Air Cooled cards, my results look bad.


Yes, at 1100mV you should be able to set more than 1630MHz in Wattman. Mine does about 1690MHz P7 at 1100mV.
That said, it's not impossible to have such a "bad" chip. AMD didn't set the high default voltage for no reason. I'm guessing you need 1150mV to reach higher clocks.




WannaBeOCer said:


> That's incorrect, it's better to have a higher GPU core frequency than memory. When reviewers first got their hands on Vega they weren't undervolting they were overvolting causing the cards to throttle. It never hurts to have a high HBM2 overclock but core frequency is always superior to overclocking memory on any GPU. The question about Vega is it worth the extra power/heat for the extra core frequency. That depends on the user.
> 
> 1690/1105 - 26896: https://www.3dmark.com/3dm/30248528?
> 1640/1105 - 26307: https://www.3dmark.com/3dm/30248322?
> 1640/1145 - 26488 : https://www.3dmark.com/3dm/30248640?


Be aware that Firestrike scales very well with core clocks; that's why Vega scores so well against, e.g., the 1080.
In real gaming scenarios the difference is smaller.
Like you said, both will increase performance; however, HBM2 hardly increases power/heat.
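As a rough illustration of why core overclocks cost so much more power than HBM2 overclocks: dynamic power scales roughly as P ≈ C·f·V². The constant and voltages below are invented for illustration, not measured Vega figures:

```python
# Toy dynamic-power model: P = C * f * V^2.
# C is an invented constant chosen so the numbers land near Vega-like wattages.
def dynamic_power(freq_mhz: float, volts: float, c: float = 0.15) -> float:
    """Approximate dynamic power draw in watts."""
    return c * freq_mhz * volts ** 2

p_stock = dynamic_power(1632, 1.05)  # ~stock P7
p_oc = dynamic_power(1692, 1.10)     # +60 MHz core needs extra voltage

# A ~3.7% clock bump at higher voltage costs ~14% more power,
# while an HBM2 bump avoids most of this V^2 penalty.
print(f"{p_stock:.0f} W -> {p_oc:.0f} W (+{100 * (p_oc / p_stock - 1):.0f}%)")
```

The V² term is why undervolting at a fixed clock pays off so much on these cards.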


----------



## Spacebug

dmbrio said:


> Looks stable at 1140mhz / 960mv. My disappointment is not reaching even 1632(default core clock for my card) in Firestrike, trying to increase any MHz on the core is resulting in crash, even comparing to people here with Air Cooled cards, my results look bad.


If it isn't some weirdness with Vega's power/clock algorithm, my guess is just poor silicon quality.


Now, it probably isn't fair to compare the normal air-cooled cards to the liquid-cooled XTX chips, which seem to be better-binned parts.
But I've seen posts of about 1770MHz core clock maintained through benches at 1150-1200mV vcore for some XTX chips.
My waterblocked, normally air-cooled Vega 64 can do those clocks at 1300mV, after Vdroop.
My card is hard-modded to not droop much, but on a normal Vega I guess that would be equivalent to setting vcore to 1350+mV in Wattman.

My point being: silicon quality can vary wildly, so don't be surprised if you can't reach clocks similar to those others posted here for the same card...


If your card is anything like mine, crashes seem to indicate too-low vcore; try upping vcore and see if you can run higher clocks, if you have enough cooling to handle the higher vcore...
But go for maintained reported clocks in benchmarks as the goal, and treat the numbers in Wattman more as target values that will probably never be met...


----------



## mtrai

dmbrio said:


> Do you think I should try lowering the HBM frequency in order to try to increase the core clock or are they independent?
> 
> I'm kinda disappointed with my core =/, not sure what to do here.
> 
> Is it possible to flash some bios into my Devil?


Yes you can... I flashed my PowerColor Red Devil Vega 64 with the PowerColor LC bios with no issues... other than the PowerColor LC bios being a locked bios... so once you flash it, there is no way to flash back. So choose your bios switch carefully. What I did was flash the OC bios position, the 3rd switch, to the LC bios. I kept the Quiet one, and flashed the middle switch to the PowerColor OC bios.

It does make a difference in how high a core clock the card will maintain. Also, you need to ramp up your fan speed considerably... I have 1500 as min and 3500 as max; yes, it can get loud. You will also need to undervolt the core, as PC sets a silly high voltage of 1250mV on the LC bios... and 1200 on the OC bios.

TechPowerUp has the bios for download.


----------



## Ne01 OnnA

WannaBeOCer said:


> That's incorrect, it's better to have a higher GPU core frequency than memory. When reviewers first got their hands on Vega they weren't undervolting they were overvolting causing the cards to throttle. It never hurts to have a high HBM2 overclock but core frequency is always superior to overclocking memory on any GPU. The question about Vega is it worth the extra power/heat for the extra core frequency. That depends on the user.
> 
> 1690/1105 - 26896: https://www.3dmark.com/3dm/30248528?
> 1640/1105 - 26307: https://www.3dmark.com/3dm/30248322?
> 1640/1145 - 26488 : https://www.3dmark.com/3dm/30248640?


Yes, of course you're right.
What I mean is not to push the GPU too high; it's better to have a good, stable GPU OC plus some nice HBM2.
All in all, 1650 with 1100 will be better than 1690 with 945 (I'm talking about gaming).



Doubleyoupee said:


> I don't understand your -5% power target. Do you have some edited bios?
> I run the exact same config otherwise (~1700mhz, ~1100mv) but if I put -5% power target my clocks drop to 1450mhz.


Only OverdriveNTool PP_states for the Vega XTX.
The BIOS switch is set to the 2nd, 264W (normal) one -> Maybe I have a platinum chip?

Don't forget I'm playing all my games capped to 70FPS (FreeSync).
OC to 75Hz via CRU is possible, though, with a better DP cable.

Here:


----------



## 113802

Ne01 OnnA said:


> Yes, of course You're right.
> Waht i mean is not to push too high on GPU, better is to have Good Stable GPU OC + some nice HBM2
> All in all, 1650 with 1100 will be better than 1690 945 (im talkin about Gaming)
> 
> 
> 
> Only OverdriveN Tool PP_states for Vega XTX
> BIOS is set to 2nd 264tW (normal) -> Maby i have Platinum chip?
> 
> Don't forget im Playing all my Games Max to 70FPS (FreeSync)
> OC by CRU to 75Hz is possible tho with better DP Cable.
> 
> Here:


Post a 3DMark FireStrike result. That golden chip should be able to hit 28500!


----------



## Doubleyoupee

Ne01 OnnA said:


> Yes, of course You're right.
> Waht i mean is not to push too high on GPU, better is to have Good Stable GPU OC + some nice HBM2
> All in all, 1650 with 1100 will be better than 1690 945 (im talkin about Gaming)
> 
> 
> 
> Only OverdriveN Tool PP_states for Vega XTX
> BIOS is set to 2nd 264tW (normal) -> Maby i have Platinum chip?
> 
> Don't forget im Playing all my Games Max to 70FPS (FreeSync)
> OC by CRU to 75Hz is possible tho with better DP Cable.
> 
> Here:


What does the power setting have to do with a platinum chip? There's no way it's running 1650MHz in-game while staying below 250W. Running 1600MHz in-game at 1.1V will result in roughly the same power usage for every Vega 64... of course there are slight differences; no chip is the same. You will need less voltage to reach the same clocks with a platinum chip, though.

Looks like you are simply using +50% just like me (see the powerplay table). I'm guessing the powerplay table overrides the OverdriveNTool value.

But yeah, 1767 at 1.137V is indeed quite the golden chip.


----------



## naifm92

Does anyone experience the same issues??? ALL my games (AC Odyssey, BF1, BFV, GTA V) have stuttering, IDK why, please help. I'm not even maxing everything out and it's 1080p. WHY am I getting such low FPS?


Here is a benchmark of AC Odyssey as an example, along with my PC specs. Note that my average FPS is 51 and I didn't max everything out at 1080p. Same with the other games, and that stuttering, OMG (x_x)

EDIT: I added the file for the benchmark


----------



## 113802

Doubleyoupee said:


> What does power setting have to do with a platinum chip? No way it's running 1650mhz in-game while staying below 250W. Running 1600mhz in-game at 1.1v will result in roughly the same power usage for every vega 64.. of course there are slight differences, no chip is the same. You will need less voltage to reach the same clocks with a platinum chip, though.
> 
> Looks like you are simply using +50% just like me (see powerplaytable). I'm guessing the power play table overrides the overdriventool value.
> 
> But yeah, 1767 at 1.137v is indeed quite the golden chip .


264W is just the LC power UEFI. RX Vega LC cards have a 220W and a 264W UEFI. My card runs at 1690MHz all the time in-game while staying at 250W or below with P6/P7 at 1050/1100mV, and at 220W or below at 1640MHz in-game all the time at 1000/1050mV. When running at those settings with the 264W UEFI I don't have to add +50% power, since it never goes above 264W.

Here's a video of my card running at 1750MHz all the time in-game.
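For reference, the interplay between the UEFI power limit and the Wattman power-target slider works out as a simple multiplier. A small sketch follows; the 220 W/264 W figures come from the posts above, and the throttle check is a simplification of the real firmware behaviour:

```python
def effective_limit(uefi_watts: float, target_pct: float = 0.0) -> float:
    """Power ceiling after applying the Wattman power-target slider."""
    return uefi_watts * (1 + target_pct / 100)

def holds_clock(draw_watts: float, uefi_watts: float, target_pct: float = 0.0) -> bool:
    """True if the load fits under the ceiling, i.e. no power throttling."""
    return draw_watts <= effective_limit(uefi_watts, target_pct)

print(holds_clock(250, 264))     # a ~250 W load fits under the 264 W LC UEFI
print(holds_clock(250, 220))     # the 220 W UEFI would throttle the same load
print(effective_limit(220, 50))  # +50% raises the 220 W ceiling to 330.0 W
```

This is why the 264 W UEFI can hold a ~250 W undervolted load at a 0% target, while the 220 W one needs the slider raised.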


----------



## dmbrio

mtrai said:


> Yes you can...I flashed my PowerColor Red Devil Vega 64 with the Powercolor LC bios with no issues...other then the PowerColor LC bios is a locked bios...so once you flash it...there is no way to flash back. So choose your bios switch carefully. What I did was flash the OC bios to the LC bios ...the 3rd bios. Keep the Quiet one, and flashed the middle switch to the PowerColor OC bios.
> 
> It does make a difference on high the core clock will maintain. Also you need to considerable ramp up your fan speed.. I have 1500 min and 3500 as max, yes it can get loud. You will also need to undervolt the core...as PC sets some silly high voltage of 1250 mv on the LC bios...and 1200 on the OC bios.
> 
> techpowerup has the bios for download.


Were you able to achieve better results on the core with the LC bios?

Are you still running the Devil with the stock cooler or did you block it?


----------



## mtrai

dmbrio said:


> Were you able to achieve better results on the core with the LC bios?
> 
> Are you still running the Devil with the stock cooler or did you block it?


Yes, better results with the LC bios... higher core clocks... However, like I said, the PC LC bios is a locked one, so it can't be overwritten or unlocked.

Yeah, as I have the AXRX VEGA 64 8GBHBM2-**2D2H/OC**, I am still searching for a mod or something that might work with my model.


----------



## dmbrio

mtrai said:


> Yes better results with the LC bios ...higher core clocks....however like I said PC LC bios is a locked one...so it can't be overwritten, nor unlocked.
> 
> Yeah as I have the AXRX VEGA 64 8GBHBM2-**2D2H/OC**. I am still searching for a mod or something that might work with my model.


First of all thanks a lot for your help mate.

There are two LC bioses there in techpowerup, which one should I get?

Also, could you please share any of your results in FS?


----------



## mtrai

dmbrio said:


> First of all thanks a lot for your help mate.
> 
> There are two LC bioses there in techpowerup, which one should I get?
> 
> Also, could you please share any of your results in FS?


Second question first... not easily without digging, as I have a lot of results I have to hide for NDA reasons. My GPU graphics score is 26000+ easily; still being on air does make a difference.

I used this one, 016.001.001.000.008734: https://www.techpowerup.com/vgabios/195013/powercolor-rxvega64-8176-170730 You can try the other, but this was the first one I tried, and since it locked I could not try the other one. The other one may not be locked.


----------



## Ne01 OnnA

WannaBeOCer said:


> Post a 3DMark FireStrike result. That golden chip should be able to hit 28500!


I don't have 3DMark now 
But I have my old score (with weaker RAM, 3090MHz CL14-15-15-14)

==
>28k is no problemo (I'm dreaming of 29k now, but first I need to get my stuff together and buy 3DMark Advanced; it should be on sale soon)
~1760MHz was enough for this score w/ +25% power

==


----------



## Ne01 OnnA

Also here some 2 tests for TimeSpy
One is set to GTX1080 Stock score of 7.5k (Score from Guru3D)
and second is for 8k score
7.55k needs 122tW
8k 162tW

IMO Vega is quite power efficient when Tweaked right.
===


----------



## 113802

Ne01 OnnA said:


> I don't have 3Dmark now
> But i have my old score (with weaker RAM 3090MHz CL14-15-15-14)
> 
> ==
> >28k is a no problemo (i'm dreaming of 29k now, but first i need to get my stuff together & buy 3Dmark Advanced, soon should be on sale)
> ~1760MHz was enough for this score.
> 
> ==


Yeah, 28k is no problem for me either. I was expecting a much higher score due to your HBM2 being 65MHz faster and your higher core clock at a much lower voltage.

https://www.3dmark.com/fs/16277993
https://www.3dmark.com/fs/16287671

https://www.3dmark.com/spy/5013755

Edit: I have a feeling your card is being starved for power since your 1700Mhz+ results aren't at 50% power target.


----------



## dmbrio

mtrai said:


> Second question first...not easily without digging...as I have a lot of results I have to hide for NDA reasons. My GPU Graphics score is 26000+ easy still being on air does make a difference.
> 
> I used this one 016.001.001.000.008734 https://www.techpowerup.com/vgabios/195013/powercolor-rxvega64-8176-170730 You can try the other...but it was the first one I tried and since it locked could not try other one. The other one may not be locked.


Why NDA? 

With my current bios I'm able to get 25500ish


----------



## mtrai

dmbrio said:


> Why NDA?
> 
> With my current bios i'm being able to get 25500ish


NDA; enough said. Other people might suggest why, but I will not and have not violated the many NDAs I work with. Do not take this personally... but with only 8 posts you are questioning a senior member. If you're chasing benchmarks, alrighty... or chasing better performance, alrighty... but do not question me on why I can or cannot reveal what you wanted. I said why I could not, and that should be enough... nor should I have to hunt through my hundreds of results to answer you either. My time has value, just as yours does. Sorry if this is harsh.

So either try it or not.


----------



## dmbrio

mtrai said:


> NDA enough said. Other people might suggest why but I will not and have not violated the many NDAs I work with. Do not take this personally...but only 8 posts and you are questioning a senior member. If your chasing benchmarks alrighty...or chasing better performance alrighty...but do not question me on why I can or cannot reveal what you wanted. I said why I could not and that should be enough...nor should I have to hunt through my hundreds of results to answer you either. My time has value just as yours. Sorry if this is harsh.
> 
> So either try it or not.


Bro, I just asked you a three-letter question: "W-H-Y".

I didn't question your character or anything; it just looks weird to me that you can't share an FS result, never seen that before. And yes, I'm new here, which is probably the reason why.

BTW, I have made some tests and was able to squeeze a few more MHz out of the core in FS, but the result did not improve.

This is my best: https://www.3dmark.com/fs/17041365

1642MHz/1080mV core
1140MHz/960mV HBM2
Power target 25%

I will try what you said with the LC bios and see if my results improve somehow. I'm trying to use ATIWinflash and it's giving me an error message. How can I flash the bios?


----------



## dmbrio

WannaBeOCer said:


> Yeah 28k is no problem for me either. I was expecting a much higher score due to your HBM2 being 65Mhz faster and higher core clock at a much lower voltage.
> 
> https://www.3dmark.com/fs/16277993
> https://www.3dmark.com/fs/16287671
> 
> https://www.3dmark.com/spy/5013755
> 
> Edit: I have a feeling your card is being starved for power since your 1700Mhz+ results aren't at 50% power target.


Hey mate,

I was actually able to flash the LC bios onto my Devil 64 card; the problem now is with the power tables. I can only edit P6 and P7, and I could not find a way to edit the other ones.

Could you shed some light on this?

Thanks.


----------



## 113802

dmbrio said:


> WannaBeOCer said:
> 
> 
> 
> Yeah 28k is no problem for me either. I was expecting a much higher score due to your HBM2 being 65Mhz faster and higher core clock at a much lower voltage.
> 
> https://www.3dmark.com/fs/16277993
> https://www.3dmark.com/fs/16287671
> 
> https://www.3dmark.com/spy/5013755
> 
> Edit: I have a feeling your card is being starved for power since your 1700Mhz+ results aren't at 50% power target.
> 
> 
> 
> Hey mate,
> 
> I was actually able to flash the LC bios into my Devil 64 card, the problem now is with the power tables, I can only edit the P6 and P7 and I could not find a solution to edit the other ones.
> 
> Could you please give me a light?
> 
> Thanks.

I only edit P6/P7 and use Wattman exclusively.


----------



## dmbrio

WannaBeOCer said:


> I only edit P6/P7 and use Wattman exclusively.


Yea, feels bad.

I did not get any improvement using the LC bios; I was able to squeeze a few more MHz but the result did not improve.

Guess I'm going to stick with my normal bios for now, not sure what to do to improve.


----------



## 113802

dmbrio said:


> Yea, feelsbad.
> 
> Did not get any improvement using the LC Bios, I was able to squeeze a few MHz more but the result did not improve.
> 
> Guess I'm going to stick with my normal bios for now, not sure what to do to improve.


Wattman is awesome. Did you flash the 264W LC UEFI or the 220W? What are your temps?


----------



## dmbrio

WannaBeOCer said:


> Wattman is awesome. Did you flash the 265w LC UEFI or the 220w? What are your temps?


The 264W one.

Temps are good; it never goes above 55C in FS.

Is there a way to bypass the write protection? The BIOS I flashed is protected. It's not really an issue, as the results I got with it were pretty much the same as with the OC bios (and I flashed it onto that switch), but I'd like to flash back to the original if possible.


----------



## Doubleyoupee

WannaBeOCer said:


> 264w is just the LC power UEFI. RX Vega LC cards have a 220w and a 264w UEFI. My card runs at 1690Mhz all the time in-game while staying at 250w or below with p6/p7 at 1050/1100mV. 220w and below at 1640Mhz in game all the time at 1000/1050mV. When running at those settings with the 264w UEFI I don't have to add +50% power since it never goes above 264w.
> 
> Here's a video fo my card running at 1750Mhz all the time in game.
> 
> https://www.youtube.com/watch?v=7I8SxpajvLo&t=85s


Yes, please make a video with another game, or Timespy.
Obviously, if you can use lower voltage, you will use less power. My point was that at the same voltage, same frequency, and same load, every Vega should use roughly the same amount.
Load is very important, though. For example, in Unigine Heaven I can get a 1700MHz actual clock at 1.2V (1780MHz P7) but it uses only 310W. I can get the same power usage in The Witcher 3 at only 1.1V/1620MHz in-game.


PS: wouldn't using the powerplay table allow you to do the same as flashing the bios?



Ne01 OnnA said:


> I don't have 3Dmark now
> But i have my old score (with weaker RAM 3090MHz CL14-15-15-14)
> 
> ==
> >28k is a no problemo (i'm dreaming of 29k now, but first i need to get my stuff together & buy 3Dmark Advanced, soon should be on sale)
> ~1760MHz was enough for this score w/+25% POW
> 
> ==


1790MHz at that voltage? Yeah, definitely a golden chip. Mine needs 1.2V for that and runs into the power limit.




WannaBeOCer said:


> I only edit P6/P7 and use Wattman exclusively.


Why are you even changing P6? For me it does absolutely nothing. I can set it to 1050mV/1500MHz or 1200mV/1700MHz and it doesn't change a thing; it just listens to P7.


----------



## mtrai

dmbrio said:


> The 264 one.
> 
> Temps are good, never go above 55c on FS.
> 
> Is there a way to bypass the write protection? The BIOS I flashed is protected. It's not really an issue as the results I've got with it were pretty much the same as the OC Bios and I flashed it onto that switch, but I'd like to flash back to the original if possible.


To edit the full powerplay table, launch OverdriveNTool as admin... then click in the upper-left corner for the menu to load the PP editor... then just follow the instructions found in this thread.



dmbrio said:


> The 264 one.
> 
> Temps are good, never go above 55c on FS.
> 
> Is there a way to bypass the write protection? The BIOS I flashed is protected. It's not really an issue as the results I've got with it were pretty much the same as the OC Bios and I flashed it onto that switch, but I'd like to flash back to the original if possible.


I am sure there is a way... it is either that I can't remember after all these years, or I can't find how. I have tried all the various switches I know of. But it works well for me, so I have not really bothered looking for a solution. (Not at you, but -unlock rom n does not work either, in before someone says that)



Doubleyoupee said:


> Yes, please make a video with anoher game, or timespy.
> Obviously if you can use lower voltage, you will use less power. My point was that at the same voltage and same frequency and same load, every Vega should use roughly the same.
> Load is very important though. For example in Unigine Heaven, I can get 1700mhz actual clock at 1.2v (1780mhz p7) but it uses only 310W. I can get the same power usage in Witcher 3 at only 1.1v/1620mhz in-game.
> 
> 
> Ps wouldn't using powerplaytable allow you to do the same as flash bios?


Not exactly... it appears that the LC bios handles boosting differently than the Air bios. Both are needed to get the full potential. The LC bios will boost higher than the Air one.


----------



## dmbrio

mtrai said:


> To edit the full powerplay table launch overdriventool as admin....then click in the upper left corner for the menu to load the ppeditor ..then just follow the instructions found in this thread.
> 
> 
> 
> I am sure there is way...it is either I can't remember in all these years, or can't find how. I have tried all the various switches I know. But for me it works well so I have not really bothered looking for a solution. (Not at you but -unlock rom n does not work either, in before someone says that)
> 
> 
> 
> Not exactly...it appears that the the LC bios handle boosting differently then Air bios. They both are needed to get the full potential. The LC bios will boost higher then Air.


Mtrai, thanks for your answer once more, my friend.

I could not find the instructions; this is why I asked =/.


----------



## Ne01 OnnA

Here:

OverdriveNTool 0.2.7 beta4 (Hellm's SoftPowerPlayTable key import is possible)
Download:

-> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1

Hellm has created SoftPowerPlayTable key files. The driver gives this registry PowerPlay priority over the firmware PowerPlay. This is the same as on past cards, where we used 'Extend Official Overclocking Limits' in MSI AB. It is a known workaround which does not cause issues for the OS/driver. The registry PowerPlay can be modified like we would the VBIOS one; I will add a guide soon. For now, reference the Linux Vega PP linked above, the info below, and Hellm's posts here and here.
(It's something like a BIOS mod, but injected into Windows)

-> https://www.overclock.net/attachments/49572

Right-click on the top bar, then edit the soft PP table

==
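For anyone curious what the soft PP mod actually does mechanically: it's just a REG_BINARY value the driver reads instead of the VBIOS table. A minimal sketch of wrapping raw table bytes into an importable .reg file follows; the key path and value name (`PP_PhmSoftPowerPlayTable` under the display-class GUID) are the ones commonly cited for this mod, so verify them against Hellm's posts before importing anything:

```python
# Sketch: wrap raw PowerPlay-table bytes into a Windows .reg file.
# Key path and value name are assumptions based on the common Vega
# soft-PP mod -- double-check against Hellm's posts before use.
KEY = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class"
       r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")

def make_reg(pp_table: bytes, value_name: str = "PP_PhmSoftPowerPlayTable") -> str:
    """Return .reg file text holding the table as a REG_BINARY value."""
    hex_body = ",".join(f"{b:02x}" for b in pp_table)
    return ("Windows Registry Editor Version 5.00\n\n"
            f"[{KEY}]\n"
            f'"{value_name}"=hex:{hex_body}\n')

# Dummy bytes only -- a real table comes from your own VBIOS dump:
print(make_reg(b"\xb6\x02\x08"))
```

OverdriveNTool's PP editor generates an equivalent file for you; this just shows why a driver reinstall wipes the mod until you re-import the .reg.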


----------



## dmbrio

Ne01 OnnA said:


> Here:
> 
> OverdriveN Tool 2.0.7 beta4 (Hellm SoftPowerPlayTable key Import is possible)
> Download:
> 
> -> https://www.dropbox.com/s/equ297p3otqu28n/OverdriveNTool 0.2.7beta4.7z?dl=1
> 
> Hellm has created SoftPowerPlayTable key files. This PowerPlay in registry the driver will give priority over firmware PowerPlay. This is the same as on past cards where we used 'Extend Official Overclocking Limits' in MSI AB. It is a known workaround which does not cause issues to OS/driver. The registry PowerPlay can be modified like we would the VBIOS one, I will add guide soon, for now reference Linux VEGA PP linked above, info below and Hellm's post here and here.
> (It's something like BIOS Mod but Injected into WinX)
> 
> -> https://www.overclock.net/attachments/49572
> 
> Click on the Top-bar w/Right then edit PP_soft Table
> 
> ==


Onna, my friend.

I was following your post on Guru3D as well.

All I need to do is run the registry file RX_VEGA_64_Soft_PP.reg, and then I will be able to select the "use existing in the registry" option?

Also, do you know a way to flash over a protected BIOS? The PowerColor LC bios I flashed is protected, like Mtrai's.


----------



## 113802

Doubleyoupee said:


> Yes, please make a video with anoher game, or timespy.
> Obviously if you can use lower voltage, you will use less power. My point was that at the same voltage and same frequency and same load, every Vega should use roughly the same.
> Load is very important though. For example in Unigine Heaven, I can get 1700mhz actual clock at 1.2v (1780mhz p7) but it uses only 310W. I can get the same power usage in Witcher 3 at only 1.1v/1620mhz in-game.
> 
> Why are you even changing P6? for me it does absolutely nothing. I can put it to 1050mv/1500mhz or 1200mv/1700mhz it doesn't change a thing. It just listens to P7.


That's correct, load is important. For example, when I'm sitting at menu screens in Destiny 2 my card spikes up to 1800MHz, causing it to crash. Destiny 2 runs at 1730MHz all the time instead of 1750MHz like in Tomb Raider. I'll have to check Timespy, but FireStrike runs at 1750MHz throughout the benchmark. If I have time today I'll upload another game. I noticed lower clock speeds when P6/P7 have the exact same voltage, so I set P6 below P7.


----------



## Ne01 OnnA

dmbrio said:


> Onna, my friend.
> 
> I was following your post on GURU3D aswell.
> 
> All I need to do is execute the registry RX_VEGA_64_Soft_PP.reg and I will be able to select the "use existing in the registry" option?
> 
> Also, do you know a way to flash over a protected BIOS? The Powercolor LC Bios I flashed is protected like Mtrai's.


Nah, I'm not into flashing on Vega (I was in the Fiji days)
Basically you need to find a compatible BIOS (w/ your hardware), then it will be a success.

Second: 
Open the BIOS/reg file, then import it into OverdriveNTool
Edit to your liking, done
Then test P0-P5 to be sure they're sufficient.
Then move on to the P6-P7 test.

Then save as a .reg so you can import it when you update drivers.


----------



## dmbrio

Ne01 OnnA said:


> Nah, im not into Flashing on Vega (I was in Fiji Time)
> Basically you need to find Compatible BIOS (w/Hardware) then it will be success.
> 
> Second:
> Open Bios/Reg file then import into OverdriveN Tool
> Edit to Your liking, done
> Then Test P0-P5 to be sure it's sufficient.
> Then move to P6-P7 test.
> 
> Then Save as Reg so You can import it when you Update drivers.


I tried the two Vega 64 power tables included in the rar file you sent; no success, FS crashes right at the beginning.

Tried editing the same BIOS I flashed onto the card; same result.


----------



## dmbrio

dmbrio said:


> I have tried some with the two 64 Power Tables included in the rar file you sent, no success, FS crashes right at the beginning.
> 
> Tried editing the same BIOS I flashed into the card, same result.


I was able to get the card running with the edited power table; for some reason, on this BIOS my mV goes a good margin higher than on the OC bios (reaching 1130+), with the power target at 25% (the same as I used for my best result with the OC bios).

If I push the core to anything above my stock bios limit (1632MHz), FS crashes at the beginning (even with the LC bios); on the OC bios that came with the card I was able to get it to 1642MHz, but that was my limit.

Do I have a bad chip that can't go even a little bit above stock on the core? I don't know.

Feels a little bit disappointing.

https://www.3dmark.com/fs/17041365 (OC BIOS)

I was able to get this result with the OC bios (that came with the card) and undervolting (1030 P6 / 1080 P7, 25% target), with a 10MHz increase on the P7 core clock and 20MHz on P6.

My best result with the LC bios was this one: https://www.3dmark.com/fs/17041962 (LC BIOS)

It shows 1750MHz, but it did not actually reach that by any means; I didn't notice higher clocks than with the OC bios during the test. You can see that the result was roughly the same.

Do you have any ideas for what I can try to improve? I feel like I'm running out of options.


----------



## Maracus

dmbrio said:


> I was able to get the card running with the edited Power Table, for some reason in this BIOS my mV goes higher than on the OC Bios by a good margin(reaching 1130+), with the power target at 25%(same as I used for the best result that I have with the OC Bios).
> 
> If I touch the core to anything above my stock bios limit(1632mhz) FS crashes at the beginning(even with the LC Bios), on the OC Bios that came with the card I was able to get it to 1642mhz but that was my limit.
> 
> Do I have a bad chip that can't go not even a little bit above stock on the core? I don't know.
> 
> Feels a little bit disappointing.
> 
> https://www.3dmark.com/fs/17041365 (OC BIOS)
> 
> I was able to get this result with the OC Bios(that came with the card) and undervolting(1030 p6/1080 p7 25% target), with 10mhz increase on the core clock p7 and 20mhz on p6.
> 
> My best result with the LC Bios was this one: https://www.3dmark.com/fs/17041962 (LC BIOS)
> 
> It shows 1750mhz but it did not reach this by any means, I didn't notice higher clocks than the OC Bios during the test. You can see that the result was roughly the same.
> 
> Do you have any ideas of what I can try to improve? I feel like I'm running out of options.


I haven't had time to go back and look at all your posts, but have you tried just a core overclock with HBM at default? I'm guessing if you're crashing it's probably core-related, but you've gotta try something, I guess.


----------



## Spacebug

dmbrio said:


> I was able to get the card running with the edited Power Table, for some reason in this BIOS my mV goes higher than on the OC Bios by a good margin(reaching 1130+), with the power target at 25%(same as I used for the best result that I have with the OC Bios).
> 
> If I touch the core to anything above my stock bios limit(1632mhz) FS crashes at the beginning(even with the LC Bios), on the OC Bios that came with the card I was able to get it to 1642mhz but that was my limit.
> 
> Do I have a bad chip that can't go not even a little bit above stock on the core? I don't know.
> 
> Feels a little bit disappointing.
> 
> https://www.3dmark.com/fs/17041365 (OC BIOS)
> 
> I was able to get this result with the OC Bios(that came with the card) and undervolting(1030 p6/1080 p7 25% target), with 10mhz increase on the core clock p7 and 20mhz on p6.
> 
> My best result with the LC Bios was this one: https://www.3dmark.com/fs/17041962 (LC BIOS)
> 
> It shows 1750mhz but it did not reach this by any means, I didn't notice higher clocks than the OC Bios during the test. You can see that the result was roughly the same.
> 
> Do you have any ideas of what I can try to improve? I feel like I'm running out of options.


Not sure if you have already done this, but go by the reported maintained clocks in benchmarks that report actual clock speeds, and treat the clock values in Wattman only as rough target values.
Vega takes voltage into account when calculating what clocks to run, so an increase or decrease in voltage will give different maintained clocks with the same target clock in Wattman. 
And the same clocks in Wattman can give different maintained clocks depending on which bios you run, so it's hard to compare between bioses; again, go by the reported maintained clocks instead of the numbers in Wattman.
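The idea that Wattman clocks are targets rather than guarantees can be sketched with a toy model: the actual clock is the target capped by a voltage-dependent ceiling. The Fmax curve below is invented purely for illustration; it is not Vega's real DPM logic:

```python
def fmax(volts: float) -> float:
    """Invented voltage-limited clock ceiling in MHz (not real Vega DPM)."""
    return 1100 + 550 * volts

def maintained_clock(wattman_target_mhz: float, volts: float) -> float:
    """Actual clock: the Wattman target, capped by what the voltage supports."""
    return min(wattman_target_mhz, fmax(volts))

print(maintained_clock(1700, 1.00))  # voltage-limited below the 1700 target
print(maintained_clock(1700, 1.15))  # enough voltage, target is met
```

Same 1700MHz target, two different maintained clocks depending on vcore, which matches the behaviour described above.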


----------



## TrixX

First up seen a few posts in the last few pages that made me cringe a little. If you want to find out what Vega64 is doing, then unlock the power and keep the core at 1750Mhz. Find a known HBM value that works and then start using the voltage to find the different bands you card works with. Testing between 1000mv and 1200mv on the core has resulted in 30-60mv bands which it jumps between, so 1065mv is exactly the same as 1060mv for instance.

When I say unlock the core, I run 200% power via the SoftPP Table. So throttling via power is not a limitation and doesn't impact results. I run the 264W 8774 LC BIOS on my Vega64, though from anecdotal evidence there's 2-3 types of card. Some work with the LC BIOS, some don't. Hence at least 2 types of card, mine was actually unstable with the stock BIOS it came with and I considered returning it. However BIOS flash to 8774 and it's been golden ever since.

Messing with power states P0 thru P5 is pointless. Very little to be gained, and performance-wise when benchmarking/gaming you want to lock to P7 on the core anyway, same with locking HBM to P3. Over the last couple of months I've managed to run 1180MHz on HBM, not sure which driver that changed with, but before that I couldn't breach 1100MHz without artifacts. Obviously some free performance isn't a bad thing  I still run HBM at 1100MHz daily though, as some games don't like 1180MHz. Basically anything Unreal Engine has a hissy fit with 1180MHz on my rig.
@dmbrio running the LC BIOS has two effects, first is that the HBM gets 1.35v vs 1.2v for the stock Air 56/64 BIOS. The second is that because it's the LC BIOS it's got a lower peak operating temp and that can't be increased so it drops to 70C from ~80C (can't remember the exact figure, been on LC too long  ).

So far the max I've seen power-draw wise was 412W (according to GPU-Z and Afterburner) with the 200% value used. Obviously that's using a PSU that can support it, and not a daily workload. Mostly I run 200-270W at 100% load with 1750MHz core, 1200mv and 1100MHz HBM with a floor (HBM) voltage of 1100mv. For games that don't benefit from high-powered GPUs I run 1750MHz core, 1060mv and 1100MHz HBM with a 950mv floor. That usually results in well under 200W reported power draw.
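The power saved by dropping from 1200mv to 1060mv lines up with the usual dynamic-power rule of thumb, P scaling with frequency times voltage squared; a rough sketch using the post's own figures as the reference point (ignores leakage/static power and load differences, so treat it as illustrative only):

```python
# Scale a known power figure to a new frequency/voltage operating point,
# using the CMOS dynamic-power approximation P ~ f * V^2.
def scaled_power(watts_ref, f_ref_mhz, v_ref_mv, f_mhz, v_mv):
    return watts_ref * (f_mhz / f_ref_mhz) * (v_mv / v_ref_mv) ** 2

# Reference: ~270 W at 1750 MHz / 1200 mV; estimate the 1060 mV profile:
est = scaled_power(270, 1750, 1200, 1750, 1060)
print(f"~{est:.0f} W")  # ~211 W from voltage alone; lighter games pull less
```

Voltage dominates the savings because it enters squared, which is why the undervolted profile lands so far below the 1200mv one at the same clock.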


----------



## Doubleyoupee

TrixX said:


> @*dmbrio* running the LC BIOS has two effects, first is that the HBM gets 1.35v vs 1.2v for the stock Air 56/64 BIOS. The second is that because it's the LC BIOS it's got a lower peak operating temp and that can't be increased so it drops to 70C from ~80C (can't remember the exact figure, been on LC too long  ).


 Not true. Stock Vega 64 air also has 1.35v HBM. 56 has the lower voltage.




TrixX said:


> Mostly I run 200-270W 100% load with 1750MHz Core, 1200mv and 1100MHz HBM with a floor (HBM) voltage of 1100mv.


200-270W with 1750mhz @ 1.2v? Yeah I doubt it. Maybe when playing minecraft.
Try some Witcher 3 or even Far cry 5 and you will be well above 300W chip power. Probably above 350W even in Witcher 3.


----------



## TrixX

Doubleyoupee said:


> Not true. Stock Vega 64 air also has 1.35v HBM. 56 has the lower voltage.


Well that's good to know, on the earlier BIOS I was under the impression that the 64's had 1.2v on the HBM too. One of the reasons for going to the LC BIOS when they released...



Doubleyoupee said:


> 200-270W with 1750mhz @ 1.2v? Yeah I doubt it. Maybe when playing minecraft.
> Try some Witcher 3 or even Far cry 5 and you will be well above 300W chip power. Probably above 350W even in Witcher 3.


It's set to 1750 on the core, not ACTUAL 1750 on the core. Was sustaining 1695MHz actual in Ark with 1200mv on the Core just now though with an average around 245W according to GPU-Z. Thanks for playing. I manage my core clocks with the voltage rather than the clock speed setting. That way I can manage the power and the core speeds with a single parameter (I run +200% power so it doesn't throttle).


----------



## Doubleyoupee

TrixX said:


> It's set to 1750 on the core, not ACTUAL 1750 on the core. Was sustaining 1695MHz actual in Ark with 1200mv on the Core just now though with an average around 245W according to GPU-Z. Thanks for playing. I manage my core clocks with the voltage rather than the clock speed setting. That way I can manage the power and the core speeds with a single parameter (I run +200% power so it doesn't throttle).


 Yeah, but still. My guess is Ark is not properly utilizing the GPU and therefore it's going to such high clocks while having such low power usage.
If you don't have Witcher 3 or Far Cry 5 try this free demo:
http://tombraider-dox.com/

Set everything to max settings and put it in fullscreen borderless so you can alt-tab to GPU-Z easily. Just walk 10 seconds into the cave. Can you make the same screenshots? 
I'm assuming this "GPU power draw only" is the same as HWiNFO64's GPU chip power btw.


----------



## THUMPer1

I have an MSI Vega 64 LC. Are all Vega LC BIOS the same?


----------



## TrixX

Doubleyoupee said:


> Yeah, but still. My guess is Ark is not properly utilizing the GPU and therefore it's going to such high clocks while having such low power usage.
> If you don't have Witcher 3 or Far Cry 5 try this free demo:
> http://tombraider-dox.com/
> 
> Set everything to max settings and put it to fullscreen borderless so you can alt-tab to GPUz easily. Just walk 10sec into the cave. Can you make the same screenshots?
> I'm assuming this "GPU power draw only" is the same as HWInfo64's GPU chip power btw..


Ok bored of you already. Those were live from my gameplay earlier, with borderless window. It's properly utilizing the GPU. Been benching this Vega since release and had a lot of fun with the people in this thread a few hundred pages earlier. Currently have the fastest ThreadRipper/Vega64 single card combo on the Firestrike, Firestrike Ultra, Firestrike Extreme, Timespy and Timespy Extreme leaderboards. So yeah, not going to piss about to satisfy your disbelief.

BTW that was flying through the main section at the centre of the Extinction map. Heavy load on CPU and very heavy on GPU.


----------



## Doubleyoupee

TrixX said:


> Ok bored of you already. Those were live from my gameplay earlier, with borderless window. It's properly utilizing the GPU. Been benching this Vega since release and had a lot of fun with the people in this thread a few hundred pages earlier. Currently have the fastest ThreadRipper/Vega64 single card combo on the Firestrike, Firestrike Ultra, Firestrike Extreme, Timespy and Timespy Extreme leaderboards. So yeah, not going to piss about to satisfy your disbelief.
> 
> BTW that was flying through the main section at the centre of the Extinction map. Heavy load on CPU and very heavy on GPU.


If you are so sure, then make a screenshot please. You thought Vega 64 air had 1.2v HBM2 too .
You will see Ark is not a good load example. I can say my Vega 64 uses 270W too at 1700mhz 1.2v when I use Unigine Heaven. I didn't mean you have to use borderless window for it to properly use your GPU, just that Ark itself might not be that intensive on Vega (It's not optimized, for one).


----------



## 113802

Doubleyoupee said:


> If you are so sure, then make a screenshot please. You thought Vega 64 air had 1.2v HBM2 too.
> You will see Ark is not a good load example. I can say my Vega 64 uses 270W too at 1700mhz 1.2v when I use Unigine Heaven. I didn't mean you have to use borderless window for it to properly use your GPU, just that Ark itself might not be that intensive on Vega (It's not optimized, for one).


I noticed optimized games like Wolfenstein 2, Far Cry 5 and Shadow of the Tomb Raider actually use less power and run at higher frequencies, while DX11 games use much more power and run at lower frequencies. I use AMD's overlay, as you've seen with my previous video.


----------



## TrixX

Doubleyoupee said:


> If you are so sure, then make a screenshot please. You thought Vega 64 air had 1.2v HBM2 too .
> You will see Ark is not a good load example. I can say my Vega 64 uses 270W too at 1700mhz 1.2v when I use Unigine Heaven. I didn't mean you have to use borderless window for it to properly use your GPU, just that Ark itself might not be that intensive on Vega (It's not optimized, for one).


Anything Unreal Engine is a heavy GPU load unless properly optimised like Fortnite. So PUBG and Ark are perfect examples of crap optimisation and heavy GPU loads. I use both for stability testing (the menu in PUBG used to be one of the heaviest loads on the GPU for some crazy reason) to make sure the OC is stable, much more reliable than the more artificial benchmarks which pass with settings that are unusable on a daily basis.

Heaven is a DX9 workload and is a pointless test for a Vega; when comparing to equivalent Nvidia cards they score 700+ points extra for some odd reason. Superposition would be a better load test than Heaven. Even then, Superposition and Heaven are light loads; Firestrike (normal) is a much heavier workload than either, and highlights core instability nicely. Just ran my Ark profile in Firestrike and pulled a peak of 337W and averaged 313W for the run according to Afterburner (same as the HWiNFO64 or GPU-Z reading for wattage). I don't have an inline wattage reader unfortunately, would be nice to see system draw though...

I didn't have it in Borderless Window to satisfy you; I run it that way anyway. I'd be very interested to see if you were maintaining 1700MHz or just had it set to 1700MHz in OverdriveNTool. Mine actually hits a consistent ~1695MHz at 1.2v. BTW the load for Ark was 250W; I'm sure that would differ depending on the different loads presented by different games etc. However I stated what it was at the time and that is correct for that load.

At the start of all this I was trying to find a solution to my card crashing using stock clocks on the stock BIOS, one of the things mentioned at the time was 1.2v for HBM for all the Air cards and that the LC BIOS had 1.35v so I tried that. I wasn't aware the Air 64 had 1.35v on HBM as I never saw the need to use the stock Air BIOS again. So yeah I didn't know because I didn't encounter it again. One error does not beget the others as my OC results have shown, which are all quite findable on the 3DMark site and OCUK's leaderboards in the Graphics forum. I did just check and Seifer is beating me in the ThreadRipper/Vega combo for Firestrike normal, though I have the other two. If he ran them I assume his would be quicker as his card can clock higher than mine can, there are a few unique cards with the ability to get to 1800MHz, something mine can't do.

OCUK benchmarks, position relates to Vega's only.

Firestrike (3rd currently)
https://forums.overclockers.co.uk/posts/27903763

Firestrike Extreme (1st currently)
https://forums.overclockers.co.uk/posts/27387417

Firestrike Ultra (1st currently)
https://forums.overclockers.co.uk/posts/27044551

Timespy (1st currently)
https://forums.overclockers.co.uk/posts/29782421

Timespy Ultra (1st currently though only very few run this)
https://forums.overclockers.co.uk/posts/31207163

Superposition (3rd currently, just uploading the 3rd place run now)
https://forums.overclockers.co.uk/posts/30675797

Heaven 4.0 (1st currently)
https://forums.overclockers.co.uk/posts/28330031


----------



## 113802

TrixX said:


> Anything Unreal Engine is a heavy GPU load unless properly optimised like Fortnite. So PUBG and Ark are perfect examples of crap optimisation and heavy GPU loads. I use both for stability testing (the menu in PUBG used to be one of the heaviest loads on the GPU for some crazy reason) to make sure the OC is stable, much more reliable than the more artificial benchmarks which pass with settings that are unusable on a daily basis.
> 
> Heaven is a DX9 workload and is a pointless test for a Vega, when comparing to equivalent Nvidia cards they score 700+ points extra for some odd reason. Superposition would be a better load test than Heaven. Even then Superposition and Heaven are light load's, Firestrike (normal) is a much heavier workload than either Heaven or Superposition. Highlights Core instability nicely. Just ran my Ark Profile in Firestrike and pulled a peak of 337W and averaged 313W for the run according to Afterburner (same as HWiNFO64 or GPU-Z reading for Wattage). I don't have an inline wattage reader unfortunately, would be nice to see system draw though...
> 
> I didn't have it in Bordless Window to satisfy you. I run it that way anyway. I'd be very interested to see if you were maintaining 1700MHz or just had it set to 1700MHz in OverdriveNTool. Mine actually hits a consistent ~1695MHz at 1.2v. BTW the load for Ark was 250W, I'm sure that would differ depending on the different loads presented by different games etc. However I stated what it was at the time and that is correct for that load.
> 
> At the start of all this I was trying to find a solution to my card crashing using stock clocks on the stock BIOS, one of the things mentioned at the time was 1.2v for HBM for all the Air cards and that the LC BIOS had 1.35v so I tried that. I wasn't aware the Air 64 had 1.35v on HBM as I never saw the need to use the stock Air BIOS again. So yeah I didn't know because I didn't encounter it again. One error does not beget the others as my OC results have shown, which are all quite findable on the 3DMark site and OCUK's leaderboards in the Graphics forum. I did just check and Seifer is beating me in the ThreadRipper/Vega combo for Firestrike normal, though I have the other two. If he ran them I assume his would be quicker as his card can clock higher than mine can, there are a few unique cards with the ability to get to 1800MHz, something mine can't do.
> 
> OCUK benchmarks, position relates to Vega's only.


Nice results, but come on, attacking Heaven? It's a DirectX 11 benchmark; that's why nVidia scores higher than Vega. nVidia built their Fermi architecture around tessellation and ever since then has demolished AMD in DirectX 11 titles.

https://www.anandtech.com/show/2977...tx-470-6-months-late-was-it-worth-the-wait-/5


----------



## majestynl

TrixX said:


> Spoiler
> 
> 
> 
> Anything Unreal Engine is a heavy GPU load unless properly optimised like Fortnite. So PUBG and Ark are perfect examples of crap optimisation and heavy GPU loads. I use both for stability testing (the menu in PUBG used to be one of the heaviest loads on the GPU for some crazy reason) to make sure the OC is stable, much more reliable than the more artificial benchmarks which pass with settings that are unusable on a daily basis.
> 
> Heaven is a DX9 workload and is a pointless test for a Vega, when comparing to equivalent Nvidia cards they score 700+ points extra for some odd reason. Superposition would be a better load test than Heaven. Even then Superposition and Heaven are light load's, Firestrike (normal) is a much heavier workload than either Heaven or Superposition. Highlights Core instability nicely. Just ran my Ark Profile in Firestrike and pulled a peak of 337W and averaged 313W for the run according to Afterburner (same as HWiNFO64 or GPU-Z reading for Wattage). I don't have an inline wattage reader unfortunately, would be nice to see system draw though...
> 
> I didn't have it in Bordless Window to satisfy you. I run it that way anyway. I'd be very interested to see if you were maintaining 1700MHz or just had it set to 1700MHz in OverdriveNTool. Mine actually hits a consistent ~1695MHz at 1.2v. BTW the load for Ark was 250W, I'm sure that would differ depending on the different loads presented by different games etc. However I stated what it was at the time and that is correct for that load.
> 
> At the start of all this I was trying to find a solution to my card crashing using stock clocks on the stock BIOS, one of the things mentioned at the time was 1.2v for HBM for all the Air cards and that the LC BIOS had 1.35v so I tried that. I wasn't aware the Air 64 had 1.35v on HBM as I never saw the need to use the stock Air BIOS again. So yeah I didn't know because I didn't encounter it again. One error does not beget the others as my OC results have shown, which are all quite findable on the 3DMark site and OCUK's leaderboards in the Graphics forum. I did just check and Seifer is beating me in the ThreadRipper/Vega combo for Firestrike normal, though I have the other two. If he ran them I assume his would be quicker as his card can clock higher than mine can, there are a few unique cards with the ability to get to 1800MHz, something mine can't do.
> 
> OCUK benchmarks, position relates to Vega's only.
> 
> Firestrike (3rd currently)
> https://forums.overclockers.co.uk/posts/27903763
> 
> Firestrike Extreme (1st currently)
> https://forums.overclockers.co.uk/posts/27387417
> 
> Firestrike Ultra (1st currently)
> https://forums.overclockers.co.uk/posts/27044551
> 
> Timespy (1st currently)
> https://forums.overclockers.co.uk/posts/29782421
> 
> Timespy Ultra (1st currently though only very few run this)
> https://forums.overclockers.co.uk/posts/31207163
> 
> Superposition (3rd currently, just uploading the 3rd place run now)
> https://forums.overclockers.co.uk/posts/30675797
> 
> Heaven 4.0 (1st currently)
> https://forums.overclockers.co.uk/posts/28330031



hmm I see you beat my 1st place scores with Vega64. Let me have some runs this weekend


----------



## TrixX

WannaBeOCer said:


> Nice results but come on attacking Heaven? It's a DirectX 11 benchmark, that's why nVidia scores higher than Vega. nVidia built their Fermi architecture around Tessellation and ever since than demolished AMD in DirectX 11 titles.
> 
> https://www.anandtech.com/show/2977...tx-470-6-months-late-was-it-worth-the-wait-/5


It's quite demonstrable that there's something favouring Nvidia in Heaven. Whether it's driver related or something else, I don't know. However in every comparable test between a 1080 and a Vega64, for instance, they are close to equal in DX11. Superposition highlights that nicely actually, where the 1080 is behind the Vegas and it's still DX11. Firestrike shows this too in the GPU scores. I'd be very happy to be proven wrong on this. I mean 3241 vs 2579 is quite a huge margin for cards that are roughly equivalent in other DX9 or DX11 real-world tests. Superposition is a DX11 benchmark and doesn't show the same margin between the Vega64 and GTX1080 either, and it's based on a newer version of the same engine too. The flip side is that now the Vega is almost 800 points higher than the GTX1080 at 5415 (Vega64) vs 4759 (GTX1080), which makes little sense again. I would expect less of a gap between the two.

In turn I don't really use anything Unigine-based as a comparison between cards, but as a comparison within cards. So Vega vs Vega, Pascal vs Pascal etc...
It's just a fundamentally flawed comparison source between different architectures.



majestynl said:


> hmm I see you beat my 1st place scores with Vega64. Let me have some runs in weekend


Oh bollocks a serious challenger


----------



## 113802

TrixX said:


> It's quite demonstrable there's something favouring Nvidia in Heaven. Whether it's driver related or whether it's something else I don't know. However in every comparable test between a 1080 and a Vega64 for instance they are close to equal in DX11. Superposition highlights that nicely actually, where the 1080 is behind the Vega's and it's still DX11. Firestrike also shows this too in the GPU scores. I'd be very happy to be proven wrong on this. I mean 3241 vs 2579 is quite a huge margin for cards that are roughly equivalent in other DX9 or DX11 real world tests. Superposition is a DX11 benchmark and doesn't show the same margin between the Vega64 and GTX1080 either. Based on a newer version of the same engine too. The flip side is that now the Vega is almost 800 points higher than the GTX1080 at 5415 (Vega64) vs 4759 (GTX1080) which makes little sense again. I would expect less of a gap between the two.
> 
> In turn I don't really use anything Unigine based as a comparison between cards, but as a comparison within cards. So Vega vs Vega, Pascal vs Pascal etc...
> It's just a fundamentally incorrect comparison source between the different architectures.
> 
> 
> 
> Oh bollocks a serious challenger


I'm also trying to figure out why Vega scores much higher in FireStrike than the GTX 1080. My RX Vega 64 scores a bit higher than a stock GTX 1080 Ti when real performance isn't anywhere near it. I have other benchmarks that beat your other GPU scores, but my processor is holding me back. I'm just waiting on some tech with at least 20% faster IPC than my Skylake, since its IPC is the same as Coffee Lake. I'll post my Superposition score later this afternoon.

https://www.3dmark.com/fs/16277993

TimeSpy with my 24/7 setup: https://www.3dmark.com/spy/5013755


----------



## majestynl

TrixX said:


> Spoiler
> 
> 
> 
> First up seen a few posts in the last few pages that made me cringe a little. If you want to find out what Vega64 is doing, then unlock the power and keep the core at 1750Mhz. Find a known HBM value that works and then start using the voltage to find the different bands your card works with. Testing between 1000mv and 1200mv on the core has resulted in 30-60mv bands which it jumps between, so 1065mv is exactly the same as 1060mv for instance.
> 
> When I say unlock the core, I run 200% power via the SoftPP Table. So throttling via power is not a limitation and doesn't impact results. I run the 264W 8774 LC BIOS on my Vega64, though from anecdotal evidence there's 2-3 types of card. Some work with the LC BIOS, some don't. Hence at least 2 types of card, mine was actually unstable with the stock BIOS it came with and I considered returning it. However BIOS flash to 8774 and it's been golden ever since.


Can agree with most of the things above!!



TrixX said:


> Spoiler
> 
> 
> 
> Messing with power states P0 thru P5 are pointless. Very little to be gained and performance wise when Benchmarking/Gaming you want to lock to P7 on core anyway, same with HBM lock to P3. Since the last couple of months managed to run 1180MHz on HBM, not sure which driver that changed with, but most of the time I couldn't breach 1100MHz without artifacts. Obviously some free performance isn't a bad thing  I still run HBM at 1100MHz daily though and only some games don't mind 1180MHz. Basically anything Unreal Engine has a hissy fit with 1180MHz on my rig.



Yeap, agree again about messing with P0-P5. 
And P6 is also not affecting me that much. I just left it on 1050mv. No difference for me 

I could also run high HBM since a few drivers back. I can go all the way to 1200MHz on HBM, e.g. for benchmarks. But some games don't like it, and instead of artifacting it crashes the card. My daily game HBM is 1130-1150MHz




TrixX said:


> Spoiler
> 
> 
> 
> @dmbrio running the LC BIOS has two effects, first is that the HBM gets 1.35v vs 1.2v for the stock Air 56/64 BIOS. The second is that because it's the LC BIOS it's got a lower peak operating temp and that can't be increased so it drops to 70C from ~80C (can't remember the exact figure, been on LC too long  ).
> 
> So far the max I've seen Power draw wise was 412W (according to GPU-Z and Afterburner) with the 200% value used. Obviously that's using a PSU that can support it, but not a daily workload. Mostly I run 200-270W 100% load with 1750MHz Core, 1200mv and 1100MHz HBM with a floor (HBM) voltage of 1100mv. For games that don't benefit from high powered GPU's I run 1750MHz core 1060mv and 1100MHz HBM with 950mv floor. That usually results in well under 200W reported power draw.



As far as I can remember, gupsterg mentioned the voltage we see at HBM isn't always the floor voltage for HBM but could also be the DP5 voltage for the core. Just need to read again in https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html

Interesting, cause for me that voltage is doing nothing. I left it on 950 for months now. No noticeable change between 950-1050 for me.



TrixX said:


> Oh bollocks a serious challenger


Hehe


----------



## TrixX

majestynl said:


> Interesting cause for me that voltage is doing nothing. I left it on 950 for months now. No markable change between 950-1050 for me.


Talking about the actual voltage, not the Wattman setting. Vega56 has 1.2v, and until recently I was under the impression that Vega64 air cards did too, with only the LC BIOS having it set to 1.35v. Looks like the air Vega64 BIOSes were 1.35v at the HBM apparently. Though I can test that by switching to my backup BIOS and seeing


----------



## TrixX

WannaBeOCer said:


> I'm also trying to figure out why Vega scores much higher in FireStrike than the GTX 1080. My RX Vega 64 scores a bit higher than a stock GTX 1080 Ti when real performance isn't anywhere near it. I have other benchmarks that beat your other GPU scores but my processor is holding me back. I'm just waiting on some tech with at least 20% faster IPC than my SkyLake since it's IPC is the same as Coffee Lake. I'll post my Superposition score later this afternoon.
> 
> https://www.3dmark.com/fs/16277993
> 
> TimeSpy with my 24/7 setup: https://www.3dmark.com/spy/5013755


Your Firestrike score is very impressive. Only seen 2 others above the 28K mark. Very golden sample right there. Interesting that it's leveled to just a 10pt difference between yours and mine in Timespy, whereas in FS it's a 200pt difference.


----------



## majestynl

TrixX said:


> Talking about the actual voltage, not the Wattman setting. Vega56 has 1.2v and until recently I was under the impression than Vega64 air cards were too. With only the LC BIOS having it set to 1.35v. Looks like the Air Vega64 BIOS were 1.35v at the HBM apparently. Though I can test that by switching to my backup BIOS and see


Aha oke. Can't even remember the base BIOS from my Vega Air. I was getting the LC on day one but someone kidnapped my Vega LC from my shopping cart before the site registered my purchase 

So I needed to buy the air version. As far as I remember I slapped a block and the LC BIOS on it when that came out. So I don't know the numbers from the beginning


----------



## 113802

TrixX said:


> WannaBeOCer said:
> 
> 
> 
> I'm also trying to figure out why Vega scores much higher in FireStrike than the GTX 1080. My RX Vega 64 scores a bit higher than a stock GTX 1080 Ti when real performance isn't anywhere near it. I have other benchmarks that beat your other GPU scores but my processor is holding me back. I'm just waiting on some tech with at least 20% faster IPC than my SkyLake since it's IPC is the same as Coffee Lake. I'll post my Superposition score later this afternoon.
> 
> https://www.3dmark.com/fs/16277993
> 
> TimeSpy with my 24/7 setup: https://www.3dmark.com/spy/5013755
> 
> 
> 
> Your Firestrike score is very impressive. Only seen 2 others above the 28K mark. Very golden sample right there. Interesting that it's leveled to just a 10pt difference between yours and mine in Timespy, whereas in FS it's a 200pt difference.

That TimeSpy was with my 24/7 settings, whereas the FireStrike was only a few games stable. Notice the HBM2 clock speed.


----------



## Ne01 OnnA

TrixX said:


> Well that's good to know, on the earlier BIOS I was under the impression that the 64's had 1.2v on the HBM too. One of the reasons for going to the LC BIOS when they released...
> 
> 
> It's set to 1750 on the core, not ACTUAL 1750 on the core. Was sustaining 1695MHz actual in Ark with 1200mv on the Core just now though with an average around 245W according to GPU-Z. Thanks for playing. I manage my core clocks with the voltage rather than the clock speed setting. That way I can manage the power and the core speeds with a single parameter (I run +200% power so it doesn't throttle).





TrixX said:


> OCUK benchmarks, position relates to Vega's only.
> 
> Firestrike (3rd currently)
> https://forums.overclockers.co.uk/posts/27903763
> 
> Firestrike Extreme (1st currently)
> https://forums.overclockers.co.uk/posts/27387417
> 
> Firestrike Ultra (1st currently)
> https://forums.overclockers.co.uk/posts/27044551
> 
> Timespy (1st currently)
> https://forums.overclockers.co.uk/posts/29782421
> 
> Timespy Ultra (1st currently though only very few run this)
> https://forums.overclockers.co.uk/posts/31207163
> 
> Superposition (3rd currently, just uploading the 3rd place run now)
> https://forums.overclockers.co.uk/posts/30675797
> 
> Heaven 4.0 (1st currently)
> https://forums.overclockers.co.uk/posts/28330031



I put those in my Guru3D Vega thread 
I know that Vega is very power efficient, it just got bad press initially -> nowadays it's another story tho 

I never saw >200tW in any game (I mean ^spike^, not average)
70-90FPS is a walk in the park for my Vega at 1692MHz/1150 HBM2 @ 140-170tW


----------



## TrixX

WannaBeOCer said:


> That TimeSpy was at my 24/7 results where as the FireStrike was only a few games stable. Notice the HBM2 clock speed.


Sorry, I missed that. You did mention it in the first post, I forgot when replying 

Very nice benching perf, and the 24/7 is equal to my bench.

Normally I run 1750MHz core and 1100MHz HBM with 1.1v core and a floor of 950mv. 200% power target to prevent power throttling, and my water loop keeps control of the thermals 



Ne01 OnnA said:


> I put those on My Guru3D Vega Tread
> I know that Vega is very power efficient, just got Bad press initially -> nowdays is another story tho
> 
> I never saw >200tW in any game (i mean ^spike^ not average)
> 70-90FPS is walkin' in the park for my Vega at 1692MHz/1150 HBM2 @ 140-170tW


Yeah, Vega's quite a good and powerful card. Very fun to OC if you are an OC enthusiast, and it easily kicks a 1080 into touch with a good sample. Poor samples are just unfortunate and do exist 
I would join the fun at Guru3D, but there's too many people willing to go full aggro over there. Dunno why.


----------



## Doubleyoupee

TrixX said:


> Heaven is a DX9 workload and is a pointless test for a Vega, when comparing to equivalent Nvidia cards they score 700+ points extra for some odd reason. Superposition would be a better load test than Heaven. Even then Superposition and Heaven are light load's, Firestrike (normal) is a much heavier workload than either Heaven or Superposition. Highlights Core instability nicely. Just ran my Ark Profile in Firestrike and pulled a peak of 337W and averaged 313W for the run according to Afterburner (same as HWiNFO64 or GPU-Z reading for Wattage). I don't have an inline wattage reader unfortunately, would be nice to see system draw though...


Yes, that's all I wanted to point out.
As you are saying yourself, in Ark it pulls 250-270W and in Firestrike 300-340W. 
In Witcher 3 I pull even more watts than Firestrike, and consistently. This is my point: saying "Mostly I run 200-270W 100% load with 1750MHz Core, 1200mv" is not realistic unless you only play certain games. I'm not saying I don't believe your reading in Ark.
The Tomb Raider demo is quite heavy too, more than Far Cry 5 but not as much as Witcher 3. And it alt-tabs nicely, so it's a nice game to test stability with. Heavy on HBM2 too.


TrixX said:


> I didn't have it in Bordless Window to satisfy you. I run it that way anyway. I'd be very interested to see if you were maintaining 1700MHz or just had it set to 1700MHz in OverdriveNTool. Mine actually hits a consistent ~1695MHz at 1.2v. BTW the load for Ark was 250W, I'm sure that would differ depending on the different loads presented by different games etc. However I stated what it was at the time and that is correct for that load.



https://www.dropbox.com/s/r9drierj19o1zxo/Vega17002.png?raw=1
Here you can see my stock air card can run 1700mhz in Heaven and only draw 300-310W (PS: it uses DX11, not DX9). In most AAA games I play it pulls this already at 1630mhz actual clock and 1115mv. So there's no way it can use 200-270W at those frequencies/voltages in games that actually really stress Vega.
I could never reach 1700mhz in the Tomb Raider demo or similar without pulling well over 350W. In my case I'm still using +50% power because of my cheap 650W PSU, so I do run into the power limit.


----------



## TrixX

So what's your point then? Most games I play run 200-270W depending on load and game. Very few games actually stress it to the point of 300+W with the settings used in Ark, if they do I tend to run a lower voltage.

Looks like you just wanted to argue my power usage, which was correct for the load stated and the majority of games I play...


----------



## Doubleyoupee

TrixX said:


> Looks like you just wanted to argue my power usage


 That's right :thumb: because I think 200-270W is unrealistic at 1700mhz (actual clock), 1200mv and 100% usage (so no fps cap or chill) in any scenario that actually stresses Vega.
But yeah, you could lower voltage if the power usage gets too high. That's what I do in Witcher 3, but then I can't reach high clocks anymore, and definitely not 1700mhz in-game. 

Maybe resolution increases power consumption too (ROP power usage?). I'm running 3440x1440.


----------



## Bartouille

He's right about Witcher 3. Power consumption in that game is insane.

Best I got so far in FS was a 26.5k graphics score with some driver tweaks (1750/1100mhz, 1.2v). I'm still on the reference cooler tho, and a stock CPU. Hopefully when I get my loop done I'll get over 27k.


----------



## mtrai

Doubleyoupee said:


> That's right :thumb: because I think 200-270W is unrealistic at 1700mhz (actual clock), 1200mv and 100% usage (so no fps cap or chill) in any scenario that actually stresses Vega.
> But yeah, you could lower voltage if the power usage gets too high. That's what I do in Witcher 3, but then I can't reach high clocks anymore, and definitely not 1700mhz in-game.
> 
> Maybe resolution increases power consumption too (ROP power usage?). I'm running 3440x1440.


Actually, y'all are all sort of correct in how you are seeing core clock vs total watts. Y'all need to understand a few things: it is gonna depend on the type of GPU workload. I can run OpenCL at 1750ish clocks but only use about 240ish watts at 100% load. On the other hand, Firestrike (which runs at about 1690) and Timespy (which runs at about 1708), which use different DX versions, each draw a different amount of total watts. So keep in mind power draw is also application specific.

If you want to test this, run the Realbench OpenCL test with a monitoring program open, then run Firestrike and finally Timespy... you will see each has a totally different power draw. Another example: playing WoW with everything on ultra has a totally different power draw than playing Far Cry 5.

So to sum it up: yeah, it is gonna need different power draws depending on what it is doing. What I have had to do is create different profiles in OverdriveNTool for specific applications. I use one custom to Timespy, due to its higher power draw, to get the best benches in both Firestrike and Timespy.

So I have my everyday general-use safe settings as my Soft PowerPlay Table registry edit. Then I have one for Firestrike, one for Timespy, one for a game I am alpha testing, etc.


----------



## TrixX

Doubleyoupee said:


> That's right :thumb: because I think 200-270W is unrealistic at 1700mhz (actual clock), 1200mv and 100% usage (so no fps cap or chill) in any scenario that actually stresses Vega.
> But yeah, you could lower voltage if the power usage gets too high. That's what I do in Witcher 3, but then I can't reach high clocks anymore, and definitely not 1700mhz in-game.
> 
> Maybe resolution increases power consumption too (ROP power usage?). I'm running 3440x1440.


For the Witcher 3 maybe, but for the workloads I was using, no. So in essence you were arguing against your own preconceptions. How about making the questions useful next time. ******* waste of time this was.

Could have just asked me to see what power draw I'd get in Witcher 3 instead of trying to suggest I was lying. I have no need to do so.

BTW Same settings as Ark. Guess my card likes 1.2v (1.15 shown due to vdroop)


----------



## Doubleyoupee

TrixX said:


> For the Witcher 3 maybe, but for the workloads I was using, no. So in essence you were arguing against your preconceptions. How about make the questions useful next time. ******* waste of time this was.
> 
> Could have just asked me to see what power draw I'd get in Witcher 3 instead of trying to suggest I was lying. I have no need to do so.
> 
> BTW Same settings as Ark. Guess my card likes 1.2v (1.15 shown due to vdroop)


Lol, I asked you for a screenshot and you said "So yeah, not going to piss about to satisfy your disbelief."

And 1920x1080, possibly non-ultra settings (judging by that 114fps, too)? Even then it's not 200-270W.
Try again on ultra settings (HBAO+ adds at least 30 watts) and a proper resolution.

For some reason you believe that a "golden sample" will use fewer watts; that's not how it works. At the same voltage, same frequency and same load, every Vega will draw roughly the same.
A golden chip just allows you to use less voltage for the same frequency. You don't have to prove anything.


----------



## SpecChum

Anyone have issue with Hellblade needing lower HBM clocks?

I've always assumed my HBM wasn't the best as anything above 1020Mhz artifacts in Hellblade, eventually locking up, so I set it there ages ago and I've left it since.

I can bench at 1100MHz tho, seemingly without issue, so I'm starting to think it might be something about the game maybe?

I haven't gamed that much recently so I guess I should really up the frequency and play. Gives me an excuse to play more


----------



## 113802

Doubleyoupee said:


> Lol, I asked you for a screenshot and you said "So yeah, not going to piss about to satisfy your disbelief."
> 
> And 1920x1080, possible non-ultra settings (judging by that 114fps, too)? Even now it's not 200-270W.
> Try again on ultra settings (HBAO+ adds at least 30Watts) and proper resolution.
> 
> For some reason you believe that a "golden sample" will use less watts, that's not how it works. If you use the same voltage, same frequency and same load, every vega will roughly use the same.
> A golden chip just allows you to use less voltage for the same frequency. You don't have to prove anything.


That's correct Doubleyoupee, they will use the same. I've never seen my card use under 270w with 1200mV. At 1750Mhz sustained at 1200mV it's around 270-330w. Never seen it above 330w in any workload. 1640Mhz at 1100mV is between 190-230w and never saw it above 230w. 

Here's an Overwatch video of my RX Vega 64 LC at 1750Mhz @ 1200mV


----------



## SpecChum

WannaBeOCer said:


> That's correct Doubleyoupee, they will use the same. I've never seen my card use under 270w with 1200mV. At 1750Mhz sustained at 1200mV it's around 270-330w. Never seen it above 330w in any workload. 1640Mhz at 1100mV is between 190-230w and never saw it above 230w.
> 
> Here's an Overwatch video of my RX Vega 64 LC at 1750Mhz @ 1200mV
> 
> https://www.youtube.com/watch?v=vX1YD-n9xwc


You're not at 50% PL are you? 330W is the max for that (220 x 1.5 = 330).
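As an aside for anyone following the PL maths in this thread: the cap is just the base board power scaled by the slider percentage. A quick sketch, where the 220W and 264W base figures are the ones quoted in this thread rather than values verified against a BIOS dump:

```python
# Vega board power cap as a function of the Power Limit slider.
# Base figures are the ones quoted in this thread: 220W (air / power-save
# UEFI) and 264W (LC UEFI) -- treat them as thread hearsay, not gospel.
def power_cap(base_tdp_w: float, power_limit_pct: float) -> float:
    """Maximum sustained board power for a given PL% setting."""
    return base_tdp_w * (1 + power_limit_pct / 100)

print(power_cap(220, 50))  # 330.0 -> the 220 x 1.5 = 330 figure above
print(power_cap(264, 50))  # 396.0 -> LC UEFI at +50% PL
```

The same arithmetic covers the LC BIOS case (264 × 1.5 = 396W).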


----------



## Gustavo Al

Hi guys,

I got a Vega 64 a couple of months ago and I'm trying to squeeze all possible performance from it. So far I've modded the cooling to liquid with a Liquid Freezer 120 and flashed the Vega 64 LC BIOS onto it.

I'm trying to increase the power limit and GPU voltage, I've tried the OverdriveNTool but I always get "Failed to set GPU values (ErrorCode -1)", can anybody tell what I'm doing wrong?

Also, I've downloaded the RX_VEGA_64_AIO_Soft_PP from a couple of pages back. I've executed it but couldn't see any changes; the GPU behaves the same and the power limit slider still maxes out at +50%.


----------



## 113802

SpecChum said:


> You're not at 50% PL are you? 330W is the max for that (220 x 1.5 = 330).


I am at 50% PL. Stock is 264w on the LC without a power limit increase, while the power saving UEFI is 220w. I am using an LC card with an EK waterblock.



Gustavo Al said:


> Hi guys,
> 
> I've got a Vega 64 a couple of months ago and I'm trying to squeeze all possible performance from it, so far I've modded the cooling to liquid with an liquid freezer 120 and flashed the Vega 64 LC bios on it.
> 
> I'm trying to increase the power limit and GPU voltage, I've tried the OverdriveNTool but I always get "Failed to set GPU values (ErrorCode -1)", can anybody tell what I'm doing wrong?
> 
> Also, I've downloaded the RX_VEGA_64_AIO_Soft_PP from a couple of pages behind, I've executed it but couldn't see any changes, GPU behaves the same and I still have the same power limit up to 50%.


Stop trying to overvolt; undervolt instead. Just use Wattman, it's a great tool. Try 1200mV at +50% power limit and see if you can run that. If you can, then overclock your HBM.


----------



## Gustavo Al

WannaBeOCer said:


> I am at 50% PL, stock is 264w without a power level increase on LC. I am using a LC card with an EK Waterblock. stock is 264w while the power saving UEFI is 220w.
> 
> 
> 
> Stop trying to overvolt and undervolt instead. Just use Wattman, it's a great tool. Try 1200mV +50% power level and see if you can run that. If you can, then overclock your HBM.


I already did that at first, but the clocks don't reach max boost due to the power limit; in Witcher 3 it only goes to around 1600Mhz while using 360W according to Wattman.


----------



## 113802

Gustavo Al said:


> Already did at first, but the clocks don't reach the max boost due to power limit, in Witcher 3 it only goes to around 1600Mhz while using 360W according to wattman.


360w when you undervolted the P7 to 1200mV? You'll never see the max boost of 1750Mhz but you should be around 1680-1730Mhz at 1200mV.


----------



## SpecChum

WannaBeOCer said:


> I am at 50% PL, stock is 264w without a power level increase on LC. I am using a LC card with an EK Waterblock. stock is 264w while the power saving UEFI is 220w.


Yeah, the LC comes with a +20% at PL stock, which is 264W, power saving just puts this back to +0%.

By setting 50% you're setting 330W max which is why you're not getting above that. Not that you need to really, as you've found out.

Non LC Vegas apply a negative PL (-20% I think?) on power save and +0% on normal.

+0% PL is 220W on both.

EDIT: you've got me doubting myself now, I think this is right, been a while since I read this stuff lol

EDIT2: Seems I was right to doubt, @WannaBeOCer has confirmed +0% PL is 264W on LC


----------



## Doubleyoupee

Gustavo Al said:


> Already did at first, but the clocks don't reach the max boost due to power limit, in Witcher 3 it only goes to around 1600Mhz while using 360W according to wattman.


You need to use way lower voltages in Witcher 3. At 1200mv you will run into the power limit with this game on ultra/high res.
I was actually going to do a separate post on this, because this game made me realize how much my Vega 64 drops off a cliff efficiency-wise above ~1650mhz P7.

Before Witcher 3, my profile was 1697mhz/1115mv because this kept my 64 roughly below 300W while boosting around 1630mhz. Quite ok. This was on the edge, because at 1100mv it would crash, even at 1690mhz.

But for Witcher 3 with this profile I was using 330W at only 1610mhz, so I started to make a new profile to get power down with slightly lower clocks. If I lowered the voltage, clocks would actually go up (less close to the power limit?), but then crash.
So I lowered the clock to 1650mhz, and to my surprise I could lower the voltage all the way to 1050mv while clocks only dropped to 1590mhz (-25mhz).
FPS is within margin of error, even higher sometimes now with +50mhz HBM2 (both were hovering at 70-72fps).

On the pic you can also see that it's actually going below the P5 voltage (1100mv?) without touching powertables/registry. Even down to 0.975v after vdroop. It was actually stable for 45min. -50W just like that. Quite impressive.
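For what it's worth, those numbers are roughly consistent with the usual dynamic-power scaling P ∝ f·V², a simplification that ignores leakage and load differences. Using the set P7 values rather than the actual boost clocks, a quick estimate:

```python
# Rough check of the two P7 profiles above against P ~ f * V^2
# (standard CMOS dynamic-power approximation; ignores leakage).
def relative_power(f1_mhz, v1_mv, f2_mhz, v2_mv):
    """Predicted power of profile 2 as a fraction of profile 1."""
    return (f2_mhz / f1_mhz) * (v2_mv / v1_mv) ** 2

# 1697MHz/1115mv -> 1650MHz/1050mv, as described above
ratio = relative_power(1697, 1115, 1650, 1050)
print(round(ratio, 2))           # ~0.86, i.e. roughly 14% less power
print(round(330 * (1 - ratio)))  # ~45W saved from a 330W starting point
```

So the ~50W drop observed above is in the right ballpark for that voltage/clock change alone.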


----------



## 113802

SpecChum said:


> Yeah, the LC comes with a +20% at PL stock, which is 264W, power saving just puts this back to +0%.
> 
> By setting 50% you're setting 330W max which is why you're not getting above that. Not that you need to really, as you've found out.
> 
> Non LC Vegas apply a negative PL (-20% I think?) on power save and +0% on normal.
> 
> +0% PL is 220W on both.
> 
> EDIT: you've got me doubting myself now, I think this is right, been a while since I read this stuff lol


When I run my Vega 64 LC at 1250mV it spikes up to 364w with 50% PL. When I under volt it drops down to 330w and below. There are two different UEFIs on the card. One is the power saving UEFI 220w while the LC UEFI is 264w.


----------



## SpecChum

WannaBeOCer said:


> When I run my Vega 64 LC at 1250mV it spikes up to 364w with 50% PL. When I under volt it drops down to 330w and below. There are two different UEFIs on the card. One is the power saving UEFI 220w while the LC UEFI is 264w.


Righto, I added the edit as I wasn't 100% after I posted it.

So +0% PL on the LC must be 264W, as you say.

Good to know, thanks for confirming.


----------



## majestynl

Gustavo Al said:


> Hi guys,
> 
> Also, I've downloaded the RX_VEGA_64_AIO_Soft_PP from a couple of pages behind, I've executed it but couldn't see any changes, GPU behaves the same and I still have the same power limit up to 50%.


You have probably installed the drivers a few times without a fresh DDU uninstall, so the PP reg is applying to the first driver entry in the registry instead of the one currently in use.

2 ways to fix:
- You could change the value in the reg file to the right driver entry, or
- Run DDU in Windows safe mode and uninstall the AMD Radeon drivers. I would press uninstall a few times, after each complete uninstall.

Then restart the PC, install the latest drivers for Vega, and restart again. Apply the PP reg file, then restart the PC again. Now you will be able to see the PP changes in Wattman, including the higher power limit.


----------



## TrixX

Doubleyoupee said:


> Lol, I asked you for a screenshot and you said "So yeah, not going to piss about to satisfy your disbelief."
> 
> And 1920x1080, possible non-ultra settings (judging by that 114fps, too)? Even now it's not 200-270W.
> Try again on ultra settings (HBAO+ adds at least 30Watts) and proper resolution.
> 
> For some reason you believe that a "golden sample" will use less watts, that's not how it works. If you use the same voltage, same frequency and same load, every vega will roughly use the same.
> A golden chip just allows you to use less voltage for the same frequency. You don't have to prove anything.


Everything Ultra, Hairworks disabled. Stop insinuating I'm doing something underhand or **** off. Really bored of that ****.

If a GPU requires less voltage to perform at the same performance as another card, then physics dictates that it will use less watts. It's not hard to work that out. Stop being disingenuous and combative for the sake of trying to appear more intelligent.
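The physics being invoked here, for reference, is the standard CMOS dynamic-power approximation (the effective capacitance C is card-specific, and leakage is ignored):

```latex
P_{\text{dyn}} \approx C \, f \, V^2
\qquad\Longrightarrow\qquad
\frac{P_2}{P_1} \approx \frac{f_2}{f_1}\left(\frac{V_2}{V_1}\right)^2
```

At a fixed clock, running 1150mv instead of 1200mv gives (1150/1200)² ≈ 0.92, so roughly 8% less dynamic power; a card that needs less voltage for the same clock really does draw fewer watts.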


----------



## 113802

Doubleyoupee said:


> You need to use way lower voltages in Witcher 3. At 1200mv you will run into power limit with this game on ultra/high res.
> I was actually going to do a separate post on this because this game made me realize how much my Vega 64 drops off a cliff efficiency wise above ~1650mhz P7.
> 
> Before Witcher 3, my profile was 1697mhz/1115mv because this kept my 64 roughly below 300W while boosting around 1630mhz. Quite ok. This was on the edge because at 1100mv it would crash, even at 1690mhz.
> 
> But for Witcher 3 with this profile I was using 330W and only 1610mhz, so started to make a new profile to get power down with slighly lower clocks. If I lowered the voltage, clocks would actually go up (less close to power limit?), but then crash.
> So I lowered clock to 1650mhz and to my surprise I could lower voltage all the way to 1050mv while clocks only dropped to 1590mhz (-25mhz).
> FPS is within margin of error, even higher sometimes now with +50mhz hbm2 (both were hovering 70-72fps).
> 
> On the pic you can also see that it's actually going below P5 voltage (1100mv?) without touching powertables/registry. Even to 0.975mv after vdroop. It was actually stable for 45min. -50W just like that. Quite impressive.


Maybe it's because you're playing at that wack resolution? Witcher 3 at max settings acts the same way as any other game I play at 2560x1440. It's actually using 15w less power than Overwatch at 1750Mhz @ 1200mV.


----------



## Doubleyoupee

TrixX said:


> Everything Ultra, Hairworks disabled. Stop insinuating I'm doing something underhand or **** off. Really bored of that ****.
> 
> If a GPU requires less voltage to perform at the same performance as another card, then physics dictates that it will use less watts. It's not hard to work that out. Stop being disingenuous and combative for the sake of trying to appear more intelligent.


Yes, but you are running 1200mv, which is not less voltage. You were claiming as low as 200W at 1700mhz, 1200mv and 100% usage, which sounded unrealistic to me.
Also, you started it with "bored of you already" and acting all elite when I was just curious and asked for a screenshot to figure out why you were using 100W less power at significantly higher clocks and voltages.



WannaBeOCer said:


> Maybe it's because you're playing at that wack resolution? Witcher 3 at max settings is acting the same way any other game I play at 2560x1440. It's actually using less power by 15w than Overwatch with 1750Mhz @ 1200mV.


Yes, maybe Vega uses significantly more power at higher res (ROPs). Will test later. Are you using HBAO+?
Maybe TrixX is always using 1080p, which would explain his lower usage in this case. That's all I wanted to figure out...


----------



## TrixX

Doubleyoupee said:


> Yes but you are running 1200mv which is not less voltage. You were claiming as low as 200W at 1700mhz 1200mv 100% which sounded unrealistic to me.
> Also you started it with "bored of you already" and acting like an elite when I was just curious and asked for a screenshot to figure out why you are using 100W less power at significantly higher clocks and frequencies.


I responded like that because of the insinuation of lying each and every post. It didn't change either. I claimed as low as 200W at 1700MHz and 1200mv because I wasn't specifying a single specific workload like you suddenly started doing in an attempt to prove that incorrect. No idea why that got your goat but it did. Then you pissed me off by the continuous assertion I was doing something incorrect to fabricate numbers. Had you asked what power it was using in The Witcher 3 in the first post without the insinuation I was talking **** then it would have been a different reaction.

Amusingly, if I'm running iRacing I barely hit 140W in GPU usage, and if I left it to its own devices it'd be running at the P3 state, not P7. The only reason I don't run it that low (it doesn't affect FPS) is that when it shifts P-state it stutters, and I can't have that while racing.



Doubleyoupee said:


> Yes maybe Vega is using significant more power at higher res (ROPs). Will test later. Are you using HBAO+?
> Maybe Trixx is always using 1080p which would explain his lower usage in this case. That's all I wanted to figure out...


Yes I do use 1080p, not got around to getting the dual 3440x1440 monitors yet. I can simulate 3440x1440 but it's not accurate as the simulation isn't quite as true as running it live.


----------



## sinnedone

My Vega64 flashed to liquid BIOS will pull up to 370W at stock 1750mhz 1.25v +50 power limit.(no power limit mods)


----------



## TrixX

sinnedone said:


> My Vega64 flashed to liquid BIOS will pull up to 370W at stock 1750mhz 1.25v +50 power limit.(no power limit mods)


264W *1.5 = 396W max. So yeah that's expected.


----------



## 113802

Doubleyoupee said:


> Yes maybe Vega is using significant more power at higher res (ROPs). Will test later. Are you using HBAO+?
> Maybe Trixx is always using 1080p which would explain his lower usage in this case. That's all I wanted to figure out...


HBAO+ is enabled; I even used AMD's Virtual Super Resolution, set it to 4K and 5K, and all three resolutions used the same power. My card still ran at 1700-1710Mhz and used 300-320w. Frames tanked instantly, of course.


----------



## majestynl

TrixX said:


> Oh bollocks a serious challenger



Sorry, but I took my places back 

*Timespy (1st Place):*
https://forums.overclockers.co.uk/threads/time-spy-standard-dx-12-bench.18740536/

*Heaven 4.0(1st Place):*
https://forums.overclockers.co.uk/threads/unigine-heaven-4-benchmark.18487976/page-206#post-28330031

*Superposition (2nd Place):*
https://forums.overclockers.co.uk/threads/unigine-superposition-benchmark.18775328/

_*Will soon throw AMDMatt from his throne @ Superposition_

Tired for today... Will leave you alone with Firestrike... (for now)


----------



## mtrai

majestynl said:


> Sorry but took my places back
> 
> *Timespy (1st Place):*
> https://forums.overclockers.co.uk/threads/time-spy-standard-dx-12-bench.18740536/
> 
> *Heaven 4.0(1st Place):*
> https://forums.overclockers.co.uk/threads/unigine-heaven-4-benchmark.18487976/page-206#post-28330031
> 
> *Superposition (2nd Place):*
> https://forums.overclockers.co.uk/threads/unigine-superposition-benchmark.18775328/
> 
> _*Will throw AMDMatt soon from his throne @ Superposition_
> 
> Tired for today..Will leave you alone with Firestrike...(for now )


I am coming for y'all installing drivers I can show.


----------



## Gustavo Al

WannaBeOCer said:


> 360w when you undervolted the P7 to 1200mV? You'll never see the max boost of 1750Mhz but you should be around 1680-1730Mhz at 1200mV.


Never saw my card getting to 1680Mhz even once; for some reason it is always below the selected clock in WattMan.



Doubleyoupee said:


> You need to use way lower voltages in Witcher 3. At 1200mv you will run into power limit with this game on ultra/high res.
> I was actually going to do a separate post on this because this game made me realize how much my Vega 64 drops off a cliff efficiency wise above ~1650mhz P7.
> 
> Before Witcher 3, my profile was 1697mhz/1115mv because this kept my 64 roughly below 300W while boosting around 1630mhz. Quite ok. This was on the edge because at 1100mv it would crash, even at 1690mhz.
> 
> But for Witcher 3 with this profile I was using 330W and only 1610mhz, so started to make a new profile to get power down with slighly lower clocks. If I lowered the voltage, clocks would actually go up (less close to power limit?), but then crash.
> So I lowered clock to 1650mhz and to my surprise I could lower voltage all the way to 1050mv while clocks only dropped to 1590mhz (-25mhz).
> FPS is within margin of error, even higher sometimes now with +50mhz hbm2 (both were hovering 70-72fps).
> 
> On the pic you can also see that it's actually going below P5 voltage (1100mv?) without touching powertables/registry. Even to 0.975mv after vdroop. It was actually stable for 45min. -50W just like that. Quite impressive.


Indeed, TW3 has a very strange behavior. After changing back to the original BIOS, I'm getting higher clocks with lower voltages.



majestynl said:


> You have probably installed the drivers few times without a fresh DDU uninstall, so the pp reg is applying to the first driver assignment in register instead of your currently in use one.
> 
> 2 ways to fix:
> - You could change the value in the reg file to the right value, or
> - Run DDU in Windows save mode, uninstall amd Radeon drivers. I would press few times on uninstall after each complete uninstall.
> 
> Then restart PC, install latest drivers for Vega. Restart again. Apply PP reg file, then restart PC again. Now you are able to see the PP changes in Wattmann. Including higher the Power limit.


I've just modded the card to water cooling using a non-compatible AIO and bracket, and you took me for a noob? Hahaha, of course I'm using DDU. The reason for not seeing any changes is that the file doesn't change the power limit by itself; after learning which lines to change from one of Buildzoid's videos, it worked fine. And you don't even need to restart after applying, restarting the driver with C.R.U. is enough. Anyway, thanks for the advice.


Thanks for the help guys. Weirdly enough, my card never gets to the max clocks set in Wattman, even with +150% power and temperatures below 60c. Looking on the internet I've seen similar cases, but never with a water-cooled card.


----------



## Doubleyoupee

Gustavo Al said:


> Indeed TW3 have a very strange behavior, after changing back to the original bios, I'm getting higher clocks with lower voltages.


I think it's because with higher voltages it's getting close to the power/amp limit.
I see the same behavior in other loads at 1780mhz P7. If I clock down the HBM2, freeing up 10 watts, my core clock will increase slightly.



TrixX said:


> Had you asked what power it was using in The Witcher 3 in the first post without the insinuation I was talking **** then it would have been a different reaction.
> 
> Yes I do use 1080p, not got around to getting the dual 3440x1440 monitors yet. I can simulate 3440x1440 but it's not accurate as the simulation isn't quite as true as running it live.


I did ask that, except it was Tombraider Dox instead, because it's a free demo that also uses a lot of power, and I wasn't sure whether you had Witcher 3...

But let's move on. 1080p might be the reason. Didn't know there were still people on 1080p with a $600 GPU.
I will test regular 1440p and 1080p later when I get home and see how much difference that makes. I didn't think it would be much when you're at 100% GPU usage in both cases.


----------



## majestynl

Gustavo Al said:


> Can't saw my card getting to 1680Mhz once, for some reason is always below the selected clock on WattMan.
> 
> 
> 
> Indeed TW3 have a very strange behavior, after changing back to the original bios, I'm getting higher clocks with lower voltages.
> 
> 
> 
> I've just modded the card to water cooling, using an non compatible AIO and bracket, and you took me by a noob? Hahaha, of course I'm using DDU, the reason for not seeing any changes is that the file don't change the power limit by itself, after learning which lines to change in one of Buildzoid's videos it worked fine, and you don't even need to restart after applying, restarting the driver with C.R.U. is enough, anyway thanks for the advice.
> 
> 
> Thanks for the help guys, weirdly enough my card never gets to the max clocks set on Wattman, even with +150% power and temperatures below 60c, looking on the internet I've saw similar cases, but never with a water cooled card.


I don't see you as a noob, I was just answering how to fix it. And by the way, both my solutions are fine: you could fix it by having just one driver installed in the registry, or by changing the reg file to the right driver entry. If you had reinstalled all drivers properly with DDU, the power limit would have been accepted fine!!

And about the restart: sure, that's possible, I can even tell you more ways. But I just suggested the simplest way.


----------



## mtrai

Gustavo Al said:


> Can't saw my card getting to 1680Mhz once, for some reason is always below the selected clock on WattMan.
> 
> 
> 
> Indeed TW3 have a very strange behavior, after changing back to the original bios, I'm getting higher clocks with lower voltages.
> 
> 
> 
> I've just modded the card to water cooling, using an non compatible AIO and bracket, and you took me by a noob? Hahaha, of course I'm using DDU, the reason for not seeing any changes is that the file don't change the power limit by itself, after learning which lines to change in one of Buildzoid's videos it worked fine, and you don't even need to restart after applying, restarting the driver with C.R.U. is enough, anyway thanks for the advice.
> 
> 
> Thanks for the help guys, weirdly enough my card never gets to the max clocks set on Wattman, even with +150% power and temperatures below 60c, looking on the internet I've saw similar cases, but never with a water cooled card.


Which GPU do you have? And how did you mod your card to make it fit?


----------



## Gustavo Al

Doubleyoupee said:


> I think it's because with higher voltages it's getting close to power/amp limited.
> I see the same behavior in other loads at 1780mhz P7. If I clock down HBM2, freeing up 10watts, my core clock will increase slightly.


Makes sense, but I can't see improvements even when raising the power/amp limits.



majestynl said:


> Don't see you as Noob just simple answering you how to fix it. And by the way, my both solutions are exactly fine. Or you could fix it by just having 1 driver installed in register or changing the reg file to the right driver assignment. If you reinstalled all drivers with DDU properly the power limit would accepted fine!!
> 
> And about restart: sure that's possible, I can even tell you more ways. But I just suggested the simplest way.


Cool man, the noob thing was a joke; it would be weird for somebody to hard-mod a graphics card without knowing how to properly uninstall drivers.

Your solutions were fine, they just don't apply to my case; what I was doing wrong was applying a reg entry that doesn't change anything.



mtrai said:


> Which GPU do you have? And how did you mod your card to make it fit?


A Vega 64 reference from Sapphire. I disassembled it and then used a modded Intel retention ring to hold a Liquid Freezer 120 on the card. Here's a (not so good) picture:


----------



## AmcieK

Guys, any idea for coil whine...? The graphics don't bother me, but I can hear it with headphones on when it is quiet...


----------



## Doubleyoupee

AmcieK said:


> GUys any idea for coil whine ... the graphics do not bother me but i can hear the headphones when it is quiet...


Limit max FPS to something like 100-150 depending on your monitor (with RTSS or chill).


----------



## Doubleyoupee

So here's the comparison between 3440x1440 and 1920x1080, everything else equal. (Never mind both showing 3440x1440; that's just the Afterburner text, I can't find a variable for resolution.)
As you can see, it's a ±25W difference even though both are at 100% usage. Quite interesting. I guess the ROPs (or whatever) do use quite a bit of power.

Also interesting to see that at 1080p it boosts higher. This is the behavior I was talking about before, where it clocks higher at lower power usage, even though the ~315W at 3440x1440 is not at the power limit at all.


----------



## 113802

Doubleyoupee said:


> So here's the comparison between 3440x1440 and 1920x1080. Everything else equal. (Nevermind both showing 3440x1440, that's just afterburner text, can't find a variable for resolution)
> As you can see, it's +-25W difference even though both are at 100% usage. Quite interesting. I guess the ROPs (or whatever) do use quite a bit of power.
> 
> Also interesting to see that on 1080p it boosts higher. This is the behavior I was talking about before, where it clocks higher at lower power usage, even though ~315W from the 3440x1440 is not at the power limit at all.


What did you change to prevent it from using 365w? Also, that difference in boost speed/wattage can depend on the scene. In many games I see a few MHz drop, with wattage varying from 270-330w depending on the scene. Also, that's still too much for 1050mV; I'm only using 230w in Witcher 3 at 2560x1440.


----------



## AlphaC

Can someone with a VEGA FE (Radeon Pro drivers) get over 13FPS in Creo benchmark subtest 11? I'm really surprised that nobody has yet.

http://hwbot.org/benchmark/specviewperf_12_creo_subtest_11/


Highest validated official no overclock score is 12.58 :

https://www.spec.org/gwpg/gpc.data/vp12.1/creo01.html




Download benchmark here:
ftp://ftp.spec.org/dist/gpc/opc/viewperf/SPECviewperf12_1_1.zip


I'm also interested in how well an RX VEGA 56 with a power limit of 150W does; whether it outperforms a WX 7100 or Quadro P4000 is of interest.


Nvidia non-pro cards absolutely die in this benchmark. The GTX 1080 Ti is slower than an R9 290 on hwbot.


What I have established is 

* Ryzen 7 2700X is about 1 FPS better than Ryzen 7 1700X @ 3.9GHz
* Quadro P2000 only gets sub 10 FPS on this


In theory, if someone could get WX 8200 drivers to work on an RX VEGA 56 it could do well too, albeit without ECC. (see https://www.amd.com/en/products/professional-graphics/radeon-pro-wx-8200)


Hardwareluxx had a piece in which they showed the WX 8200 performing far better when rendering and the viewport ran on the GPU at the same time: https://www.hardwareluxx.de/index.p...g-amd-praesentiert-die-radeon-pro-wx8200.html
Similar: https://www.kitguru.net/components/...wx-8200-professional-graphics-card-review/11/


----------



## Doubleyoupee

WannaBeOCer said:


> What did you change to prevent it from using 365w? Also that difference in boost speed/wattage can depend on the scene. In many games I see a few mhz drop along with wattage varying from 270-330w depending on the scene. Also that's still too much for 1050mV. I'm only using 230w in Witcher at 2560x1440p.


Undervolted from 1.2V to ~1.1V. Actually playing at 1050mV now. I've never used more than 1115mV except for benchmarks, since power skyrockets.

Yes, the scene makes a difference, but this was at the same spot within 1 min. I checked some more and it's consistently 20-25W less at 1080p.
These results are not at 1050mV P7 but at 1115mV (old profile); it happens to be around 1050mV after vdroop. At 1050mV P7 I get around 250-275W depending on the scene. I guess the remaining difference is down to resolution, scene and setting differences.

I'm now actually using my Witcher 3 profile in all games. What a difference.. I'm even using <250W now in other games while only losing ~30MHz, which I've compensated for with extra HBM2 clock. The card stays below 60C now:thumb:
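As a rough sanity check on why the undervolt saves so much: dynamic power scales roughly with f·V², so a small voltage drop cuts power disproportionately. A quick sketch (illustrative numbers only, not exact measurements):

```python
# Dynamic power scales roughly with frequency * voltage^2, so estimate
# how much an undervolt saves at the same clock. Illustrative only:
# real cards also have static/leakage power and varying load.
def scaled_power(p0_w, f0_mhz, v0_mv, f1_mhz, v1_mv):
    return p0_w * (f1_mhz / f0_mhz) * (v1_mv / v0_mv) ** 2

# e.g. ~315W at 1600MHz / 1115mV, dropped to 1050mV at the same clock:
print(round(scaled_power(315, 1600, 1115, 1600, 1050)))  # ~279W
```

That's in the same ballpark as the 250-275W I'm actually seeing once vdroop and scene differences are factored in.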


----------



## Offler

Doubleyoupee said:


> Downvolted from 1.2 to ~1.1v. Actually playing on 1050mv now. I've never used more than 1115mv except for benchmarks since power skyrockets.
> 
> Yes scene makes a difference, but this at the same spot within 1min. I checked some more and it's 20-25W consistently less on 1080p.
> These results are not at 1050mv P7 but at 1115mv (old profile). It happens to be around 1050mv after vdroop . At 1050mv p7 I get around 250-275W depending on scene. I guess the remaining difference is because of resolution, scene and setting differences.
> 
> I'm now actually using my witcher 3 profile in all games. What a difference.. I'm using even <250W now in other games while only getting 30mhz less, which I've compensated with extra hbm2. Card stays below 60c now:thumb:


I have a FuryX. It's not that easy to heat it up, but I tend to use Vsync + a frame limit (63 fps) in order to get smooth performance with the best possible frame/watt ratio.


----------



## Doubleyoupee

Offler said:


> I have FuryX. Its not that easy to heat it up, but i tend to use Vsync + Frame limit (63 fps) in order to have smooth performance, with best possible frame/watt ratio.


I have FreeSync so no need for that. The game is 100% smooth, no complaints there. 
Also we have Radeon Chill now


----------



## AmcieK

Hello again.

I thought I had a stable OC, but no... as always BF put my settings to the test. Sometimes the game freezes and quits to Windows, and sometimes I just get freezes and graphical issues like this. Is it a core or memory problem, or do I have to start again from 0? :E

https://drive.google.com/file/d/1kvlLkAxNETswMpaNjyFym_SAXM6NAujZ/view?usp=sharing


----------



## Spacebug

From my experience, freeze and quit to windows indicates too low Vcore, or too high coreclock for a given Vcore, whichever way you want to look at it...

Unstable memory OC usually gives some colorful artifacts if just past the edge of stability or black screen/lost display if way past stability.


----------



## VicsPC

Has anyone else noticed higher memory temps than usual with the latest drivers? I used to be within 3°C between my core and memory; now I'm closer to 7°C without changing my setup at all, still watercooled, still running the same pump speed. Maybe the TIM needs to be reapplied?


----------



## Ne01 OnnA

^^ Yup, it's low Vcore.

This is what I run for BFV (70FPS FreeSync, no 0.01% dips - stable 70; no Chill since it's an FPS, first-person shooter)
P6 1662MHz @ 1.05V
P7 1690MHz @ 1.087V (HWiNFO64 shows 1.081V)
HBM2 1150MHz @ 975mV

Actual need for BFV is ~1650MHz (Ultra/High settings w/ the no-input-lag tweak -> GstRender.FutureFrameRendering 0 )

========
UPD. Today's HWiNFO w/Adrenalin 18.11.2
Here:


----------



## colorfuel

@Ne01 OnnA:

How is it possible that you're only pulling 168W @ P7 1690MHz 1.087V (HWiNFO64 shows 1.081V)?


Also, is it possible to lower vdroop with another bios, or is that HW related? Mine drops almost 50mV from the set voltage; yours seems to drop only 6mV.


----------



## Ne01 OnnA

colorfuel said:


> @Ne01 OnnA:
> 
> How is it possible that you're only having 168W @ P7 1690MHz 1.087mV (HWinfo64 shows 1.081v)?
> 
> 
> Also is it possible to lower vdroop with another bios? Or is that HW related? Mine drops almost 50mv from set voltage, yours seems to drop only 6mv.


It's HW related, plus you need to edit the PP power states with OverdriveNTool.
Also it's FPS-capped at 70FPS (so the GPU is not stressed to the max).

On Vega there's a very good/easy way to get very power-efficient gaming:
you've got FRT (FPS cap) & Radeon Chill.


----------



## majestynl

AmcieK said:


> Hello again .
> 
> I thought I had a stable OC, but no... as always BF put my settings to the test. Sometimes the game freezes and quits to Windows, and sometimes I just get freezes and graphical issues like this. Is it a core or memory problem, or do I have to start again from 0? :E
> 
> https://drive.google.com/file/d/1kvlLkAxNETswMpaNjyFym_SAXM6NAujZ/view?usp=sharing





Spacebug said:


> From my experience, freeze and quit to windows indicates too low Vcore, or too high coreclock for a given Vcore, whichever way you want to look at it...
> 
> Unstable memory OC usually gives some colorful artifacts if just past the edge of stability or black screen/lost display if way past stability.


Not totally true in my experience. Freeze and quit to Windows / artifacts / black screen mostly indicate something with the memory OC.
Too low Vcore mostly fully freezes the game and/or system. Needs a restart most of the time!




Ne01 OnnA said:


> ^^ Yup it's Low V Core.
> 
> I have for BFV (70FPS FreeSync, no 0.01% Dips - Stable 70, no-Chill for FPPS first person perspective shooter)
> P6 1662MHz @ 1.05V
> P7 1690MHz @ 1.087V (HWiNFO64 shows 1.081V)
> HBM2 1150MHz @ 975mV
> 
> Actual need for BFV is ~1650Mhz (Ultra/High settings w/no input Lag Tweak -> GstRender.FutureFrameRendering 0 )
> 
> ========
> UPD. Todays HWinfo w/Adrenaline 18.11.2
> Here:


Thanks m8 for the share! Do you really need the 975mV for the HBM? Because as I've said many times, that voltage does nothing for me  I can run 1200 HBM with 950 or any other value 




colorfuel said:


> @Ne01 OnnA:
> 
> How is it possible that you're only having 168W @ P7 1690MHz 1.087mV (HWinfo64 shows 1.081v)?
> 
> 
> Also is it possible to lower vdroop with another bios? Or is that HW related? Mine drops almost 50mv from set voltage, yours seems to drop only 6mv.


vdroop is generated by the chip! All chips act differently on this! 
Just a tip: make a graph of your P6 and P7 values, e.g. clocks on the X axis and set voltages on the Y axis. This way you can easily test and see all the data in a graph and double-check it against HWiNFO, 
including the minimal voltages needed for a certain speed, so you can easily adjust new values while OC'ing!
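If you'd rather script it than eyeball the graph, here's a minimal Python sketch of the same idea (the numbers are placeholders, not anyone's real results): log each stable (clock, voltage) pair, then interpolate a starting guess for a new target clock:

```python
# Log each stable (P-state clock, voltage) pair found while testing,
# then linearly interpolate the minimum voltage for a new target clock.
# These points are placeholders; substitute your own tested values.
stable = [(1550, 1000), (1630, 1050), (1690, 1081)]  # (MHz, mV)

def est_min_mv(target_mhz, points):
    pts = sorted(points)
    for (c0, v0), (c1, v1) in zip(pts, pts[1:]):
        if c0 <= target_mhz <= c1:
            # linear interpolation between the two nearest logged points
            return v0 + (v1 - v0) * (target_mhz - c0) / (c1 - c0)
    raise ValueError("target outside logged range; test and log more points")

print(round(est_min_mv(1662, stable)))  # starting guess in mV
```

It's only a starting point; the actual stable voltage still has to be verified per card and per driver.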


----------



## Doubleyoupee

Anyone have any reliable info on the bios switches and their corresponding power limits?
Can the GPUZ board power be trusted?

On techpowerup I can see two bioses for my Nitro 64+, 240 and 264 watt.
Are these the two bioses available on the Nitro 64+ with the switches?

I've never seen my card go above 360W at +50%, which would indicate I have the 240W bios (240 * 1.50 = 360). However, I thought the default bios was the high-performance one and the 2nd switch position was 220W.
If not, where did the 264W bios come from?
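For what it's worth, the arithmetic behind that guess, as a sketch (assuming peak board power ≈ bios limit × (1 + slider%), which GPU-Z's reading may or may not reflect accurately):

```python
# Back out the bios power limit from the observed peak board power at a
# given power-limit slider setting. Assumes peak = limit * (1 + slider).
def inferred_limit_w(observed_peak_w, slider_pct):
    return observed_peak_w / (1 + slider_pct / 100)

print(inferred_limit_w(360, 50))  # 360W peak at +50% -> 240.0W bios limit
```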


----------



## Ne01 OnnA

majestynl said:


> Thanks m8 for the share! Do you really need the 975mv below the HBM? Cause said many times that voltage is doing nothing for me  Can run 1200HBM with 950 or any other + value


As I've said many times, HBM2 Core is not the actual HBM2 voltage (that one is fixed at 1.356V); instead it is the Infinity Fabric voltage:
the GPU-->HBM2 interconnect fabric voltage.

I need 993mV or 1V for stable 1200MHz with a low GPU voltage.

e.g.
1767MHz @ 1.1V | HBM2 @ 1200MHz needs 1V IF (Infinity Fabric) voltage.
It can, and it will, raise your overall temps 

===
This was tested on the September driver (every driver is different, one needs more/less than the other; usually WHQL is more power efficient, read: it needs less V)


----------



## Doubleyoupee

Ne01 OnnA said:


> As I've said many times, HBM2 Core is not the actual HBM2 voltage (that one is fixed at 1.356V); instead it is the Infinity Fabric voltage:
> the GPU-->HBM2 interconnect fabric voltage.
> 
> I need 993mV or 1V for stable 1200MHz with a low GPU voltage.
> 
> e.g.
> 1767MHz @ 1.1V | HBM2 @ 1200MHz needs 1V IF (Infinity Fabric) voltage.
> It can, and it will, raise your overall temps
> 
> ===
> This was tested on the September driver (every driver is different, one needs more/less than the other; usually WHQL is more power efficient, read: it needs less V)


Source for memory voltage being infinity fabric voltage? I've seen many people say it's some sort of minimum chip voltage, like a lower limit.


----------



## majestynl

Ne01 OnnA said:


> As I've said many times, HBM2 Core is not the actual HBM2 voltage (that one is fixed at 1.356V); instead it is the Infinity Fabric voltage:
> the GPU-->HBM2 interconnect fabric voltage.
> 
> I need 993mV or 1V for stable 1200MHz with a low GPU voltage.
> 
> e.g.
> 1767MHz @ 1.1V | HBM2 @ 1200MHz needs 1V IF (Infinity Fabric) voltage.
> It can, and it will, raise your overall temps
> 
> ===
> This was tested on the September driver (every driver is different, one needs more/less than the other; usually WHQL is more power efficient, read: it needs less V)


I know it isn't the HBM voltage.. that's why I'm saying that voltage has zero impact on my side.


----------



## TrixX

Doubleyoupee said:


> Source for memory voltage being infinity fabric voltage? I've seen many people say it's some sort of minimum chip voltage, like a lower limit.


Honestly the concept of Floor Voltage/Minimum Voltage has no citation either, it's just a running theory. Maybe worth getting LTMatt from the OCUK forums to clarify the exact function of that voltage.


----------



## Doubleyoupee

TrixX said:


> Honestly the concept of Floor Voltage/Minimum Voltage has no citation either, it's just a running theory. Maybe worth getting LTMatt from the OCUK forums to clarify the exact function of that voltage.


Yeah, I haven't found any reliable info on it so far. Even Buildzoid didn't know what it was (although that was last year).
Pretty weird for a feature that is prominently available in WattMan.

I've only had weird issues with it, like not being able to go below 1.1V on the core (even if I set 900mV for P6 and P7) when my "memory voltage" was set to 900mV.
I've read of many others having bugs with it too. Very confusing. I have set it to 1050mV now and no more issues....


----------



## majestynl

Doubleyoupee said:


> Yeah, I've haven't found any reliable info on it so far. Even buildzoid didn't know what it was (although that was last year).
> Pretty weird for a feature that is promintly available in wattman.
> 
> I've only had weird issues with it, like not being able to go below 1.1v on the core (even if I set 900mv p6 and p7) when my "memory voltage" was set to 900mv.
> Read many others having bugs too. Very confusing. I have set it to 1050mv now and no more issues....


Check the posts from @gupsterg; he has stated this many times


----------



## Shadowarez

Subbed. Just picked up a Vega FE for $600; I'll read through this thread to tweak it once it arrives. I've already been watching the Gamers Nexus vids on how to undervolt and keep it from throttling at stock clocks using WattMan. Any other suggestions on tools to use? It'll be used for gaming until AMD drops their 7nm GPU.


----------



## TrixX

If it works, OverdriveNTool is a lot quicker and simpler to use than Wattman


----------



## Doubleyoupee

majestynl said:


> Check post from @gupsterg , he stated this many times


Found this by him.



gupsterg said:


> "Many will say Memory Voltage in Wattman is floor voltage; it isn't.
> 
> It is the DPM 5 voltage. As it is an ACG/AVFS state, you can see in a Valley run it will sit well below the 1062mV I have set for it. I have posted several times in the owners club / r/AMD, etc. about what I had observed. As far as I can tell most do not take it on board and keep saying "set floor voltage by setting Memory Voltage in Wattman..."
> "


And this



gupsterg said:


> As stated before, Memory Voltage in WattMan/OverdrivenNT is DPM 5 VID. Do a test:-
> 
> i) Create a powerplay registry file for VBIOS you use.
> ii) Use the VEGA powerplay editor in OP to modify DPM 5 VID, save and apply the registry file.
> iii) Reset WattMan page, reboot system (if you use Windows Fast Startup do a restart not shutdown, so a fresh kernel is loaded).
> iv) Now when you open WattMan and change the Memory Voltage to manual control you will see what you set DPM 5 as.


I'm guessing DPM 5 = P5/state 5?
No idea what ACG/AVFS is. 
So why would AMD put the P5 voltage in WattMan as memory voltage... what is the use of this entry?


----------



## greg1184

Question from someone who has joined the Red team for the first time in a long time (I last had a 6990):

I just got an RX Vega 64. I have an Acer Predator X34, which is a G-Sync monitor capable of 100Hz. Since I have an AMD card now, am I required to get a FreeSync monitor in order to run at higher refresh rates? I notice when I try to set my refresh rate to 100Hz in games like BF1 or BF5, it either flickers/re-syncs the resolution or crashes. The games run flawlessly at 60Hz.

I also notice when I go to the info screen in the monitor's menu, it says its max refresh rate is 60Hz despite the overclock and 100Hz setting being enabled. 

Pardon my ignorance about this stuff.


----------



## 113802

greg1184 said:


> Question from someone who has joined the Red team for the first time in a long time (I last had a 6990):
> 
> I just got an RX Vega 64. I have Acer Predator X34, which is a G-Sync monitor with capabilities of 100hz. Since I have an AMD card now, am I required to get a freesync monitor in order to run at higher refresh rates? I notice when I try to set my refresh rate to 100hz in games like BF1 or BF5, it either flickers/refreshes resolution or crashes. The games run flawless at 60hz.
> 
> I also notice when I go to my info screen on the monitor's menu it says its max refresh rate is 60hz despite overclocking and 100hz setting enabled.
> 
> Pardon my ignorance about this stuff.


Are you using a DP cable? Did you set the refresh rate to 100Hz in the Windows settings under "Display adapter properties" on the Monitor tab? You'll be able to set the monitor to the higher refresh rate, but you'll lose out on G-Sync.


----------



## astrixx

On DX11 I had to use pre-rendered frames on Ultra to get high frame rates (100-120fps). Running the latest Win 10 Pro 1809 with 18.11.2, I can now run full Ultra on DX12 without pre-render and get a stable 100+ on most maps, but I had to turn down the overclock on my RX Vega 64 Wave. Normally I would run 1755MHz @ 1250mV and HBM at 1145MHz, but DX12 was unstable; it would randomly freeze or crash. After putting my HBM back to default clocks I was able to play without crashes, though it did crash at the end of a round on the summary screen. I think there might be a memory bug in DX12, as it could crash between maps.

When I first tried DX12 last week it was a stuttering mess, but now it runs really well besides the crashes, and the performance is great on full Ultra as it's at 100% GPU usage all the time. On DX11 it would only do that on full Ultra when I ran pre-rendered frames, but that causes lag as you guys know.


----------



## astrixx

Has anyone else had a huge problem with BFV defaulting the refresh rate to 60Hz while your monitor was running 144Hz FreeSync? I thought it was the driver, since the game would constantly freeze up at launch, but after turning off FreeSync it was able to start, and I could then change the game's refresh rate to 144Hz; after that it would start fine with FreeSync on. That was the last thing I was going to try before reverting to the previous AMD driver, 18.11.1. BFV also had an update that reset the settings, putting it back to 60Hz. Not sure why it sets the game to 60Hz first when your monitor is already running at 144Hz with FreeSync on.


----------



## doritos93

Picked up a Strix V64 and can't get anything higher than 1560MHz approx 280w in Superposition

I merged hellm's 142% reg, here are my settings

Fan 100%
P7 1630 
P7 mV 1100
PL 142%
HBM 1000
HBM mV 1100 

Temps peak at around 50c

Any ideas? Any BIOS I can try?


----------



## VicsPC

doritos93 said:


> Picked up a Strix V64 and can't get anything higher than 1560MHz approx 280w in Superposition
> 
> I merged hellm's 142% reg, here are my settings
> 
> Fan 100%
> P7 1630
> P7 mV 1100
> PL 142%
> HBM 1000
> HBM mV 1100
> 
> Temps peak at around 50c
> 
> Any ideas? Any BIOS I can try?


Might just be Superposition; my core clock was pretty low in that too. I'm on stock settings on water, and in quite a lot of games I hit anywhere between 1550-1640MHz without issues.


----------



## astrixx

Seems like the BFV crashes were DX12 crashes; it's been running fine at 1154MHz HBM. That's good to know lol


----------



## TrixX

doritos93 said:


> Picked up a Strix V64 and can't get anything higher than 1560MHz approx 280w in Superposition
> 
> I merged hellm's 142% reg, here are my settings
> 
> Fan 100%
> P7 1630
> P7 mV 1100
> PL 142%
> HBM 1000
> HBM mV 1100
> 
> Temps peak at around 50c
> 
> Any ideas? Any BIOS I can try?


If it's hitting 50C on air, then it's definitely not pulling 280W...

280W for those settings is a mile out; they should land around 180W. Set the core clock higher, so P7 to 1700, leave everything else the same, and see what happens. The +142% power target will prevent power throttling. Also set HBM mV to 950mV or 1000mV; there's no need to have it at 1100mV.


----------



## Doubleyoupee

doritos93 said:


> Picked up a Strix V64 and can't get anything higher than 1560MHz approx 280w in Superposition
> 
> I merged hellm's 142% reg, here are my settings
> 
> Fan 100%
> P7 1630
> P7 mV 1100
> PL 142%
> HBM 1000
> HBM mV 1100
> 
> Temps peak at around 50c
> 
> Any ideas? Any BIOS I can try?


What is your VRM temperature? The Asus cards have a lot of problems with that. Other than that, it's normal for Vega not to hit its clock target; if you want a higher clock, you have to increase P7 even further.



TrixX said:


> If it's hitting 50C with fans, then it's definitely not pulling 280W...
> 
> 280W for those settings is a mile out. The settings shown should hit around 180W. Set the Core clocks higher, so P7 to 1700 and leave everything else the same and see what happens. The +142% on the Power Target will prevent Power Throttling. Set HBM mv to 950mv or 1000mv. No need to have it at 1100mv.


Not true. I can pull 330W and still be at 50C with only 60% fans. The aftermarket air coolers on Vega are actually quite good, especially Sapphire's and PowerColor's.
Also, 280W sounds a lot closer to what I'm seeing in Superposition 1080p Extreme at 1100mV than 180W. I will do a Superposition bench at those exact settings and make a screenshot.


----------



## 1usmus

Hi guys, I'm with you now; next week I'll get a Vega 56 Pulse. If there are any owners of this card here, I'd ask you to share your undervolt settings 

Thanks!


----------



## gupsterg

Doubleyoupee said:


> Found this by him.
> 
> 
> 
> And this
> 
> 
> 
> I'm guessing DPM 5 = P5/state 5?
> No idea ACG/AVFS is.
> So why AMD would put P5 voltage as memory voltage in wattman... what is the use of this entry?


You need to look at the powerplay table to understand why this is as I state, and run the simple test at the end of this post.

I've asked AMD Matt a few times that AMD needs to improve WattMan so it doesn't cause this confusion.

Memory voltage is fixed. There is only one entry in powerplay. Pure and simple.

In powerplay the memory clock has no voltage association.



Spoiler






Code:


typedef struct _ATOM_Vega10_MCLK_Dependency_Table {
01    UCHAR ucRevId;
04    UCHAR ucNumEntries;                                         /* Number of entries. */
    ATOM_Vega10_MCLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
} ATOM_Vega10_MCLK_Dependency_Table;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
3C 41 00 00 (167MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
00			UCHAR  ucVddInd;                                            /* SOC_VDD index */
00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
50 C3 00 00 (500MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
00			UCHAR  ucVddInd;                                            /* SOC_VDD index */
00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
80 38 01 00 (800MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
02			UCHAR  ucVddInd;                                            /* SOC_VDD index */
00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;

typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
24 71 01 00 (945MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
05			UCHAR  ucVddInd;                                            /* SOC_VDD index */
00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
} ATOM_Vega10_MCLK_Dependency_Record;





Note on final clock (945MHz) is:-



Spoiler






Code:


05			UCHAR  ucVddInd;                                            /* SOC_VDD index */





A link is being made between MCLK and SOCCLK.

Now look at SOCCLK table in powerplay.



Spoiler






Code:


typedef struct _ATOM_Vega10_SOCCLK_Dependency_Table {
00    UCHAR ucRevId;
08    UCHAR ucNumEntries;                                         /* Number of entries. */
    ATOM_Vega10_CLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
} ATOM_Vega10_SOCCLK_Dependency_Table;

typedef struct _ATOM_Vega10_CLK_Dependency_Record {
60 EA 00 00 (600MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
00			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
40 19 01 00 (720MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
01			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
80 38 01 00  (800MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
02			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
DC 4A 01 00  (847MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
03			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
90 5F 01 00  (900MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
04			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
00 77 01 00 (960MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
05			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
90 91 01 00 (1028MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
06			UCHAR  ucVddInd;                                            /* Base voltage */

} ATOM_Vega10_CLK_Dependency_Record;
typedef struct _ATOM_Vega10_CLK_Dependency_Record {
6C B0 01 00 (1107MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
07			UCHAR  ucVddInd;                                            /* Base voltage */
} ATOM_Vega10_CLK_Dependency_Record;





MEMCLK 945MHz is linked with SOCCLK 960MHz. Again, it has no voltage of its own; it links into the voltage table. The voltage table is a dumb table for exposing values to apps like WattMan/OverdriveNTool, so the user can then apply them.

Look at entry 05 in the VDDC LUT (start counting from 0).



Spoiler






Code:


7A 00 (0x7Ah)		USHORT usVddcLookupTableOffset;            /* points to ATOM_Vega10_Voltage_Lookup_Table */
8C 00 (0x8Ch)		USHORT usVddmemLookupTableOffset;          /* points to ATOM_Vega10_Voltage_Lookup_Table */
90 00 (0x90h)		USHORT usVddciLookupTableOffset;           /* points to ATOM_Vega10_Voltage_Lookup_Table */

VDDC

typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
01	UCHAR ucRevId;
08	UCHAR ucNumEntries;                                          /* Number of entries */
	ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
} ATOM_Vega10_Voltage_Lookup_Table;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
20 03 (800mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
84 03 (900mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
B6 03 (950mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
E8 03 (1000mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
1A 04 (1050mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
4C 04 (1100mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
7E 04 (1150mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
B0 04 (1200mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

VDDMEM

typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
01	UCHAR ucRevId;
01	UCHAR ucNumEntries;                                          /* Number of entries */
	ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
} ATOM_Vega10_Voltage_Lookup_Table;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
46 05 (1350mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;

VDDCI

typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
01	UCHAR ucRevId;
01	UCHAR ucNumEntries;                                          /* Number of entries */
	ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
} ATOM_Vega10_Voltage_Lookup_Table;

typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
84 03 (900mV)	USHORT usVdd;                                               /* Base voltage */
} ATOM_Vega10_Voltage_Lookup_Record;





Simply put: on an RX VEGA 56/64, apply a PP mod, use the PP editor in the OP of the VEGA VBIOS thread to edit the DPM 5 GPU voltage, apply the PP mod, and reboot. Once you reset WattMan and go to custom/manual control, you will see that whatever you entered for DPM 5 has become the Memory Voltage.
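For anyone wanting to sanity-check these dumps themselves: the hex fields are plain little-endian integers, with clocks stored in 10kHz units and voltages in mV. A quick Python sketch (the function names are mine, not from any tool):

```python
import struct

def decode_clock_mhz(raw4: bytes) -> float:
    # ULONG clock fields are little-endian, in 10 kHz units
    (val,) = struct.unpack("<I", raw4)
    return val / 100

def decode_voltage_mv(raw2: bytes) -> int:
    # USHORT voltage fields are little-endian, in mV
    (val,) = struct.unpack("<H", raw2)
    return val

# Values from the tables above:
print(decode_clock_mhz(bytes.fromhex("24710100")))  # DPM 5 MCLK   -> 945.0
print(decode_clock_mhz(bytes.fromhex("00770100")))  # DPM 5 SOCCLK -> 960.0
print(decode_voltage_mv(bytes.fromhex("1A04")))     # VDDC entry 5 -> 1050
```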


----------



## majestynl

gupsterg said:


> You need to look at......
> 
> 
> Spoiler
> 
> 
> 
> the powerplay table to understand why this is as I state, and run the simple test at the end of the post.
> 
> I've asked AMD Matt a few times that AMD needs to improve WattMan so it doesn't cause this confusion.
> 
> Memory voltage is fixed. There is only one entry in powerplay. Pure and simple.
> 
> In powerplay the memory clock has no voltage association.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> typedef struct _ATOM_Vega10_MCLK_Dependency_Table {
> 01    UCHAR ucRevId;
> 04    UCHAR ucNumEntries;                                         /* Number of entries. */
> ATOM_Vega10_MCLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
> } ATOM_Vega10_MCLK_Dependency_Table;
> 
> typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
> 3C 41 00 00 (167MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
> 00			UCHAR  ucVddInd;                                            /* SOC_VDD index */
> 00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
> 00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
> } ATOM_Vega10_MCLK_Dependency_Record;
> 
> typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
> 50 C3 00 00 (500MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
> 00			UCHAR  ucVddInd;                                            /* SOC_VDD index */
> 00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
> 00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
> } ATOM_Vega10_MCLK_Dependency_Record;
> 
> typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
> 80 38 01 00 (800MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
> 02			UCHAR  ucVddInd;                                            /* SOC_VDD index */
> 00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
> 00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
> } ATOM_Vega10_MCLK_Dependency_Record;
> 
> typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
> 24 71 01 00 (945MHz)	ULONG  ulMemClk;                                            /* Clock Frequency */
> 05			UCHAR  ucVddInd;                                            /* SOC_VDD index */
> 00			UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
> 00			UCHAR  ucVddciInd;                                          /* VDDCI   = only non zero for MCLK record */
> } ATOM_Vega10_MCLK_Dependency_Record;
> 
> 
> 
> 
> 
> Note on final clock (945MHz) is:-
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> 05			UCHAR  ucVddInd;                                            /* SOC_VDD index */
> 
> 
> 
> 
> 
> A link is being made between MCLK and SOCCLK.
> 
> Now look at SOCCLK table in powerplay.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> typedef struct _ATOM_Vega10_SOCCLK_Dependency_Table {
> 00    UCHAR ucRevId;
> 08    UCHAR ucNumEntries;                                         /* Number of entries. */
> ATOM_Vega10_CLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
> } ATOM_Vega10_SOCCLK_Dependency_Table;
> 
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 60 EA 00 00 (600MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 00			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 40 19 01 00 (720MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 01			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 80 38 01 00  (800MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 02			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> DC 4A 01 00  (847MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 03			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 90 5F 01 00  (900MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 04			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 00 77 01 00 (960MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 05			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 90 91 01 00 (1028MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 06			UCHAR  ucVddInd;                                            /* Base voltage */
> 
> } ATOM_Vega10_CLK_Dependency_Record;
> typedef struct _ATOM_Vega10_CLK_Dependency_Record {
> 6C B0 01 00 (1107MHz)	ULONG  ulClk;                                               /* Frequency of Clock */
> 07			UCHAR  ucVddInd;                                            /* Base voltage */
> } ATOM_Vega10_CLK_Dependency_Record;
> 
> 
> 
> 
> 
> MEMCLK 945MHz is linked with SOCCLK 960MHz. Again it carries no voltage itself; it links into the voltage table. The voltage table is a dumb lookup table that exposes values to apps like WattMan and OverdriveNTool, so the user can then apply them.
> 
> Look at entry 05 in the VDDC LUT (counting from 0).
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> 7A 00 (0x7Ah)		USHORT usVddcLookupTableOffset;            /* points to ATOM_Vega10_Voltage_Lookup_Table */
> 8C 00 (0x8Ch)		USHORT usVddmemLookupTableOffset;          /* points to ATOM_Vega10_Voltage_Lookup_Table */
> 90 00 (0x90h)		USHORT usVddciLookupTableOffset;           /* points to ATOM_Vega10_Voltage_Lookup_Table */
> 
> VDDC
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
> 01	UCHAR ucRevId;
> 08	UCHAR ucNumEntries;                                          /* Number of entries */
> ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
> } ATOM_Vega10_Voltage_Lookup_Table;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 20 03 (800mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 84 03 (900mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> B6 03 (950mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> E8 03 (1000mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 1A 04 (1050mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 4C 04 (1100mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 7E 04 (1150mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> B0 04 (1200mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> VDDMEM
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
> 01	UCHAR ucRevId;
> 01	UCHAR ucNumEntries;                                          /* Number of entries */
> ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
> } ATOM_Vega10_Voltage_Lookup_Table;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 46 05 (1350mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> VDDCI
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
> 01	UCHAR ucRevId;
> 01	UCHAR ucNumEntries;                                          /* Number of entries */
> ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
> } ATOM_Vega10_Voltage_Lookup_Table;
> 
> typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
> 84 03 (900mV)	USHORT usVdd;                                               /* Base voltage */
> } ATOM_Vega10_Voltage_Lookup_Record;
> 
> 
> 
> 
> 
> Simply put, on RX Vega 56/64: apply a PP mod using the PP editor in the OP of the Vega VBIOS thread, edit the DPM 5 GPU voltage, apply the PP mod and reboot. Once you reset WattMan and go to custom/manual, whatever you entered for DPM 5 will show up as the Memory Voltage.


Thanks gup for jumping in with the detailed info for those who needed it a few pages back! 




1usmus said:


> Hi guys, I'm with you now, next week I will get Vega 56 pulse. If there are owners of this video card, I will ask to share the settings for the undervolt
> 
> Thanks!


Welcome 1usmus  Just installed 2 Vega 56 cards last week for some friends. Both have Hynix memory!

- Vega 56 Red Dragon / Wattman custom profile / P6 1050mv / P7 1100mv / HBM 925mhz (max clocks heaven 1580mhz @ 60c)
- Vega 56 Pulse / Wattman custom profile / P6 1050mv / P7 1100mv / HBM 950mhz (max clocks heaven 1575mhz @ 63c)

The above already gives a big boost compared with stock. Both cards have 1200mv as the stock voltage for P7.

Because those weren't my own cards, I didn't go too far with tweaking, so I haven't loaded the Vega 64 BIOS or any custom PP table mod.
I know I can push more out of them, but again, I didn't have the time and I didn't want the guys stressing while gaming if it got unstable 

The numbers above are just to help you get started  Good luck with your card. And with your love for tweaking I truly believe you will have some great times with Vega cards. They have so much power to unlock with some time spent tweaking!


----------



## Doubleyoupee

gupsterg said:


> You need to look at
> 
> 
> Spoiler
> 
> 
> 
> the powerplay to understand why this is as I state and run simple test at end of post.
> 
> I asked AMD Matt few times that AMD need to improve WattMan not to cause this confusion.
> 
> Memory voltage is fixed. There is only one entry in power play. Pure and simple.
> 
> In powerplay memory clock have no voltage association.
> 
> 
> 
> *[powerplay table dumps snipped; identical to gupsterg's post quoted in full above]*
> Simply put, on RX Vega 56/64: apply a PP mod using the PP editor in the OP of the Vega VBIOS thread, edit the DPM 5 GPU voltage, apply the PP mod and reboot. Once you reset WattMan and go to custom/manual, whatever you entered for DPM 5 will show up as the Memory Voltage.


Thanks for that write up.

But what does it mean in practice? It's still not clear to me why this entry is in Wattman in the first place.
Does changing it have any effect at all?
I've only seen my Vega not wanting to go below 1100mv (~1.037v after vdroop), which is the same value as entry 05 in the VDDC LUT (1100mv). Maybe that's why?
My memory voltage was set to 900mv at the time. Now it's set to 1050mv and I don't see any weird behavior anymore. Other than that, I can't see any effects.


----------



## TrixX

Doubleyoupee said:


> Not true. I can pull 330W and still be 50c with only 60% fans. The aftermarket air coolers on Vega are actually quite good, especially Sapphire and Powercolor.
> Also 280W sounds a lot closer to what I'm seeing in superposition 1080p extreme at 1100mv than 180W. I will do a superposition bench a those exact settings and make a screenshot.


I guess your ambient temp is lower than mine then...

It was true for me again, though I underestimated power draw.

Just ran Superposition with his settings: HBM at 1000MHz / 1100mv, same as his.

1630MHz core / 1100mv = ~1590MHz actual with vdroop to ~1050mv with ~240W in AB

1750MHz core / 1100mv = ~1640MHz actual with vdroop to ~1050mv with ~255W in AB

To get ~280W I had to run:

1000MHz and 1100mv HBM

1630MHz core / 1157mv = ~1594MHz actual and vdroop to ~1100mv with ~284W in AB

1750MHz core / 1157mv = ~1683MHz actual and vdroop to ~1100mv with ~299W in AB

Hope this highlights how much of a restriction a low core clock can be, and how effective adjusting the mV is for hitting the right power and temperature levels.

For a 280W target with 1000MHz and 1100mv HBM:

1750MHz core / 1135mv = ~1667MHz actual and vdroop to ~1075mv with ~280W in AB


----------



## 1usmus

majestynl said:


> Thanks gup for jumping in and detailed info for those who needed it few pages back!
> 
> 
> 
> 
> Welcome 1usmus  Just installed 2 Vega 56 last week for some friends. Both have Hynix memory!
> 
> - Vega 56 Red Dragon / Wattman custom profile / P6 1050mv / P7 1100mv / HBM 925mhz (max clocks heaven 1580mhz @ 60c)
> - Vega 56 Pulse / Wattman custom profile / P6 1050mv / P7 1100mv / HBM 950mhz (max clocks heaven 1575mhz @ 63c)
> 
> Above give me already a big boost compared with Stock. Both cards have 1200mv as stock voltage for P7
> 
> Because those where not my own cards, i didn't go to far with tweaking. So i haven't loaded the Vega64 bios or any custom PP Table mod.
> I know i can push more out of it, but again. I didnt have the time and i didnt want to guys stressing while gaming if it was getting unstable
> 
> Above numbers are just to help you start with  Good luck with your card. And with your love for tweaking i truly believe you will have some great times with Vega Cards. They have so much power to release with
> some time for tweaking!


Thank you for your feedback!
I see that the topic is very popular, I will have to learn a lot

Will a Vega 64 BIOS fit on the Pulse? I see a difference in the power controller and phases :thinking:

*PP Table mod* - please explain to me what this is


----------



## majestynl

1usmus said:


> Thank you for your feedback!
> I see that the topic is very popular, I will have to learn a lot
> 
> on Pulse bios from Vega 64 fit? I see a difference in the power controller and phases :thinking:
> 
> *PP Table mod* please explain to me what it is


I never flashed a Vega 64 BIOS on a Pulse or Red Dragon. I've only successfully flashed a reference 56 card to the reference 64 BIOS, or a 64 air to the 64 LC BIOS 
I've seen some people manage to flash another AIB V64 BIOS onto one; I can't guarantee it will work. But if you are going to try, don't forget Vega has two BIOSes, selectable with the switch on the card. Always keep one as a backup in case something goes wrong. And again, please investigate before doing anything rash. I've also seen people brick a BIOS while playing with incompatible BIOS versions 

PP Table Mod: _"PowerPlay in registry the driver will give priority over firmware PowerPlay. This is the same as on past cards where we used 'Extend Official Overclocking Limits' in MSI AB. It is a known workaround which does not cause issues to OS/driver. The registry PowerPlay can be modified like we would the VBIOS one"_

With the PP table mod you can extend the power limits etc. The stock maximum is +50%; with the mod I have it at +200%! So no power throttling for me  

Detailed info: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html


----------



## Doubleyoupee

TrixX said:


> I guess your ambient temp is lower than mine then...
> 
> It was true for me again, though I underestimated power draw.
> 
> Just ran Superposition with his settings. 1000MHz and 1100mv 'HBM' settings the same.
> 
> 1630MHz core / 1100mv = ~1590MHz actual with vdroop to ~1050mv with ~240W in AB
> 
> 1750MHz core / 1100mv = ~1640MHz actual with vdroop to ~1050mv with ~255W in AB
> 
> To get ~280W I had to run:
> 
> 1000MHz and 1100mv HBM
> 
> 1630MHz core / 1157mv = ~1594MHz actual and vdroop to ~1100mv with ~284W in AB
> 
> 1750MHz core / 1157mv = ~1683MHz actual and vdroop to ~1100mv with ~299W in AB
> 
> Hope this highlights how much of a restriction a low core clock can be and how effective adjusting the mv is to getting the right power and temp levels.
> 
> For a 280W target with 1000MHz and 1100mv HBM:
> 
> 1750MHz core / 1135mv = ~1667MHz actual and vdroop to ~1075mv with ~280W in AB


Thanks for that edit  Those numbers are roughly what I would expect.

Maybe his extra power usage comes from hot VRMs, or he's reading peak usage instead. Not sure, I'll test it later myself. The extra power usage and VRM temperature probably also explain his slightly lower clocks. 

Or...
doritos93, are you aware of the hibernation/sleep bug? Please restart the PC and try again. My Vega runs 20MHz lower just from using sleep (I have hibernation disabled).

Btw, from your own data you can see 1750MHz/1157mv = ~299W. Now imagine what it would be at 1750MHz/1200mv. This is why I said Ark is probably not really stressing it.


----------



## TrixX

Doubleyoupee said:


> Btw from your own data you can see 1750mhz/1157mv = ~299W. Now imagine what it would be at 1750mhz/1200mv . This is why I said Ark is probably not really stressing it.


Go back and read the original post if you want to continue this nonsense. I never claimed Ark was stressing it as much as a benchmark designed to stress a GPU would...

I thought we'd got past you feeling the need to 1up me all the time.


----------



## majestynl

TrixX said:


> I thought we'd got past you feeling the need to 1up me all the time.


LOL, he loves you too much  See it from the brightside


----------



## Doubleyoupee

TrixX said:


> Go back and read the original post if you want to continue this nonsense. I never claimed Ark was stressing it as much as a benchmark designed to stress a GPU would...
> 
> I thought we'd got past you feeling the need to 1up me all the time.


A lot of games use the same power as Superposition, at least for me.
We were past it, but then you started it again by claiming 180W to another user while saying his numbers were way off, when in fact yours were 60W off. At least this time you corrected yourself.
Don't blame me for pointing it out.



majestynl said:


> LOL, he loves you too much  See it from the brightside





Yes won't mention it again :gotproof:


----------



## gupsterg

Doubleyoupee said:


> Thanks for that write up.
> 
> But what does it mean in practice? It's still not clear to me why this entry is in Wattman in the first place.
> Does changing it have any effect at all?
> I've only seen my Vega not wanting to go below 1100mv (+- 1.037v after vdroop). This is same value I see entry 05 in VDDC LU (1100mv). Maybe that is why?
> My memory voltage was set to 900mv at that time. Now it's set to 1050mv and I don't see any weird behavior anymore. Other than that, I can't see any effects.


It is very easy to get confused about what does what on Vega, especially voltage.

I have somewhat given up explaining the intricacies of it.

In WattMan we have DPM states 0 to 7; PowerTune has more, though. Based on "usage", Vega can determine the best method it wishes to employ, even when we go custom/manual. Read the whitepaper.



> To take advantage of this sort of dynamic power-tuning capability, “Vega” adds a new feature known as active workload identification. The driver software can identify certain workloads with specific needs—such as full-screen gaming, compute, video, or VR—and notify the power management microcontroller that such a workload is active. The microcontroller can then tune the chip’s various domains appropriately to extract the best mix of performance and power consumption.


Now, states 5 to 7 are ACG/AVFS (Advanced Clock-Gating / Adaptive Voltage & Frequency Scaling). When people say the "memory voltage" is a GPU floor voltage, it's a load of "wotsit".

Reference section *Testing of PowerPlay registry mods* > *Setting my WattMan profile in PP reg mod* in OP of this thread.

You will see 3 graphs using MSI AB for same games, read that section carefully and it should answer why manipulation of voltage could mean you do not use it.

All we are really setting are reference points for voltage. IMO the GPU/SMU still retains much more control than on past GPUs. In some ways this is good, in some ways bad.


----------



## TrixX

Doubleyoupee said:


> A lot of games use the same power as superposition. At least for me.
> We were past it but then you again started it by claiming 180W to another user while claiming his are way off, when in fact yours were 60W off. At least this time you corrected yourself.
> Don't blame me for pointing it out.


I was trying to remember numbers from testing over a year ago. Turns out I remembered the wattage for something like 1000mv or 1050mv. I knew 280W for the specific settings he was showing was incorrect, though, so something else was indeed going on; hence the testing and correction. I have no issue with admitting stuff is wrong, as I don't like putting out bad data.

Something I've noticed from a driver or two back: there was an issue where, if I changed settings in OverdriveNTool, it would auto-apply 1200mv to the core, regardless of the actual value set or the profile loaded. Which led to a bunch of benchmarks making zero sense. I mean, getting 1700MHz actual with 980mv on the core was a bit of a red flag 



Doubleyoupee said:


> Yes won't mention it again :gotproof:


Bloody hope so...


----------



## Doubleyoupee

gupsterg said:


> Spoiler
> 
> 
> 
> It is very easy to get confused on VEGA what does what, especially voltage.
> 
> I have somewhat given up explaining the intricacies of it.
> 
> In WattMan we have DPM states 0 to 7; PowerTune has more, though. Based on "usage", Vega can determine the best method it wishes to employ, even when we go custom/manual. Read the whitepaper.
> 
> 
> 
> Now state 5 to 7 are ACG/AVFS (Advanced Clock-Gating/Adaptive Voltage & Frequency Scaling). When people say "memory voltage" is GPU floor voltage it's a load of "wotsit".
> 
> Reference section *Testing of PowerPlay registry mods* > *Setting my WattMan profile in PP reg mod* in OP of this thread.
> 
> You will see 3 graphs using MSI AB for same games, read that section carefully and it should answer why manipulation of voltage could mean you do not use it.
> 
> All we are really setting are reference points for voltage. IMO the GPU/SMU still retains much more control than on past GPUs. In some ways this is good, in some ways bad.


:tiredsmil

Conclusion: Set to 1050mv and never touch it again


----------



## gupsterg

Doubleyoupee said:


> :tiredsmil


You could profile/set up your GPU as you want, run application xyz, and between changing settings on the same application see the GPU do what it wants  .



Doubleyoupee said:


> Conclusion: Set to 1050mv and never touch it again


 .

Welcome to VEGA.


----------



## Doubleyoupee

gupsterg said:


> You could profile/setup your GPU as you want. Run xyz application and between changing settings on same application see GPU do what it wants  .
> 
> 
> 
> .
> 
> Welcome to VEGA.


A static frequency/voltage like on my R9 280X is so much easier....
Vega's algorithm is annoying enough as it is, and then they confuse things even more with this memory voltage entry.

I just don't understand why AMD devs would go out of their way to put this "memory voltage" in Wattman, when it does nothing useful, and then leave it there for 1.5 years.
Let's hope the December update brings something... :thumb:


----------



## Ne01 OnnA

1usmus said:


> Thank you for your feedback!
> I see that the topic is very popular, I will have to learn a lot
> 
> on Pulse bios from Vega 64 fit? I see a difference in the power controller and phases :thinking:
> 
> *PP Table mod* please explain to me what it is


Here is a comprehensive knowledge base,
all in order, to make learning easy 

-> https://forums.guru3d.com/threads/rx-vega-owners-thread-tests-mods-bios-tweaks.416287/

Respect to All involved


----------



## gupsterg

Doubleyoupee said:


> Static frequency/voltage like my R9 280X is so much easier....
> Vega algorithm is already annoying as it is, and then they confuse even more with this memory voltage entry.
> 
> I just don't understand why AMD devs would go all the way to put this "memory voltage" in wattman, when it does nothing useful, and then leave it there for 1.5 year.
> Let's hope december update brings something... :thumb:


I agree they need to make WattMan better. Recently I flashed a Vega FE 8GB VBIOS which, on gaming drivers, gives you access to all DPM voltages. I was like: why is FE getting this and not RX!?

Yeah, I hope the end-of-year driver has nice goodies. Several months back, when I jumped to nVidia after being on AMD for so long, I hated the driver panel and was so glad to be back on AMD.


----------



## greg1184

WannaBeOCer said:


> Are you using a DP cable? Did you set the refresh rate to 100hz in the Windows settings under "Display adapter properties" on the Monitor tab? You'll be able to set the monitor at the higher refresh rate but you'll lose out on G-Sync.


Fixed. Looks like the issue was that I didn't have 100Hz set in Windows. That should buy me time until I decide if I want to get a FreeSync monitor.



By the way folks, the Vega 64 is $399 on Newegg plus 3 free games. I'm tempted to get a second for CrossFire.


----------



## SpecChum

gupsterg said:


> You will see 3 graphs using MSI AB for same games, read that section carefully and it should answer why manipulation of voltage could mean you do not use it.
> 
> All we are really setting are reference points for voltage. IMO the GPU/SMU retains still so much control compared with past GPUs. Some what this is good, some what bad.


Hey Gup, I set 915mV on all 3 last year and not really touched it since - is this bad practice? I get 1483MHz at 180W which I'm fine with for the most part. I am watercooled tho (EKWB) so heat isn't an issue if I do want to go higher.

I have read that new drivers seem to prefer auto voltages now as they seem to be much better at lowering the volts so you can retain higher clocks, although I'll admit I've not really tested this much.


----------



## gupsterg

SpecChum said:


> Hey Gup, I set 915mV on all 3 last year and not really touched it since - is this bad practice? I get 1483MHz at 180W which I'm fine with for the most part. I am watercooled tho (EKWB) so heat isn't an issue if I do want to go higher.


Non issue IMO as you are not having an issue in usage. Lowered is never really gonna harm anything IMO.



SpecChum said:


> I have read that new drivers seem to prefer auto voltages now as they seem to be much better at lowering the volts so you can retain higher clocks, although I'll admit I've not really tested this much.


If I use PP mod (determined ~1yr ago) and don't set Wattman to use the voltages I've set (ie let it use auto), newer drivers seem the same for cases like [email protected], bionic. I may compare gaming when get a chance.


----------



## Doubleyoupee

gupsterg said:


> Non issue IMO as you are not having an issue in usage. Lowered is never really gonna harm anything IMO.


But why would he have an issue in usage? Isn't it state 5 VID like you said? My card only uses state 7 in games. I don't even know when state 5 is supposed to be used.


----------



## Lard

1usmus said:


> Hi guys, I'm with you now, next week I will get Vega 56 pulse. If there are owners of this video card, I will ask to share the settings for the undervolt
> 
> Thanks!


My Sapphire RX Vega 56 Pulse UV at 1540MHz, 0.944V and 161W GPU Chip Power:


----------



## TrixX

Very interesting. I tried to match those settings on my V64 but couldn't get it to UV below 1050mv. I got it down to ~190W usage with the settings attached, though I did get 4644 in Superposition 1080p Extreme...

Couldn't get down to my previously capable undervolt of 950mv, which was very odd. Will have to look into this further...


----------



## Doubleyoupee

Lard said:


> My Sapphire RX Vega 56 Pulse UV at 1540MHz, 0.944V and 161W GPU Chip Power:


Damn, that's quite the UV. Pretty weird it's running only 1540mhz with such a high P7. It kinda looks like it's ignoring P7 altogether (because of too low voltage?) and only using P6; 1600mhz would result in around 1540mhz.
If I set such a high P7 with such a low voltage it will insta-crash. However, 1600/0.944 might work.


----------



## TrixX

Mine works with the high P7 and low voltage settings, though I'm currently hitting a 1050mv barrier for some reason; not sure of the cause, as it used to run 900mv...


----------



## Doubleyoupee

TrixX said:


> Mine works with the high P7 and low voltage settings, though currently hitting a 1050mv barrier for some reason, not sure the cause of that as it used to run 900mv...


Is it crashing or simply not going below?

Because the latter is exactly the issue I was describing when putting my "memory voltage" at 950mv or 900mv. I changed it to 1050mv and my voltage dropped from 1037mv to 9xx instantly.


----------



## Lard

The high P7 state is necessary to adjust the clock for the given voltage, and to get the card stable.
The voltage is dependent on states P5-P7. I'm not sure about the other, lower states, but I think it's bad if some of the lower states have a higher voltage.
Yes, the low voltage prevented the card from boosting up to P7.


----------



## TrixX

Doubleyoupee said:


> Is it crashing or simply not going below?
> 
> Because the latter is exactly the issue I was describing when putting my "memory voltage" at 950mv or 900mv. I changed it to 1050mv and my voltage dropped from 1037mv to 9xx instantly.


No rhyme or reason. Settings below 1100mv will be accepted randomly; I can't find a repeatable pattern. Though I have managed to get Core 1750/950mv with 1100MHz HBM set working.


----------



## TrixX

Here's Superposition with HBCC enabled and 950mv set (~900mv actual during the run): 1100MHz HBM, ~1464MHz core, and ~163W during the run.


----------



## Falkentyne

I know this isn't a driver thread, but does AMD still use "Flip Queue Size=2 or 32 00" by default? They changed this from the default of 3 (or 33 00 in binary) back when they rebranded the Catalyst drivers to Crimson, and that causes stuttering problems in some games. Setting it back manually to 3 (string) or 33 00 (binary) in the "UMD" key (HKLM/System/CCC/control/video) in the registry fixes that.

Anyone notice anything similar?


----------



## 113802

TrixX said:


> Here's Superposition with HBCC enabled and 950mv set ~900mv actual during run. 1100MHz HBM and ~1464MHz core during the run. ~163W during the run too.


Odd it only boosts to 1464Mhz when set to 1750Mhz. I underclocked and undervolted my RX Vega 64 LC to 1662Mhz(-5) with 900mV(P6)/950mV(P7) and it's boosting to 1530Mhz, using 145w-180w depending on the game load.


----------



## TrixX

WannaBeOCer said:


> Odd it only boost to 1464Mhz when set to 1750Mhz. I underclocked and undervolted my RX Vega 64 LC to 1662Mhz(-5) with 900mV(P6)/950mV(P7) and it's boosting to 1530Mhz. Using 145w-180w depending on the game load.


Had 1100MHz on the HBM instead of 950MHz. If it was 950MHz it would have boosted to ~1540MHz on core. When the voltage is that low the Core loses out if the HBM has high MHz.


----------



## Ne01 OnnA

Falkentyne said:


> I know this isn't a driver thread, but does AMD still use "Flip Queue Size=2 or 32 00" by default? They changed this from the default of 3 (or 33 00 in binary) back when they rebranded the Catalyst drivers to Crimson, and that causes stuttering problems in some games. Setting it back manually to 3 (string) or 33 00 (binary) in the "UMD" key (HKLM/System/CCC/control/video) in the registry fixes that.
> 
> Anyone notice anything similar?


===========
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000\UMD]

"Main3D_DEF"="3"
"Main3D"=hex:33,00,00,00

"ShaderCache"=hex:31,00
"AdaptiveAAMethod"=hex:30,00
"HighQualityAF"=hex:31,00
"FlipQueueSize"=hex:31,00

=============

I have it set like this, no problems found.
Create a Tweak.reg file, copy/paste the above, then save & merge it into the registry.
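A note on those hex values: the REG_BINARY data in these UMD entries is just the UTF-16LE encoding of the digit characters, which is why "3" (string) and `33 00` (binary) are interchangeable. A quick Python check (the helper name is my own):

```python
# The hex pairs in the .reg file above are the UTF-16LE bytes of the digit
# characters; "Main3D"=hex:33,00,00,00 is '3' followed by a 00,00 null terminator.
def to_reg_hex(s: str) -> str:
    """Render a string the way .reg files show REG_BINARY data."""
    return ",".join(f"{b:02x}" for b in s.encode("utf-16-le"))

print(to_reg_hex("3"))  # 33,00
print(to_reg_hex("1"))  # 31,00
```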


----------



## 113802

TrixX said:


> WannaBeOCer said:
> 
> 
> 
> Odd it only boost to 1464Mhz when set to 1750Mhz. I underclocked and undervolted my RX Vega 64 LC to 1662Mhz(-5) with 900mV(P6)/950mV(P7) and it's boosting to 1530Mhz. Using 145w-180w depending on the game load.
> 
> 
> 
> Had 1100MHz on the HBM instead of 950MHz. If it was 950MHz it would have boosted to ~1540MHz on core. When the voltage is that low the Core loses out if the HBM has high MHz.

My card is at 1662Mhz/1105Mhz and boosts to 1500-1530Mhz. Undervolt the P6 and report back.

I'm using 900mV/950mV


----------



## SpecChum

WannaBeOCer said:


> My card is at 1662Mhz/1105Mhz and boost to 1500-1530Mhz. Undervolt the P6 and report back.
> 
> I'm using 900mV/950mV


Mine's pretty much in line with this.

If I set core p6 and p7 and "memory" to 915 on p3 I get 1483Mhz and your p7 is 30Mhz faster.

Actually, a quick question, as it's been ages since I read about this: do the memory p3 volts have anything at all to do with HBM stability? I just can't seem to get stable above 1020Mhz.


----------



## TrixX

WannaBeOCer said:


> My card is at 1662Mhz/1105Mhz and boost to 1500-1530Mhz. Undervolt the P6 and report back.
> 
> I'm using 900mV/950mV


Did a test; the only way to get it to hit 1530MHz in Superposition was to run 1000mv. I'd question whether you are actually getting a 950mv undervolt on the core; I had issues with that where it would stick at higher levels.

The other point is I set it at 1000mv, got identical performance to what you state and in Superposition the vdroop resulted in ~950mv actual voltage. With it set to 950mv I get ~900mv in Superposition.
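The pattern here looks like a roughly constant droop between the set and measured voltage; a toy model, with the ~50 mV offset being an assumption read off the numbers in this exchange rather than any spec:

```python
# Toy model of the vdroop TrixX describes: actual load voltage is roughly the
# set voltage minus a constant offset. The 50 mV figure is an assumption taken
# from the numbers quoted in this thread, not a datasheet value.
DROOP_MV = 50

def actual_mv(set_mv: int) -> int:
    """Rough voltage seen by the sensor under load for a given set voltage."""
    return set_mv - DROOP_MV

print(actual_mv(1000))  # ~950 mV under load
print(actual_mv(950))   # ~900 mV under load
```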


----------



## 113802

TrixX said:


> Did a test, only way to get it to hit 1530MHz in Superposition was to run 1000mv. I'd question whether you are actually getting a 950mv undervolt on the Core. I had issues with that where it would stick at higher levels.
> 
> The other point is I set it at 1000mv, got identical performance to what you state and in Superposition the vdroop resulted in ~950mv actual voltage. With it set to 950mv I get ~900mv in Superposition.


Superposition runs at 1500Mhz all the time at my settings. Games are between 1500-1530Mhz and only use 145-180w depending on the game.

1520-1530Mhz on Superposition:


----------



## Doubleyoupee

WannaBeOCer said:


> Superposition runs at 1500Mhz all the time at my settings. Games are between 1500-1530Mhz and only use 145-180 depending on the game.
> 
> 1520-1530Mhz on Superposition: https://www.youtube.com/watch?v=_-RmzKtHTic&t=6s


You can see there is a hard limit of 0.9v. What is your "memory voltage" set at? This looks more like 950mv -> 900-915mv after vdroop.

Anyway, still pretty crazy to see. This is a higher clock than a reference air-cooled card or even some of the bad aftermarket ones would get, at almost half the power usage. How much better Vega would've been with lower voltages out of the box...


----------



## 113802

Doubleyoupee said:


> You can see there is a hard limit of 0.9v. What is your "memory voltage" set at? This looks more like 950mv -> 900-915mv after vdroop.
> 
> Anyway, still pretty crazy to see. This is pretty much a higher clock than a reference air cooled or even some of the bad aftermarket ones would get, at almost 2x lower power usage . How much better Vega would've been with lower voltages out of the box...


Memory voltage is stock, which should be 1.35v; no clue what the actual memory voltage is set to. I can't complain at all: I'm losing 9.5% performance compared to 1750Mhz @ 1200mV, but its max wattage is 150w less, which is around 83.4%, and temps went down by 13C. No clue why I moved the power limit slider. I think I left it at 50% when I was testing the difference between 1750Mhz @ 1200mV vs 1662Mhz @ 900mV.


----------



## michstuff

Can anyone tell me what the tp lines are for on the Vega PCB? Got them damaged but the card seems to run normally..


----------



## TrixX

Looks like you were lucky enough not to cut through the traces fully...



WannaBeOCer said:


> Memory voltage is stock, which should be 1.35v; no clue what the actual memory voltage is set to. I can't complain at all: I'm losing 9.5% performance compared to 1750Mhz @ 1200mV, but its max wattage is 150w less, which is around 83.4%, and temps went down by 13C. No clue why I moved the power limit slider. I think I left it at 50% when I was testing the difference between 1750Mhz @ 1200mV vs 1662Mhz @ 900mV.


A good thing, lets the power limit not get in the way, not that it should with that low voltage.


----------



## michstuff

Some traces are broken. The thing is, are they important, and what are they for?


----------



## Doubleyoupee

WannaBeOCer said:


> Memory voltage is stock, which should be 1.35v; no clue what the actual memory voltage is set to. I can't complain at all: I'm losing 9.5% performance compared to 1750Mhz @ 1200mV, but its max wattage is 150w less, which is around 83.4%, and temps went down by 13C. No clue why I moved the power limit slider. I think I left it at 50% when I was testing the difference between 1750Mhz @ 1200mV vs 1662Mhz @ 900mV.


You can see you have set "memory voltage" entry to 950mv from your own screenshot. This means in your case this is your minimum voltage and your Vega is running 950mv, which you can see, because you are getting 900-915mv after vdroop, which would be the same as setting 950mv as p7. 
Using 900mv would actually result in 850-860mv from the sensor.


----------



## Ne01 OnnA

michstuff said:


> Can anyone tel me what the tp lines are for on the vega pcb? Got them damaged but card seems to run normal..


It seems that it's not Damaged  (if it works then it's OK)
You're lucky....


----------



## michstuff

Ne01 OnnA said:


> michstuff said:
> 
> 
> 
> Can anyone tel me what the tp lines are for on the vega pcb? Got them damaged but card seems to run normal..
> 
> 
> 
> It seems that it's not Damaged  (if it works then it's OK)
> You're lucky....

If I measure with multimeter some of the traces are gone. Just wondering what they are for.


----------



## 113802

Doubleyoupee said:


> You can see you have set "memory voltage" entry to 950mv from your own screenshot. This means in your case this is your minimum voltage and your Vega is running 950mv, which you can see, because you are getting 900-915mv after vdroop, which would be the same as setting 950mv as p7.
> Using 900mv would actually result in 850-860mv from the sensor.


That's stock for a RX Vega 64 LC, I'll test it when I get back this evening.


----------



## Dhoulmagus

michstuff said:


> Can anyone tel me what the tp lines are for on the vega pcb? Got them damaged but card seems to run normal..



If you're testing with a multimeter, find which traces (counting from the left) are dead and tell me. If you look closely, each of those lines is a pair of two traces. Also, can you verify whether the card is working in x8 or x16 mode? The bulk of that gash appears to be over a couple of data traces used in x8/x16 mode.

or you can check the pinout table yourself here:

https://en.wikipedia.org/wiki/PCI_Express


----------



## CynicalUnicorn

michstuff said:


> Can anyone tel me what the tp lines are for on the vega pcb? Got them damaged but card seems to run normal..


Can you check GPU-Z and tell us what the bus interface says? It looks like the bulk of the damage is to data pins, especially to the seventh lane's transmit, so it may be running in PCIe x4 or x1 mode. I still see copper though so I don't think you've cut all the pins (I mean, you couldn't have - some of the damaged pins are to the very first lane yet the GPU still works!) Pinout here, look at side B.

See if there's continuity between the test points below the left HBM stack (i.e. the one nearest the PCIe bracket) and the respective pin on the edge connector at the bottom. I've highlighted the pairs to try below:










I honestly don't know about the others. If you've severed ground pins (the ones with two tiny holes above them) then that shouldn't technically cause problems, but it might because we're dealing with high-frequency signalling. I can't imagine damaging the reserved pin (#30, the one with three holes above it) would cause problems either since its functionality is literally not defined in the PCIe spec. Finally, you may have damaged any of the PRSNT#2 pins (no obvious traces coming off them, likely hooked up to a layer on the other side of the board). These are used to determine the slot's width. My understanding is that each of those wires up directly to the PRSNT#1 pin, located on the back of the GPU and in the small half of the connector nearest the bracket, and the highest-numbered intact PRSNT#2 pin is the only one that matters.

I'm really hoping it's just cosmetic damage and you've ripped off some of the solder mask.


----------



## 113802

Doubleyoupee said:


> You can see you have set "memory voltage" entry to 950mv from your own screenshot. This means in your case this is your minimum voltage and your Vega is running 950mv, which you can see, because you are getting 900-915mv after vdroop, which would be the same as setting 950mv as p7.
> Using 900mv would actually result in 850-860mv from the sensor.


Setting it to 900mV did result in 850-860mV but it also locks the HBM2 speed to 800Mhz and core ran at 1442Mhz using 142w. Setting it to 901mV uses the same power and acts just like 950mV.


----------



## Bartouille

Just finished building my first loop. 

Temps are ridiculous, 20c ambient and I'm getting 26c after a run of firestrike with stock LC bios 50% PL.

Running firestrike right now...

https://www.3dmark.com/3dm/30684909?

Prob need to OC cpu to get more. Clocks were 1792/1175MHz 1250mV.


----------



## Bartouille

https://www.3dmark.com/3dm/30685589?

With CPU at 4.1ghz. RAM is at 3466MHz but timings are crap. Prob 28k+ once that's fine tuned.


----------



## Doubleyoupee

WannaBeOCer said:


> Setting it to 900mV did result in 850-860mV but it also locks the HBM2 speed to 800Mhz and core ran at 1442Mhz using 142w. Setting it to 901mV uses the same power and acts just like 950mV.


Yeah, I had similar weird issues at 900-950mv, so I'm not bothering anymore and am leaving it at 1050mv. That's already -150mv, which is fine for me since I want to reach at least 1600mhz in games.
Hopefully the december update brings some wattman fixes.


@Bartouille
Is that a big ass radiator? That's crazy 


Code:


Core clock 1.872 MHz

Definitely need to crack that 28k though
Why is your GPU mem showing max of 945mhz?


----------



## VicsPC

Bartouille said:


> Just finished building my first loop.
> 
> Temps are ridiculous, 20c ambient and I'm getting 26c after a run of firestrike with stock LC bios 50% PL.
> 
> Running firestrike right now...
> 
> https://www.3dmark.com/3dm/30684909?
> 
> Prob need to OC cpu to get more. Clocks were 1792/1175MHz 1250mV.


That's crazy, I'm surprised a single pump is enough.


----------



## Bartouille

Doubleyoupee said:


> @Bartouille
> Is that a big ass radiator? That's crazy
> 
> 
> Code:
> 
> 
> Core clock 1.872 MHz
> 
> Definitely need to crack that 28k though
> Why is your GPU mem showing max of 945mhz?


Yeah, core clock says 1872MHz because I lock the card to P7, but it's actually 1792MHz (~1750MHz actual). I tried 1802MHz but it froze after a while. 1200MHz HBM works but it artifacts; 1175MHz seems to be working fine.



VicsPC said:


> That's crazy, I'm surprised a single pump is enough.


I'm running it off 24V, should be fine.


----------



## Doubleyoupee

Bartouille said:


> Yeah, core clock says 1872MHz because I lock the card to P7, but it's actually 1792MHz (~1750MHz actual). I tried 1802MHz but it froze after a while. 1200MHz HBM works but it artifacts; 1175MHz seems to be working fine.


What I meant was, the HWiNFO64 log says it didn't go above 945mhz (the HBM2).


----------



## Ne01 OnnA

Here's some news (the December driver will be great):
"KMD_PrimitiveShaderSupport"=dword:00000001

-> https://twitter.com/CatalystMaker/status/1067219511676301312

==
Also, the new drivers are a little slower than 18.10.x & 18.11.1.

Here (I beat 28k in the summer ) 
Now I have my best overall score to date 

Same settings were used (real clock at ~1750-1762Mhz)
-> https://www.3dmark.com/fs/17145833
Tess x8

===


----------



## Doubleyoupee

Ne01 OnnA said:


> -> https://www.3dmark.com/fs/17145833
> 
> ===


_Benchmark tessellation load_ modified by AMD Catalyst driver, result invalid.


----------



## 113802

Bartouille said:


> https://www.3dmark.com/3dm/30685589?
> 
> With CPU at 4.1ghz. RAM is at 3466MHz but timings are crap. Prob 28k+ once that's fine tuned.


You'll be able to break 28k+ with your HBM clock. Along with the overclock you'll need to undervolt. I passed 28k twice with 1175mV along with 1772Mhz on the core. I'll have to re-run it some time because I forgot to turn off the meltdown/spectre patches before running it.

https://www.3dmark.com/fs/16277993 - 28225
https://www.3dmark.com/fs/16287671 - 28132

CPU score with meltdown/spectre disabled

https://www.3dmark.com/fs/17023300


----------



## maddangerous

Hey all!

Just grabbed a Powercolor AXRX Vega 56 (https://www.powercolor.com/product?id=1521537060#spe)

So far it's been great with every game I'm playing, basically maxed, pushing 2560x1080.

Looking for some tips on overclocking, I came from a GTX 970.. I'm working my way through this thread as well.


----------



## poisson21

Can't wait for Zen 2; my 1800x is not really good and not on par with my 2 RX Vega 64s ;'(
https://www.3dmark.com/3dm/30729451?


----------



## majestynl

maddangerous said:


> Hey all!
> 
> Just grabbed a Powercolor AXRX Vega 56 (https://www.powercolor.com/product?id=1521537060#spe)
> 
> So far it's been great with every game I'm playing, basically maxed, pushing 2560x1080.
> 
> Looking for some tips on overclocking, I came from a GTX 970.. I'm working my way through this thread as well.


Welcome! First of all, be happy you won the Samsung memory lottery 
I had bad luck with Hynix on both Red Dragons I have installed.

1) Start pushing your HBM freq. higher. This will give you a boost in perf.
2) Try under-volting P7 till it crashes!
3) Load custom PP tables (see detailed info: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html)
4) And if you want more, you can always try to load the 64 bios for even more perf.

tip: Write down every change/tweak you make and bench the results with different benchmarks, e.g. Superposition / Heaven / TimeSpy / FS / games!
You can easily save your profiles with WattMan or OverdriveNTool etc!
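On the note-keeping tip, even a tiny script beats scattered screenshots; a minimal sketch (the file name and columns are hypothetical choices, pick your own):

```python
# Minimal tuning log: append one CSV row per bench run so results stay
# comparable across tweaks. File name and column names are hypothetical.
import csv
import datetime
import os

LOG = "vega_tuning_log.csv"
FIELDS = ["when", "p7_mhz", "p7_mv", "hbm_mhz", "power_pct", "bench", "score"]

def log_run(p7_mhz, p7_mv, hbm_mhz, power_pct, bench, score, path=LOG):
    """Append one benchmark result, writing a header row on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            w.writeheader()
        w.writerow({"when": datetime.datetime.now().isoformat(timespec="seconds"),
                    "p7_mhz": p7_mhz, "p7_mv": p7_mv, "hbm_mhz": hbm_mhz,
                    "power_pct": power_pct, "bench": bench, "score": score})

# Example entry using numbers from earlier in the thread:
log_run(1632, 1100, 945, 50, "Superposition 1080p Extreme", 4644)
```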


----------



## Doubleyoupee

majestynl said:


> Welcome! First of all be happy you got the Samsung memory from Lottery
> I had the bad luck with Hynix on both Red Dragon's i have installed.
> 
> 1) Start pushing your HBM Freq. higher. This will give you a boost in perf.
> 2) Try under-volting P7 till it crashes!
> 3) Load custom PP tables ( see detailed info: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html)
> 4) And if you want more, you can always try to load the 64 bios for even more perf.
> 
> tip: Write down every change/tweak you made and bench the results with different benchmarks e.g. : Superposition / Heaven / TimeSpy / FS / games!
> You can easily safe your Profiles with Wattmann or Overdrivetool etc!



0) If you are going for performance, set +50% power and increase fan speed target as high as you can tolerate


----------



## greg1184

Do any of you guys crossfire? I have a couple of questions:

1. I don't seem to be getting good framrates on Battlefield 1 despite both cards being used. Is there any particular profile or settings I can do to tweak it?

2. Anyone able to force Crossfire on Battefield 5 (Direct X 11) because my second card is just chilling with one light on in the meter (reference card) when I am playing. Is there a profile I can use?


----------



## poisson21

Crossfire is very tricky. You have to test each setting (AFR, friendly, 1x1) and see if there is a difference; some graphics engines don't like it at all and some don't gain anything from it.
You can also test with the profile of a game that uses the same engine; sometimes it works. http://amdcrossfire.wikia.com/wiki/Crossfire_Game_Compatibility_List (don't think it's updated anymore)


----------



## maddangerous

majestynl said:


> Welcome! First of all be happy you got the Samsung memory from Lottery
> I had the bad luck with Hynix on both Red Dragon's i have installed.
> 
> 1) Start pushing your HBM Freq. higher. This will give you a boost in perf.
> 2) Try under-volting P7 till it crashes!
> 3) Load custom PP tables ( see detailed info: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html)
> 4) And if you want more, you can always try to load the 64 bios for even more perf.
> 
> tip: Write down every change/tweak you made and bench the results with different benchmarks e.g. : Superposition / Heaven / TimeSpy / FS / games!
> You can easily safe your Profiles with Wattmann or Overdrivetool etc!





Doubleyoupee said:


> 0) If you are going for performance, set +50% power and increase fan speed target as high as you can tolerate


Excellent, thanks, I'll get started on reading that thread as well. It seems I can't go higher than 950 MHz on the HBM2 clock, though I believe I've seen it said somewhere that you might need a Vega 64 crossflash to do so? Can't quite remember.

I've set +50% power, and I can hit 1600 MHz core so far, seems to be stable. It handled ~3 hours of Destiny 2 last night with State 7 @ 1600 MHz, and 950 MHz HBM2. I haven't messed with voltages at all. Not sure if I want to stay with WattMan

I do have Superposition installed, and I picked up the 3dmark tools on Steam for $10 on black friday, so I'm going to run through those and post my scores as I go.

Thanks for the tips!


----------



## Doubleyoupee

maddangerous said:


> Excellent, thanks, I'll get started on reading that thread as well. It seems I can't go higher than 950 MHz on the HBM2 clock, though I believe I've seen it said somewhere that you might need a Vega 64 crossflash to do so? Can't quite remember.
> 
> I've set +50% power, and I can hit 1600 MHz core so far, seems to be stable. It handled ~3 hours of Destiny 2 last night with State 7 @ 1600 MHz, and 950 MHz HBM2. I haven't messed with voltages at all. Not sure if I want to stay with WattMan
> 
> I do have Superposition installed, and I picked up the 3dmark tools on Steam for $10 on black friday, so I'm going to run through those and post my scores as I go.
> 
> Thanks for the tips!


Yes because with the vega 64 bios you get 1.35v HBM2 voltage instead of 1.25v. With Samsung memory, it's definitely worth looking into.
Oh and try to undervolt. You can easily get 1650mhz at the same power, or much less power at the same or slightly lower frequency.


----------



## Ne01 OnnA

Driver 18.12.1

No tessellation tweak  (almost 28k)
-> https://www.3dmark.com/3dm/30782097?


With tessellation x8: 28096  New best score & fastest driver so far.
-> https://www.3dmark.com/3dm/30782310?

Settings used (No Tess. Tweak)


----------



## maddangerous

Doubleyoupee said:


> Yes because with the vega 64 bios you get 1.35v HBM2 voltage instead of 1.25v. With Samsung memory, it's definitely worth looking into.
> Oh and try to undervolt. You can easily get 1650mhz at the same power, or much less power at the same or slightly lower frequency.


Ok, I'll think about it. I just got this card, so I want to see what I can do before cross-flashing a bios lol. Undervolting I'm alright with, but I want to find a nice stopping point with speeds before I start undervolting.

Is it worth changing the thermal paste, even with no real heat issue? I'm just curious if the stock paste is good, or if putting something else on would help temps more. I've got a variety kicking around (Prolimatech, Noctua (I think), Thermal Grizzly Kryonaut & Conductonaut).

So, I'm noticing that I don't seem to be boosting to what I'm setting state 7 to. Even when running superposition (which actually didn't even get to state 6 set speeds). Any thoughts on this? Still haven't messed with voltage, I left that at auto. Cranked power target all the way, volts to auto, and State 6 is set to 1652, State 7 is.. 1752.

Thoughts?


----------



## 113802

maddangerous said:


> Ok, I'll think about it. I just got this card, so I want to see what I can do before cross-flashing a bios lol. Undervolting I'm alright with, but I want to find a nice stopping point with speeds before I start undervolting.
> 
> Is it worth changing the thermal paste, even with no real heat issue? I'm just curious if the stock paste is good, or if putting one something else would help temps more. I've got a variety kicking around (prolimatech, Noctua (I think), Thermal Grizzly Kryonaut & Conductonaut)
> 
> So, I'm noticing that I don't seem to be boosting to what I'm setting state 7 to. Even when running superposition (which actually didn't even get to state 6 set speeds). Any thoughts on this? Still haven't messed with voltage, I left that at auto. Cranked power target all the way, volts to auto, and State 6 is set to 1652, State 7 is.. 1752.
> 
> Thoughts?


You'll never see the max PEAK boost on normal workloads. You'll only see it at very low workloads. Undervolt P7 to 1200mV and keep everything else the same. Have fun!


----------



## Doubleyoupee

Ne01 OnnA said:


> Drv. 18.12.1
> 
> No Tesselation Tweak  (almost 28k)
> -> https://www.3dmark.com/3dm/30782097?
> 
> 
> With Tess x8 28096  New Best Score & Fastest Driver so far.
> -> https://www.3dmark.com/3dm/30782310?


Both say the tessellation has been adjusted.




WannaBeOCer said:


> You'll never see the max PEAK boost on normal workloads. You'll only see it at very low workloads. Undervolt P7 to 1200mV and keep everything else the same. Have fun!


1200mv is the default/max voltage for Vega 56/64 air. 

He can try maybe 1150mv, but unless he has a golden chip going much lower might not be stable with 1750mhz P7


----------



## 033Y5

Hi,
anyone know if the EK Vega Strix waterblock fits the Vega 56 Strix, or is anyone using one?
Thanks.

Jay from Asus has confirmed it's the same PCB, so the block should fit.


----------



## OlDirtyBox

Got my Red Devil 56 to 1762 core and 900 mem. The mem crashes at 910, but I've read flashing the 64 bios will help me get higher clocks; the card has Hynix mem. Is it still possible to flash 64? I also want to increase the power limit, but will that help?

Edit: the Vega 64 bios did not work. Now I'm not sure if it's because I had the performance switch on OC, since when I flipped the bios switch the card still wouldn't boot until I flipped the performance switch to normal, and then I was able to boot and flash the stock bios.

Edit 2: Stock bios is flashed but the PC won't boot with the switch in OC mode, only in silent or normal. Just get a black screen.


----------



## Ne01 OnnA

F1 2018 in DX12

-> https://www.computerbase.de/2018-11...-beta-patch-3840-2160-grafikkarten-benchmarks


----------



## cnckane

Hello. Got a Vega 56 and am trying to do some undervolting, but I always crash in-game (BF1 - the whole system in fact: a freeze, then I see a frozen bootlogo with purple artifacts), even though Timespy runs okay.
My UV settings for the last 2 P-states were [email protected] and [email protected] I even tried to run around [email protected] but it still crashes. I don't understand; I see a lot of people recommending even lower voltages for clock speeds similar to mine.


----------



## 99belle99

I can pick up an ex-mining reference model 56 pretty cheap. Is it worth it, i.e. how loud do the reference models get?


----------



## ZealotKi11er

99belle99 said:


> I can pick up a reference model 56 ex miner pretty cheap is it worth it as in how loud do the reference models get?


If you undervolt it and set a custom fan profile its not bad at all.


----------



## sinnedone

cnckane said:


> Hello. Got a Vega56, trying to do some undervolting but I always crash in-game (BF1 - the whole system in fact, freeze then I see a frozen bootlogo with purple artifacts). Even though Timespy runs okay.
> My UV settings for the last 2 P-states were [email protected], and [email protected] I even tried to run around [email protected] but still crashes. I don't understand I see a lot of people recommending even lower voltages for the similar clock speeds I have.



That's not the way that works. Each card works differently; some might not undervolt at all. It takes days and sometimes weeks to dial in an overclock.

Put all the voltages back at stock, turn your fan speed up as loud as you can take it, and set the power limit to +50.

After that's done, drop the voltage from stock 1.20v to 1.19v. Run your system for a couple of days (or a couple of hours), and if there are no crashes, drop down to 1.18v and try again, until you find your stable voltage.
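The procedure above is effectively a linear search downward in 10 mV steps; as a sketch, where `is_stable` stands in for hours or days of real-world testing (the function names and the 900 mV floor are my own assumptions):

```python
# Sketch of the step-down undervolt search described above: start at the stock
# 1200 mV, walk down 10 mV at a time while runs stay stable, and keep the last
# stable value. is_stable(mv) stands in for actual gaming/bench testing.
def find_stable_mv(is_stable, start_mv=1200, step_mv=10, floor_mv=900):
    mv = start_mv
    while mv - step_mv >= floor_mv and is_stable(mv - step_mv):
        mv -= step_mv
    return mv

# Hypothetical card that starts crashing below 1060 mV:
print(find_stable_mv(lambda mv: mv >= 1060))  # 1060
```

In practice you would add a safety margin of a step or two above the crash point, since game workloads stress the card harder than short benchmarks.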


----------



## maddangerous

Doubleyoupee said:


> He can try maybe 1150mv, but unless he has a golden chip going much lower might not be stable with 1750mhz P7


Yeah I'm not too concerned about power usage... but I might undervolt

So if I won't hit P7 under normal usage (which even a synthetic bench didn't get me to), how am I supposed to test it?



OlDirtyBox said:


> got my red devil 56 to 1762 core and 900mem. The mem crashes at 910 but ive read flashing 64 bios will help me get higher clocks, card has hynix mem. Is it still possible to flash 64? I also want to increase power limit but will that help?


What are you using to test/stress your card?


----------



## Maracus

cnckane said:


> Hello. Got a Vega56, trying to do some undervolting but I always crash in-game (BF1 - the whole system in fact, freeze then I see a frozen bootlogo with purple artifacts). Even though Timespy runs okay.
> My UV settings for the last 2 P-states were [email protected], and [email protected] I even tried to run around [email protected] but still crashes. I don't understand I see a lot of people recommending even lower voltages for the similar clock speeds I have.


Yeah, I get the exact same crash (freeze, bootlogo, then hard reset) if I go too low on my Strix 56. I had to up the voltage by 20mv to get it stable.


----------



## OlDirtyBox

maddangerous said:


> What are you using to test/stress your card?


3DMark Firestrike. Scored 24,432 on graphics.


----------



## By-Tor

Wanted to run Firestrike to see how mine matched up to other cards. Not sure how good or bad these are, but I ran it at stock, turbo, and a custom undervolt. I like the graphics scores and would like to get them a little higher...


----------



## Doubleyoupee

Again, there's no use comparing scores if you have invalid scores or scores with tessellation tweaks.


----------



## Ne01 OnnA

@mtrai said that:
According to a new rule at http://hwbot.org/
Tessellation can be set to OFF when testing Radeon GPUs in the 3DMark suite.

UPD: it's correct now


----------



## mtrai

Ne01 OnnA said:


> @mtrai said that:
> According to new rule at http://hwbot.org/
> Tesselation must be set to OFF when testing Radeon GPUs in 3Dmark suite.


Actually, I said it could be turned off. This is GPU-neutral, so it can be set on Nvidia as well. Also, using Windows 10 is disallowed for this and most submissions to HWBot unless certain criteria are met. https://hwbotnews.s3.amazonaws.com/wp-content/windows8-81-10.png

"Allowed optimisations:
- LOD
- Change tessellation level in graphics driver

Disallowed tweaks/cheats:
- Altering benchmark files or the rendering
- Any software or human interaction altering the perceived speed of the benchmark program, tricking it to believe it ran faster
- Lucid Virtu MVP
- Mipmap
- Change benchmark settings"

http://hwbot.org/news/9039_application_52_rules/


----------



## Ne01 OnnA

^^ @mtrai

The new driver 18.12.1 -> is the fastest for me 
Can you share your recent 3D scores? (Only Firestrike base & TimeSpy base)

My PSU is too small for my rig to fully 'stretch its legs', but all in all >28k is a great score (tessellation x8).
Why tess x8? Because I'm using it for my games 
(Yes, everyone remembers the Crysis 3 frogs  Crytek & many other creators state that tess x6 or x8 is all we need for good geometry tessellation; the rest is only good for benchies.)


----------



## mtrai

Ne01 OnnA said:


> ^^ @mtrai
> 
> So this new driver 18.12.1 -> is the fastest for me
> Can You share your recent 3d score? (Only Firestrike Base & TimeSpy base)
> 
> Im PSU short for my RIG to 'stretch legs', but all in all >28k is a great score (Tesselation x8)
> Why Tess x8? beecause im using it for my games
> (Yes everyone remembers Crysis 3 frogs  Crytek & many other Creators states that Tess x6 or x8 is all we need for good geometry tesselation, rest is good for benchies)


Sorry, no, I can't for the most part, due to the drivers I get to use. I share when I can; it just depends on what I am working on at the time.


----------



## OlDirtyBox

Alright, I need some help. I flashed a Vega 64 BIOS and it didn't work, so now I'm trying to flash the backup back, but I keep getting "error rom not erased". I already tried flashing another BIOS I got online, but same thing. Not sure what's going on.
I flashed the BIOS with the switch on OC on my Red Devil, if that matters.


----------



## Synoxia

What's the most stable driver for 1809 at the moment? I've noticed I can't use the same OC settings I've been using on 18.9.3 with the latest driver.


----------



## TrixX

Latest driver seems to be the best so far.


----------



## burning chrome

I picked up a PowerColor Red Dragon Vega 56 recently and have had some hiccups during my first real long-term tests with Forza Horizon 4: it randomly reboots my computer sooner or later, sometimes with a blue screen, but 8/9 times it's just like I pushed the reset button, with Adrenalin telling me that it reset my profile due to an unexpected error. This happened at stock, undervolted, overclocked, and with the silent BIOS enabled.
Temps seem OK, with the core never going above 71 and the hot spot in the 80-90s range.

After reading a bit, I suspect it's my PSU, which is a 750W Seasonic G-series Gold with a single 62-amp 12v rail. I've swapped it for an old Corsair HX 620W for now, with no crash in the last couple of hours running the Vega undervolted and underclocked.

Does anybody have any other ideas for me? Is it safe to push for +50 power and 1500+MHz with 1050-1070mv on an older 620W PSU? The rest of my system is a stock Ryzen 1600 and 3 mechanical HDDs.


----------



## Zero989

Pretty sure I have a golden Vega 64. 200W @ 4k res or 150-180W @ 1080P

I run -200mV or .956v in game @ 1702Mhz (1574-1584Mhz in actual gameplay) @ 99%, at lower utilization it goes above 1600Mhz.

I just use afterburner because it wastes less time.


----------



## TrixX

Zero989 said:


> Pretty sure I have a golden Vega 64. 200W @ 4k res or 150-180W @ 1080P
> 
> I run -200mV or .956v in game @ 1702Mhz (1574-1584Mhz in actual gameplay) @ 99%, at lower utilization it goes above 1600Mhz.
> 
> I just use afterburner because it wastes less time.


Afterburner doesn't have full OC functionality. OverdriveNTool works a lot better and doesn't take any more time than AB. I use AB for data logging, though.

Stats look about right for 1000mV drooping to 956mV. Not sure it's golden; you'd have to test in repeatable benches like Forza Horizon 4, Superposition, and Firestrike. As an example, in Firestrike standard, if you can get a graphics score in the 28000 range then you have a golden GPU. Personally I'm only in the low 27000s.


----------



## Doubleyoupee

Zero989 said:


> Pretty sure I have a golden Vega 64. 200W @ 4k res or 150-180W @ 1080P
> 
> I run -200mV or .956v in game @ 1702Mhz (1574-1584Mhz in actual gameplay) @ 99%, at lower utilization it goes above 1600Mhz.
> 
> I just use afterburner because it wastes less time.


Looks like it's taking the clockspeed down because of the very low voltage.
Mine does 1600mhz @ 1650mhz P7. But yeah, pretty good chip. Mine does 1590-1610mhz actual with 1050mv.
But I agree with TrixX: Vega works in mysterious ways. You can draw 200W in some games but 300W in others at the same frequency, or be stable 24/7 in one game and crash within 5 min in another.


----------



## 113802

Finally had some time to bench with the Spectre/Meltdown patches disabled today. The CPU score is up by 600 points; the GPU score is about the same.

Driver 18.12.1.1 - it's not verified yet

https://www.3dmark.com/fs/17370977


----------



## Zero989

TrixX said:


> Afterburner doesn't have full OC functionality. OverdriveNTool works a lot better and doesn't take more time than AB. I use AB for data logging though.
> 
> Stats look about right for 1000mV drooping to 956mV. Not sure it's golden, would have to test in repeatable benches like Forza Horizon 4, Superposition and Firestrike. As an example in Firestrike standard if you can get a Graphics score in the 28000 range then you have a golden GPU. Personally I'm only in the low 27000's.


I'm not at 28K: https://www.3dmark.com/fs/17236170 - this run was at 1100mV I think; I never went higher. Tessellation untouched.

The highest HBM run I've had is 1180.

I'm using the stock cooler, so what I could do is put the Vega LC BIOS on deeper into the winter and have some fun in the garage.

I switched to AB from WattMan because WattMan is buggy. When I moved to a 4K monitor, the lowest voltage it would do in-game at 4K res was 1.05.



Doubleyoupee said:


> Looks like it's taking the clockspeed down because of the very low voltage.
> Mine does 1600mhz @ 1650mhz P7. But yeah, pretty good chip. Mine does 1590-1610mhz actual with 1050mv.
> But I agree with Trixx, Vega works in mysterious ways, you can have 200W in some games but then 300W in others, same frequency. Or be stable 24/7 in one game then crash within 5min in the other.


Yes, that's exactly how it works. That's why I had to adjust to a 1702MHz OC. I'm running on a 4K monitor; the power use I'm seeing in Overwatch is 200W. It's fine in Doom and Tomb Raider. Haven't tried much else.


----------



## TrixX

Zero989 said:


> I'm not at 28K, https://www.3dmark.com/fs/17236170 - This run is at 1100mV I think, never went higher. tessellation untouched.
> 
> Highest HBM run I've had is 1180.
> 
> I'm using the stock cooler, so what I could do is put the Vega LC bios on during deeper into the winter and have some fun in the garage.
> 
> I switched to AB from WattMan because WattMan is buggy. When I moved to a 4K monitor the lowest voltage it would do was 1.05 in game @ 4K res.


Personally I'd use OverdriveNTool. It gives more control over the different P-states, since the new version of AB with full power-state functionality hasn't been released yet. IMO AB just isn't the right tool to set Vega P-states properly.

26.5K with 1100mV, that's damn good. It depends on your card's max clocks, but with an LC BIOS, 1250mV, and an unlocked power target, you could have a golden sample on your hands


----------



## virpz

Got myself a 64 + waterblock last week.

What I found is that the Vega 64 Liquid Cooled BIOS did my board no good, as every setting was unstable no matter what. Now, running the Vega 64 reference BIOS with PP tables from the Vega LC has enabled me to undervolt and score well in 3DMark.

Got me 4th place on HWBot in Timespy for the Vega 64 GPU: http://hwbot.org/benchmark/3dmark_-...ard_2879&start=-6&cores=1#start=0#interval=20

20,790 in Firestrike - no tessellation adjustment, WHQL driver. https://www.3dmark.com/fs/17335771


----------



## Minotaurtoo

Hey guys, I don't know if this has been discussed a lot or not... no time to read through the whole thread. The question I have is this: have you ever had a card that was tripping OCP even with the power limit down to -25%? I swapped from a Fury X that had a modded BIOS with extra power limits that let it pull upwards of 350 watts under normal load... my PSU could handle that and the CPU being stressed at the same time... but just hitting this Vega 64 with UserBench causes it to trip OCP in the PSU. The funny bit is that the power draw at the wall never exceeds 500 watts for the parts of the test it does pass, whereas the Fury X would top 550.


----------



## ZealotKi11er

Minotaurtoo said:


> Hey guys, I don't know if this has been discussed a lot or not... no time to read through the whole thing, question I have is this.... have you ever had a card that was tripping OCP even when power limit was down to -25%? I swapped from a Fury x that had a modded bios with extra power limits that let it pull upwards of 350 watts under normal load... and my PSU could handle it and the cpu being stressed at the same time... but just hitting this vega 64 with userbench causes it to trip OCP in the psu... funny bit is the power draw at the wall never exceeds 500 watts for what parts of the test it does pass whereas the fury x would top 550.


Well, power meters do not measure spikes.


----------



## Minotaurtoo

ZealotKi11er said:


> Well, power meters do not measure spikes.


I think that's it... just a spike... I put it in my son's PC (old FX rig) that had a better PSU and no issues... right now we are overclocking to the max to see if it holds... I have an 850W PSU on the way (Rosewill Glacier); it was for another person's rig, but it may work for me instead.


----------



## TrixX

Minotaurtoo said:


> I think that's it... just a spike... I put it in my sons pc (old fx rig) that had a better psu and no issues.... right now we are overclocking to the max to see if it holds... I have an 850w psu on the way (rosewill glacier) was for another persons rig, but may work for me instead


There's a known issue with a series of Seasonics that were incapable of working properly with Vega. The issue was on Seasonic's side.


----------



## Minotaurtoo

TrixX said:


> There's a known issue with a series of Seasonic's that were incapable of working properly with Vega. Issue was with Seasonic.



Thanks to all for the responses!


My PSU is a Rosewill Capstone; his is an Ultra LSP. I'm pretty sure if his ran it, then the one I have coming will do so as well... the problem with this one is it's not up to the power rating requirement set for Vega 64. I honestly thought they were just adding fudge room, but apparently when they say 750 watts, they mean it... his is exactly 750 watts... the new one I'm getting Monday is an 850 watt... it was supposed to be for a customer's PC, but honestly it was overkill for them, and I only got one that big because it was on sale directly from Rosewill for less than a 650 watt unit... I'll try it out and see when it gets here... they may get my old 650 watt unit... I'm really surprised, though, that Vega 64 pulls enough power at stock settings to trip OCP just playing lightweight games... but it did... I can overclock this Fury X to the max and not do that.


----------



## TrixX

Minotaurtoo said:


> Thanks to all for the responces!
> 
> 
> My PSU is a rosewill capstone, his is an ultra LSP, I'm pretty sure if his ran it, then the one I have coming will do so as well... the problem with this one is it's not up to the power rating requirement set for vega 64, I honestly thought they were just adding fudge room, but apparently when they say 750 watt, they meant it... his is exactly 750 watt.... the new one I'm getting in monday is an 850 watt... was supposed to be for a customers pc, but honestly it was overkill for them and I only got one that big because it was on sale directly from rosewill for less than a 650 watt unit... I'll try it out and see when it gets here... they may get my old 650 watt unit... I'm really surprised though that vega 64 pulls so much power at stock settings as to trip OCP just playing lightweight games... but it did... I can overclock this fury x to the max and not do that.


I have managed to hit 400W+ registered in AB, which also under-reports the wattage draw of the GPU, though that's with pushing the limits of the voltage curve and core frequency. You can mitigate the power draw by undervolting, though. Most of the time I game with 1100mV on the core; in games that aren't FPS-dependent I can lower it to 960mV.
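As a rough sanity check on why undervolting cuts power so sharply: first-order CMOS dynamic power scales with frequency times voltage squared. A minimal sketch, with illustrative operating points loosely based on the clock/voltage pairs mentioned in this thread (not measured figures):

```python
def dynamic_power_ratio(f1_mhz, v1, f2_mhz, v2):
    """First-order CMOS dynamic power model: P ~ f * V^2.
    Ignores static leakage, so it only approximates real GPU draw."""
    return (f2_mhz * v2 ** 2) / (f1_mhz * v1 ** 2)

# Illustrative points: ~1660MHz @ 1.100V vs ~1480MHz @ 0.960V under load
ratio = dynamic_power_ratio(1660, 1.100, 1480, 0.960)  # roughly 0.68
```

So dropping ~140mV and ~180MHz cuts the modeled dynamic power by about a third, which is consistent with the large wattage swings people report for small clock differences.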


----------



## LicSqualo

*New driver coming soon*

This is interesting: https://wccftech.com/amd-radeon-software-adrenalin-2019-editions-25-new-features-leaked-out-dec-13/


----------



## Ne01 OnnA

LicSqualo said:


> This is interesting: https://wccftech.com/amd-radeon-software-adrenalin-2019-editions-25-new-features-leaked-out-dec-13/




==========================================
AMD/ATI Radeon Software Adrenalin 2019 Edition

As has become tradition, AMD is yet again on the cusp of launching its next big driver package as a holiday gift to all of its Radeon fans.
Adrenalin 2019 Edition is the fifth consecutive installment in AMD’s big yearly GPU driver package releases since it all began back in 2014 with the Omega Driver package.

==========================================
Performance

The only slide with meaningful performance figures compares the new driver to the 17.12.1 driver.
Most of the games in this comparison were not even out when 17.12.1 was available; performance increased by (up to) 15% for the Radeon RX 570.

==========================================
Radeon Advisors

Game Advisor – “provides game settings guidance for a personalized & improved experience”
This tool works in conjunction with Radeon Overlay. What it does is rather simple: it suggests how to improve performance and frame rates by adjusting game settings.
Settings Advisor – this is designed for new users who never used Radeon Settings before. It recommends which features should be enabled (VSR, FreeSync etc.) based on measured performance.
Upgrade Advisor – system analyzer for “minimum and recommended game compatibility”

==========================================
Wattman

Auto-GPU & Memory Overclocking
Auto-GPU undervolting
Temperature-dependent fan curves
Unlocked DPM states for RX Vega
Memory tuning

==========================================
Display Technologies

FreeSync 2 HDR – improved auto-tone mapping for a more detailed experience
Virtual Super Resolution – support for 21:9 displays

==========================================
Radeon Overlay

Display Settings – in-game controls: enhanced sync, FreeSync, per-game settings
Wattman – in-game adjustment for: gpu frequency, gpu voltage, gpu temperature, memory timing, memory frequency, load/save profiles
Refined performance metrics – framerate and frametimes – adjust colors, columns, position, size and transparency

==========================================
AMD Link

QR Code linking
Voice Control – streaming, recording, screenshots, instant replay, min/avg/max fps, gpu temp, gpu clocks, mem clocks, fan speed
Wattman – same settings as in Radeon Overlay
Enhanced Performance Metrics — detailed analytics (basically turning a phone into a benchmarking tool).
ReLive – View, Edit, Stream recordings/screenshots from a mobile device

==========================================
AMD ReLive

In-Game Replay
Scene Editor
GIF Support

==========================================
AMD ReLive streaming

Wireless streaming up to 4K60FPS to mobile devices
Free on Android and IOS
70ms (AMD) vs 125ms (competitor) responsiveness
Works with AMD Link (through Game Explorer tab)
AMD ReLive for VR: support for HTC Vive Focus, Oculus GO, Samsung Gear VR, Google Daydream
— features: 3rd party Bluetooth compatibility, up to 1440x1440p

==========================================

My note from Guru3D:

^^ I wonder how fast it will be?
I have 2 sets of 3DMark test templates for preliminary driver-speed testing.
First: 1732MHz/1150 HBM2, used for hardcore games like Ubi's AC:O etc. 
Second: 1790/1210 with 1137mV/1025mV for >28k scoring

If I get 28,500 or 29k, then yes, this driver will give us roughly +5 up to +10 FPS across the board...
But if we get >29k, then this one will be the fastest driver to date for Vega XTX, +7-15 FPS depending on the gaming scenario.

Note:
I'm always using the same stable OC (or de-OC) so I actually know how fast the driver is.
The 18.9.x branch was fast (28k for the first time)
18.10.x & 18.11.x were a little slower
18.12.x is now the fastest to date (28,094 GPU pts in FS)


----------



## Spacebug

The performance uplift numbers given were compared to a driver from last year so I wouldn't hope for too much there.

My hope is for the additions to Wattman, if they work - memory timings in particular.
It might be a way to get more performance at a given clock, or perhaps to loosen up timings to clock higher.
We'll see tomorrow what we get.. 

I can't seem to get a 1220MHz HBM2 clock stable no matter what voltage I throw at it; 1215 works fine without artifacts.
1230 and up is artifact heaven on the Windows desktop. I suspect I'm at the edge of the stable timing range, or just at the clock wall for the HBM2 chips...


----------



## LicSqualo

In my case, the only stability test that shows artifacts when overclocking the HBM2 is The Witcher 3 (@1440p, Ultra settings).
My limit for a (rock) stable clock (gaming and not only) is 1080 MHz (I moved up to 1083 without artifacts).
With 3DMark and other games (like World of Tanks, for example) I can play at up to 1150 MHz with my Samsung HBM2.
I'm hoping that these new features (such as auto-undervolting) will move Vega's power in the right direction.
I've tested the AIDA GPGPU test with my Vega at stock settings, and I'm happy with my results compared to the Nvidia RTX 2070 and i9 9900K.


----------



## rdr09

Is it still worth it to get a Vega 64 or 56, or get a cheaper RX 570 and wait for 7nm?


----------



## 113802

rdr09 said:


> Is it still worth it to get a Vega 64 or 56 Or get a cheaper RX 570 and wait for 7nm?


A card that trades blows with a GTX 1080 for $400 isn't bad at all, especially with the three bundled games. If you can wait for 7nm, I suggest waiting, but if you need a card now, an RX Vega 64 is a perfect 1440p card.

https://www.newegg.com/Product/Prod...ga 64&cm_re=rx_vega_64-_-14-202-326-_-Product


----------



## rdr09

WannaBeOCer said:


> A card that trades blows with a GTX 1080 for $400 isn't bad at all with three games. If you can wait for 7nm I suggest waiting but if you need a card a RX Vega 64 is a perfect 1440p card.
> 
> https://www.newegg.com/Product/Prod...ga 64&cm_re=rx_vega_64-_-14-202-326-_-Product


Man, I wish I were back there, but South Africa is the closest place to buy from. I may have to suck it up and get the 64. Thank you.


----------



## miklkit

TrixX said:


> There's a known issue with a series of Seasonic's that were incapable of working properly with Vega. Issue was with Seasonic.



Interesting! I had a 5-year-old Seasonic 850 watt that worked fine until I got a Vega 64, and then it had nothing but problems. I almost RMA'd the Vega 64 but bought a new Seasonic 850 instead. While waiting for it I ran a Seasonic 620 and it did OK, but the new 850 has been much better since it arrived.



I currently see a max of around 550 watts from the wall OCed and undervolted.


----------



## neojack

@miklkit I ran my two Vega 64s at the same clocks and undervolt as you did for the last year, and they started artifacting last month.
Check the "GPU hot spot temperature" and "HBM temperature"; mine were north of 90C in game.
Use HWiNFO to get the numbers.

I guess one year of this treatment wore out the cards. Now I only use the "power saving" or "balanced" mode from Wattman, and they've stopped artifacting.

PS: it's not my cooling: custom watercooling + EK full waterblock + EK thermal paste (credit card method), opened and cleaned inside, screws torqued as much as possible.


----------



## miklkit

I'm sorry to hear that. When I pulled this one out of the box and installed it, the temps were like you stated, and I saw 95C on the hot spot! Setting an aggressive fan profile dropped the temps 20C.


----------



## Minotaurtoo

miklkit said:


> Interesting! I had a 5 year old Seasonic 850 watt that worked fine until I got a Vega 64 and it had nothing but problems. Almost RMAd the Vega 64 but did buy a new Seasonic 850. While waiting for it I ran a Seasonic 620 and it did ok, but when the new 850 arrived it has been much better.
> 
> 
> 
> I currently see a max of around 550 watts from the wall OCed and undervolted.


I just borrowed a PSU from someone while waiting on my new one to come in... my old 650W Rosewill just wouldn't cut it... and when I got this PSU in, I saw why: it was pulling north of 650W from the wall at peaks... no doubt it was actually higher, as my meter only reads at 1s intervals... The PSU I'm borrowing is a 750W Ultra LSP... and I actually managed to kill it once going for a max OC on both the CPU and the GPU lol... oops... anyway, on the balanced plan I'm getting near the 18000 mark in Firestrike... when my new PSU arrives I'll try a more aggressive OC and fan profile... so far so good though... I have a fan blowing across the heatsink/backplate to help keep it cool.. https://www.3dmark.com/3dm/31206543?


Is Trixx the best software to OC these with?... not sure it will work with mine anyway. I know Wattman is glitchy at best... any attempt to undervolt results in reduced clocks even if I set higher clocks.


Anyway, thanks again, guys, for your patience with me... Vega 64 was a little more power hungry than I expected... but it also seems to be a better performer than reviews let on.


----------



## TrixX

I'll just repost what I put on Guru3D.com as far as my process for Overclocking Vega:

When Overclocking I tend to eliminate bottlenecks until I have just 1 or 2 variables I can control to get a reliable OC/UV result.

For Vega there are a number of stock variables to determine:

Core Frequency Target
Core Voltage Target
HBM2 Frequency
"HBM2 Voltage" - really it's DPM5 voltage for the GPU P5 state (if I understand Gupsterg's info correctly!).
Power Target
Temp Target
Fan response curves (if air cooled)

Personally I try to max out or unlock many of these variables from being variable and then use one or two of them to manage the control of temp, power, frequency and voltage.

So for my settings, I have a modified PowerPlay table (see the first post, as Onna links to them) which gives a 200% power target instead of just 50%. That removes wattage as a limit, and the card can pull ~400W quite easily. Without a further amp-limit increase it won't get much past 400W. However, 400W is more than enough unless you're aiming for world records 

I'm on water so fans aren't a limitation, but when testing with air I'd run max fan to get the best-case scenario, then run at a usable fan speed for real-world testing later. That's dependent on the card though, so I won't comment on that. Just do what suits your needs here.

For the temp limits there's a hard limit of 85C on the Air BIOS and 70C on the Liquid BIOS. In reality you'll want to keep it as low as possible, as the ACG (Advanced Clock Generator) will throttle at different temp levels. It goes up in roughly 5C increments from about 30C upwards. If you are below 30C you don't have any limitation from this phenomenon AFAIK, though I could be wrong on the threshold there. Having a lower target limit while keeping the max limit at its maximum is the best solution for the air-cooled ones. For custom blocks it's mostly irrelevant.

The HBM2 P3 voltage is quite a simple one: set it to 1050mV and be done with it. Though if going for a max undervolt, lowering it can help (i.e. match it to the undervolt on P7 if you are going lower than 1050mv).

HBM2 Frequency is an absolute value (about the only one!) so set to what you can benchmark test your way through without artifacts. Personally I can run 1100MHz all day, I can bench with 1180MHz though depending on the ambient temp I can get the odd artifact if the HBM gets over 54C during a run. Some can do 1200MHz others can't get past 950MHz. It's a bit of a lottery.

Core Frequency Target is the most confusing one for most. Many of those I've seen start with Vega OC think it's a static value and are disappointed that the clocks don't match the set value. P6 is mostly useless so I tend to just disable P0 to P6 for benchmarking and focus on P7. P7 I now normally leave at 1750 or 1752MHz for my system. It has no issues with that and due to it being a target and not an absolute it doesn't have an issue with lower voltage than the set clock target requires.

Core Voltage. This is my go-to variable. I can control all the power, temp, voltage and frequency ranges with just this variable after unlocking/setting the rest. With the core set to 1750MHz I can set the voltage to 950mv to get a 1544MHz idle clock speed and likely ~1480MHz clock speed under load. Setting the P7 mv to 1100mv, I get a 1719MHz clock speed at idle and ~1660MHz under load. Of course the max for the Air BIOS is 1200mv, which nets me 1786MHz idle and ~1720MHz under load, and with the Liquid BIOS it's 1250mv, which bumps it to 1801MHz idle and ~1740MHz under load.

Wattage depends heavily on voltage, so you have direct control over power via this method. There's not a huge difference in performance from 1600MHz under load through to 1740MHz, just a few FPS in it, but the difference in wattage can be up to 200W, literally double, for just a 140MHz boost in core speed. So finding the best solution for your rig is the way to go, along with benchmark settings that also make good daily settings. It's always fun to go for the top spot, but getting the best perf/W ratio is also quite useful.
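The "targets vs. absolutes" distinction above can be made concrete with a toy sketch. Everything here is hypothetical: the dict field names and the linear droop coefficient are invented for illustration and do not correspond to any real tool's registry or PowerPlay-table format.

```python
# Hypothetical profile layout mirroring the variables discussed above;
# field names are made up for illustration only.
vega_profile = {
    "core": {"P7": {"mhz": 1752, "mv": 1100}},  # target, not an absolute
    "hbm2": {"P3": {"mhz": 1100, "mv": 1050}},  # HBM2 frequency is absolute
    "power_target_pct": 200,                    # modded PPT instead of +50%
    "temp_target_c": 60,
}

def estimated_load_clock(target_mhz, mv, droop_mhz_per_mv=0.6, ref_mv=1200):
    """Toy linear model of how the delivered clock falls below the P7
    target as voltage is reduced (coefficients are illustrative only)."""
    return target_mhz - droop_mhz_per_mv * (ref_mv - mv)
```

With these made-up numbers, `estimated_load_clock(1752, 1100)` lands around the 1690s, echoing the pattern described above: the P7 value is a ceiling the card approaches, and voltage is the knob that decides how close it gets.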


----------



## Doubleyoupee

LicSqualo said:


> For my case the only stability test to show artifact when overclock the HBM2 is The Witcher3 (@1440P Ultra settings).
> My limit for (rock) stable (gaming and not only) clock is 1080 MHz (I moved up to 1083 without artifacts).
> With 3DMark and others games (like World of Tanks as example) i can play up to 1150 MHz with my Samsung HBM2.
> I'm hoping that these new features (as auto-undervolting) will move the Vega power in the right direction.
> I've tested the AIDA GPGPU test with my Vega stock settings and I'm happy with my results compared to the Nvidia 2070 RTX and i9 9900K .


That's weird; in Witcher 3 I can go very high, 1150mhz+ no problem.
In Tombraider Dox I get green spots at 1100mhz.


----------



## THUMPer1

New drivers out: 18.12.2.
OverdriveNTool doesn't control fans.


----------



## Worldwin

Some quick notes from my testing of the new drivers (18.12.2):
1. Skip Auto-overclock. As expected it performs worse than stock.
2. Auto-undervolt is nice as it leads to a small bump in performance. Stock freq~1550mhz whereas UV sets it to around 1570-1590. Tested in heaven for this and the above.
3. Auto OC memory seems on point at least for me. It set the HBM2 freq @ 1053mhz which is damn close to the 1050mhz I found as max stable. I lost the lottery on this one hard.
4. Memory timing seems to only let you set the presets: automatic, memory timing level 1, or memory timing level 2. Too bad you can't change it in the BIOS like the first two generations of GCN.
Based on quick testing in Firestrike Extreme, I have observed no noticeable performance change between the three presets.
Testing was done as a single run.
Stock V/F curve:
Automatic Graphics Score(GS):10981
Memory Timing Level 1 GS:10966
Memory Timing Level 2 GS:10974
Custom V/F curve:
Note: Core freq down 50-60mhz from ~1550mhz but memory up from 945mhz to 1050mhz.
Automatic Graphics Score(GS):11258
Memory Timing Level 1 GS:11271
Memory Timing Level 2 GS:11260
Tested on Nitro+ V64 SR with two AP00 strapped on the heatsink @ 1600RPM.
5. It seems it's finally possible to use VSR and FreeSync at the same time now. Will test if tearing occurs. Tested: it works now, and no tearing.
6. Sampling interval now is at 0.25, down from 1. I am guessing these are in seconds.
7. The temp target for the fan is now set to 70C, down from 95C.
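For what it's worth, the spread between those three memory-timing scores really is inside run-to-run noise; a quick sketch of the arithmetic (scores copied from the post above):

```python
# Firestrike Extreme graphics scores from the post above
stock = {"auto": 10981, "level1": 10966, "level2": 10974}
custom = {"auto": 11258, "level1": 11271, "level2": 11260}

def spread_pct(scores):
    """Best-to-worst spread as a percentage of the worst score."""
    worst, best = min(scores.values()), max(scores.values())
    return 100 * (best - worst) / worst

# Both spreads come out under 0.2%, i.e. within single-run variance.
```

Single 3DMark runs commonly vary by more than that between back-to-back passes, so the presets look like a wash at this HBM2 clock.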


----------



## Ernwild108

I'll just drop my FS score on the newest drivers here.

https://www.3dmark.com/3dm/31225866


----------



## Spacebug

I got tiny (margin-of-error tiny) but consistently better scores in benchmarks from memory timing level 1 than level 2.
Neither of them affected my max stable HBM clock.

Pretty underwhelming, though I was probably too optimistic about the level of timing tuning available...


----------



## 113802

Ernwild108 said:


> I just drop here my score in FS on newest drivers.
> 
> https://www.3dmark.com/3dm/31225866


Sweet! What was the highest your HBM could overclock before the update? What are your memory timings now with the new driver?


----------



## Ernwild108

WannaBeOCer said:


> Sweet! What was the highest your HBM could overclock before the update? What are your memory timings now with the new driver?


Before the update, in a normal environment, 1180 in FS; the highest overall was 1215, but that was when the ambient temp was like 5-7C.
I used mem timing 1 because it was slightly better than 2 or auto (around 50-70 points max).


----------



## miklkit

Minotaurtoo said:


> I just borrowed a PSU from someone while waiting on my new one to come in... my old 650w rosewill just wouldn't cut it... and when I got this psu in, I saw why... was pulling north of 650w from the wall at peaks... no doubt it was actually higher as my meter only reads on 1s intervals... The psu I'm borrowing is a 750w ultra lsp... and I actually managed to kill it once going for a max OC on both the cpu and the gpu lol... oops... anyway, on balanced plan I'm getting near the 18000 mark in firestrike... when my new psu arrives I'll try a more aggressive OC and fan profile... so far so good though... I have a fan blowing across the heatsink/backplate to help keep it cool.. https://www.3dmark.com/3dm/31206543?
> 
> 
> is trix the best software to OC these with?... not sure it will work with mine anyway, I know wattman is glitchy at best... any attempts to undervolt result in reduced clocks even if I set higher clocks.
> 
> 
> Anyway, thanks again guys for your patience with me... Vega 64 was a little more power hungry than I expected... but also seems to be a better performer than reviews let on.



Trixx is a Sapphire utility and I have a Sapphire Vega 64. For me Wattman did whatever it felt like no matter what I input. Afterburner didn't work too well either. The experts here use a tool that seems to work well but is too geeky for me. Perhaps it will be ok for you. So I settled on Trixx as it works and is easy to use. As always, YMMV.


----------



## LicSqualo

Hi guys, does anyone know if there's an updated release of OverdriveNTool for these new drivers?


----------



## virpz

Haven't done any extensive testing, just played some BF1, and it felt like I was getting worse fps/watt with the new drivers; plus the overlay caused really bad stuttering in BF1.


----------



## Minotaurtoo

haven't installed the new drivers yet so this is still on the old ones, but my new psu came in today and it had enough juice that I could turn this card loose a bit... nothing special, just power limit to max and a tiny OC... still much better than it was doing on the old psu... https://www.3dmark.com/3dm/31236744?


----------



## Doubleyoupee

DDU'ed to the new driver yesterday, but already rolled back to the old drivers. 

Radeon settings is super buggy. Crashed twice already. 
Advisor says I need to go to native res, when I'm already on it.
Overlay doesn't work.


Basically the only thing that was a + for me is the fan curve control. That's it.


----------



## VicsPC

Doubleyoupee said:


> DDU'ed to the new driver yesterday, but already rolled back to the old drivers.
> 
> Radeon settings is super buggy. Crashed twice already.
> Advisor says I need to go to native res, when I'm already on it.
> Overlay doesn't work.
> 
> 
> Basically the only thing that was a + for me is the fan curve control. That's it.


I have zero issues with mine, i turned off advisor cuz my rig is already beast. Overlay works fine even in game, i tried memory oc while in game and my memory would stick at 800mhz lol, set it manually at 1050 (first time actually trying to OC memory on my rig) and worked no problem even with ab running.


----------



## Doubleyoupee

VicsPC said:


> I have zero issues with mine, i turned off advisor cuz my rig is already beast. Overlay works fine even in game, i tried memory oc while in game and my memory would stick at 800mhz lol, set it manually at 1050 (first time actually trying to OC memory on my rig) and worked no problem even with ab running.


Where do you turn it off? I couldn't find it.
Alt+R does nothing for me. Even if I change the hotkey, it still doesn't work.


----------



## VicsPC

Doubleyoupee said:


> Where do you turn it off? I couldn't find it.
> Alt+R does nothing for me. If if change it, still doesn't work.


You should try using the AMD uninstaller and then reinstall. ALT+R works fine for me, even on the desktop. Under preferences you can turn off the upgrade advisor. There's an option to turn the overlay on or off, so yours is either off or needs a clean install.


----------



## Doubleyoupee

VicsPC said:


> You should try using the AMD uninstaller and then reinstall, ALT+R works fine for me even in desktop. Under preferences you can turn off upgrade advisor. There;s an option for show overlay on or off so yours is either off or needs a clean install.


Right, I turned that off, but it's still available in the top right and showing me the problems. I already DDU'ed the first time, so I don't see why using the uninstaller would be better, but I'll try it anyway.


----------



## Ne01 OnnA

Here 28k again 
Tess x8, rest as it should be.
GPU ~1750-1760MHz

-> https://www.3dmark.com/fs/17423737

==


----------



## Zero989

broke afterburner
settings in wattman dont stick as per usual (garbage for convenience)
boosted to 1555Mhz instead of 1590Mhz @ same settings
same bug with 4K display where voltage only goes as low as 1.05v which funny enough is P5 voltage, even though I'll be at P7 clocks 
crashed a few times with inexplicable errors at the time


uninstalled + reformatted


----------



## Ernwild108

Ne01 OnnA said:


> Here 28k again
> Tess x8, rest as it should be.
> GPU ~1750-1760MHz
> 
> -> https://www.3dmark.com/fs/17423737
> 
> ==


Strange, 1750-60 MHz core with tess x8 only gives you 28k?

Looking at your FPS from GT1 and GT2, it looks like 1660-1665 MHz under load on GT1 and something around 1700-1705 on GT2, at least compared to mine with HBM at 1190. My GPU can't get past 1700 MHz on GT1 and 1725 on GT2, yet gives a better mark with a lower HBM clock.


----------



## 113802

Ernwild108 said:


> Strange, 1750-60 MHz core with tess x8 only gives you 28k?
> 
> Looking at your FPS from GT1 and GT2 looks like 1660-1665 MHz under load on GT1 and on GT2 is something around 1700-1705 at least for me with HBM on 1190. My GPU can't get past 1700 MHz on GT1 and 1725 on GT2 yet gives better mark with lower HBM clock.


Yeah that score is strange for his settings. 1730Mhz with 1140HBM gets me 28.2k without any changes.

https://www.3dmark.com/fs/17370977


----------



## Ne01 OnnA

PSU? 
IMO if i want better 3Dmark  i need better PSU
But it's for Gaming so i don't want to invest into another PSU
HDR 1440p 144Hz or M.2 is better for gaming 

===
Or i need to bump V a notch higher.
I will test someday.

Im always using the same Values (to see if driver is fast)

PS. Lads can You test with my V? 1.131v/1050

===
UPD. Yes You're right 

-> https://www.3dmark.com/3dm/31281258?

And my New Best (with 1.150V 1790/1200HBM2 Tess x8)

-> https://www.3dmark.com/3dm/31281563?
===


----------



## Shadowarez

Is there a Vega FE version of this yet? I've been on 18.9.2 as Radeon shows that as the latest. I tried just installing the new driver but it fails to work; the settings don't show up.

So I'm installing the Pro driver and choosing the option to install a 2nd set of drivers. Restart, check for update, switch to game mode, and the only option is 18.9.2.


----------



## Ernwild108

Ne01 OnnA said:


> PSU?
> IMO if i want better 3Dmark  i need better PSU
> But it's for Gaming so i don't want to invest into another PSU
> HDR 1440p 144Hz or M.2 is better for gaming
> 
> ===
> Or i need to bump V a notch higher.
> I will test someday.
> 
> Im always using the same Values (to see if driver is fast)
> 
> PS. Lads can You test with my V? 1.131v/1050
> 
> ===
> UPD. Yes You're right
> 
> -> https://www.3dmark.com/3dm/31281258?
> 
> And my New Best (with 1.150V 1790/1200HBM2 Tess x8)
> 
> -> https://www.3dmark.com/3dm/31281563?
> ===



There:
My GPU will insta-crash at 1790 with 1131 mV, but even ~50 MHz lower it gives a better result. Your PSU should not be a problem; with volts like that Vega draws ~300 W at peak. Right now I have an RM850i, but on a CX750M everything was the same below 1.2 V.

PS. I am more concerned about your combined score. A Ryzen 7 at 4GHz should give you around 9-10k.
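The wall-draw arithmetic is easy to sketch. The ~300 W GPU peak is from the measurements above; the CPU, rest-of-system, and PSU efficiency figures below are assumptions for illustration only:

```python
# Rough PSU headroom check for an undervolted Vega 64. Only the ~300 W
# GPU peak comes from the thread; the CPU, rest-of-system, and
# efficiency numbers are assumptions for illustration.
def wall_draw(gpu_w, cpu_w, rest_w, psu_efficiency):
    """DC load on the PSU divided by efficiency = draw at the wall."""
    dc_load = gpu_w + cpu_w + rest_w
    return dc_load / psu_efficiency

dc = 300 + 150 + 75  # GPU peak + OC'd CPU + drives/fans/board (assumed)
wall = wall_draw(300, 150, 75, 0.90)  # ~90% efficient Gold unit (assumed)

print(f"DC load: {dc} W, at the wall: {wall:.0f} W")
# A 750 W unit still has headroom for the DC load; the >650 W wall
# readings reported earlier in the thread are consistent with this once
# PSU efficiency losses are included.
```

This also shows why wall-meter readings overstate what the PSU actually has to deliver.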


----------



## Ne01 OnnA

The PSU is not enough for Vega + Zen OC (I mean amperage, not total wattage)
I know 

Just wait for my benches with ZEN 3xxx (8/16 or 10/20) with my hefty RAM 
29.5k easy IMO

==
Power supply


----------



## Spacebug

If you guys want high scores, try to glitch out the HBCC  

I usually restart pc between switching hbcc on or off, one time I didn't restart after switching hbcc on, glitched it out for some reason, firestrike ran in black and white and spat out a tad over 30k gpu score 

More effective than turning off tessellation


----------



## Dhoulmagus

So I'm trying out the auto overclocking features since I have nothing to lose: my PowerColor reference V64 has never managed to hit 1700MHz @ 1200mV. Auto overclock GPU seems to start by trying around 1770MHz, all monitors turn black, the driver recovers, but the Radeon software is no longer doing anything. How useful ;P. It's also crashing and vanishing about 50x more than usual.

To make matters even worse, the fan profile "curve" doesn't seem to be a curve at all; my GPU keeps rapidly jumping between low RPM and high RPM, which is obnoxiously audible. If I try to instead use Afterburner, that curve no longer works. Blah.
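For what it's worth, rapid low/high RPM jumping is the classic symptom of a fan controller stepping straight along its curve points with no hysteresis. A minimal sketch of the idea (the curve points and deadband are hypothetical, not AMD's actual controller):

```python
import bisect

# Hypothetical fan curve: (temp °C, fan %) points, linearly interpolated.
CURVE = [(40, 20), (60, 35), (75, 60), (85, 100)]

def fan_percent(temp_c):
    """Linear interpolation between curve points, clamped at the ends."""
    temps = [t for t, _ in CURVE]
    if temp_c <= temps[0]:
        return CURVE[0][1]
    if temp_c >= temps[-1]:
        return CURVE[-1][1]
    i = bisect.bisect_right(temps, temp_c)
    (t0, p0), (t1, p1) = CURVE[i - 1], CURVE[i]
    return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

def fan_with_hysteresis(temp_c, last_percent, deadband=5.0):
    """Only move the fan when the target is more than `deadband` away
    from the current speed; this is what stops the audible ping-pong."""
    target = fan_percent(temp_c)
    if abs(target - last_percent) <= deadband:
        return last_percent
    return target
```

With a deadband, small temperature oscillations around a curve point leave the fan speed alone instead of bouncing it every polling interval.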


----------



## GraveNoX

I have a Sapphire vega 56 pulse and 1700x at 3.9ghz (all cores).
FS score https://i.imgur.com/SOOBNPw.png
FS Ultra score https://i.imgur.com/bMLvrAT.png
The card is not modded. I only used Wattman; it was running at 975 HBM2 (from 800 stock), +3.5% GPU, +50% power, undervolted to P6 at 1020 and P7 at 1100. Tried 1000MHz on the HBM2 but 3DMark crashed; 975 is stable in everything I played.
Are the scores good or not? Is it worth modding it? I don't have experience with such things and I don't want to break it.


----------



## Elloquin

So I've been toying with the new driver and came up with this... sorry, not sure why the pic posts sideways. It looks like the card boosted to 1865.0 MHz. This was while playing Call of Duty: Advanced Warfare in 4K. It didn't crash. Is that a reporting error? First time using HWiNFO64.


----------



## VicsPC

Elloquin said:


> So I've been toying with the new driver and came up with this... Sorry not sure why the pic posts sideways. It looks like the card boosted to 1865.0 MHz. This was while playing call of duty advanced warfare in 4k. It didn't crash. Is that a reporting error? First time using hardware info 64.


I believe that is a reporting error, yeah; it used to do that when the card first came out as well. I get reporting errors all the time though, even just using Afterburner; sometimes I'll have my memory tell me it's at 220°C lol. I stopped caring and just watch in game to see what it hits at max and go with that.


----------



## AmcieK

Guys, I have a little problem with FreeSync. The monitor is an AGON AG322QC4. When I play BF the game works very smoothly, but after some time stuttering and flickering appear and I have to go into the driver settings and turn off FreeSync, or play in windowed mode. Any idea where the problem is? I tried another driver and bought a new cable, but it's the same. I tried the monitor on another PC and there's no problem. Can I ask here, or should I start a topic on the monitor forum?


----------



## Ne01 OnnA

Guys we have new OverdriveN Tool v0.2.8 beta1 !

-> https://www.dropbox.com/s/ibxcdlrgkm82ccg/OverdriveNTool 0.2.8beta1.7z?dl=1

-> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/page-22#post-5619672

===
Kudos to #tede# from Our Guru3D
-> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/

Please tick - Run As Admin (This tool needs it)


----------



## LicSqualo

Ne01 OnnA said:


> Guys we have new OverdriveN Tool v0.2.8 beta1 !


A big THANK YOU!


----------



## mtrai

Ne01 OnnA said:


> Guys we have new OverdriveN Tool v0.2.8 beta1 !
> 
> -> https://www.dropbox.com/s/ibxcdlrgkm82ccg/OverdriveNTool 0.2.8beta1.7z?dl=1
> 
> -> https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/page-22#post-5619672


Been testing out the new OverdriveNTool v0.2.8 beta 1 today and it all works. However, one thing I am not certain of is whether the zero RPM on/off is working. I use my own PowerPlay table with my own fan settings, plus an LC bios. I have modified my registry to show the zero RPM setting with the LC bios, so my fans never shut off. But everything else seems to be working.


----------



## 99belle99

Well finally picked up a 56. Hynix memory though. Can reach ~1600 on core and 950 on HBM.


----------



## Ne01 OnnA

For everyone:

==


----------



## Jackalito

Happy holidays, everyone!

So, I finally got my hands on the Vega 56 Pulse from Sapphire, and I was lucky enough to get a card with Samsung HBM2 memory. I can raise the memory frequency to 950MHz using WattMan. The thing is, if I'm not mistaken, no vBIOS is safe or even compatible with my card as it doesn't use the reference PCB, but the nano one instead. What can I do if I wanna OC the memory higher?


I would appreciate all the help I could get, as I'm such a noob when it comes to Vega.
Thanks in advance, guys!


----------



## TrixX

Jackalito said:


> Happy holidays, everyone!
> 
> So, I finally got my hands on the Vega 56 Pulse from Sapphire, and I was lucky enough to get a card with Samsung HBM2 memory. I can raise the memory frequency to 950MHz using WattMan. The thing is, if I'm not mistaken, no vBIOS is safe or even compatible with my card as it doesn't use the reference PCB, but the nano one instead. What can I do if I wanna OC the memory higher?
> 
> 
> I would appreciate all the help I could get, as I'm such a noob when it comes to Vega.
> Thanks in advance, guys!


Already replied to you on Guru3D, seeing as you have Samsung memory looks like there is a BIOS you can update to. Details are in the Guru3D thread.


----------



## Shadowarez

Is there a way to make OverdriveNTool work with Vega FE cards? All it does is either reset the values I put in or flat out spit errors.


----------



## miklkit

I decided to try OverDrivenTool again and dled it from the link above. When I clicked on the .exe it gave an error and I had to close it. What does it mean?


----------



## Jackalito

TrixX said:


> Already replied to you on Guru3D, seeing as you have Samsung memory looks like there is a BIOS you can update to. Details are in the Guru3D thread.



Thanks, man! I've replied to you there asking another question


----------



## Ne01 OnnA

miklkit said:


> I decided to try OverDrivenTool again and dled it from the link above. When I clicked on the .exe it gave an error and I had to close it. What does it mean?


Please tick - Run As Admin (This tool needs it) 
Also tick Unblock (right click)


----------



## miklkit

Run as administrator did it. Never found "unblock". Thanks!


What do "acoustic limit" and "power target" mean? I don't care about fan noise; the damn thing throttling down and delivering 20fps for no known reason is the issue I'm trying to correct.


----------



## ZealotKi11er

Here is the best I can do with my Vega 64 Liquid. I have not done any reg tweak or driver tweak. 

https://www.3dmark.com/3dm/31724547?


----------



## Ernwild108

At what voltage did you score this 27.5k? From my observations anything over 1170 mV gives a worse score in FS because Vega boosts lower. If you can, try 1170 mV + max clock; I think your card can probably hit 28k+.


----------



## ZealotKi11er

Ernwild108 said:


> On what voltage did you score this 27.5k? From my observations anything over 1170 mV gives worse score in FS because Vega boost lower. If u can try 1170 mV + max clock it would be nice, cuz I think your card can probably hit 28+


I ran the benchmark and the voltage was under 1.2V 95% of the time. I run stock voltage for all DPMs apart from DPM7, which is 1.2V @ 1775MHz. The GPU was boosting to ~1752 MHz.


----------



## Ernwild108

Are you sure it was boosting to 1752? From graphics test 1 I can see that it was boosting to around 1680-1700. You have 136.12 FPS in that test while I have 136.85 with the GPU boosting to 1700 at peaks. In GPU test 2 it is possible that it peaked at 1752. Nevertheless, I would lower the DPM7 voltage to 1.16-1.18V and check if the score from test 1 rises. I always have problems with test 1; higher voltage always lowers clocks there, while test 2 is fine.
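The reason a lower DPM7 voltage can raise the sustained clock is that dynamic power scales roughly with f·V², and the card throttles to stay inside its power limit. A rough sketch with illustrative constants (not measured Vega data):

```python
# Why a lower top-state voltage can mean a *higher* sustained clock:
# dynamic power scales roughly with f * V^2, and the card throttles
# clocks to stay inside its power limit. Numbers are illustrative.
def dynamic_power(freq_mhz, volts, k=1.0):
    return k * freq_mhz * volts ** 2

def sustained_freq(power_limit, volts, k=1.0):
    """Highest frequency that fits in the power limit at this voltage."""
    return power_limit / (k * volts ** 2)

# Pick the limit so that 1700 MHz @ 1.20 V sits exactly at it:
limit = dynamic_power(1700, 1.20)

print(f"{sustained_freq(limit, 1.20):.0f} MHz @ 1.20 V")  # 1700 MHz
print(f"{sustained_freq(limit, 1.17):.0f} MHz @ 1.17 V")  # ~1788 MHz
```

So shaving 30 mV buys roughly 90 MHz of sustained boost at the same power limit in this toy model, which matches the pattern people report from undervolting.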


----------



## Kyozon

Is there any particular reason why, after flashing the 64 LC BIOS, my PowerPlay mods for 100%+ power limits seem to not be doing anything? WattMan and OverdriveNTool report +50% max. Thanks.


----------



## dagget3450

Kyozon said:


> Is there any particular reason as to why, after flashing the 64 LC BIOS, my PowerPlay Mods for 100%+ Power Limits seems to not be doing anything, as WattMan and OverdriveNTool reports +50% Max. Thanks.


Are you still sporting Vega FE? or did you pickup a vega64?


----------



## 113802

Ernwild108 said:


> Are you sure it was boosting to 1752? From graphics test 1 I can see that it was boosting to around 1680-1700. You have 136.12 FPS in this test while I have 136.85 with GPU boosting to 1700 on peaks. In GPU test 2 it is possible that it peaked to 1752. Nevertheless I would lower voltage DPM7 to 1.16-1.18v and check if score from test 1 will rise. I always have problems with test 1, higher voltage always lower clocks while test 2 is fine.


I encountered an odd bug with my card. If I flash the FE 8GB bios and then go back to the stock Vega 64 LC bios, my card gets stuck in P7 and all games and benchmarks run between 1746MHz-1754MHz. I was able to replicate it three times when flashing back to the stock bios from the FE bios. After a reboot the card no longer runs in P7.

Stuck in P7 1200mV +50%: https://www.3dmark.com/3dm/31501430

Normal result P7 1200mV +50%: https://www.3dmark.com/fs/17383255

Highest score: https://www.3dmark.com/fs/17370977


----------



## ZealotKi11er

There is 28K. I think I need faster CPU to get more. 

https://www.3dmark.com/fs/17626136


----------



## Minotaurtoo

Anyone have an issue with a 4k tv/monitor where the pc boots too fast and you end up getting a no signal issue? Mine, if I turn on the pc and let it wake the monitor, it triggers the monitor when the splash screen would be showing, but then will boot into windows before the monitor is ready and then I get a no signal message... if I turn on the monitor myself and let it wake fully, then turn on the pc this doesn't happen... so far that is the only solution to the issue that I've found... nothing major just annoying to add 30s to my startup time just because the monitor is too slow.... on my old fury x card the pc would be fully booted into windows by the time the monitor came alive.


----------



## ZealotKi11er

Minotaurtoo said:


> Anyone have an issue with a 4k tv/monitor where the pc boots too fast and you end up getting a no signal issue? Mine, if I turn on the pc and let it wake the monitor, it triggers the monitor when the splash screen would be showing, but then will boot into windows before the monitor is ready and then I get a no signal message... if I turn on the monitor myself and let it wake fully, then turn on the pc this doesn't happen... so far that is the only solution to the issue that I've found... nothing major just annoying to add 30s to my startup time just because the monitor is too slow.... on my old fury x card the pc would be fully booted into windows by the time the monitor came alive.


Have this problem all the time. I actually lose signal right after Windows splash screen. I just replug HDMI.


----------



## Kyozon

dagget3450 said:


> Are you still sporting Vega FE? or did you pickup a vega64?


Hello dagget, I still have them, but ended up picking the 64 for the fun of overclocking.


----------



## Minotaurtoo

ZealotKi11er said:


> Have this problem all the time. I actually lose signal right after Windows splash screen. I just replug HDMI.


don't know why I did it, but I just shut down and restarted to try that and it worked... still it's an aggravation just to get the thing to work proper... don't get me wrong, it's a very minor one... just wish there was a permanent solution where I could just hit the power button like I used to... one of the things I liked about this hybrid tv/monitor was it had the wake on command function so many tv's don't have... now I have to turn it on anyway lol... oh well... at least the card does better than expected... all those nvidia threads made vega look bad, but so far with undervolting and a mild OC this thing seems to perform somewhere just above Rtx 2070 range at a much better price.


----------



## cplifj

Have you tried reseting your screen/monitor to factory settings ? This may help with the issue in some cases.


----------



## ZealotKi11er

Minotaurtoo said:


> don't know why I did it, but I just shut down and restarted to try that and it worked... still it's an aggravation just to get the thing to work proper... don't get me wrong, it's a very minor one... just wish there was a permanent solution where I could just hit the power button like I used to... one of the things I liked about this hybrid tv/monitor was it had the wake on command function so many tv's don't have... now I have to turn it on anyway lol... oh well... at least the card does better than expected... all those nvidia threads made vega look bad, but so far with undervolting and a mild OC this thing seems to perform somewhere just above Rtx 2070 range at a much better price.


The funny thing is I have a similar problem with my 1080 Ti. Vega 64 is a fantastic card but it was just never made for gaming: HBM2 and compute performance, which gaming GPUs do not need. Also the clock speed could have been higher. Imagine Vega 64 at 2GHz+.


----------



## 99belle99

Why can I not control fan speed? Am I just stupid, or is there an option somewhere to turn on fan speed control?


----------



## ZealotKi11er

Speed/Temperature?


----------



## 99belle99

ZealotKi11er said:


> Speed/Temperature?


You are right, I look stupid: I wanted the fan speed slower and it will not go any lower. I pumped it up to 100% and the sound was there.

So I can control the speed, it's just not allowed to go really low. I can hear the fans while browsing and was thinking of setting them real low while browsing so as not to annoy me.


----------



## Minotaurtoo

99belle99 said:


> You are right I look stupid as I wanted the fan speed slower and it will not go any lower. I pumped it up to 100% and the sound was there.
> 
> So I can control the speed just not allowed go really low speed as I can hear the fans while browsing and was thinking set it real low while browsing so as not to annoy me.


my fans stop when temps are below 45C.... what card do you have?


----------



## 99belle99

Minotaurtoo said:


> 99belle99 said:
> 
> 
> 
> You are right I look stupid as I wanted the fan speed slower and it will not go any lower. I pumped it up to 100% and the sound was there.
> 
> So I can control the speed just not allowed go really low speed as I can hear the fans while browsing and was thinking set it real low while browsing so as not to annoy me.
> 
> 
> 
> my fans stop when temps are below 45C.... what card do you have?

Powercolor Red Devil 56. It is supposed to have a zero fan speed mode on the silent bios, which I set it to, but it does not shut off. I did purchase it off a miner, so maybe he messed around with the bios'. Who knows.


----------



## Minotaurtoo

99belle99 said:


> Powercolor red devil 56. It is suppose to have a zero fan speed on the silent bios which I set it to but it does not shut off. I did purchase off a miner so maybe he messed around with bios'. Who knows.


no telling... I understand your desire to have the quiet though... that's one thing about the fury x I didn't like, the constant pump noise... wish I could help you... but short of flashing a stock bios on it downloaded from powercolor, I don't really know what to do, and from what I understand flashing bios's with vega is iffy at best.


----------



## Shadowarez

Anyone able to source the Morpheus II cooler? Everywhere I've looked is out of stock. That or I can go water eventually. Vega FE card.


----------



## Naeem

I am looking for a Vega 64 LC teardown video or pictures of the cooler from all sides. Do any of you guys have one?


----------



## bloot

99belle99 said:


> Why can I not control fan speed or am I just stupid or is there a option some where to turn on fan speed control?


Fan speed control is quite buggy on latest drivers, roll back to 18.12.1.1 or older.


----------



## milkbreak

What do people recommend for voltage curves for the Vega 56/64 now that the driver supports tuning every state?


----------



## nolive721

seeking advice here

running triple 1080p screens with an OC'd 1080 Ti, but the center one is FreeSync. I have always been considering going back to Vega after a bad experience with a Nitro 64, which was a poor OCer and ran really hot despite undervolting and all that.

looking at the new Adrenalin drivers and maybe some further development on voltage management, I could grab a 64 LC edition for not-silly money and make my rig full AMD.


I know it's an AMD thread so I might be getting biased answers lol, but has anybody made the move from a 1080 Ti to a 64 LC with limited to no regret TODAY, to comfort my reasoning?

thanks so much


----------



## sinnedone

If your frames are high enough for you then I'd say stay with the 1080Ti.

The only game where an LC Vega 64 is equal to a 1080 Ti is Forza Horizon 4. Other than that, gaming performance falls anywhere from 1070 to just above 1080 performance.

Unless your frames are on the lower end of the monitor's FreeSync range it wouldn't help perceived smoothness.


----------



## Ne01 OnnA

Here is my P-State UV
Test & Compare, adjust for Your GPU.

Also i will add Voltage Table (We need correct values) 

===


----------



## Naeem

nolive721 said:


> seeking advice here
> 
> running triple 1080p screens with an OC 1080Ti but the center one is Freesync.I have always been considering going back to VEGA after bad experience with a Nitro 64 which was poor OCer and running really hot despite undervolting and all that
> 
> looking at the new adrenalin drivers and maybe some further development on voltage management, I could grab a 64LC edition for not silly money and make my rig Full AMD.
> 
> 
> I know its an AMD thread so might be getting biased answers lol) but anybody that would have done the move from 1080Ti to a 64LC with limited to no regret TODAY in order to comfort my reasoning?
> 
> thanks so much




Grab a water cooled Vega 64, they are selling for $499. You can easily push it to 1100MHz HBM2 and a stable 1700+MHz clock with just a +50% power target, and it's close to a 1080 Ti in games like BFV. I am running a 1440p 144Hz FreeSync screen with it and I won't go to a 1080 Ti even if someone paid me to.


----------



## nolive721

wow 499USD? where?


----------



## AmcieK

Hi guys. Does anyone know if this block will be OK for a Vega 56 Nitro+? I think the PCB is the same?

https://pl.aliexpress.com/item/Byks...n-RX-Vega-64-8-gb-HBM2-11275/32868393119.html


----------



## bloot

sinnedone said:


> If your frames are high enough for you then I'd say stay with the 1080Ti.
> 
> The only game the a LC Vega 64 gets equal to a 1080TI is Forza Horizon 4. Other than that gaming performance falls anywhere from 1070 to just above 1080 performance.
> 
> Unless your frames are on the lower end of the monitors freesync range it wouldn't help perceived smoothness.


There are some others


----------



## Naeem

nolive721 said:


> wow 499USD? where?




newegg and comes with 3 free games 

https://www.newegg.com/Product/Prod..._Vega_64_Liquid_Cooled-_-14-131-726-_-Product


----------



## Semel

How does Vega 56 (undervolted & OCed that is) fare against an OCed RTX 2070 in general (not in a few cherry-picked games)?

I'd prefer to get an AMD GPU, but Vegas are more expensive than Nvidia's GPUs in my region, and right now I can get a very good 2070 card (Palit 2070 JetStream) for less (515+ EUR) than a Red Devil/Nitro Vega 56 (540+ EUR).

As for Vega 64... that's a whopping 600 EUR here, so it's out of the question.


----------



## 113802

bloot said:


> sinnedone said:
> 
> 
> 
> If your frames are high enough for you then I'd say stay with the 1080Ti.
> 
> The only game the a LC Vega 64 gets equal to a 1080TI is Forza Horizon 4. Other than that gaming performance falls anywhere from 1070 to just above 1080 performance.
> 
> Unless your frames are on the lower end of the monitors freesync range it wouldn't help perceived smoothness.
> 
> 
> 
> There are some others
> 
> 
> 

Those were all fixed with driver updates. The only game that comes close is Wolfenstein 2. 

https://m.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

Even Forza Horizon 4 went up by like 20% with 411.77. The 2070 is around the same as a RX Vega 64 in Horizon 4 and as we see with the new drivers the GTX 1080 Ti crushes it. 

https://youtu.be/KIozTQwgmNo


----------



## bloot

WannaBeOCer said:


> Those were all fixed with driver updates. The only game that comes close is Wolfenstein 2.
> 
> https://m.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3
> 
> Even Forza Horizon 4 went up by like 20% with 411.77. The 2070 is around the same as a RX Vega 64 in Horizon 4 and as we see with the new drivers the GTX 1080 Ti crushes it.
> 
> https://youtu.be/KIozTQwgmNo


I usually don't trust HardOCP that much; they had to use 8x MSAA in Dirt 4 to make Vega look poor in that particular game.

Those are real users and recent results with Afterburner OSD running:


----------



## 113802

bloot said:


> I usually don't trust hardocp that much, they had to use MSAA x8 in Dirt 4 to make Vega look poor in that particular game.
> 
> Those are real users and recent results with Afterburner OSD running:
> 
> 


The GTX 1080 Ti is faster in all Forza games. A ton of articles were released before nVidia's game ready drivers were released for Forza 7 and Horizon 4. Both game ready drivers for Forza 7 and Horizon 4 brought performance up to normal. 

Here's a fair comparison with a decent system along with decent GPU overclocks. 

https://youtu.be/H9qrIwALB_s


----------



## nolive721

Naeem said:


> newegg and comes with 3 free games
> 
> https://www.newegg.com/Product/Prod..._Vega_64_Liquid_Cooled-_-14-131-726-_-Product


oh thanks, but it's a pity they don't ship to Japan where I live....


----------



## LazarusIV

Hey all, I've got the PowerColor V64 Red Devil and I've been having an issue where my screen will go black and the graphics card fans will go 100%. Anyone else having this issue? I can't tell if it's the card causing it or another component in my computer. I'm also having issues with my computer just straight up restarting, that's within the last couple of days though. Making me think there's another component failing... maybe PSU or mobo.

Anyone have anything like this happen with their PowerColor Red Devil V64?

Thanks!


----------



## 99belle99

I've the red devil 56 and no such issues.


----------



## TrixX

Would love to see a Vega64/LC on the same system as those. Currently the 10xx/20xx data from that vid is useless as a comparison to Vega without them in the same rig.


----------



## 113802

TrixX said:


> Would love to see a Vega64/LC on the same system as those. Currently the 10xx/20xx data from that vid is useless as a comparison to Vega without them in the same rig.


Odd it's showing the wrong video. 

https://youtu.be/H9qrIwALB_s


----------



## Semel

LazarusIV said:


> my screen will go black and the graphics card fans will go 100%. Anyone else having this issue?
> Thanks!


I had the same with Radeon 290. Nothing I tried helped. I RMAed it and got a new card


----------



## octiny

Count me in the club. Finished off my 2950X/Vega 64 CrossFire micro-ATX build in the Meshify C Mini. Temps are great so far considering the small size: the CPU tops out @ 55C w/ PBO enabled under 100% load, while the Vega 64s max out @ 51C/47C @ 99% load in ROTTR DX12 @ 4K, undervolted to 1.0V w/ a +20% power target, after 1hr (hovers close to 1600MHz). These TX240s work damn good for their size.

This was my first custom loop ever, so I'm pretty happy with the results thus far! Some quick pics, forgot to take the little plastic peels off on the radeon logos lol.

Edit: Not sure why the pics went sideways


----------



## colorfuel

@octiny:

That is one nice and tidy build! Congrats!



I have a question concerning OCing and bioses.

I tried the same settings on both the standard bios of my reference Yeston Vega 64 and the Sapphire LC Bios.

On the LC bios I have no issues running Timespy/FS-Ultra/Superposition. The same settings on the standard bios and Timespy crashes because the core overshoots to 1757 (reported on AMD Link just before the crash). 

Here are the settings (standard bios):

https://imgur.com/a/YiaLgsC


On the LC BIOS, I set PT -16%; on standard I set PT 0%. The idea is to have a power target of 220W and to get the most out of the card at the same power consumption as stock. With these settings, I get about 10% better scores in synthetic and in-game benchmarks.

Interestingly, even with the LC Bios, in Subnautica the core overshoots and the game crashes. Which is why I used FRTC in that game.
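For what it's worth, the arithmetic behind that -16% offset works out if you assume the commonly cited default GPU power limits (264W for the LC BIOS, 220W for the air reference BIOS; treat both numbers as assumptions, not spec sheet values):

```python
lc_default_w = 264.0    # assumed LC-BIOS default GPU power limit (W)
air_default_w = 220.0   # assumed air-reference default GPU power limit (W)
pt_offset_pct = -16     # power-target offset set on the LC BIOS

# effective power target after applying the percentage offset
effective_w = lc_default_w * (1 + pt_offset_pct / 100)
print(round(effective_w, 2))  # 221.76 -> within ~1% of the 220 W air-BIOS target
```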


----------



## Offler

octiny said:


> Count me in the club. Finished off my 2950X/Vega 64 Crossfire micro atx build in the Meshify C Mini. Temps are great so far considering the small nature, tops out @ 55c w/ PBO enabled under 100% load while the Vega 64's max out @ 51c/47c @ 99% load in ROTTR DX12 @ 4K undervolted to 1.0v w/ +20% power target after 1hr (hovers close to 1600mhz). These TX240's work damn good for their size.
> 
> This was my first custom loop ever, so I'm pretty happy with the results thus far! Some quick pics, forgot to take the little plastic peels off on the radeon logos lol.
> 
> Edit: Not sure why the pics went sideways


It's worth mentioning that the board is an ASRock X399M: small form factor without compromising performance. How are the VRM temps under extended load?


----------



## ZealotKi11er

colorfuel said:


> @octiny:
> 
> That is on nice and tidy build! Congrats!
> 
> 
> 
> I have a question concerning OCing and bioses.
> 
> I tried the same settings on both the standard bios of my reference Yeston Vega 64 and the Sapphire LC Bios.
> 
> On the LC bios I have no issues running Timespy/FS-Ultra/Superposition. The same settings on the standard bios and Timespy crashes because the core overshoots to 1757 (reported on AMD Link just before the crash).
> 
> Here are the settings (standard bios):
> 
> https://imgur.com/a/YiaLgsC
> 
> 
> On the LC Bios, I set PT -16%, on standard I set PT 0%. The idea is to have a powertarget of 220W and to get the most out of the card at the same power consumption as standard. With these settings, I get about 10% better scores in benchmark and ingame benchmarks.
> 
> Interestingly, even with the LC Bios, in Subnautica the core overshoots and the game crashes. Which is why I used FRTC in that game.


By limiting the power and setting a lower voltage you might hit a part of the voltage/frequency curve that the card is not margined for. All this means is that you have to adjust the voltage curve. At 220W I do not think you are getting more than 1V on the board.


----------



## octiny

Offler said:


> Its worth mentioning that board is AsRock X399M. Small form factor without compromising the performance. How are temps of VRM under extended load?


Correct, as it's the only MB that fits in a micro atx case like the Meshify C Mini (smaller sibling of the Meshify C).

The VRM on it under full load after 1hr is 43c, give or take a degree with PBO leveling out @ around 4.15 all-cores @ 1.376-1.392v. It's a beastly design.




colorfuel said:


> @octiny:
> That is on nice and tidy build! Congrats!


Thank you!


----------



## Jackalito

Hi everyone!

I grabbed a Sapphire Pulse Vega 56 during the Black Friday sales and I was lucky enough to get one with Samsung HBM2 memory. I spent some time browsing around the web looking for a good Vega 64 vBIOS candidate, but it was hard, as this model from Sapphire uses the Nano PCB instead of the reference one. Finally though, and thanks to a thread on Reddit, I learned there is indeed an XFX Vega 64 that uses that same Nano PCB. Specifically, this one:
https://www.techpowerup.com/vgabios/199111/199111

I did manage to flash my Sapphire Pulse Vega 56 with Samsung HBM2 successfully. But it was not as straightforward as I thought it would be. The last time I'd flashed a graphics card vBIOS was back when I had a 290X. AtiFlash for DOS is not compatible with Vega, which I didn't know, so in order to flash my new card I had to use the version for Windows. However, it could not be done through the GUI exe, ATIWinFlash. So, here's what I did in order to flash it, in case this may be helpful for someone else.



Download & extract atiflash v2.84
Place the vBIOS rom file from XFX in the same directory for convenience
Open Command Prompt with admin privileges and go to the directory where atiflash has been extracted
Run atiflash -i (and here make sure your adapter is 0, it should be if there's only one card installed on your system, and totally ignore the dID field - trust me on this one; this was the reason it took me so long to flash it)
Run atiflash -f -p 0 xxx.rom (where xxx is obviously the name of the ROM file you're trying to flash) (And yes you must use zero instead of the dID of your graphics card)
Wait until done and restart your system
That's it. Now I've got the XFX Vega 64 vBIOS flashed onto my Pulse from Sapphire and I can push the HBM frequencies beyond 950MHz 
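For reference, here's the whole session condensed into commands (a sketch only: the folder and the vega64_xfx.rom filename are placeholders for your own setup, and the adapter index must match what `atiflash -i` reports):

```shell
:: Run from an elevated Command Prompt in the folder where atiflash 2.84 was extracted
atiflash -i                        :: list adapters; confirm the card you want is adapter 0
atiflash -f -p 0 vega64_xfx.rom    :: force-flash adapter 0 with the ROM file (name is hypothetical)
shutdown /r /t 0                   :: reboot so the new vBIOS takes effect
```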










I did run into a minor issue after flashing, though. Apparently that XFX Vega vBIOS lacks the Zero RPM functionality that my Sapphire one came with. When I went looking for it in WattMan (18.12.2 drivers), it was nowhere to be found. Reinstalling the driver didn't help, so the workaround I've been using is to ignore WattMan completely and rely on OverdriveNTool to manage the card's fan setup.











Still in the testing/fine-tuning process, as Vega is a whole new beast to me (my previous card was an RX 580).
Feel free to ask me anything if you have any questions 


And Happy New Year, everyone


----------



## 99belle99

Damn I wish my 56 had Samsung.

Regarding ATIWinFlash, you had to use that version with the Fury X as well, so it's been around quite some time. Also, AFAIK you could have pushed the HBM up to 1000MHz without flashing a 64 BIOS.


----------



## BradleyW

Hey.

I like to use FPS limiters to smooth out my games, but I can't with my Vega 64. When using a limiter, I notice the memory clock keeps dropping and rising at random, and it might be causing a stutter each time in FC5. How can I force my memory clock?

Here are my current settings, thank you:

Edit: I used OverdriveNTool, created a .ini file and put ;0 at the end of each memory power state except the highest one. It did the trick.
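For anyone wanting to replicate this, a hypothetical OverdriveNTool profile fragment might look like the following (the clock/voltage numbers are illustrative, not recommendations; the trailing ;0 is what marks a memory state as disabled, leaving only the top state active so the memory clock stays pinned):

```ini
[Profile_0]
Name=Vega64_locked_mem
; lower memory P-states disabled with a trailing ;0 (format: clock;voltage;0)
Mem_P0=167;800;0
Mem_P1=500;800;0
Mem_P2=800;950;0
; highest state left enabled, so the memory clock stays here under load and at idle
Mem_P3=945;1000
```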


----------



## Ne01 OnnA

In OverdriveNTool go into PP-States (right-click on the top window bar) and set P3 to 950.
I was surprised, because this tweak gives a nice boost overall; when the driver drops to P3 it now runs at 950MHz instead of 800MHz.
I measured power draw and it is almost the same (0.7W max difference), so no biggie.



----------



## 99belle99

My PowerColor Red Devil 56 crashes when in mute fan mode.

One day I auto undervolted my card in Wattman and was watching a video and the screen stalled/crashed and I had to hold down the power button to restart the computer. I figured it was from undervolting.

The BIOS switch was in silent mode; I turn off the computer and flick it to performance if I want to game. I do not game too often.

Then yesterday I was watching a Netflix video (I rarely watch Netflix either) in the silent BIOS with mute fan enabled, so the system was quiet, and the same crash happened. So it was not from undervolting, as the card was in stock form. All I can think is that the card heated up and the fan never turned on to cool it down. Did this happen to anybody else?


----------



## Minotaurtoo

I had crashes from memory downclocking at times... I went into WattMan and set the memory to 800MHz as the min state. It's pretty easy to do if you haven't done it before: just right-click the next-to-highest mem speed and select "set as min state"... My fans never run playing Netflix, YouTube or pretty much anything but gaming or compute work, so I'm pretty sure it didn't overheat, and I have a very aggressive fan profile that kicks the fans on around 50C.


----------



## 99belle99

Minotaurtoo said:


> I had crashes from memory down clocking at times.... I went into wattman and set the memory to 800mhz as the min state. it's pretty easy to do if you haven't done it before, just right click the next to highest mem speed and select set as min state.... my fans never run playing netflix, youtube or pretty much anything but gaming or compute work so I'm pretty sure it didn't over heat and I have a very aggressive fan profile that kicks the fans on around 50C


I tried there but the fans will not mute when I set the memory to 800MHz as min even though temps are only 20 degrees. I do not know why. I reset wattman settings and fan turns off.


----------



## Minotaurtoo

99belle99 said:


> I tried there but the fans will not mute when I set the memory to 800MHz as min even though temps are only 20 degrees. I do not know why. I reset wattman settings and fan turns off.



well... that leaves me at a loss... good luck... maybe someone else here will know how to help.


----------



## jearly410

Sometimes after setting values in overdriventool, the memory clocks will not adjust when applying an overclock first. After raising and lowering a voltage value several times, the card then accepts the overclock correctly. Anyone else have this problem and/or a solution?


----------



## Minotaurtoo

jearly410 said:


> Sometimes after setting values in overdriventool, the memory clocks will not adjust when applying an overclock first. After raising and lowering a voltage value several times, the card then accepts the overclock correctly. Anyone else have this problem and/or a solution?


when using wattman... sometimes I have to restart for the clocks to properly set.... especially when changing profiles back to standard settings.


----------



## Xinoxide

Good Evening Gents. 

I am a little late to the game here. I am looking to just lower the fan speeds of the reference cooler for a little while until Vega2 / Ryzen 3000 series.

I would like to inquire just on the ideal thickness of thermal pads for the reference cooler as a quick search hasn't yielded any useful info.

I plan to apply some liquid metal paste and some better thermal pads to the VRM until an upgrade/refresh, when I would finally like to get my water loop into this decently small case.


----------



## jearly410

Xinoxide said:


> Good Evening Gents.
> 
> I am a little late to the game here. I am looking to just lower the fan speeds of the reference cooler for a little while until Vega2 / Ryzen 3000 series.
> 
> I would like to inquire just on the ideal thickness of thermal pads for the reference cooler as a quick search hasn't yielded any useful info.
> 
> I plan to apply some liquid metal paste and some better thermal pads to the VRM until an upgrade/refresh where I would like to finally get my water loop in this decently little case.


Look up the installation manuals for waterblocks (EKWB, Alphacool etc.) and that should give you a good starting point for correct thicknesses.


----------



## Xinoxide

I did but those are the thicknesses for the blocks. I'm going to keep the reference cooler on this card until I replace it with Vega2.


----------



## dedt

I sold a second-hand Sapphire Vega 64 reference card a few days ago. The new owner says the card is failing the 3DMark stress test, where a 3DMark benchmark is run 20 times in a row and you fail if your FPS degrades by more than 3% from the first run. He would thus like a warranty repair or a return. He says it is failing with a score of 90%, and he hasn't undervolted the card at all.

I had my friend run the Time Spy stress test on his PowerColor Red Devil Vega 64 with the fans set to maximum. Stock, the card failed with 95%. Undervolted, it passed with 99.7%.

My thought is that I would expect a stock reference blower Vega 64 to fail out of the box without any undervolt (maybe even with one?) due to thermal throttling. Or is there a legitimate issue with the card (and also my friend's card, which fails at 95%)?
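The pass criterion described above boils down to comparing the worst loop against the best one. A rough sketch of the scoring (the exact 3DMark formula may differ; the FPS numbers are made up):

```python
def stability_score(loop_fps):
    """Worst loop's average FPS as a percentage of the best loop's.
    A score below 97% fails the stress test."""
    return min(loop_fps) / max(loop_fps) * 100.0

# A card that slowly throttles from 60 fps to 54 fps over 20 loops:
throttling = [60.0] * 5 + [57.0] * 5 + [54.0] * 10
print(round(stability_score(throttling), 1))   # 90.0 -> fails, like the card described above
print(round(stability_score([60.0] * 20), 1))  # 100.0 -> passes
```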


----------



## Minotaurtoo

I don't know, but I have a gigabyte OC I could try it out on here... but then it's not reference, just would be another vega card to compare it to... I do know that with a undervolt and power slider maxed out it passes with flying colors.


----------



## dedt

Thanks! It would still be interesting to see if your card running at stock voltage gets a failing score of around 95%. If the non-reference coolers like my buddy's Red Devil are failing at ~95%, then the reference coolers failing at ~90% seems reasonable to me


----------



## brettjv

*Nevermind, made a fresh thread ...*

Nevermind, made a fresh thread ...


----------



## Minotaurtoo

dedt said:


> Thanks! It would still be interesting to see if your card running at stock voltage gets a failing score of around 95%. If the non-reference coolers like my buddy's Red Devil are failing at ~95%, then the reference coolers failing at ~90% seems reasonable to me


not as good as my undervolt overclock, but it passed.


----------



## ZealotKi11er

I am trying to get the highest 3DMark score for a Vega 64 + 3770K. I am having problems with the combined score, which seems to change a lot from test to test.


----------



## dedt

Minotaurtoo said:


> not as good as my undervolt overclock, but it passed.


Thanks. With a score of 97% I guess some very mild thermal throttling is occurring without any undervolting. In your opinion, do you think a score of 90% for a stock blower is reasonable, or do you think that card is overheating enough to be considered defective?


----------



## Minotaurtoo

dedt said:


> Thanks. With a score of 97 I guess some very mild thermal throttling is occurring without any undervolting. In your opinion do you think a score of 90% for a stock blower is reasonable, or do you think that card is overheating enough to be considered defective?


Well, I can't really offer a valid opinion on that, but I will say that given the way that stability test works, it could be an OK score... you really need someone with the same cooler to test it.


----------



## 113802

ZealotKi11er said:


> I am trying to get the highest score for 3DMark for Vega 64 + 3770K. I am having problems with combined score. That seems to change a lot from test to test.


First, I suggest disabling the Spectre/Meltdown Windows patches if you haven't. It might be an extra 200-500 points for your CPU score; it was a 700-point increase for my 6700K.

https://www.grc.com/inspectre.htm


----------



## sinnedone

ZealotKi11er said:


> I am trying to get the highest score for 3DMark for Vega 64 + 3770K. I am having problems with combined score. That seems to change a lot from test to test.



Oh really... Interesting. 😄

Please post what you get, I'd like to see where I stand
😉


----------



## jimpsar

Hello 
Anyone tried flashing Sapphire LC bios to Sapphire Nitro + 64 ??


----------



## brettjv

Jackalito said:


> Hi everyone!
> 
> I grabbed a Sapphire Pulse Vega 56 during the Black Friday sales and I was lucky enough to get one with Samsung HBM2 memory. I spent some time browsing around the web looking for a good Vega 64 vBIOS candidate, but it was hard as this model from Sapphire uses the nano PCB instead of the reference one. Finally though, and thanks to a thread on Reddit, I learned there is indeed an XFX Vega 64 that uses that same nano PCB. Especifically, this one:
> https://www.techpowerup.com/vgabios/199111/199111
> 
> I did manage to flash my Sapphire Pulse Vega 56 with Samsung HBM2 successfully. But, it was not as straight forward as I thought it would be. Last time I'd flashed a graphics card vBIOS was back when I had a 290x. So, AtiFlash for DOS is not compatible with Vega, which I didn't know. And in order to flash my new card I had to use the version for Windows. However, it could not be done through the GUI exe, ATIWinFlash. So, here's what I did in order to flash it, in case this may be helpful for someone else.
> 
> 
> 
> Download & extract atiflash v2.84
> Place the vBIOS rom file from XFX in the same directory for convenience
> Open Command Prompt with admin privileges and go to the directory where atiflash has been extracted
> Run atiflash -i (and here make sure your adapter is 0, it should be if there's only one card installed on your system, and totally ignore the dID field - trust me on this one; this was the reason it took me so long to flash it)
> Run atiflash -f -p 0 xxx.rom (where xxx is obviously the name of the ROM file you're trying to flash) (And yes you must use zero instead of the dID of your graphics card)
> Wait until done and restart your system
> That's it. Now, I've got the XFX Vega 64 vBIOS flashed into my Pulse from Saphire and I can push the HBM frequencies beyond 950MHz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did run into a minor issue after flashing, though. Apparenty that XFX Vega card lack the Zero RPM functionality that my Sapphire one came with. When I went looking for it on WattMan (18.12.2 drivers), it was nowhere to be found. Reinstalling the driver didn't help, so the walkaround method I've been using is ignore WattMan completely and rely on OverdriveNTool to manage the fan setup of the card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Still in the testing/finetuning process, as Vega is a whole new beast to me (my previous card was an RX 580).
> Feel free to ask me anything if you have any questions
> 
> 
> And Happy New Year, everyone


Getting my Red Dragon on Monday, this sounds very helpful... it's also a Nano-based card, so... are you saying you only use OverdriveNTool for all controls on the card, and that it makes the Zero RPM feature work after the flash to the XFX 64 BIOS? How high have you been able to push the HBM2, if I may ask?

Thanks for the post!


----------



## ZealotKi11er

sinnedone said:


> Oh really... Interesting. 😄
> 
> Please post what you get, I'd like to see where I stand
> 😉


Here you go: https://www.3dmark.com/fs/17867948


----------



## Xinoxide

Anyone want to know what kind of temp drop I'll get on a reference blower with conductonaut?

Trying to get those clocks up before I get a Radeon VII.

What should I test for a before and after?

I'm already going to do a pre and post of FurMark with the 200% power limit. 8]]


----------



## ZealotKi11er

Xinoxide said:


> Anyone want to know what kind of temp drop I'll get on a reference blower with conductonaut?
> 
> Trying to get those clocks up before I get a Radeon VII.
> 
> What should I test for a before and after?
> 
> I'm already going to do a pre and post of FurMark with the 200% power limit. 8]]


0 difference.


----------



## Minotaurtoo

ZealotKi11er said:


> Here you go: https://www.3dmark.com/fs/17867948


I'm curious how you are getting such high clocks... mine craps out right before 1700MHz no matter the voltage... I can get into the 1650 range with 1.1V, but even with 1.2V mine will crap out around 1687 (or at least that's the last recorded value). Temps will be in the low-to-mid 60s on the core and upper 60s to 70 on the HBM... The hotspot will hit 90 from time to time, but I don't really see that as what's holding me back, since under stress tests it gets that warm and holds it just fine as long as the clocks don't pass 1650 by much.


----------



## Xinoxide

ZealotKi11er said:


> 0 difference.


Halfway expecting exactly that.


----------



## TrixX

Xinoxide said:


> Half way expecting exactly that.


Kinda. It speeds up the transfer of heat away from the die until the cooler is heat-saturated. It doesn't reduce the overall temps, it just helps with the speed of the transfer.

Water block it and make it cool properly.


----------



## sinnedone

ZealotKi11er said:


> Here you go: https://www.3dmark.com/fs/17867948


I have something to work up to. 

For me, in the Firestrike (normal) combined test, the GPU clocks always drop from around 1700 to 1200. I've tried the powerplay edit to allow a "100%" power limit, but that doesn't seem to do anything for me (probably doing it wrong, but I don't know). This is a Vega 64 flashed to LC. The highest clocks I can achieve are 1760-65 at 1.25V.



I need to get my 3770k to your level though... 5.2!


----------



## ZealotKi11er

sinnedone said:


> I have something to work up to.
> 
> For me on the combined Firestrike (normal) GPU test always drop clocks from like 1700 to 1200. I've tried the powerplay edit to allow for "100%" power limit, but that doesn't seem to do anything for me.(probably doing it wrong but dont know) This is Vega 64 flashed to LC. Highest clocks I can achieve is 1760-5 at 1.25v
> 
> 
> 
> I need to get my 3770k to your level though... 5.2!


The drop there is normal. I am doing 1780-1800 during GT1 and GT2. Yeah, I got this 3770K because I saw it had low idle voltage. It does 5.2GHz more stably than the one I had for 5 years, which only did 4.8GHz. I am going to try to get a faster CPU to get the highest Vega 64 score.


----------



## sinnedone

Any power play table mods?

Sounds like my old 3770k. It did 4.6 at 1.18v. Then my board took a dump and killed itself and the CPU. 😭


----------



## BradleyW

Has anyone tried messing around the HBM memory timing option? What's the difference between level 1 and 2?


----------



## ZealotKi11er

BradleyW said:


> Has anyone tried messing around the HBM memory timing option? What's the difference between level 1 and 2?


I tried but no difference in 3DMark.


----------



## BradleyW

ZealotKi11er said:


> I tried but no difference in 3DMark.


I wondered if level 2 is looser, allowing for higher clocks, or vice versa.

Highest my memory can do is 1045MHz with timings set to Auto.


----------



## ZealotKi11er

BradleyW said:


> I wondered if level 2 is looser, allowing for higher clocks, or vice versa.
> 
> Highest my memory can do is 1045MHz with timings set to Auto.


I have not tried it with the new driver, but I did try to see if I could OC past my 1180 limit, and it would still hang with both options.


----------



## Spacebug

BradleyW said:


> I wondered if level 2 is looser, allowing for higher clocks, or vice versa.
> 
> Highest my memory can do is 1045MHz with timings set to Auto.


Level 2 is looser, as in slightly but consistently worse performance than level 1.

Whether it helps you clock higher depends, I guess, on what you are limited by now.
Max clock on my hard-modded V64 doesn't change: 1225MHz is rock stable, and 1230MHz artifacts and locks up benchmarks within a second or two on either timing setting...


----------



## Worldwin

https://www.amd.com/en/support/kb/faq/dh-020

"Memory timing reduces memory latency based on the selected level. A lower memory latency level may improve performance in some applications."

Guess we go for level 1 for tighter timings.


----------



## Deadboy90

Hey guys, I just picked up a Vega 56 off eBay. It's coming next week and it's probably a retired miner. Being a reference card, my plan was to use my Kraken G12 hooked to the Corsair H100i that's on my current RX 480. My concern is that I won't be able to cool the VRMs, since the fan that is supposed to do that looks like it would be blowing on the bare section of PCB near the power connectors. Has anyone used the Kraken on this card with any success?


----------



## BradleyW

I'm getting blue screens, lock ups, CTD's and random issues in Windows on the latest drivers. Tested with fresh install of OS with 19.1. Back to 18.12 now without a single issue.


----------



## sinnedone

Multiple monitors?


----------



## BradleyW

Single, 2560x1080 144Hz DP1.2


----------



## sinnedone

I have multiple monitors and also had strange issues and had to roll back to 18.12. Figured the multiple monitors might have been the problem for me and possibly you. Resolution maybe?


----------



## ZealotKi11er

BradleyW said:


> I'm getting blue screens, lock ups, CTD's and random issues in Windows on the latest drivers. Tested with fresh install of OS with 19.1. Back to 18.12 now without a single issue.


I have been having strange issues too. I was blaming my CPU OC but was not really getting known CPU OC fail issues.


----------



## diggiddi

Multiple monitors here fury Nitro was having problems with 19.1 reverted to 18.12


----------



## Minotaurtoo

I had problems with the first 19.1, but when they issued a second 19.1.1 it seems to be fixed... I did have to DDU though.


----------



## BradleyW

ZealotKi11er said:


> I have been having strange issues too. I was blaming my CPU OC but was not really getting known CPU OC fail issues.


Same here. I've been in and out of the BIOS several times checking and adjusting.


----------



## ZealotKi11er

Guys I am having issue installing driver for Vega FE. They install fine but after restart the install asks me install again.


----------



## miklkit

I've been having performance issues with this Sapphire Vega 64 in 2 games and might have stumbled across the answer. 



After getting this Vega 64 I quickly OCed it to 1680/1020 @ 1100mV and a 50% power limit in Trixx. Temps were fine, staying in the 50-60C range. The hotspot has hit 79C but is usually in the 72-75C range. This works well in most games, but in 2 games there are issues.



In Strange Brigade, which was bundled with the vega64, certain textures kept dropping from dx12 to dx7 or worse. In Subnautica, Unity engine, frame rates would plummet into the 15-25fps range and when it did the system was idling down and not drawing much power. 



Three days ago I read that some users got their best performance by leaving the clocks stock and only undervolting. So I tried it and it WORKS! No more dx7 textures in SB, and fps and system loads are up in SN! Temps are now in the 45-55C range, and if the fans started up sooner they would be lower yet.


Sometimes less is more.


----------



## furkandeger

Hi guys, just got myself a Vega 56 (ASUS ROG Strix), however I'm having terrible choppiness, mouse lag, and sound crackling when the card is idle... In games it works perfectly.

Checking GPU-Z, I discovered that this happens whenever the memory clock goes lower than 700MHz (with terrible DPC latency). So I configured global WattMan to keep memory at 700MHz minimum (state 2, I guess) and it works fine now, but it still makes me wonder if it's a hardware defect or not...

I'm on Windows 10 Pro x64 (1809), yes I tried DDU and a previous driver. I have two monitors (60 hz + 144 hz) but I also tried single monitor and it was the same... Any ideas?


----------



## Maracus

furkandeger said:


> Hi guys, just got myself a Vega 56 (ASUS ROG Strix) however I'm having terrible choppines, mouse lag, sound crackling when the card is idle... In games it's perfectly working.
> 
> Checking the GPU-Z, I discovered that whenever the memory clock goes lower than 700 Mhz, this happens (with terrible DPC latency). So I configured the global wattman to keep memory at 700 mhz minimum (state 2, I guess) and it works fine now but it still makes me wonder if it's a hardware defect or not...
> 
> I'm on Windows 10 Pro x64 (1809), yes I tried DDU and a previous driver. I have two monitors (60 hz + 144 hz) but I also tried single monitor and it was the same... Any ideas?


Whats the rest of your system setup? Also what cables are you using? I have the same card and a 60hz+144hz monitor setup also HDMI/DP cable with no issues.


----------



## Mikkinen

Hi guys, I'm about to get a Gigabyte RX Vega 64 and I would like to use the Kraken G12 with the Corsair H75 (Noctua fan) that I used on an RX 480. I have a compact case and the airflow may not be enough even if I undervolt.
I would like to apply heatsinks to the VRM and MOSFETs; what size are they?
Should I just apply thermal paste (Arctic MX-4) and better pads (which ones)?


----------



## furkandeger

Maracus said:


> Whats the rest of your system setup? Also what cables are you using? I have the same card and a 60hz+144hz monitor setup also HDMI/DP cable with no issues.


Hi there, sorry for not including them earlier. I got them running via HDMI + DP (respectively 60 hz + 144 Freesync). System specs are:

ASUS Crosshair VI Hero
Ryzen 7 1700
Corsair Vengeance LPX DDR4 16 GB @ 3000
NZXT Kraken X62
ASUS ROG Strix RX Vega 56
Corsair HX850i
Two ssds, two hdds.
Phanteks Enthoo Evolv ATX TG

I guess that's it.

____________________________________

Adding the original post for visibility:

Hi guys, just got myself a Vega 56 (ASUS ROG Strix), however I'm having terrible choppiness, mouse lag, and sound crackling when the card is idle... In games it works perfectly.

Checking GPU-Z, I discovered that this happens whenever the memory clock goes lower than 700MHz (with terrible DPC latency). So I configured global WattMan to keep memory at 700MHz minimum (state 2, I guess) and it works fine now, but it still makes me wonder if it's a hardware defect or not...

I'm on Windows 10 Pro x64 (1809), yes I tried DDU and a previous driver. I have two monitors (60 hz + 144 hz) but I also tried single monitor and it was the same... Any ideas?


----------



## Maracus

furkandeger said:


> Hi there, sorry for not including them earlier. I got them running via HDMI + DP (respectively 60 hz + 144 Freesync). System specs are:
> 
> ASUS Crosshair VI Hero
> Ryzen 7 1700
> Corsair Vengeance LPX DDR4 16 GB @ 3000
> NZXT Kraken X62
> ASUS ROG Strix RX Vega 56
> Corsair HX850i
> Two ssds, two hdds.
> Phanteks Enthoo Evolv ATX TG
> 
> I guess that's it.
> 
> ____________________________________
> 
> Adding the original post for visibility:
> 
> Hi guys, just got myself a Vega 56 (ASUS ROG Strix) however I'm having terrible choppines, mouse lag, sound crackling when the card is idle... In games it's perfectly working.
> 
> Checking the GPU-Z, I discovered that whenever the memory clock goes lower than 700 Mhz, this happens (with terrible DPC latency). So I configured the global wattman to keep memory at 700 mhz minimum (state 2, I guess) and it works fine now but it still makes me wonder if it's a hardware defect or not...
> 
> I'm on Windows 10 Pro x64 (1809), yes I tried DDU and a previous driver. I have two monitors (60 hz + 144 hz) but I also tried single monitor and it was the same... Any ideas?


What about latest AMD chipset drivers? Using AMD balanced? I did once have a similar issue relating to the PCI Express power management when it was turned on, you could try turning that off to see if it helps under power options.


----------



## furkandeger

Maracus said:


> What about latest AMD chipset drivers? Using AMD balanced? I did once have a similar issue relating to the PCI Express power management when it was turned on, you could try turning that off to see if it helps under power options.


I installed the chipset drivers from AMD and am using high performance with PCI Express power management turned off...


----------



## sinnedone

I experience odd behavior when overclocking the HBM on a Vega 64 flashed to LC, and I want to know if this is simply a power limit thing.

When overclocking the HBM to 1180MHz (usually at 1050MHz), it brings down the GPU clocks by 100-200MHz on average. Temperatures for GPU/HBM/VRM/hotspot range from 45C to 60C. Voltages are 1.22V for the GPU and 1.050V for the HBM.

Could this be a power limit thing (already at +50%), or does it have something to do with the SOC clock?

Will upping the power limit stop this behavior? If so, is there a good guide out there? The steps I've taken show up in the power slider but seem to do nothing, and the card throttles at around 370W like normal.


----------



## miklkit

Are you using Samsung SSDs? It should be common knowledge that AMD drivers are not compatible with Samsung SSDs.


----------



## DiscoSubmarine

furkandeger said:


> Hi there, sorry for not including them earlier. I got them running via HDMI + DP (respectively 60 hz + 144 Freesync). System specs are:
> 
> ASUS Crosshair VI Hero
> Ryzen 7 1700
> Corsair Vengeance LPX DDR4 16 GB @ 3000
> NZXT Kraken X62
> ASUS ROG Strix RX Vega 56
> Corsair HX850i
> Two ssds, two hdds.
> Phanteks Enthoo Evolv ATX TG
> 
> I guess that's it.
> 
> ____________________________________
> 
> Adding the original post for visibility:
> 
> Hi guys, just got myself a Vega 56 (ASUS ROG Strix), however I'm having terrible choppiness, mouse lag, and sound crackling when the card is idle... In games it works perfectly.
> 
> Checking GPU-Z, I discovered that whenever the memory clock drops below 700 MHz, this happens (with terrible DPC latency). So I configured global Wattman to keep memory at a 700 MHz minimum (state 2, I guess) and it works fine now, but it still makes me wonder whether it's a hardware defect...
> 
> I'm on Windows 10 Pro x64 (1809); yes, I tried DDU and a previous driver. I have two monitors (60 Hz + 144 Hz), but I also tried a single monitor and it was the same... Any ideas?


That's really odd. I've got nearly the same setup as you, except my card is a PowerColor Red Devil and I'm running triple monitors (also at 1440p). Memory clock drops to 167 MHz at idle with zero issues on driver 19.1.1.
I do, however, recall (much) older drivers having issues at 1440p 144 Hz where the screen would occasionally flicker at idle. That got fixed with newer drivers, so I wouldn't be surprised if there's some kind of driver issue at play.
I should probably mention I flashed a reference Vega 64 BIOS onto my card (which has Samsung HBM2), so who knows if that makes a difference.


----------



## BradleyW

miklkit said:


> Are you using Samsung SSDs? It should be common knowledge that AMD drivers are not compatible with Samsung SSDs.


Any sources?


----------



## ZealotKi11er

BradleyW said:


> Any sources?


Seems super fishy.


----------



## BradleyW

ZealotKi11er said:


> Seems super fishy.


I've been using AMD GPU's and Samsung SSD's for years. It does seem odd....


----------



## TrixX

BradleyW said:


> I've been using AMD GPU's and Samsung SSD's for years. It does seem odd....


Tell that to my two Samsung EVO 500GBs running with my Threadripper and Vega 64...

If they didn't work with AMD drivers I'd be fooked...


----------



## Spacebug

sinnedone said:


> I experience an odd behavior when overclocking the HBM on a Vega 64 flashed to LC and want to know if this is simply a power limit thing.
> 
> When overcloking HBM to 1180mhz (usually at 1050mhz) it brings down the GPU clocks by 100-200mhz average. Temperatures for GPU/HBM/VRM/Hot spot will range from 45c to 60c. Voltages are 1.22mv for GPU and 1.050mv for HBM
> 
> Could this be a power limit thing? (already at +50 percent) or does it have something to do with the SOC clock?
> 
> Will upping the power limit stop this behavior? If so is there a good guide out there because the steps I've taken while showing up in the power slider seem to do nothing and card throttles at around 370W like normal.


Haven't experienced that, I'm afraid. I don't think it's related to the power limit, though... or maybe...

The SOC clock shouldn't be it; that's unlocked on all recent drivers. It needs to be higher than or equal to the HBM clock, and it automatically jumps up if HBM is set higher...

If you apply 1.22 V to the "HBM", as in equal values for both the core and "memory" voltage, does that help anything?
Otherwise, try locking the core clock to the P7 state as min/max; that should help, I guess.


I just care about the highest P-state, so I've applied the P7 voltage to all P-states as well as "memory", and I lock P7 as well as HBM P3 as the min/max state.
That keeps the card at max clocks and voltages all the time.
Idle power draw and idle temps obviously go up, but a fairly hefty cooling loop takes care of that...

Though it might be something related to power draw. I remember I never got the card to fully behave just by upping the power limit; I tried up to 800 W and 250% in PowerPlay table mods.
I got fed up with it and hard-modded the current-sense circuit instead so the card sees less power draw. That seemed to be the only thing that kept my card from downclocking due to Vega's boost algorithm when pushing beyond 1.25 V vcore.
But I guess not many people want to solder on their cards...


----------



## Minotaurtoo

I'm using samsung SSD.... have been for some time... so far nothing I can pin to that...


----------



## miklkit

To clarify, it's the AMD SATA drivers that have been causing the problem.


https://www.overclock.net/forum/11-amd-motherboards/1713860-storport-sys-dpc-latency.html


----------



## vokalemilch

*stock BIOS*

Hi, 

I am desperately searching for the stock BIOS for a Powercolor Vega 64 (AXRX VEGA 64 8GBHBM2-3DH). Does anyone have it? My backup was corrupted.


----------



## Dhoulmagus

vokalemilch said:


> Hi,
> 
> I am desperately searching for the stock BIOS for a Powercolor Vega 64 (AXRX VEGA 64 8GBHBM2-3DH). Does anyone have it? My backup was corrupted.


I have a reference PowerColor Vega 64 with Samsung memory; not sure of the exact model name offhand. If it's of use, I can try to dump the BIOS when I get home.


----------



## VicsPC

I seem to be having some weird issues with the drivers; custom settings sometimes default to 800 MHz. Easy fix though: reset to factory settings and it works just fine again. I'll keep my eye out to see what happens; W10 seems to be randomly installing AMD PSP bus or HD audio drivers.


----------



## vokalemilch

Hey Serious_Don, that would be amazing. I am fairly new to the whole BIOS-flashing thing, so just to be sure, this is the card I have: https://www.newegg.com/Product/Prod...=vega 64&cm_re=vega_64-_-14-131-728-_-Product
Thank you so much


----------



## furkandeger

Sorry for not replying sooner.

I'm not using any Samsung SSDs, and checking the IDE ATA/ATAPI section in Device Manager, my system is not using the AMD drivers. Still having the issue. Tried different Windows builds, but it didn't help. Tried different, older drivers, but that also didn't help.

Currently, setting the HBM clock to a P2 minimum works around the issue, but I can also work around it by setting the minimum state for the GPU clock to something higher...

Also, sometimes if I have issues (i.e. when the minimum state is P0 for some reason) and I restart the computer, I get debug code 94 with the white LED on (GPU). This doesn't happen if the memory clock minimum is P2 and there are no DPC issues before the restart.



furkandeger said:


> Hi there, sorry for not including them earlier. I got them running via HDMI + DP (respectively 60 hz + 144 Freesync). System specs are:
> 
> ASUS Crosshair VI Hero
> Ryzen 7 1700
> Corsair Vengeance LPX DDR4 16 GB @ 3000
> NZXT Kraken X62
> ASUS ROG Strix RX Vega 56
> Corsair HX850i
> Two ssds, two hdds.
> Phanteks Enthoo Evolv ATX TG
> 
> I guess that's it.
> 
> ____________________________________
> 
> Adding the original post for visibility:
> 
> Hi guys, just got myself a Vega 56 (ASUS ROG Strix), however I'm having terrible choppiness, mouse lag, and sound crackling when the card is idle... In games it works perfectly.
> 
> Checking GPU-Z, I discovered that whenever the memory clock drops below 700 MHz, this happens (with terrible DPC latency). So I configured global Wattman to keep memory at a 700 MHz minimum (state 2, I guess) and it works fine now, but it still makes me wonder whether it's a hardware defect...
> 
> I'm on Windows 10 Pro x64 (1809); yes, I tried DDU and a previous driver. I have two monitors (60 Hz + 144 Hz), but I also tried a single monitor and it was the same... Any ideas?


----------



## Dhoulmagus

vokalemilch said:


> Hey, Serious_Don, that would be amazing. I am fairly new to the whole flashing BIOS part, so just to be sure, this is the card I have. https://www.newegg.com/Product/Prod...=vega 64&cm_re=vega_64-_-14-131-728-_-Product
> Thank you so much


Seems to be the same card. I don't have a purchase URL to verify mine, but the s/n sticker under the barcode reads "AXRX VEGA 64 8GBHBM2-3DH" (which matches your URL). As far as I know, the only difference between reference cards would be Samsung vs Hynix memory, which should be compatible with one another. Obviously take anything I say with a grain of salt; I can make no guarantees, etc.

MD5 checksum:
c2bc2905dbc27a48451541cf2cc9bc10

I'll edit the post with a link momentarily as I cannot attach a bios here

EDIT: https://drive.google.com/open?id=16RRnL0TDr-9NGVlBOhYRhVqH7HcLgd2Y
Hopefully that works, I'm not too familiar with google drive


----------



## Minotaurtoo

miklkit said:


> To clarify, it is the AMD sata drivers that have been causing the problem.
> 
> 
> https://www.overclock.net/forum/11-amd-motherboards/1713860-storport-sys-dpc-latency.html


Ah, ok... that explains it... I'm using the standard MS version of the drivers.


----------



## vokalemilch

Serious_Don said:


> Seems to be the same card, I don't have a purchase URL to verify mine but the s/n sticker under the barcode reads: "AXRX VEGA 64 8GBHBM2-3DH" (which matches your url). As far as I know the only difference between any reference card would be samsung vs hynix memory, which should be compatible with one another. Obviously take anything I say with a grain of salt, I can make no guarantees etc.
> 
> MD5 checksum:
> c2bc2905dbc27a48451541cf2cc9bc10
> 
> I'll edit the post with a link momentarily as I cannot attach a bios here
> 
> EDIT: https://drive.google.com/open?id=16RRnL0TDr-9NGVlBOhYRhVqH7HcLgd2Y
> Hopefully that works, I'm not too familiar with google drive


You are a champ! I will give it a shot. I also have Samsung memory, so I think this should work. Thank you again.


----------



## Deadboy90

Mikkinen said:


> Hi guys, I'm about to get a Gigabyte RX Vega 64 and I would like to use the Kraken G12 with the Corsair H75 (Noctua fan) that I used on an RX 480. I have a compact case and the airflow may not be enough even if I undervolt.
> I would like to apply heatsinks to the VRM and MOSFETs; what size are they?
> Should I just apply thermal paste (Arctic MX-4) and better pads (which ones)?


I asked a similar question, my concern was the fan that comes with it not blowing on anything since the VRMs are so close to the chip


----------



## Mikkinen

Deadboy90 said:


> I asked a similar question, my concern was the fan that comes with it not blowing on anything since the VRMs are so close to the chip


I was thinking of not using the Kraken and instead installing a case fan aimed directly at the VRM/MOSFETs (the GPU sits vertically on a horizontal motherboard).
The problem is that I don't know what heatsinks the MOSFETs need (or the backplate). Can you help me please? 🙂
I have a Gigabyte Gaming...


----------



## furkandeger

furkandeger said:


> Hi there, sorry for not including them earlier. I got them running via HDMI + DP (respectively 60 hz + 144 Freesync). System specs are:
> 
> ASUS Crosshair VI Hero
> Ryzen 7 1700
> Corsair Vengeance LPX DDR4 16 GB @ 3000
> NZXT Kraken X62
> ASUS ROG Strix RX Vega 56
> Corsair HX850i
> Two ssds, two hdds.
> Phanteks Enthoo Evolv ATX TG
> 
> I guess that's it.
> 
> ____________________________________
> 
> Adding the original post for visibility:
> 
> Hi guys, just got myself a Vega 56 (ASUS ROG Strix), however I'm having terrible choppiness, mouse lag, and sound crackling when the card is idle... In games it works perfectly.
> 
> Checking GPU-Z, I discovered that whenever the memory clock drops below 700 MHz, this happens (with terrible DPC latency). So I configured global Wattman to keep memory at a 700 MHz minimum (state 2, I guess) and it works fine now, but it still makes me wonder whether it's a hardware defect...
> 
> I'm on Windows 10 Pro x64 (1809); yes, I tried DDU and a previous driver. I have two monitors (60 Hz + 144 Hz), but I also tried a single monitor and it was the same... Any ideas?


Sorry guys for kinda shoving this in here, but any ideas on this? What could possibly be causing it?


----------



## Minotaurtoo

I'd try setting memory state 1 as the min and core state 1 or 2 as the min instead of the default state 0... that may help... including a pic as a reference for the memory location... core is just above it a bit... just right-click on state 1 and click "set as min".


----------



## Dhoulmagus

vokalemilch said:


> You are a champ! I will give it a shot. I also have Samsung memory so I think this should work.Thank you, again.


any luck?


----------



## BradleyW

Radeon Settings reports 8176 MB of VRAM on my Vega 64. Anyone else have this?


----------



## 99belle99

BradleyW said:


> Radeon Settings reports 8176 MB of VRAM on my Vega 64. Anyone else have this?


I just checked and I am the same. Must be normal.


----------



## sinnedone

Oh yeah! We just downloaded more RAM 😄


----------



## Falkentyne

BradleyW said:


> Radeon Settings reports 8176 MB of VRAM on my Vega 64. Anyone else have this?


8 GB of RAM = 8192 MB, right?


----------



## octiny

BradleyW said:


> Radeon Settings reports 8176 MB of VRAM on my Vega 64. Anyone else have this?



In binary units, 1 GB here is 1024 MB, so an 8 GB card has 8 x 1024 = 8192 MB.

Radeon Settings showing 8176 MB just means 16 MB is held back, presumably reserved by the firmware/driver.

So 8176 is close enough. Normal.
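The arithmetic above as a quick sanity check; the 16 MB figure is just the gap between 8192 and what Radeon Settings reports, and exactly why it's held back is the driver's business:

```python
# 8 GiB card: how many MiB should the tools report?
GIB_IN_MIB = 1024               # binary units: 1 GiB = 1024 MiB
total_mib = 8 * GIB_IN_MIB      # 8192 MiB
reported_mib = 8176             # what Radeon Settings shows
reserved_mib = total_mib - reported_mib
print(total_mib, reserved_mib)  # 8192 16 -> only 16 MiB held back, normal
```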


----------



## Trender

Guys, the new setting in Wattman: which one is better, Memory Timing Level 1 or Level 2?


----------



## bustacap22

Hoping someone here can assist. Can anyone with a Vega 64 with the EK waterblock give me a measurement of how long the card is with the waterblock installed? I am checking for clearance and hoping I don't have to change the placement of my reservoir. Thanks.


----------



## BradleyW

bustacap22 said:


> Hoping someone here can assist. Can anyone with a Vega 64 w/ ek waterblock give me measurements in how long the card is with waterblock installed. I am checking for clearance and hoping I dont have to change placement of my reservoir. Thanks.


Same length without a block.


----------



## VicsPC

bustacap22 said:


> Hoping someone here can assist. Can anyone with a Vega 64 w/ ek waterblock give me measurements in how long the card is with waterblock installed. I am checking for clearance and hoping I dont have to change placement of my reservoir. Thanks.


Yup, exact same length, as stated; if you have a reference card it also becomes a 1-slot card instead of 2.


----------



## ZealotKi11er

Did anyone play Anthem? I had no problems with the Vega FE, since it uses older drivers, but with the Vega 64 LC there are too many crashes, with HDR going into a black screen. Both Nvidia and AMD really need to test HDR.


----------



## Ne01 OnnA

Trender said:


> Guys the new setting ins Wattman, which one is better? memory timing level 1 or level 2?


Memory Timing -> https://www.amd.com/en/support/kb/faq/dh-020

My note: I'm using Level 1 from the beginning (it gives better Fire Strike pts.)

Memory timing reduces memory latency based on the selected level. A lower memory latency level may improve performance in some applications.
Memory timing has three options:

Automatic – Default BIOS Timings
Memory Timing Level 1
Memory Timing Level 2

Click Memory Timing to switch between memory timing levels. Click Apply to save your changes.
NOTE! Memory timing is only available on RX400, RX500 and RX Vega series graphics products

https://www.amd.com/system/files/2018-12/DH020-10.png
https://www.amd.com/system/files/2018-12/DH020-11.png


----------



## THUMPer1

How much does the Hot Spot temp affect the overall core boost on the card?


----------



## Xinoxide

THUMPer1 said:


> How much does the Hot Spot temp affect the overall core boost on the card?


Just as much as core temp.


----------



## THUMPer1

Xinoxide said:


> Just as much as core temp.


My hot spot temp is always 20-28°C hotter than the core temp. Core temp maxes out at around 51°C for the V64 Liquid. Should I re-paste?


----------



## VicsPC

THUMPer1 said:


> My hot spot temp is always 20-28c hotter than core temp. Core temp maxes at around 51c for V64 Liquid. Should I re-paste again?


I honestly wouldn't worry about hot spot temps; loads of people have theirs in the 70-80°C range without an issue. From what we've seen it might be a useless measurement, as it's being taken out of most software. I stopped even monitoring hot spot temps on my full loop. HBM temperature is probably far more important for keeping timings tight than the hot spot temp is.

Here's mine after an hour+ of Frostpunk (pretty demanding as well); ambient is probably about 23°C with the windows closed. As you can see, no hot spot issues here.


----------



## Xinoxide

VicsPC said:


> I honestly wouldn't worry about hotspot temps, loads of people have theirs in the 70-80 range without an issue. From what we've seen it might be a useless measurement as it's being taken out of most softwares. I stopped even monitoring hotspot temps on my full loop. HBM temperature is probably far more important for keeping timings tight then hotspot temp is.
> 
> Here's mine after an hr + of Frostpunk (pretty demanding as well), ambient is probably about 23°C with the windows closed. As you can see no hotspot issues here.


It only becomes a problem if it's peaking around 105°C. At that point it will throttle regardless of actual core temps.

I have a total reapplication of paste and pads in the mail. I just redid the paste with my last bit of IC Diamond and I'm having exactly this problem; I think I didn't have enough for a nice thick application. It seems I don't get a lot of pressure from my stock cooler. It could also be that I reused the existing thermal pads. There's not enough info about what really causes high hot spot temps to find a de facto solution.


----------



## Ne01 OnnA

Xinoxide said:


> It only becomes a problem if its peaking 105C~. At this point it will throttle regardless of actual core temps.
> 
> I have a total reapplication of paste and pads in the mail. I just redid paste with my last bit of IC diamond and I'm having exactly this problem. I think I didn't have enough for a nice thick application. Seems I don't get a lot of pressure from my stock cooler. It could also be that I reused the existing thermal pads. Not enough info about what really causes high hotspot temps to find a defacto solution.


That's way too high IMO (85°C max).
Try giving it an additional fan (on the back of the GPU).
Make sure the airflow in the case is good.


----------



## VicsPC

Xinoxide said:


> It only becomes a problem if its peaking 105C~. At this point it will throttle regardless of actual core temps.
> 
> I have a total reapplication of paste and pads in the mail. I just redid paste with my last bit of IC diamond and I'm having exactly this problem. I think I didn't have enough for a nice thick application. Seems I don't get a lot of pressure from my stock cooler. It could also be that I reused the existing thermal pads. Not enough info about what really causes high hotspot temps to find a defacto solution.


I wasn't even hitting that with my stock cooler; I think I was closer to the 80-90s with that. I do have a horizontally mounted board with intake fans right above the card, so that could be why, but I doubt it. It's possible the hot spot temp is related to the VRM; for me it seems to match one of them.


----------



## Xinoxide

Ne01 OnnA said:


> That's way too high IMO (85deg Max)
> Try to give it additional Fan (on the back of the GPU).
> Make sure AirFlow in the case is Fine/Good





VicsPC said:


> I wasn''t even hitting that with my stock cooler, i think i was closer to 80-90s with that. I do have a horizontal mounted board with intake fans right above the card so could be why but i doubt it. It's possible hotspot temps is related to VRM for me it seems to match one of em.


Working on it. Tried fans blowing on it; going to do the total repaste and repad next.


----------



## 21Dante

I got my Gigabyte Vega 64 Gaming 2 weeks ago.
The drivers do have a lot of issues, but what's bugging me the most, and I can't find out if it's a driver or hardware problem, is Chrome flickering.
It happens only in Chrome, and sometimes when it's minimized and I'm looking at the desktop.
Chrome is at the latest version; I tried the last 3 driver versions, stable and beta, tried different DP ports, and have now plugged in an HDMI cable to test.
Fun story: it doesn't happen on my second monitor, an old 19-inch LG connected via a DVI-to-DP converter, only on my Dell S2719DGF.
With my previous card, a 1070, I had no issue.
Is this a driver issue or a faulty card? Any ideas?
I tried some suggested Chrome fixes, such as disabling hardware acceleration and GPU rasterization, but none of them worked.


----------



## VicsPC

21Dante said:


> I got my Gigabyte Vega 64 gaming 2 weeks ago.
> Drivers do have a lot of issues but what's bugging me the most and I can't find out if it's a driver or hardware problem,is chrome flickering.
> It happens only at chrome and sometimes when it's minimized and I'm looking at the desktop.
> Chrome is at the latest version,tried last 3 driver versions stable and beta,tried different dp ports,now plugged a hdmi cable to test.
> Fun story is that it doesn't happen at my second monitor ,an old 19inch LG which is connected via a dvi to dp converter but only to my dell S2719dgf.
> With my previous card,a 1070 ,I had no issue.
> Is this a driver issue or a faulty card?Any ideas?
> I tried some chrome fixes that are suggested,such as disabling hardware acceleration and gpu rasterization but none of them worked.


I had a flickering issue on my R9 390 when I got my FreeSync monitor; I switched ports and that solved the issue I was having, though it wasn't a Chrome issue. Try turning off hardware acceleration in Chrome and see if it still happens. It seems software-related if it's only Chrome; try Edge and Firefox and see if you get the same issue.


----------



## 21Dante

Switching ports didn't fix it. Tried HDMI; it happened there too.
Returned the card to stock settings, same result. I think I'm gonna RMA it.
Also, it started appearing on the other monitor too, and even in Windows with Chrome closed.


----------



## BradleyW

THUMPer1 said:


> My hot spot temp is always 20-28c hotter than core temp. Core temp maxes at around 51c for V64 Liquid. Should I re-paste again?


Re-pasting won't solve anything. You need active airflow across the card to cool the hot spot, especially with your model, seeing as it doesn't use fans to blow air onto all the components on the PCB, which is an issue. The only other way is to remove the outer casing and have some fans blowing directly onto the card. Or, better yet, buy a WC block that covers the whole card, but I'm not sure if a block exists for your model.


----------



## abso

Hey guys, I got myself a Vega 56 Red Dragon today and have a few issues. The card is very loud; it seems like the fans ramp up way too early. I only stay around 60-63°C, but I couldn't find any option to set a higher temperature target in Wattman. I also tried my luck with undervolting in Wattman: I set states 6+7 to 1600 MHz, voltage 1070 mV, and power target +50%. The voltage doesn't seem to matter though; Afterburner always shows me the same power consumption. Any clue what I'm doing wrong here?


----------



## Maracus

Xinoxide said:


> Working on it, Tried fans blowing on it. Going to do the total repaste and repad next.


I re-pasted and put new pads on my Strix 56; don't be shy with the paste on the die and HBM.


----------



## Xinoxide

Maracus said:


> I re-pasted and put new pads on my strix 56, don't be shy with the paste on the die and HBM


Promise I won't. I originally wanted to do a liquid metal TIM, but come to find out I have a non-molded die, and I wouldn't trust questionable liquid metal contact with the HBM.

Gonna save that tube for my next GPU, I guess.


----------



## Trender

Ne01 OnnA said:


> Memory Timing -> https://www.amd.com/en/support/kb/faq/dh-020
> 
> My Note: Im using Level 1 from the begining (it gives better FireStrike pts.)
> 
> Memory timing reduces memory latency based on the selected level. A lower memory latency level may improve performance in some applications.
> Memory timing has three options:
> 
> Automatic – Default BIOS Timings
> Memory Timing Level 1
> Memory Timing Level 2
> 
> Click Memory Timing to switch between memory timing levels. Click Apply to save your changes.
> NOTE! Memory timing is only available on RX400, RX500 and RX Vega series graphics products
> 
> https://www.amd.com/system/files/2018-12/DH020-10.png
> https://www.amd.com/system/files/2018-12/DH020-11.png


Alright. Also, in your testing, does it make anything less stable?
As in, maybe Level 2 with a higher memory clock is better than Level 1 with a lower clock, *if* Level 1 is less stable.


----------



## abso

I had something strange happen yesterday. After installing my Vega 56, my PC shut off mid-game without any error message. It would only restart after disconnecting it from power for a moment. MSI Afterburner was showing around 250 W just before it happened. I have a 1700X with my Vega card installed. The power supply is a 3-year-old Seasonic X-650 KM3, so it should be powerful enough for this kind of configuration.

After that it didn't happen anymore, just the once so far. Any ideas what could be the issue here?


----------



## Ne01 OnnA

Trender said:


> Alright, also in your testing does it makes it less stable?
> In like maybe is better level 2 but higher speed than level 1 with lower speed *if* its less stable


You need to test it.
And share your opinion 

I'm too busy to play 3DMark right now  
-> BFV is making me sleepless...


----------



## Minotaurtoo

abso said:


> I had something strange happen yesterday. After installing my Vega 56 my PC shut off mid game without any error message. It only would restart after disconnecting it from power for a moment. MSI afterburner was showing around 250W just before it happened. I have a 1700X with my Vega card installed. The powersupply is a 3y old Seasonic X-650 KM3 so it should be powerfull enough for this kind of configuration.
> 
> After that it didnt happen anymore, just once so far. Any ideas what could be the issue here?


I had similar issues with my Vega 64 on my old Rosewill 650 W PSU... replaced it with an 850 W unit... problem solved... mine behaved about the same way... cut off... no error... had to cut power for a few seconds before it would come on again...


----------



## Minotaurtoo

Ugh... I'm considering a registry mod to get a higher power limit now... anyone have a trusted mod that won't brick my registry? : ) Don't know if it matters which specific card it is, but mine is a Gigabyte Gaming OC 8G Windforce edition Vega 64...


----------



## sinnedone

Yeah, I need a PP table for a Vega 64 LC that doesn't drop the P7 voltage to 2.0v and lets me get past the ~375 W and draw more amps. My temps stay in the low-to-mid 40s.


----------



## Minotaurtoo

Not sure if raising the power limits will be a good solution for me, really... the problem I'm having is interesting to me... I can undervolt this card and do quite well in most situations... then certain programs will load the card up and it will hit nearly 1700 MHz even though I have 1630 set as the max, and of course it goes unstable and crashes... if I leave the card at stock with the max power limit, it hits that speed and does just fine... I'm guessing this card is not really listening to the AMD software like a reference card would... temps are not really an issue for me... folding for hours, and the max temp for the hot spot was 88°C... core 63°C, and this was with fans at 50%... the attached pic shows exactly what is happening... 1685 isn't stable at 1.1 V but 1630 is... for some reason this card will scoot right past what I have set as the max clock speed... in fact, it really seems to ignore the set clocks altogether... if I leave stock volts though, it's fine, and the max clock I've seen it hit was 1724 MHz... for a brief moment, of course.


edit: is it normal to get 900k PPD in Folding@home on a Vega 64? Seems high, but it's consistent on mine.


----------



## VicsPC

Minotaurtoo said:


> Not sure if raising the power limits will be a good solution for me really... the problem I'm having is interesting to me... I can undervolt this card and do quite will in most situations... then certain programs will load the card up and it will hit nearly 1700mhz even though I have 1630 set as max and of coarse it goes unstable and crashes... if I leave the card at stock with max power limit, it hits that speed and does just fine... I'm guessing this card is not really listening to the amd software like a ref card would... temps are not really an issue for me... folding for hours and max temp for the hotspot was 88C... core 63C and this was fans at 50%.... the attached pic shows exactly what is happening... 1685 isn't stable at 1.1v but 1630 is... for some reason this card will scoot right past what I have set as max clock speed... in fact, it really seems to ignore the clocks set all together... if I leave stock volts though, it's fine and max clocks I've seen it hit was 1724mhz... for a brief moment of coarse.


You're better off overclocking the HBM; you'll get a far better performance increase than by messing with the core clock. Core OC doesn't seem to do much for Vega, unfortunately, but an HBM OC has proven to be quite worthwhile.


----------



## Minotaurtoo

VicsPC said:


> You're better off overclocking the HBM you'll get far better performance increase then messing with the core clock, doesn't seem to do much for Vega unfortunately but HBM OC has proved to be quite necessary.


Not sure how far to push the HBM... not often, but I do get green screens on boot if I push past 1040 MHz... but using timing level 1 seems to be more stable, and in synthetics at least it performs almost as well...


----------



## Spacebug

Minotaurtoo said:


> Not sure if raising the power limits will be a good solution for me really... the problem I'm having is interesting to me... I can undervolt this card and do quite will in most situations... then certain programs will load the card up and it will hit nearly 1700mhz even though I have 1630 set as max and of coarse it goes unstable and crashes... if I leave the card at stock with max power limit, it hits that speed and does just fine... I'm guessing this card is not really listening to the amd software like a ref card would... temps are not really an issue for me... folding for hours and max temp for the hotspot was 88C... core 63C and this was fans at 50%.... the attached pic shows exactly what is happening... 1685 isn't stable at 1.1v but 1630 is... for some reason this card will scoot right past what I have set as max clock speed... in fact, it really seems to ignore the clocks set all together... if I leave stock volts though, it's fine and max clocks I've seen it hit was 1724mhz... for a brief moment of coarse.
> 
> 
> edit: is it normal to get 900k ppd in folding at home on a vega 64... seems high, but it's consistent on mine.


Not just your card, that is classic Vega behavior due to the boost algorithm... 
Guessing boost algorithm at some time sees low or lower than expected power consumption and decides there is room to increase clocks, and does that, past the point of stability.
Annoying as hell... 

My card, for example, is probably one of the crappier V64 chips given the voltage I have to shove into it. 

Max stable clock I've reached is just above 1790MHz core at 1.3875v.
1795 will eventually crash during heavy load; at no load but locked in the P7 state the card idles at 1817MHz, way past any stability during load, but surviving idle. 

If lighter loads occur, don't be surprised if the card clocks higher than you've set... 
The very same algorithm that has your card pretty much always clocking lower than target during high load works in reverse during light loads, pushing clocks past the target...
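That overshoot behavior can be sketched as a toy feedback loop. To be clear, this is purely illustrative and not AMD's actual algorithm; the 90% headroom threshold and the 10 MHz step are assumptions made up for the example:

```python
# Toy sketch of the overshoot behavior described above -- purely
# illustrative, NOT AMD's actual algorithm; the 90% headroom threshold
# and 10 MHz step are assumptions made up for this example.

def next_clock(current_mhz, target_mhz, power_w, power_limit_w, step_mhz=10):
    """Pick the next core clock from the measured power draw."""
    if power_w < power_limit_w * 0.9:
        # Lots of headroom (light load): keep stepping the clock up,
        # even past the user-set target -- the overshoot in question.
        return current_mhz + step_mhz
    if power_w > power_limit_w:
        # Over the limit: back off, and never settle above the target.
        return min(current_mhz - step_mhz, target_mhz)
    return target_mhz  # near the limit: hold the target

# Light load: only 110 W drawn against a 220 W limit...
clock = 1630
for _ in range(10):
    clock = next_clock(clock, target_mhz=1630, power_w=110, power_limit_w=220)
print(clock)  # 1730 -- well past the 1630 MHz "maximum"
```

Under heavy load the same loop runs in reverse: measured power pins at the limit, so the controller keeps stepping down and the card sits below target instead.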


----------



## sinnedone

Boosting past max P7 clocks is normal for these cards. My reference card will boost past the 1750mhz all the way up to 178x for some reason under certain conditions.


----------



## Minotaurtoo

glad to know... but yeah it's annoying when it causes stability to go out the window... because mine is air cooled I'm trying to keep voltage down, but when it flies past 1650mhz it loses stability.... guess I'll just leave it stock then...


----------



## 113802

Spacebug said:


> Not just your card, that is classic Vega behavior due to the boost algorithm...
> Guessing boost algorithm at some time sees low or lower than expected power consumption and decides there is room to increase clocks, and does that, past the point of stability.
> Annoying as hell...
> 
> My card for example, which probably is one of the crappier v64 chips given the voltage i have to shove into it.
> 
> Max stable clock ive reached is just above 1790mhz core at 1.3875v.
> 1795 will eventually crash during heavy load, at no load but locked p7 state card idles at 1817mhz, way past any stability during load, but surviving idle.
> 
> If lighter loads occur don't be surprised if card clocks higher than you've set...
> The very same algorithm that makes your card pretty much always clock lower than target during high load works in reverse during light loads, increasing clocks past the target clock...


You don't overvolt Vega; when you overvolt Vega the card just hits the power limit much sooner and runs worse than stock. Undervolt and overclock and enjoy the sustained 1750Mhz+ frequency in games.


----------



## Spacebug

WannaBeOCer said:


> You don't overvolt Vega; when you overvolt Vega the card just hits the power limit much sooner and runs worse than stock. Undervolt and overclock and enjoy the sustained 1750Mhz+ frequency in games.


Undervolting doesn't work with my crap silicon; since the chip scales with voltage, more voltage it is.
Just not more than this, though, closer to 1.4v and the display starts to drop out...

My card isn't hindered by power limits, it registers around 110w max power draw with those settings, hardmods to the current sense circuit, amongst other things...


----------



## 113802

Spacebug said:


> Undervolt don't work with my crap silicon, since the chip scales with voltage, more voltage it is.
> Except not more than this, more towards 1.4v and the display starts to drop out...
> 
> My card isn't hindered by powerlimits, it registers around 110w max powerdraw with those settings, hardmods to the current sense circuit, amongst other things...


Thanks for confirming your hardware mods. Glad it's only registering 110w max power draw. Since it's only registering 110w does it always run at peak boost(P7) or does it just run the peak boost a bit longer than normal?

I noticed I can trick my card into running P7 all the time by using rocm-smi to force the card to P7 in Ubuntu and restarting. The card gets stuck in P7 and runs at 1750Mhz in Windows all the time until the next reboot, and that's when I have to go back to undervolting. It runs at 1750Mhz without overboosting like it usually would when undervolting.

P7 stuck trick: https://www.3dmark.com/fs/17528640
Normal undervolt: https://www.3dmark.com/fs/17445745
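The rocm-smi trick ultimately reads/forces the DPM levels the amdgpu driver exposes on Linux (sysfs files like `pp_dpm_sclk`, where the active level carries a trailing `*`). A minimal parser sketch — the sample text is an assumed/typical format for illustration, not captured from a real card:

```python
# Minimal parser for amdgpu pp_dpm_sclk-style sysfs output, where the
# active DPM level is marked with a trailing '*'. SAMPLE is an assumed/
# typical format for illustration, not captured from a real card.
import re

SAMPLE = """0: 852Mhz
6: 1630Mhz
7: 1750Mhz *"""

def active_pstate(text):
    """Return (level, clock_mhz) of the '*'-marked line, or None."""
    for line in text.splitlines():
        if "*" in line:
            m = re.match(r"\s*(\d+):\s*(\d+)\s*Mhz", line)
            if m:
                return int(m.group(1)), int(m.group(2))
    return None

print(active_pstate(SAMPLE))  # (7, 1750)
```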


----------



## m70b1jr

Picked up a Gigabyte Vega 56 OC Edition on eBay for $230 shipped. The seller stated it ran hot and needed to be undervolted and underclocked. First thing I did was take it apart to replace the TIM (which was flaky and crispy) with some Arctic Silver, then flashed the Vega 64 bios (I have Hynix memory and can't seem to get past 800mhz on HBM). I'm currently still tweaking everything.


----------



## Minotaurtoo

Anyone else having issues with the 19.1.2 driver, or am I doing something wrong? Seemingly random "driver stuck in thread" errors, and sometimes the screens will go black, come back, then crash... don't remember this happening on the 18.x drivers... a couple of times I came back to find my screens black and the fan at full revs...


----------



## m70b1jr

Minotaurtoo said:


> anyone else having issues with the 19.1.2 driver or am I doing something wrong... seems completely random "driver stuck in thread" errors and sometimes screens will go black, come back then crash.... don't remember this happening on 18.x drivers.... a couple times I came back to find my screens black and fan at full revs...


Having the same issues, do you have the gigabyte gaming OC? Apparently it happens on those cards all the time.


----------



## Minotaurtoo

m70b1jr said:


> Having the same issues, do you have the gigabyte gaming OC? Apparently it happens on those cards all the time.


 yes I do... 



edit: I just went to gigabyte.com and noticed they have an updated bios for this card... says increased stability... think I'll try to flash it.


----------



## Spacebug

WannaBeOCer said:


> Thanks for confirming your hardware mods. Glad it's only registering 110w max power draw. Since it's only registering 110w does it always run at peak boost(P7) or does it just run the peak boost a bit longer than normal?
> 
> I noticed I can trick my card to run p7 all the time by using rocm-smi and forcing my card to P7 in Ubuntu and restarting. The card gets stuck in P7 and runs at 1750Mhz in Windows all the time until the next reboot and thats when I have to go back to undervolting. It runs at 1750Mhz without overboosting like it usually would when undervolting.
> 
> P7 stuck trick: https://www.3dmark.com/fs/17528640
> Normal undervolt: https://www.3dmark.com/fs/17445745


The card behaves normally in regards to boost clocks: it starts high, then drops about 10MHz as temp (and leakage/power draw) increases once the loop warms up.
The only difference is that the boost algorithm targets different clocks due to the low power-draw reading.
I set a 1740 P7 target clock and the card maintains around 1780 during heavy 3D load, and so on...


----------



## miklkit

Minotaurtoo said:


> anyone else having issues with the 19.1.2 driver or am I doing something wrong... seems completely random "driver stuck in thread" errors and sometimes screens will go black, come back then crash.... don't remember this happening on 18.x drivers.... a couple times I came back to find my screens black and fan at full revs...



I started getting that with my Sapphire after going to the 19.1.2 drivers. I bounced back and forth between the 18.x and 19.x drivers for a while and it mostly went away. Now on the 19.1.2 drivers, every day after a few minutes in the browser it will black screen and recover. After that it is fine. The only reason I'm sticking with these drivers is because Vulkan is better. 



Why do I have to be the outlier? Everyone else is complaining about going over their clocks while mine doesn't even come close to its set clocks. It's set to 1680/1100 @ -90mV (1100mV) +50 power, and yesterday after gaming for hours it hit a peak of 1594MHz. Temps are fine, as the hot spot peaked at 70C. Gaah.


----------



## Minotaurtoo

miklkit said:


> I started getting that with my Sapphire after going to the 19.1.2 drivers. I bounced back and forth between the 18.x and 19.x drivers for a while and it mostly went away. Now on the 19.1.2 drivers, every day after a few minutes in the browser it will black screen and recover. After that it is fine. The only reason I'm sticking with these drivers is because Vulkan is better.
> 
> 
> 
> Why do I have to be the outlier? Everyone else is complaining about going over the clocks while this one doesn't come close to overclocks. It's set to 1680/1100 @ -90mv (1100mv) +50 power and yesterday after gaming for hours it hit a peak of 1594mhz. Temps are fine as the hot spot peaked at 70C. Gaah.


 I flashed to the new bios from gigabyte... so far so good... still had the "green screen" pause on cold startup... only lasts about 3 seconds while windows is loading (may be normal for these cards)... but 4 hours into a folding session at speeds passing 1680mhz and no crash.... I'm thinking it may be a case of new driver expects new bios now.... not sure... but yeah I'm sticking with new driver because it seems to perform better in some games.... on last bios it would have crashed by now hitting clock speeds this high... it's actually holding 1662mhz right now while folding... not sure how, but new bios seems to run cooler too...maybe fan speeds upped? or better current management? this pic shows temps/clocks after 4 hrs of folding... I started off at my old clock settings of 1630max and upped it to 1680max after seeing how low temps were... power limit is set to +50%.


edit: right after I posted this, it hit 1692 for a brief time.


----------



## RX7-2nr

I've been running the Vega 64 undervolted and 50% power limit with the air cooler for a while but finally tore the system down to clean it and add the vega to my WC loop. Under idle and light load it's pretty much at ambient room temperature, 22-25c. Heavy use where it stays in state 6-7 it gets to 35-37c. The absolute silence is amazing, I can't believe I tolerated that fan for so long. 
All the clocks are stock at the moment, I'm gonna start reading into overclocking the core and HBM soon.


----------



## VicsPC

RX7-2nr said:


> I've been running the Vega 64 undervolted and 50% power limit with the air cooler for a while but finally tore the system down to clean it and add the vega to my WC loop. Under idle and light load it's pretty much at ambient room temperature, 22-25c. Heavy use where it stays in state 6-7 it gets to 35-37c. The absolute silence is amazing, I can't believe I tolerated that fan for so long.
> All the clocks are stock at the moment, I'm gonna start reading into overclocking the core and HBM soon.


Undervolting only nets 17w? I would have thought quite a lot more than that. I've left mine stock, just using 1050 HBM; might try to go for 1100.


----------



## Naeem

No audio in game recordings and streams with driver 19.1.2. I went back to 18.12.1 and it worked again. Can anyone else test this, please? 

i have Vega 64 Liquid


----------



## sinnedone

Did you check your playback and recording tabs after right clicking on your speaker?

Sometimes the AMD driver install would change the defaults for me.


----------



## miklkit

I've been rolling back to older drivers and when I got to the 18.12.1.1 drivers the fans started working correctly and temperatures dropped 10-15C. Vulkan works great too! 



An oddity: Every time I uninstalled a driver I got that "driver stuck in thread" and a blue screen. This crashed the puter and I had to repeat the process along with scans until the old drivers were completely cleaned out. I also installed only the drivers and left the sound drivers and Radeon software out. Disabling "high definition audio bus" in device manager also helped. 



tl;dr: No more 2019 drivers for me.


----------



## RX7-2nr

VicsPC said:


> Undervolting only nets 17w? I would have thought quite a lot more than that. I've left mine stock, just using 1050 HBM; might try to go for 1100.


The wattage didn't drop much because the core clock went up. At the regular balanced setting the core only reaches about 1550 at 220 watts. 
I've been trying out 1050 across p6, 7, and the HBM. Seems to work out a little better than 1100.

All of this seems like a complete waste of time because the GPU responds so horribly to overclocking. The best I've gotten so far, with only a couple of hours of tweaking in all, is a 12% increase in Superposition at 332 watts. This is pretty much the same as what I read other people were finding, so I expected it.
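Those numbers put a figure on the efficiency cost: +12% score at 332 W against the ~220 W stock draw mentioned above works out to a sizeable perf-per-watt loss. A quick check (the absolute score is a placeholder; only the ratios matter):

```python
# Perf-per-watt check for the overclock above: +12% score at 332 W
# against the ~220 W stock draw from the posts. The absolute score is
# a placeholder -- only the ratios matter.
stock_score, stock_watts = 100.0, 220.0
oc_score, oc_watts = stock_score * 1.12, 332.0

stock_eff = stock_score / stock_watts
oc_eff = oc_score / oc_watts

print(f"score:  {(oc_score / stock_score - 1) * 100:+.0f}%")   # +12%
print(f"power:  {(oc_watts / stock_watts - 1) * 100:+.0f}%")   # +51%
print(f"perf/W: {(oc_eff / stock_eff - 1) * 100:+.0f}%")       # -26%
```

So the overclock trades roughly a quarter of the card's efficiency for that 12% of score — consistent with the "waste of time" verdict above.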


----------



## VicsPC

RX7-2nr said:


> The wattage didn't drop much because the core clock went up. At the regular balanced setting the core only reaches about 1550 at 220 watts.
> I've been trying out 1050 across p6, 7, and the HBM. Seems to work out a little better than 1100.
> 
> All of this seems like a complete waste of time because the GPU responds so horribly to overclocking. The best I've gotten so far, only a couple hours of tweaking in all, is a 12% increase in Superposition at 332 watts. This is pretty much the same as I read other people were finding so I expected it.


I've left mine alone; on water with the factory bios I hit 1600+ no problem, consuming about 220-225w. All I've done is take the memory to 1050. AMD cards have always seemed to be clocked high from the factory, so I honestly haven't bothered with core clocks on either my R9 390 or V64.


----------



## RX7-2nr

VicsPC said:


> Ive left mine alone and on water with the factory bios i hit 1600+ no problem, consumes about 220-225w, all ive done is get memory to 1050 and left it alone. AMD cards have always seemed to be highly clocked from the factory so I've honestly not bothered on either my r9 390 or v64 as far as core clocks go.


How did you test your card?


----------



## VicsPC

RX7-2nr said:


> How did you test your card?


Set it to 1050, then just ran a game I know uses a lot of VRAM (Frostpunk); didn't see any artifacts. I get about 1550-1640 depending on the game, and that's on stock settings. Uncapped (I have FreeSync) playing Siege I hit about 222w.


----------



## RX7-2nr

VicsPC said:


> Set it to 1050 then just ran a game i know uses a lot of VRAM, Frostpunk, didn't see any artifacts. I get about 1550-1640 depending on the game and that's on stock settings. Uncapped (i have freesync) playing Siege i hit about 222w.


That's exactly why I was asking. You need to use a benchmark to establish performance changes from one setting to another. Running around in a game isn't going to give you the same repeatable sequence of renders time and time again and that's what you need if you want some kind of basis for comparison. All of my numbers are from running superposition on 1080 extreme. I made a spreadsheet to keep track of scores and settings and it's really helped to see the effect of each change.
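The spreadsheet approach can be sketched in a few lines: average several runs per settings profile and keep the spread, so one noisy run doesn't decide the comparison. The scores below are made-up placeholders, not real Superposition results:

```python
# Sketch of the spreadsheet idea above: average repeated benchmark runs
# per settings profile and report the spread, so a single noisy run
# doesn't decide the comparison. Scores are made-up placeholders.
from statistics import mean, stdev

runs = {
    "stock":              [4980, 5010, 4995],
    "1750/1105, +50% PL": [5590, 5615, 5602],
}

baseline = mean(runs["stock"])
for name, scores in runs.items():
    avg = mean(scores)
    print(f"{name}: avg {avg:.0f}, spread +/-{stdev(scores):.0f}, "
          f"vs stock {100 * (avg / baseline - 1):+.1f}%")
```

If the spread between repeat runs is as large as the difference between profiles, the change isn't telling you anything — which is exactly the point about repeatable benchmark sequences.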


----------



## Xinoxide

I spent nearly my entire day poking around at my reference vega64.

Here's what I think I'm going to settle with.

Average boost is around 1540MHz with a GPU-Z-reported voltage of 950mv. Edit: caught Heaven in one of those super random dips. Heaven stays around 1540-1560MHz.

Also just a GPUz sensor shot from about an hour of RE2.


----------



## VicsPC

RX7-2nr said:


> That's exactly why I was asking. You need to use a benchmark to establish performance changes from one setting to another. Running around in a game isn't going to give you the same repeatable sequence of renders time and time again and that's what you need if you want some kind of basis for comparison. All of my numbers are from running superposition on 1080 extreme. I made a spreadsheet to keep track of scores and settings and it's really helped to see the effect of each change.


It does if you run the same scenario over and over and nothing changes but the settings, haha. I'd run synthetic benchmarks, but I find them so unreliable it's pointless; the score can jump just from restarting your PC, not very accurate if you ask me. I'll usually run a couple of in-game benchmarks, take the average, then do it again after I've changed settings.


----------



## bustacap22

OCN, I am about to purchase Vega 64 today 2/5/19 at Microcenter. I will be getting the reference card as I plan on watercooling. My choices are the MSI Air Boost Overclocked or PowerColor. Seeing which one to grab and hoping OCN members have experience with these two cards. Any input or suggestions greatly appreciated.


----------



## THUMPer1

bustacap22 said:


> OCN, I am about to purchase Vega 64 today 2/5/19 at Microcenter. I will be getting the reference card as I plan on watercooling. My choices are the MSI Air Boost Overclocked or PowerColor. Seeing which one to grab and hoping OCN members have experience with these two cards. Any input or suggestions greatly appreciated.


PowerColor.


----------



## Ne01 OnnA

bustacap22 said:


> OCN, I am about to purchase Vega 64 today 2/5/19 at Microcenter. I will be getting the reference card as I plan on watercooling. My choices are the MSI Air Boost Overclocked or PowerColor. Seeing which one to grab and hoping OCN members have experience with these two cards. Any input or suggestions greatly appreciated.


PowerColor -> XFX, PowerColor & Sapphire are AMD/ATI-exclusive partners, 
so they know how to build Radeons.


----------



## prom

Today I got myself an updated Strix V64 for 430usd. I was told it didn't fit in the NCASE...


----------



## BradleyW

prom said:


> Today I got myself an updated Strix V64 for 430usd. I was told it didn't fit in the NCASE...


Did you take the stock fans off?


----------



## prom

I did, so right now I'm trying to find a decent cooling solution.
I mean the card STAYS cool (2x F12 in exhaust), but they are plugged into the Strix auxiliary PWM ports.
I really dislike GPU Tweak, but I need it to control those fans.

Does anyone know if there is a 6-pin adapter for the card's main fan control port?


----------



## BradleyW

prom said:


> I did, so right now I'm trying to find a decent cooling solution.
> I mean the card STAYS cool (2x F12 in exhaust), but they are plugged into the Strix auxiliary PWM ports.
> I really dislike GPU Tweak, but I need it to control those fans.
> 
> Does anyone know if there is a 6pin adapter for the cards main fan control port?


There is, as I've seen people use them after removing the stock fans, but where they got them from beats me. If you find out, let me know.


----------



## 113802

Can anyone beat my Vega 64 GPU score?

http://www.3dmark.com/fs/18269189


----------



## Grin

Can someone beat my Vega 64 core clock? 
https://www.3dmark.com/3dm/32252534


----------



## VicsPC

Grin said:


> Can someone beat my Vega 64 core clock?
> https://www.3dmark.com/3dm/32252534


Not sure it's really accurate lol. My 3dmark says my 2700x hit 4.8ghz turbo core. https://www.3dmark.com/fs/15978275


----------



## 113802

Grin said:


> Can someone beat my Vega 64 core clock?
> https://www.3dmark.com/3dm/32252534


You do realize your card isn't running at 1800Mhz during that benchmark? Most Vega 64 cards with the LC bios only run at 1700-1730Mhz during the benchmark. My 28513 GPU score was running at 1750-1757Mhz the entire benchmark.

Edit: I do want to entertain you though, since I recently found a way to lock P7 here is my card running between 1810Mhz-1825Mhz throughout the benchmark

https://www.3dmark.com/3dm/33420071?

Edit 2: Here's proof. I'm going to take another video in a bit once I set up RivaTuner, since the overlay counter is way off. Also, the video is at +3.5 while the 3DMark run is +4 on the core. +4 runs fine as well and sits right between 1810-1825Mhz in Overwatch. Pretty much uses 394w the entire time though.


----------



## Ne01 OnnA

29k  (yes, I know, but it's still a nice score to see)

-> https://www.3dmark.com/fs/17522412


----------



## 113802

Ne01 OnnA said:


> 29k  (Yes i know but still is a nice to see score)
> 
> -> https://www.3dmark.com/fs/17522412


Yeah it's a nice score but that tessellation mod.


----------



## ZealotKi11er

WannaBeOCer said:


> Can anyone beat my Vega 64 GPU score?
> 
> http://www.3dmark.com/fs/18269189


I could probably beat it with a better CPU:

https://www.3dmark.com/fs/17868028


----------



## 113802

ZealotKi11er said:


> I could probably beat it with better CPU:
> 
> https://www.3dmark.com/fs/17868028


I beat it a few minutes ago.

https://www.3dmark.com/3dm/33420071?



Ne01 OnnA said:


> 29k  (Yes i know but still is a nice to see score)
> 
> -> https://www.3dmark.com/fs/17522412


To leave no doubt I crushed everyone's score.

http://www.3dmark.com/fs/18270986


----------



## colorfuel

@WannaBeOCer: 

Very impressive!

What's your Time Spy score on those settings?


----------



## Ne01 OnnA

My best Time Spy (I need to retest with new drivers)



----------



## ZealotKi11er

WannaBeOCer said:


> I beat it a few minutes ago.
> 
> https://www.3dmark.com/3dm/33420071?
> 
> 
> 
> To leave no doubt I crushed everyone's score.
> 
> http://www.3dmark.com/fs/18270986


My 28.3K score was done with an air card. I can try to beat your 29.3K. Now I have motivation.


----------



## 113802

ZealotKi11er said:


> My 28.3K score is done with Air card. I can try to beat your 29.3K. Now I have motivation.


28.3k is impressive for an air Vega 64! LC cards are obviously binned. I kinda wanna get a 9900k and mod it to work with my board to crush the HWbot leaderboard and Futuremark. 

I also don't want to spend that much to have the same IPC as I do now.


----------



## ZealotKi11er

WannaBeOCer said:


> 28.3k is impressive for an air Vega 64! LC cards are obviously binned. I kinda wanna get a 9900k and mod it to work with my board to crush the HWbot leaderboard and Futuremark.
> 
> I also don't want to spend that much to have the same IPC as I do now.


Futuremark is easy to beat. The top score for Vega 64 is not even that good.


----------



## 113802

ZealotKi11er said:


> Futuremark is easy to beat. The top score for Vega 64 is not even that good.


HWBot scores are also carried by the CPU: https://hwbot.org/benchmark/3dmark_...Id=videocard_2879&cores=1#start=0#interval=20


----------



## ZealotKi11er

WannaBeOCer said:


> HWBot scores are also carried by the CPU: https://hwbot.org/benchmark/3dmark_...Id=videocard_2879&cores=1#start=0#interval=20


So what was the trick to get the clock to stay high?


----------



## 113802

ZealotKi11er said:


> So what was the trick to get clock to stay high?


Unfortunately it's only temporary; I've said it a few times already in this thread. Flash the FE 8GB bios, install the Radeon Pro drivers. Flash back to the LC bios and reinstall the Radeon gaming drivers. For some odd reason the P7 state stays locked for a few reboots, and eventually clocks go back to normal.

I guess no one believed me until I showed proof. We should all yell at AMD until they allow us to properly lock the bios to a state. It's possible, since I'm able to.

https://www.techpowerup.com/vgabios/198367/198367


----------



## ZealotKi11er

WannaBeOCer said:


> Unfortunately it's only temporary, I've said it a few times already on this thread. Flash the FE 8GB Bios, install the Radeon Pro drivers. Flash back to LC bios and reinstall the Radeon gaming drivers. For some odd reason P7 state is locked for a few reboots and eventually clocks go back to normal.
> 
> I guess no one believed me until I showed proof. We should all yell at AMD until they allow us to properly lock the bios to a state. It's possible since I'm able to.
> 
> https://www.techpowerup.com/vgabios/198367/198367


That is super strange. I thought maybe you were disabling PowerPlay.


----------



## 113802

ZealotKi11er said:


> That is super strange. I thought maybe you disabling PowerPlay.


Trust me I would if it was easy to flash modified bios. I disabled GPU boost on my GTX 780/GTX 780 Ti.


----------



## ZealotKi11er

Was comparing the results, and it seems GT2 is where you get the extra fps. Does GT2 normally drop clocks more than GT1?

https://www.3dmark.com/compare/fs/18270986/fs/17868028


----------



## Grin

colorfuel said:


> @WannaBeOCer:
> 
> Very impressive!
> 
> Whats your Timespy Score on those settings?


Agreed. What is your voltage at P6-7?


----------



## 113802

Grin said:


> Agree What is your voltage at P6-7?


The card just runs at P7; it doesn't drop to P6. It uses between 350-396w depending on the scene.


----------



## Grin

Are all voltages on auto? I'm asking about exact numbers: on P7, say, I set close to 1 volt, and you?


----------



## 113802

Grin said:


> Are all voltages in auto? I’m asking about exact numbers let’s say on P7 I am setting close to 1 volt, and you?


On auto, P7 is at 1250mV since I'm using an LC card. I could not undervolt when running at 1827Mhz.


----------



## ZealotKi11er

WannaBeOCer said:


> At auto P7 is at 1250mV since I'm using a LC card at auto. I could not undervolt when running at 1827Mhz


So you set the clock to 1827? I only set mine to 1780; couldn't go any higher.


----------



## 113802

ZealotKi11er said:


> So you set the clock to 1827? I only set mine to 1780. Could not go anymore.


Yeah, the 29.3k was at 1827Mhz; it's a reference LC card. I'm annoyed that I know the card can run 77Mhz higher than stock but I'm limited by AMD's garbage PowerPlay.


----------



## ZealotKi11er

WannaBeOCer said:


> Yeah the 29.3k was at 1827Mhz, it's a reference LC card. I'm annoyed that I know the card can run at 77Mhz higher than stock but I'm limited due to AMD's garbage PP.


1750 is not the LC clock. It's 1667MHz.


----------



## 113802

ZealotKi11er said:


> 1750 is not LC clock. Its 1667MHz.


1750Mhz is P7, P6 is 1667Mhz


----------



## ZealotKi11er

WannaBeOCer said:


> 1750Mhz is P7, P6 is 1667Mhz


When you go by AMD's site, for example, the Vega 64 air is 1630MHz boost but rated at 1546MHz base.

https://www.gigabyte.com/Graphics-Card/GV-RXVEGA64X-W-8GD-B#kf


----------



## Grin

WannaBeOCer said:


> 1750Mhz is P7, P6 is 1667Mhz


Right, I have the same numbers. It seems the original LC with its binned chip is a bit better than my stock air 64 converted with an AIO. I could only reach 1810 with some undervolting.


----------



## 113802

ZealotKi11er said:


> When u go to AMD side for example vega 64 air is 1630MHz but rated for 1546MHz.
> 
> https://www.gigabyte.com/Graphics-Card/GV-RXVEGA64X-W-8GD-B#kf


I understand, but the peak boost was still tested for stability before shipping. That's why the LC is binned, and why it will most likely clock higher when someone bugs the card into running at P7 all the time.

I currently have it running at 1750MHz all the time, 1750/1105Mhz @ 1175mV, without an issue.


----------



## Grin

Mine runs at 1750/1100 @ 1.075v. Unfortunately the memory is not stable at any clock higher than 1120 in benchmarks. In games it can run 1800/1120, but real clocks are 1730-1770 with PL +50.


----------



## ZealotKi11er

WannaBeOCer said:


> I understand, the peak boost was still tested for stability before shipping the products. That's why the LC is binned and which is why it will most likely clock higher when someone bugs the card to run at P7 all the time.
> 
> I currently have it running at 1750Mhz all the time with 1750/1105Mhz @ 1175mV without an issue.


Just getting the LC and air cards to their peak boost is like an OC over the out-of-box config.


----------



## Grin

LC is the best binned, FE is in second place, and the common air card is a silicon lottery with low chances of getting a good chip and memory.


----------



## ZealotKi11er

Grin said:


> LC is best binned, FE is on a second place, common Air is a silicon lottery with low chances to get a good chip and memory.


My FE Liquid only does 1700MHz with 1.25v. With Air I can get 1780MHz


----------



## Grin

It's because of the bios; change it to the LC bios and it will fly.


----------



## ZealotKi11er

Grin said:


> It’s because of bios, change it to LC bios and it will fly


You can flash FE card with LC bios?


----------



## geriatricpollywog

Just ran a benchmark for the first time in almost a year. How come I can only get 6500 in Superposition 4K? Last year, I was getting 7200+. I am about to ditch AMD for good.


----------



## Spacebug

0451 said:


> Just ran a benchmark for the first time in almost a year. How come I can only get 6500 in Superposition 4K? Last year, I was getting 7200+. I am about to ditch AMD for good.


Check the settings in Wattman, I'd say. Superposition is a good benchmark as it displays the current core and HBM clock during the run; check that those are reasonable... 

The last time I got around 6500 in Superposition 4K optimized was, I think, close to when I got the card, a few months after launch...
Most of the time now I get around 7650, I think it was.

The only time I got a low 7100-7200 despite good clocks was when I fed the memory controller +100mV in hopes of a higher HBM clock. It didn't work and I got a lower score at the same clocks, so I guess the memory controller didn't like the higher voltage and threw a lot of errors that got corrected, since no artifacts were visible...


----------



## gupsterg

ZealotKi11er said:


> You can flash FE card with LC bios?


No.


----------



## geriatricpollywog

Spacebug said:


> 0451 said:
> 
> 
> 
> Just ran a benchmark for the first time in almost a year. How come I can only get 6500 in Superposition 4K? Last year, I was getting 7200+. I am about to ditch AMD for good.
> 
> 
> 
> Check the settings in wattman i'd say, superposition is a good benchmark as it displays current core and hbm clock during the run, check that those are reasonable...
> 
> Last time i got around 6500 in superposition 4k optimized i think was close to when i got the card, few months after launch...
> For the most time now ive got around 7650 i think it was.
> 
> Only time i got low 7100-7200 points despite good clocks was when i fed the memory controller +100mV in hopes of getting higher hbm clock, didn't work and got lower score with the same clocks so i guess memory controller didn't like the higher voltage and threw a lot of errors which got corrected cause no artifacts where visible...

Thanks for the tip. Clocks are yo-yoing between 1300 and 1750. I have never seen this. Usually the clock starts at 1700 and ends up at 1730.


----------



## 113802

0451 said:


> Thanks for the tip. Clocks are yo-yoing between 1300 and 1750. I have never seen this. Usually the clock starts at 1700 and ends up at 1730.


Check whether Radeon Chill got enabled for some odd reason?


----------



## BradleyW

WannaBeOCer said:


> I understand, the peak boost was still tested for stability before shipping the products. That's why the LC is binned and which is why it will most likely clock higher when someone bugs the card to run at P7 all the time.
> 
> I currently have it running at 1750Mhz all the time with 1750/1105Mhz @ 1175mV without an issue.


How did you get the card to run at P7 all the time? The only way I know is to disable all but P7 state in the OverDriveNTool.


----------



## 113802

BradleyW said:


> How did you get the card to run at P7 all the time? The only way I know is to disable all but P7 state in the OverDriveNTool.


I responded to you previously in one of the threads already and posted the temporary way to lock it in this thread a few pages back.

Edit: I want to add that it doesn't run P7 all the time. It goes through all the states correctly, but when gaming with a +50% power limit the frequency doesn't fluctuate, and it actually runs at P7 at 100% load without overboosting.



WannaBeOCer said:


> Unfortunately it's only temporary, I've said it a few times already on this thread. Flash the FE 8GB Bios, install the Radeon Pro drivers. Flash back to LC bios and reinstall the Radeon gaming drivers. For some odd reason P7 state is locked for a few reboots and eventually clocks go back to normal.
> 
> I guess no one believed me until I showed proof. We should all yell at AMD until they allow us to properly lock the bios to a state. It's possible since I'm able to.
> 
> https://www.techpowerup.com/vgabios/198367/198367


----------



## geriatricpollywog

WannaBeOCer said:


> 0451 said:
> 
> 
> 
> Thanks for the tip. Clocks are yo-yoing between 1300 and 1750. I have never seen this. Usually the clock starts at 1700 and ends up at 1730.
> 
> 
> 
> Check if Radeon chill is enabled for some odd reason?

Radeon Chill is disabled. Heat is not an issue, since my card is on water. I just want a way of locking the frequency to 1730 mhz.


----------



## 113802

0451 said:


> Radeon Chill is disabled. Heat is not an issue, since my card is on water. I just want a way of locking the frequency to 1730 mhz.


Never said heat was an issue. You mentioned low frequencies which was why I asked.

Are you using the LC bios or is it the standard Vega card?

Undervolt P7 to 1200mV if you are on the LC bios and you should see it will stay between 1700Mhz-1730Mhz.

Along with 50% power limit increase.
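
The undervolt advice above lines up with the usual first-order model of GPU power: dynamic power scales roughly with frequency times voltage squared, so shaving P7 voltage buys headroom under a fixed power limit and reduces clock drops. A back-of-the-envelope sketch (idealized CMOS model only; real cards also have leakage and SOC/HBM power, so treat the numbers as rough):

```python
def relative_dynamic_power(freq_mhz: float, mv: float,
                           ref_freq_mhz: float = 1750.0,
                           ref_mv: float = 1250.0) -> float:
    """First-order CMOS dynamic power, P ~ f * V^2, relative to a reference point."""
    return (freq_mhz / ref_freq_mhz) * (mv / ref_mv) ** 2

# Dropping P7 from 1250 mV to 1200 mV at the same 1750 MHz:
saving = 1.0 - relative_dynamic_power(1750, 1200)
print(f"~{saving:.1%} less dynamic power")  # ~7.8%
```

That few-percent reduction is often the difference between bouncing off the power limit and holding 1700MHz+.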


----------



## VicsPC

Not sure if anyone plays Rainbow Six Siege, but Ubisoft's latest hotfix has broken the game for Vega owners. Reports all over that it adds bad artifacting to the game. I thought it was a corrupt file or something, reinstalled the game, and it's still there.


----------



## ZealotKi11er

VicsPC said:


> Not sure if anyone plays Rainbow Six Siege, but Ubisoft's latest hotfix has broken the game for Vega owners. Reports all over that it adds bad artifacting to the game. I thought it was a corrupt file or something, reinstalled the game, and it's still there.


Yeah, my friend plays Siege, and when he updated the driver for Apex and then tried to play Siege, he got artifacts.


----------



## VicsPC

ZealotKi11er said:


> Yeah, my friend plays Siege, and when he updated the driver for Apex and then tried to play Siege, he got artifacts.


Yeah, it's on Ubi's end though. I played a couple days ago on 19.2.1 and had no issues; they patched it for an FPS issue some people were having, and now it's broken.


----------



## 113802

Just a tease of what's to come on HWbot before this weekend.

https://hwbot.org/benchmark/unigine...Id=videocard_2879&cores=1#start=0#interval=20


----------



## Grin

WannaBeOCer said:


> Just a tease of what's to come on HWbot before this weekend.
> 
> https://hwbot.org/benchmark/unigine...Id=videocard_2879&cores=1#start=0#interval=20


Congrats!


----------



## Grin

I am getting 5230-5270 on my 24/7 settings.
In gaming, a bit more.


----------



## Naeem

Anyone else getting crashes in Battlefield V's new Combined Arms game mode? I have a Vega 64 Liquid and a Ryzen 1800X. Also, there is no audio in game recordings; I tried the 19.1.2 and 19.2.1 drivers.


----------



## BradleyW

19.2.2 is out.


----------



## Grin

Sad to tell, but 19.2.2 is not for us; it's for the Radeon VII. I am expecting that we may get some troubles after 19.1.1, so I will wait.


----------



## Grin

Naeem said:


> Anyone else getting crashes in Battlefield V's new Combined Arms game mode? I have a Vega 64 Liquid and a Ryzen 1800X. Also, there is no audio in game recordings; I tried the 19.1.2 and 19.2.1 drivers.


Go back to 19.1.1; it's the last stable driver for the Vega 64. I tried a couple of the newer drivers and got some minor issues like clock drops, or even freezes that required the reboot button. I would also recommend using DDU when you go from 19.2.x back to 19.1.1.


----------



## jearly410

Got a crash with 19.2.2 in apex legends. Rolling back to 19.1.1


----------



## 113802

I'm having a ton of fun benching with the latest ROCm

Both operating systems ran the card at 1750MHz @ 1200mV and 1105MHz on the memory for Luxmark, while the GeekBench 4 run on Windows was at 1,827MHz/1150MHz.

Ubuntu 18.04 with kernel 5.0 RC6 ROCm 2.1

https://browser.geekbench.com/v4/compute/3659644
http://www.luxmark.info/node/6314

Windows 10 1809 19.2.2

https://browser.geekbench.com/v4/compute/3659215
http://www.luxmark.info/node/6315


----------



## Naeem

Grin said:


> Go back to 19.1.1; it's the last stable driver for the Vega 64. I tried a couple of the newer drivers and got some minor issues like clock drops, or even freezes that required the reboot button. I would also recommend using DDU when you go from 19.2.x back to 19.1.1.




can you test if you get audio in game recording with 19.2.2 ? or 19.2.1? i had 18.12.1 before and it worked but not with these new drivers


----------



## Ne01 OnnA

19.2.2 gives me a blue screen in BFV multiplayer!
19.1.1 WHQL is strongly recommended until the ATI driver team makes things right for both Vega uArchs. 


IMO ATI is implementing new render paths (a new approach) in 19.2.x and upwards,
and we need to wait for them to do it right (1-2 months); then we can easily add +10% performance to our Vegas (first for Vega 1st and 2nd gen., then Polaris).

DAL (Display Abstraction Layer):
-> https://www.x.org/wiki/Events/XDC2016/Program/amd_dal.pdf


----------



## VicsPC

Ne01 OnnA said:


> 19.2.2 gives me a blue screen in BFV multiplayer!
> 19.1.1 WHQL is strongly recommended until the ATI driver team makes things right for both Vega uArchs.
> 
> 
> IMO ATI is implementing new render paths (a new approach) in 19.2.x and upwards,
> and we need to wait for them to do it right (1-2 months); then we can easily add +10% performance to our Vegas (first for Vega 1st and 2nd gen., then Polaris).
> 
> DAL (Display Abstraction Layer):
> -> https://www.x.org/wiki/Events/XDC2016/Program/amd_dal.pdf


New render paths, interesting. I wonder if going back to a month-old driver would fix the issue I'm having in Siege. Still a Ubisoft problem, as they updated the game and broke it, so who knows.


----------



## BradleyW

Ne01 OnnA said:


> 19.2.2 gives me a blue screen in BFV multiplayer!
> 19.1.1 WHQL is strongly recommended until the ATI driver team makes things right for both Vega uArchs.
> 
> 
> IMO ATI is implementing new render paths (a new approach) in 19.2.x and upwards,
> and we need to wait for them to do it right (1-2 months); then we can easily add +10% performance to our Vegas (first for Vega 1st and 2nd gen., then Polaris).
> 
> DAL (Display Abstraction Layer):
> -> https://www.x.org/wiki/Events/XDC2016/Program/amd_dal.pdf


10% performance increase? In games or professional applications?


----------



## cplifj

Use 19.2.2; the 19.2.1 driver is not even complete, it's missing gaming VR. 19.2.2 might be beta, but it's a lot better than 19.2.1.

There are a lot of other things to patch as well, among which is Windows 10.

I recently did some firmware patching, and by changing the SATA/RAID firmware to a newer version, all of a sudden my system doesn't produce any file corruption anymore.

I'd been running with bad SATA firmware for more than a year..... with only slight corruption, you can start wondering where it comes from (not always compatible with all available disks). I thank Intel for their bogus support of not-even-old hardware, as I do the mobo suppliers for not supporting their goods anymore after 2/3 years and leaving all the bugs in to aid their new sales numbers.

Hope you get it fixed and running as one can expect a "computer" to run, that is, without errors.


----------



## sinnedone

cplifj said:


> Use 19.2.2; the 19.2.1 driver is not even complete, it's missing gaming VR. 19.2.2 might be beta, but it's a lot better than 19.2.1.
> 
> There are a lot of other things to patch as well, among which is Windows 10.
> 
> I recently did some firmware patching, and by changing the SATA/RAID firmware to a newer version, all of a sudden my system doesn't produce any file corruption anymore.
> 
> I'd been running with bad SATA firmware for more than a year..... with only slight corruption, you can start wondering where it comes from (not always compatible with all available disks). I thank Intel for their bogus support of not-even-old hardware, as I do the mobo suppliers for not supporting their goods anymore after 2/3 years and leaving all the bugs in to aid their new sales numbers.
> 
> Hope you get it fixed and running as one can expect a "computer" to run, that is, without errors.


What was it you did exactly? Bios or windows related?


----------



## Mikkinen

Grin said:


> I am getting 5230-5270 on my 24/7 settings
> In gaming a bit more


Can you post your settings?


----------



## Xinoxide

19.2.2 seems to have my Vega64 running a couple degrees hotter.

Also getting big white blobs of artifacting at the same OC I had with 19.1.1.

Going back to 19.1.1 right after I save my power play tables.


----------



## colorfuel

Is it just me, or is HBM2 OC more stable since 19.2.x? I could go up to 1170/965mV stable in TS. No errors in the TombRaider DOX demo (which I use for memory testing) until 1150MHz. I could also set SOC down to 925mV at 1100MHz without any issues. This wasn't possible before, or so I thought.

This is on a Yeston Ref. Vega 64 with a Raijintek Morpheus.


----------



## Ne01 OnnA

colorfuel said:


> Is it just me, or is HBM2 OC more stable since 19.2.x? I could go up to 1170/965mV stable in TS. No errors in the TombRaider DOX demo (which I use for memory testing) until 1150MHz. I could also set SOC down to 925mV at 1100MHz without any issues. This wasn't possible before, or so I thought.
> 
> This is on a Yeston Ref. Vega 64 with a Raijintek Morpheus.


Yup, thanks to Vega 2 we have better OC potential.
It's not over yet  (the new DAL is coming)

PS.
DAL (Display Abstraction Layer):
-> https://www.x.org/wiki/Events/XDC201...am/amd_dal.pdf


----------



## 113802

colorfuel said:


> Is it just me, or is HBM2 OC more stable since 19.2.x? I could go up to 1170/965mV stable in TS. No errors in the TombRaider DOX demo (which I use for memory testing) until 1150MHz. I could also set SOC down to 925mV at 1100MHz without any issues. This wasn't possible before, or so I thought.
> 
> This is on a Yeston Ref. Vega 64 with a Raijintek Morpheus.


I haven't noticed a difference overclocking Vega since release drivers. 



Ne01 OnnA said:


> Yup, thanks to Vega 2 we have better OC potential.
> It's not over yet (the new DAL is coming)
> 
> PS.
> DAL (Display Abstraction Layer):
> -> https://www.x.org/wiki/Events/XDC201...am/amd_dal.pdf


You do realize DAL3 is almost three years old? It was introduced with the amdgpu-pro drivers. swSMU is the new software driver framework, and it supports Vega 20 and up. 

https://www.phoronix.com/scan.php?page=news_item&px=AMD-New-SW-SMU-Driver-Future



> The code patches explain, "The powerplay driver will be retired. The final version is for vega20 with SMU11. However, the future asic will use the new swSMU framework to implement as well. Here is the first version of new sw smu driver that is basing on vega20...We would like to do re-arch for linux power codes to use a new sw SMU ip block for future asics. We hope to write a simple and readable framework for Linux."
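
On the Linux side that this quote is about, the states the powerplay/swSMU code manages are visible through amdgpu's standard sysfs nodes. A small read-only sketch (the `pp_dpm_*` paths are the documented amdgpu interface; the card index is an assumption, and the function simply returns None when the node isn't present):

```python
from pathlib import Path
from typing import List, Optional

def read_dpm_states(card: int = 0, domain: str = "sclk") -> Optional[List[str]]:
    """Read the core ('sclk') or memory ('mclk') DPM state table from amdgpu sysfs.

    Each returned line looks like '7: 1630Mhz *', with '*' marking the
    currently active state. Returns None if the node doesn't exist
    (no amdgpu card, wrong index, or an old kernel).
    """
    node = Path(f"/sys/class/drm/card{card}/device/pp_dpm_{domain}")
    if not node.exists():
        return None
    return node.read_text().splitlines()

if __name__ == "__main__":
    states = read_dpm_states()
    print(states if states is not None else "no amdgpu sysfs node found")
```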


----------



## Naeem

does Relive recordings have sound for you guys with new drivers like 19.1 x and 19.2.x ?


----------



## 113802

Naeem said:


> does Relive recordings have sound for you guys with new drivers like 19.1 x and 19.2.x ?


Yes, Relive does still record all my audio in 19.2.1 and 19.2.2. Make sure you have the correct audio output device set to default in Windows if you're having trouble. 

This was recorded with 19.2.1*

https://youtu.be/GXDced_nNPw


----------



## Naeem

WannaBeOCer said:


> Yes, Relive does still record all my audio in 19.2.1 and 19.2.2. Make sure you have the correct audio output device set to default in Windows if you're having trouble.
> 
> This was recorded with 19.2.1*
> 
> https://youtu.be/GXDced_nNPw


It does not work for me for some reason. I do have this in my sound settings, and it works with the 18.12.1 drivers. I went back and forth a few times; it does not work with 19.1.1, 19.2.1, or 19.2.2.


----------



## 113802

Naeem said:


> It does not work for me for some reason. I do have this in my sound settings, and it works with the 18.12.1 drivers. I went back and forth a few times; it does not work with 19.1.1, 19.2.1, or 19.2.2.


Interesting, I use an external DAC. Curious if it can't detect your Realtek audio device. 

Try to install the latest driver from this thread. Realtek drivers are universal. These usually solve all my Realtek audio problems. 

https://www.tenforums.com/sound-audio/5993-latest-realtek-hd-audio-driver-version-198.html

Edit: have you tried using the built-in AMD card audio to rule out a sound device issue?


----------



## Naeem

WannaBeOCer said:


> Interesting, I use an external DAC. Curious if it can't detect your Realtek audio device.
> 
> Try to install the latest driver from this thread. Realtek drivers are universal. These usually solve all my Realtek audio problems.
> 
> https://www.tenforums.com/sound-audio/5993-latest-realtek-hd-audio-driver-version-198.html
> 
> Edit: have you tried using the built in AMD card audio to rule out it's a sound device issue?




Yes, it worked with AMD audio. I think I need new drivers for my sound card.


edit: installed the latest Realtek drivers and still no sound in recordings. I think AMD forgot to add Realtek support or something.


----------



## 113802

Naeem said:


> Yes, it worked with AMD audio. I think I need new drivers for my sound card.
> 
> 
> edit: installed the latest Realtek drivers and still no sound in recordings. I think AMD forgot to add Realtek support or something.


Glad you were able to narrow it down. I suggest submitting the bug: http://www.amd.com/report


----------



## tolis626

Hello everyone!

I have been using an MSI R9 390x for the past 3.5-4 years and, while it has served me well, it has started to heavily show its age. Mediocre performance being the most obvious problem, but I also got a 1440p 144Hz monitor and I have some blanking issues with the screen that are caused by the GPU somehow, so I want to get something newer. Thing is, I don't want to go NVidia because I don't want my money to go to them, I won't have the cash for the Radeon VII for a very long time, and the RX 580/590 are barely faster than the 390x after overclocking in some games/resolutions. Which leaves me thinking about getting a Vega, leaning towards the 56 'cause forking out 450+€ for an old card stinks a bit. I mean, it's a fitting upgrade to my rig (the one in my sig), the 4790k is holding up just 'cause it's overclocked, so I will probably build a new rig with Zen 2 and Navi within the year. I just want something to perform better than what I have now that will last me until I get newer stuff. So, the tl;dr of the question is, is Vega worth the trouble in early 2019? I'm comfortable with overclocking to stupid extents, so there's that.

PS : Bonus points to anyone who can suggest a specific model/brand of card for me. I was looking at the Sapphire Nitro and ROG Strix models.


----------



## Naeem

tolis626 said:


> Hello everyone!
> 
> I have been using an MSI R9 390x for the past 3.5-4 years and, while it has served me well, it has started to heavily show its age. Mediocre performance being the most obvious problem, but I also got a 1440p 144Hz monitor and I have some blanking issues with the screen that are caused by the GPU somehow, so I want to get something newer. Thing is, I don't want to go NVidia because I don't want my money to go to them, I won't have the cash for the Radeon VII for a very long time, and the RX 580/590 are barely faster than the 390x after overclocking in some games/resolutions. Which leaves me thinking about getting a Vega, leaning towards the 56 'cause forking out 450+€ for an old card stinks a bit. I mean, it's a fitting upgrade to my rig (the one in my sig), the 4790k is holding up just 'cause it's overclocked, so I will probably build a new rig with Zen 2 and Navi within the year. I just want something to perform better than what I have now that will last me until I get newer stuff. So, the tl;dr of the question is, is Vega worth the trouble in early 2019? I'm comfortable with overclocking to stupid extents, so there's that.
> 
> PS : Bonus points to anyone who can suggest a specific model/brand of card for me. I was looking at the Sapphire Nitro and ROG Strix models.



Get one of these? Not sure if it's the right website for your country, I just searched it up.

https://www.skroutz.gr/s/14477068/Sapphire-Radeon-Nitro-RX-Vega-64-8GB-11275-03.html


----------



## tolis626

Naeem said:


> Get one of these? Not sure if it's the right website for your country, I just searched it up.
> 
> https://www.skroutz.gr/s/14477068/Sapphire-Radeon-Nitro-RX-Vega-64-8GB-11275-03.html


Thanks for your answer mate!

Well, you hit the nail on the head with the website. Skroutz.gr is a price search engine for the greek market that actually works wonders. But electronics in Greece are usually a bit overpriced, so I am also looking at Amazon and stuff. Basically, whatever's cheapest within the EU will work for me.

I'm more concerned about whether Vega as an architecture is worth the hassle. It's probably a huge upgrade over my 390x, I suppose, and it'll be a nice complement to my aging 4790k, but that's about as much justification as I can muster for it. 

I'm also concerned about whether the 64 is worth the extra ~100€ over the 56. My brain tells me no, but I never liked getting these second tier products for some reason. Even when I overpaid for the 390x over the 390, it was for like 5% more performance or whatever. Still, logic should dictate that I get the 56, correct? I mean, from what little I've read about these cards, with some modding you can get the 56 pretty close to the 64, no?


----------



## Minotaurtoo

tolis626 said:


> Thanks for your answer mate!
> 
> Well, you hit the nail on the head with the website. Skroutz.gr is a price search engine for the greek market that actually works wonders. But electronics in Greece are usually a bit overpriced, so I am also looking at Amazon and stuff. Basically, whatever's cheapest within the EU will work for me.
> 
> I'm more concerned about whether Vega as an architecture is worth the hassle. It's probably a huge upgrade over my 390x, I suppose, and it'll be a nice complement to my aging 4790k, but that's about as much justification as I can muster for it.
> 
> I'm also concerned about whether the 64 is worth the extra ~100€ over the 56. My brain tells me no, but I never liked getting these second tier products for some reason. Even when I overpaid for the 390x over the 390, it was for like 5% more performance or whatever. Still, logic should dictate that I get the 56, correct? I mean, from what little I've read about these cards, with some modding you can get the 56 pretty close to the 64, no?



A Vega 64 is a pretty good upgrade over a 390X... I had a Fury X that was overclocked and memory modded... the Vega, once undervolted, overclocked and memory modded, netted me an average 40% improvement in games over the Fury X... from what I hear the Vega 56 is better than the Fury X... 



I got lucky and got mine for $300 when there was a big dip in prices right after the RTX launch.... if you can get a 56 for 100 less than the 64 and you plan on overclocking it, you'll do pretty well there.


----------



## Grin

«I'm also concerned about whether the 64 is worth the extra ~100€ over the 56.»
That's 25-30€ per year of its life before you change it for Arcturus. For this money you get better overclocking potential and Samsung memory, +10-15 fps on average, and good-or-OK 4K gaming. Think!


----------



## BradleyW

I upgraded to a Vega64 from a 290x a few months back. Double the fps on average at 1440p, very high settings.


----------



## RX7-2nr

Ne01 OnnA said:


> Yup, thanks to Vega 2 we have better OC potential
> It's not over yet (the new DAL is coming)
> 
> PS.
> DAL (Display Abstraction Layer):
> -> https://www.x.org/wiki/Events/XDC201...am/amd_dal.pdf


Maybe the new drivers will make it more than a 10% increase for the extra 110 watts. :\


----------



## tolis626

Minotaurtoo said:


> A Vega 64 is a pretty good upgrade over a 390X... I had a Fury X that was overclocked and memory modded... the Vega, once undervolted, overclocked and memory modded, netted me an average 40% improvement in games over the Fury X... from what I hear the Vega 56 is better than the Fury X...
> 
> I got lucky and got mine for $300 when there was a big dip in prices right after the RTX launch.... if you can get a 56 for 100 less than the 64 and you plan on overclocking it, you'll do pretty well there.


Well, whether it's a good upgrade isn't debatable; it is. It's a matter of cost more than anything, considering it's, what, 1.5-2 years old at this point? Thing is, there isn't anything better in this range in performance per dollar (well, euro, whatevs) while being considerably faster than the 390x. And considering overclocking will be happening, I think it's good to go. I just need some assurance from owners of the cards, as your opinions matter more than my educated guesses. 


Grin said:


> «I'm also concerned about whether the 64 is worth the extra ~100€ over the 56.»
> That's 25-30€ per year of its life before you change it for Arcturus. For this money you get better overclocking potential and Samsung memory, +10-15 fps on average, and good-or-OK 4K gaming. Think!


Well, if I decide to get it, I'm probably not going to upgrade that Vega card. It's going in my current rig which is getting old. Once I settle things with my job and Zen 2 comes out, I will get a new rig and this will become my secondary. So it'll probably spend the rest of its life, however long, with whatever I end up getting for now. I'm not concerned about longevity over anything longer than 6-9 months. 

Now, if you're telling me that when factoring in modding/undervolting/overclocking, the 64 has more headroom for improvement than the 56 has, that's interesting... I really don't care about stock operating conditions, honestly, the card MAY run for a day at stock, but that's about it. If, say, I can improve a Vega 56 some 10% over stock by messing with it, but a 64 can be improved by 15-20% over its stock form, then we're realistically talking about a much bigger performance delta. That alone is worth 100€ extra for me, just for the fun of messing with it, honestly. I was under the assumption that most things are identical between the 56 and 64 hardware-wise, except for the CUs. If there's more differences, 64 looks all the better to me. 


BradleyW said:


> I upgraded to a Vega64 from a 290x a few months back. Double the fps on average at 1440p, very high settings.


That's what I'm talking about. The 390x may be a bit faster than the 290x, but they are both getting way too old.


----------



## geriatricpollywog

WannaBeOCer said:


> Never said heat was an issue. You mentioned low frequencies which was why I asked.
> 
> Are you using the LC bios or is it the standard Vega card?
> 
> Undervolt P7 to 1200mV if you are on the LC bios and you should see it will stay between 1700Mhz-1730Mhz.
> 
> Along with 50% power limit increase.


I have the LC bios. I thought P7 was at 1200, but it was at 1250. I set it to 1200 and that made a small difference, but core clock and FPS tanks every 10 seconds or so. The red lights on the GPU also go down to 1-2 lights, then ramp back up. I also used the AMD clean install utility and reinstalled the 19.2.2 drivers.

In Wattman, I only changed the following settings: +50% power limit and 1100mhz memory. I was running 150% power limit before I used the Clean Install Utility and the result was the same.


----------



## 113802

0451 said:


> I have the LC bios. I thought P7 was at 1200, but it was at 1250. I set it to 1200 and that made a small difference, but core clock and FPS tanks every 10 seconds or so. The red lights on the GPU also go down to 1-2 lights, then ramp back up. I also used the AMD clean install utility and reinstalled the 19.2.2 drivers.
> 
> In Wattman, I only changed the following settings: +50% power limit and 1100mhz memory. I was running 150% power limit before I used the Clean Install Utility and the result was the same.


Is this in every single game? Sounds like a CPU-limited game, like an MMO. Can you run Fire Strike with those settings? It should be around 26.5-27.2k.


----------



## geriatricpollywog

WannaBeOCer said:


> 0451 said:
> 
> 
> 
> I have the LC bios. I thought P7 was at 1200, but it was at 1250. I set it to 1200 and that made a small difference, but core clock and FPS tanks every 10 seconds or so. The red lights on the GPU also go down to 1-2 lights, then ramp back up. I also used the AMD clean install utility and reinstalled the 19.2.2 drivers.
> 
> In Wattman, I only changed the following settings: +50% power limit and 1100mhz memory. I was running 150% power limit before I used the Clean Install Utility and the result was the same.
> 
> 
> 
> Is this in every single game? Sounds like a CPU-limited game, like an MMO. Can you run Fire Strike with those settings? It should be around 26.5-27.2k.

I used to hit 28,000 in Fire Strike, but my graphics score is 25,000 and it drops with every run. My issue occurs in Metro Redux, Fire Strike, and Unigine Superposition. I haven't tried anything else. If I rolled back to old drivers, I wouldn't be able to play Metro.


----------



## 113802

0451 said:


> I used to hit 28,000 in Fire Strike, but my graphics score is 25,000 and it drops with every run. My issue occurs in Metro Redux, Fire Strike, and Unigine Superposition. I haven't tried anything else. If I rolled back to old drivers, I wouldn't be able to play Metro.


25k is low for 1750Mhz/1100Mhz @ 1200mV, what is your LC bios version? I might just give you mine instead to flash.


----------



## geriatricpollywog

WannaBeOCer said:


> 0451 said:
> 
> 
> 
> I used to hit 28,000 in Fire Strike, but my graphics score is 25,000 and it drops with every run. My issue occurs in Metro Redux, Fire Strike, and Unigine Superposition. I haven't tried anything else. If I rolled back to old drivers, I wouldn't be able to play Metro.
> 
> 
> 
> 25k is low for 1750Mhz/1100Mhz @ 1200mV, what is your LC bios version? I might just give you mine instead to flash.

Not sure what bios version I have since I'm not on my PC right now, but I'll try yours. It's definitely from 2017 though. I'll PM you my email address. Thanks!


----------



## 113802

0451 said:


> Not sure what bios version I have since I'm not on my PC right now, but I'll try yours. It's definitely from 2017 though. I'll PM you my email address. Thanks!


I was just going to zip it up and upload it here.


----------



## Naeem

WannaBeOCer said:


> 25k is low for 1750Mhz/1100Mhz @ 1200mV, what is your LC bios version? I might just give you mine instead to flash.




I get 26,200+ with 1750MHz, a 50% power target, and 1100MHz HBM2.


----------



## geriatricpollywog

WannaBeOCer said:


> I was just going to zip it up and upload it here.


-I installed your bios
-I reformatted my HD and reinstalled Windows 10
-I replaced my video card cables

The issue still persists. The GPU tach lights ramp down every few seconds and the frame rate tanks. Does anybody know where I can get the older Adrenalin drivers to see if the issue is the drivers?


----------



## VicsPC

0451 said:


> -I installed your bios
> -I reformatted my HD and reinstalled Windows 10
> -I replaced my video card cables
> 
> The issue still persists. The GPU tach lights ramp down every few seconds and frame rate tanks. Does anybody know where I can get the older Adrenaline drivers to see if the issue is drivers?


Yup, this is why everyone should bookmark it; it's a godsend. It has Nvidia drivers as well. 

https://www.guru3d.com/files-categories/videocards-ati-catalyst-vista-win-7.html


----------



## YellowBlackGod

Hello everybody! I am planning on buying a Sapphire Vega 64 Nitro, and I would like to ask some questions. If any Nitro Vega 64 owner would answer, that would be nice. 

1. On a single 144Hz 1080p monitor, does the HBM2 memory get stuck at max speed while idle in 144Hz mode, or does it run at a low P-state (idle speed)? 

2. Is the support bracket mandatory to install, or just an additional choice for anyone who wants to use it?

3. Is undervolting with Sapphire Trixx easy? I have never played around with voltages, and I would like to know if this is generally easy to do. 

Thank you very much in advance Vega users!


----------



## miklkit

I have a Sapphire Vega64 Nitro air cooled. 



1. At 1440P 75hz there is no problem with anything running too fast at idle. 



2. The support bracket is easy to install and doesn't get in the way so I put it on. It seems to reduce sag a little bit. 



3. Trixx is the only software that actually does work on this gpu. It doesn't have the most features but at least it works. Set a custom fan profile for the best results. I have the fans hitting 100% @ 60C. These are my current settings. They used to be higher but it went unstable in one new game so I backed them off a bit.
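
A custom fan profile like the one described boils down to linear interpolation between (temperature, duty) points, clamped at both ends. A sketch of the math; the curve points here are made up to mimic the "100% at 60C" profile above, not Trixx's actual defaults:

```python
def fan_duty(temp_c: float, curve=((30, 20), (45, 50), (60, 100))) -> float:
    """Piecewise-linear fan duty (%) for a temperature, clamped at the curve ends."""
    pts = sorted(curve)
    if temp_c <= pts[0][0]:
        return float(pts[0][1])
    if temp_c >= pts[-1][0]:
        return float(pts[-1][1])
    # Find the segment containing temp_c and interpolate between its endpoints.
    for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(52.5))  # halfway between 45C/50% and 60C/100% -> 75.0
```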


----------



## YellowBlackGod

Thank you for your reply! The 144Hz refresh rate problem is a commonly known problem, even for single-monitor setups. When the monitor is set to 144Hz, the memory runs at max speed all the time, even when the card sits idle doing nothing. This causes high idle temps and keeps the fans working all the time. I read somewhere that this issue has been fixed for the Vega GPUs. 2K at 75Hz sounds nice as a setup, yet the problem is caused only by 144Hz+ refresh rates.

So I wanted to know if it still persists with Vega GPUs. If not, I can fully enjoy my 144Hz monitor with a Vega GPU without having sky-high idle temps.

Also, would you advise buying the Limited Edition or the normal version of the GPU?

Thank you! 🙂


----------



## flash2021

hey everyone!
I recently bought a used Gigabyte Vega 64 (air cooled) and have worked to undervolt it, etc., and wanted to post some results and see how I compare to y'all. These are all with my i7-4770K at stock (I need to get my OC back on it and rerun, I know).

P6: 1572/1010 (anything below 1010mV crashes, in Superposition specifically)
P7: 1717/1145 (I can push the core higher here for 3DMark runs, but this is stable steady-state in Heaven, Doom, Crysis 3, Superposition)
edit: +50 power limit too

I typically see a consistent steady core clock of 1649-1652

graphics scores only:
firestrike: 23,942
Firestrike extreme: 11,678
FSU: 5943
Timespy: 7814 (6510 combined)
heaven benchmark with x8 AA: 2230
Superposition 1080p Extreme: 4975

fan profile is set high (like 80%) but I wear headphones so w/e
this is a great upgrade from my dated 7970's!

I've got a little thermal headroom left, especially in benchmarks, because HBM is in the 73-degree range and the thermal diode reads 66-ish... while sustained gaming is a few degrees higher once the temp curves plateau... but the voltage increases needed per 10MHz step on core P7 are growing.

I think i've got a decent chip?

edit: going a little higher on core for P6/P7, enough to pass Fire Strike, I can get 25,500-ish... the memory seems to be stuck at 1040MHz to not crash, but I'm going to start over from my undervolt base and rework to max memory MHz before jumping core clocks up. My HWiNFO-reported card power draw peaks at 330W with the run that gets 25,500.


----------



## Bartouille

Yeah I remember having that problem on older AMD gpus. On my Vega 64 with a 1440p 165Hz monitor it idles at 30MHz core and 167MHz memory. I guess they fixed it.


----------



## ZealotKi11er

I got 27K GPU score for 1750/1100 @ 1200 mV. 50% Power.


----------



## YellowBlackGod

Bartouille said:


> Yeah I remember having that problem on older AMD gpus. On my Vega 64 with a 1440p 165Hz monitor it idles at 30MHz core and 167MHz memory. I guess they fixed it.


Exactly what I wanted to read! Is the LE worth buying, or should I go for the normal edition?


----------



## Ipak

1080p 75Hz + 1440p 144Hz; the card also idles with no problem.


----------



## YellowBlackGod

Ipak said:


> 1080p 75hz + 1440p 144hz, card also idles no problem.


...that is quite a reason to replace my RX 590 and go for a Vega GPU.


----------



## geriatricpollywog

ZealotKi11er said:


> I got 27K GPU score for 1750/1100 @ 1200 mV. 50% Power.


I rolled back to last year's 18.2.3 drivers and now my benchmark scores are normal and the GPU is no longer throttling. What drivers are you on?


----------



## 113802

0451 said:


> I rolled back to last year's 18.2.3 drivers and now my benchmark scores are normal and the GPU is no longer throttling. What drivers are you on?


I'm on 19.2.2 with 1750Mhz/1105Mhz @ 1200mV and it's constantly hitting 27.2k - 1700-1730Mhz without dropping. My card was in the first Vega batches, ordered it first day of release. It has Samsung memory, does yours?

https://www.3dmark.com/3dm/33683416?


----------



## Naeem

WannaBeOCer said:


> I'm on 19.2.2 with 1750Mhz/1105Mhz @ 1200mV and it's constantly hitting 27.2k - 1700-1730Mhz without dropping. My card was in the first Vega batches, ordered it first day of release. It has Samsung memory, does yours?
> 
> https://www.3dmark.com/3dm/33683416?




maybe your intel cpu is helping you


----------



## bustacap22

New Vega owner here. Wondering what Vega owners are using to undervolt these GPUs. Afterburner has been my go-to application. When I undervolt by -25mV, I get crashes when running the Heaven benchmark. Now I see that many have been able to undervolt these cards by up to -100mV. Trying to see if I have a dud or not. Or should I be using Wattman? Any input greatly appreciated.


----------



## Xinoxide

bustacap22 said:


> New Vega owner here. Wondering what Vega owners are using to undervolt these GPU's. Afterburner has been my go to application. When I undervolt to -25mv, I get crashes when running Heaven Benchmark. Now I see that many have been able to undervolt these cards up to -100mv. Trying to see if I have dud's or not. Or should I be using Wattman??? Any input greatly appreciated.


I've just been using wattman to change the settings, but I have a custom power play table in use that was generated with OverdriveNTool. Even when wattman resets my settings the PPT has the undervolt already in it.


----------



## Minotaurtoo

ok.. I thought I had all my issues solved... but I'm still having two frustratingly random issues... 



1. This one is only annoying, but at random on cold boots I'll have the bios splash screen come up on one screen and then when it attempts to activate all 3 screens, one goes green and the other two look garbled... this only lasts until windows loads the driver... very minor annoyance... not a real problem for me (yet) and doesn't even trigger a wattman reset...


2. random black screen crashes under load with fans suddenly going up to full speed... this doesn't seem to affect what's actually going on other than what the video card is doing itself.... like once while folding and watching youtube it crashed and the audio for the youtube video went on uninterrupted... and I could even pause the video!... I hoped it would be a driver crash and recover, but it didn't.... a quick power cycle and didn't do it again for days... one day it did it twice in an hour playing games and then went the rest of the day with no issue using default settings the whole time... actually it seems to do it less with power limit being turned all the way up than it does at "stock" settings... in fact it doesn't seem to get any more frequent no matter what I do so long as what I do doesn't result in an instability crash.


Things I've tried: different PSUs, different PCs, different bioses, different drivers (used DDU to uninstall)... only thing I haven't done is reinstall Windows fresh lol


picture shows what happens on cold boot at random...


----------



## Trender

bustacap22 said:


> New Vega owner here. Wondering what Vega owners are using to undervolt these GPU's. Afterburner has been my go to application. When I undervolt to -25mv, I get crashes when running Heaven Benchmark. Now I see that many have been able to undervolt these cards up to -100mv. Trying to see if I have dud's or not. Or should I be using Wattman??? Any input greatly appreciated.


Lower your P7 core clock to 1590 and undervolt to around 1000 mV on P7 and 975 mV on P6; it should run stable at about 1550 MHz
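One source of confusion in these undervolt reports: Afterburner applies a single flat offset to every P-state, while Wattman and OverdriveNTool take absolute per-state voltages. A minimal Python sketch of translating a flat offset into an absolute ladder and sanity-checking it; the stock voltages here are placeholders for illustration, not any card's real defaults (read yours from Wattman first):

```python
# Placeholder stock P-state voltages in mV (illustration only;
# check your own card's defaults in Wattman before editing).
STOCK_MV = {"P5": 1100, "P6": 1150, "P7": 1200}

def apply_offset(stock, offset_mv, floor_mv=800):
    """Apply a flat Afterburner-style undervolt offset to every
    P-state, clamped to Wattman's 800 mV floor."""
    return {p: max(v + offset_mv, floor_mv) for p, v in stock.items()}

def is_monotonic(ladder):
    """Voltages should not decrease as P-states (and clocks) rise,
    or the boost arbitrator tends to behave unpredictably."""
    vals = [ladder[p] for p in sorted(ladder)]
    return all(a <= b for a, b in zip(vals, vals[1:]))

uv = apply_offset(STOCK_MV, -100)
print(uv)                # {'P5': 1000, 'P6': 1050, 'P7': 1100}
print(is_monotonic(uv))  # True
```

The clamp matters: a big offset on an already-low state bottoms out at 800 mV while higher states keep dropping, which can flatten the ladder.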


----------



## ZealotKi11er

Minotaurtoo said:


> ok.. I thought I had all my issues solved... but I'm still having two frustratingly random issues...
> 
> 
> 
> 1. This one is only annoying, but at random on cold boots I'll have the bios splash screen come up on one screen and then when it attempts to activate all 3 screens, one goes green and the other two look garbled... this only lasts until windows loads the driver... very minor annoyance... not a real problem for me (yet) and doesn't even trigger a wattman reset...
> 
> 
> 2. random black screen crashes under load with fans suddenly going up to full speed... this doesn't seem to affect what's actually going on other than what the video card is doing itself.... like once while folding and watching youtube it crashed and the audio for the youtube video went on uninterrupted... and I could even pause the video!... I hoped it would be a driver crash and recover, but it didn't.... a quick power cycle and didn't do it again for days... one day it did it twice in an hour playing games and then went the rest of the day with no issue using default settings the whole time... actually it seems to do it less with power limit being turned all the way up than it does at "stock" settings... in fact it doesn't seem to get any more frequent no matter what I do so long as what I do doesn't result in an instability crash.
> 
> 
> Things I've tried... different PSU's, different PC's, Different bios, Different drivers (used DDU to uninstall).... only thing I haven't done is reinstall windows fresh lol
> 
> 
> picture shows what happens on cold boot at random...



Try fresh install of Windows and RMA after.


----------



## geriatricpollywog

WannaBeOCer said:


> 0451 said:
> 
> 
> 
> I rolled back to last year's 18.2.3 drivers and now my benchmark scores are normal and the GPU is no longer throttling. What drivers are you on?
> 
> 
> 
> I'm on 19.2.2 with 1750Mhz/1105Mhz @ 1200mV and it's constantly hitting 27.2k - 1700-1730Mhz without dropping. My card was in the first Vega batches, ordered it first day of release. It has Samsung memory, does yours?
> 
> https://www.3dmark.com/3dm/33683416?

I bought mine like 2 months in. It has Samsung memory. After hours of driver testing, I am finally sorted with the 19.1.2 drivers. Metro Exodus runs at 60+ fps on Ultra at 3440x1440, which is way better than I hoped for. Core averages around 1720 MHz with HBM at 1100. All benchmark scores are back to normal. Hopefully the issue will be fixed with 19.3 so I'm not stuck on January drivers. I'm just glad this isn't a hardware degradation issue, as my EVGA 1200 P2 blew recently and could have taken my GPU with it!


----------



## Minotaurtoo

ZealotKi11er said:


> Try fresh install of Windows and RMA after.


I was afraid you'd say that... oh well... weekend project I suppose... think I'll wait till I have a spare SSD to try it on... I lost the key to the windows install I'm using right now...


----------



## THUMPer1

Minotaurtoo said:


> I was afraid you'd say that... oh well... weekend project I suppose... think I'll wait till I have a spare SSD to try it on... I lost the key to the windows install I'm using right now...


If it's win10 you don't need a key for a reinstall. If it asks you for the key during install just skip it. It will auto activate once it installs and connects to the internet.


----------



## Ne01 OnnA

I'm using mixed WattMan & OverdriveNTool + PP power states.
All is OK; worth noting is that you need to adjust the UV sometimes.

It is driver-related. WHQL is always better in terms of stability, and they also need less voltage.

e.g.
For BFV I have 1680/1120 @ 1.087 V / 937 mV on the last WHQL, 19.1.1
Now with 19.2.2 I need to bump the GPU voltage to 1.094 V


----------



## Maracus

Ne01 OnnA said:


> Im using Mixed WattMan & OverdriveNTool + PP_Power states.
> All is OK, worth noting is that You need to adjust UV sometimes.
> 
> It is related to Drivers, WHQL is always better in terms of stability, also they need less v
> 
> e.g.
> For BFV i have 1680/1120 @ 1.087/937mV in last WHQL 19.1.1
> Now with 19.2.2 i need to bump GPU v to 1.094mV


I noticed the same thing with 19.2.2: the voltage set in OverdriveNTool and the actual voltage read in HWiNFO were lower than on the 18.12 drivers. This is the second time I have re-installed 19.2.2; I gave up on all the previous 19.x drivers because of game crashes. It wasn't till I re-installed 19.2.2 and went into Wattman and put everything in manual that I stopped crashing in games.

The strange thing is the lower voltage with the 19.2.2 drivers. Maybe it was happening on all the 19.x drivers; maybe auto undervolt is on even though it says it's off? I can't be bothered going back and testing 19.1.1 or 19.2.1


----------



## Naeem

anyone else here using Ryzen with a Crosshair VI Hero and a Vega 64 who has working audio in game recordings and streaming via ReLive?


----------



## VicsPC

Naeem said:


> anyone else here using
> 
> 
> RYZEN with Crosshair Hero 6 and Vega 64 and had working audio in game recordings and streaming via Relive ?


Just curious, why not use OBS? I tried ReLive once the day it came out and it was so s**t in Rainbow Six Siege that I stopped even installing it. It made the game freeze for a second every 15 seconds; it was awful.


----------



## madmanmarz

Has anyone tried using a thermal pad on the die/hbm? 

I've had a Vega 56 from the beginning on an alphacool nexxxos gpx block. Molded die/Samsung, but hotspot hits 70-100c while core/hbm are at 40c. Any voltage over 1000 and I'm in the 80s on hotspot.

Today I picked up a 64, molded package/Samsung, and I put the nexxxos on with the same results. I have tried repasting dozens of times, different pastes, techniques, mounting pressure and that's the best I can do on either card. It was used and I tried stock cooler and temps were soaring right off the bat. 

I figured since we now know hotspot is hottest spot on die, that a thermal pad might make for higher avg temps but lower hotspot.

Halp!


----------



## Mikkinen

madmanmarz said:


> Has anyone tried using a thermal pad on the die/hbm?
> 
> I've had a Vega 56 from the beginning on an alphacool nexxxos gpx block. Molded die/Samsung, but hotspot hits 70-100c while core/hbm are at 40c. Any voltage over 1000 and I'm in the 80s on hotspot.
> 
> Today I picked up a 64, molded package/Samsung, and I put the nexxxos on with the same results. I have tried repasting dozens of times, different pastes, techniques, mounting pressure and that's the best I can do on either card. It was used and I tried stock cooler and temps were soaring right off the bat.
> 
> I figured since we now know hotspot is hottest spot on die, that a thermal pad might make for higher avg temps but lower hotspot.
> 
> Halp!


Hi, same problem here too (Kraken AIO). I followed the mounting recommendations and made various attempts at the quantity and method of paste application, but the hotspot always runs hot under stress (85° at 1000mV on P7 and 975mV on the lower states, PL +25, 55° core temp).
I even tried repositioning the original backplate to cool better under the GPU, but it didn't work.


----------



## Naeem

VicsPC said:


> Just curious, why not use OBS? I tried relive once the day it came out and it was so s**t at rainbow six siege that i stopped even installing it. It made the game freeze every 15sec for a sec it was awful.




I don't want another piece of software; I already have XSplit Premium. ReLive got much better with file size and ease of use, until it broke for me with the new drivers


----------



## VicsPC

madmanmarz said:


> Has anyone tried using a thermal pad on the die/hbm?
> 
> I've had a Vega 56 from the beginning on an alphacool nexxxos gpx block. Molded die/Samsung, but hotspot hits 70-100c while core/hbm are at 40c. Any voltage over 1000 and I'm in the 80s on hotspot.
> 
> Today I picked up a 64, molded package/Samsung, and I put the nexxxos on with the same results. I have tried repasting dozens of times, different pastes, techniques, mounting pressure and that's the best I can do on either card. It was used and I tried stock cooler and temps were soaring right off the bat.
> 
> I figured since we now know hotspot is hottest spot on die, that a thermal pad might make for higher avg temps but lower hotspot.
> 
> Halp!





Mikkinen said:


> Hi, same problem too (kraken with aio), I followed the recommendations for fixing and various attempts on the quantity and method of application of the paste but always hot hotspot under stress (85°@ 1000mv p7 and @975mv on the floor and pl +25, 55°core temp).
> I even tried to reposition the blackplate origin to cool better under the gpu but don't work.


Try having a fan blowing right onto the card, either from the side or from the top. Thermal pads don't work nearly as well as paste because of their thickness. If hotspot temps are causing issues then id worry, if not it's perfectly fine and nothing to worry about. My hotspot only hits in the 60s with my ekwb but my card is mounted vertically with my case fans blowing down onto it as intake, and that's with the factory voltage as well.


----------



## madmanmarz

VicsPC said:


> madmanmarz said:
> 
> 
> 
> Has anyone tried using a thermal pad on the die/hbm?
> 
> I've had a Vega 56 from the beginning on an alphacool nexxxos gpx block. Molded die/Samsung, but hotspot hits 70-100c while core/hbm are at 40c. Any voltage over 1000 and I'm in the 80s on hotspot.
> 
> Today I picked up a 64, molded package/Samsung, and I put the nexxxos on with the same results. I have tried repasting dozens of times, different pastes, techniques, mounting pressure and that's the best I can do on either card. It was used and I tried stock cooler and temps were soaring right off the bat.
> 
> I figured since we now know hotspot is hottest spot on die, that a thermal pad might make for higher avg temps but lower hotspot.
> 
> Halp!
> 
> 
> 
> 
> 
> 
> Mikkinen said:
> 
> 
> 
> Hi, same problem too (kraken with aio), I followed the recommendations for fixing and various attempts on the quantity and method of application of the paste but always hot hotspot under stress (85°@ 1000mv p7 and @975mv on the floor and pl +25, 55°core temp).
> I even tried to reposition the blackplate origin to cool better under the gpu but don't work.
> 
> 
> Try having a fan blowing right onto the card, either from the side or from the top. Thermal pads don't work nearly as well as paste because of their thickness. If hotspot temps are causing issues then id worry, if not it's perfectly fine and nothing to worry about. My hotspot only hits in the 60s with my ekwb but my card is mounted vertically with my case fans blowing down onto it as intake, and that's with the factory voltage as well.

Vega VII is using a thermal pad. Seems to make sense if you're having trouble getting perfect contact, and the graphite pads are supposed to perform on par with mid-grade pastes with no degradation. I'm hoping to sacrifice a little GPU/HBM temperature for a cooler hotspot.

I have a 120mm blowing on it and it makes no difference. Vega VII confirmed that the die has multiple sensors on it and hotspot is the highest reading.

I hit about 85c at 1600/1100 and 1.05v (actual, as reported by GPU-Z), which doesn't sound terrible, but it is when the GPU and HBM are at 40c.

I emailed alphacool and they suggested a screw tightening pattern I haven’t tried so I guess I’ll try that in the meantime. 

https://www.tomshw.de/2018/07/23/am...-das-richtige-auftragen-von-waermeleitpaste/#


----------



## VicsPC

madmanmarz said:


> Vega VII is using a thermal pad. Seems to make sense if you're having trouble getting perfect contact, and the graphite pads are supposed to perform on par with mid grade pastes with no degradation. I'm hoping I will sacrifice GPU/HBM temps for cooler hotspot.
> 
> I have a 120mm blowing on it and it makes no difference. Vega VII confirmed that the die has multiple sensors on it and hotspot is the highest reading.
> 
> I hit about 85c at 1600/1100 and 1.05v (actual as reported by gpu-z) which doesn’t sound terrible but it is when GPU and HBM are at 40c.
> 
> I emailed alphacool and they suggested a screw tightening pattern I haven’t tried so I guess I’ll try that in the meantime.
> 
> https://www.tomshw.de/2018/07/23/am...-das-richtige-auftragen-von-waermeleitpaste/#


Vega VII uses a graphite pad; most thermal pads are just regular thermal pads, not graphite. I haven't found anyone that actually sells graphite pads. der8auer did a video and got better temps with LM than with the graphite pad.


----------



## Maracus

madmanmarz said:


> Has anyone tried using a thermal pad on the die/hbm?
> 
> I've had a Vega 56 from the beginning on an alphacool nexxxos gpx block. Molded die/Samsung, but hotspot hits 70-100c while core/hbm are at 40c. Any voltage over 1000 and I'm in the 80s on hotspot.
> 
> Today I picked up a 64, molded package/Samsung, and I put the nexxxos on with the same results. I have tried repasting dozens of times, different pastes, techniques, mounting pressure and that's the best I can do on either card. It was used and I tried stock cooler and temps were soaring right off the bat.
> 
> I figured since we now know hotspot is hottest spot on die, that a thermal pad might make for higher avg temps but lower hotspot.
> 
> Halp!


I used Kryonaut on the die/HBM. Depending on the game and the length of time played, my Vega 56 hotspot can reach the mid 80s, which is fine. I could probably do with some venting fans in my Define R6 case, because the Vega Strix and CPU dumping heat straight into the case doesn't help the cause.


----------



## madmanmarz

VicsPC said:


> madmanmarz said:
> 
> 
> 
> Vega VII is using a thermal pad. Seems to make sense if you're having trouble getting perfect contact, and the graphite pads are supposed to perform on par with mid grade pastes with no degradation. I'm hoping I will sacrifice GPU/HBM temps for cooler hotspot.
> 
> I have a 120mm blowing on it and it makes no difference. Vega VII confirmed that the die has multiple sensors on it and hotspot is the highest reading.
> 
> I hit about 85c at 1600/1100 and 1.05v (actual as reported by gpu-z) which doesn’t sound terrible but it is when GPU and HBM are at 40c.
> 
> I emailed alphacool and they suggested a screw tightening pattern I haven’t tried so I guess I’ll try that in the meantime.
> 
> https://www.tomshw.de/2018/07/23/am...-das-richtige-auftragen-von-waermeleitpaste/#
> 
> 
> 
> Vega uses a graphite pad, most thermal pads are just thermal pad and not graphite. I haven't found anyone that actually sells graphite pads. Derbaeur did a video and got better temps with LM then the graphite pad.

IC sells a 30x30 graphite thermal pad for $10. I have read that they really are just rebranded Panasonic pyrolytic soft pads. AMD uses Hitachi HM03, I believe; they are known as pyrolytic thermal pads.

They ARE conductive, but if you cut it to size and use nail polish around the die, it should be no problem.


----------



## Sickened1

Hey all. I've got a Sapphire Vega 64 with an EK block with the stock bios running at 1737/1075 @1.2v in wattman. This gives me actual freq's of about 1680-1690. Do I have any ground to gain flashing to the LC bios? Heat not a factor since I have 2x 360 rads cooling my cpu/gpu. Hotspot temps will sit around 79-81, GPU/HBM temps in the low to mid 50's. I'm not really concerned with power draw either.


----------



## sinnedone

The Vega 64 Liquid cooled bios gives you 1.25v on the core instead of 1.2 but lowers the throttling temperature. I don't know the temperature off the top of my head but you might hit it if your hotspot temp gets to almost 80c.

(Try some Liquid metal maybe to get the hot spot temps down. My hotspot temp doesn't really pass 58-60C)

As for gains, you might get higher clocks out of the box with maybe the ability to overclock past 1750mhz. Depends on how much undervolting at whatever temps and whatever the hell AMD's algorithm decides on doing that day.


----------



## VicsPC

Sickened1 said:


> Hey all. I've got a Sapphire Vega 64 with an EK block with the stock bios running at 1737/1075 @1.2v in wattman. This gives me actual freq's of about 1680-1690. Do I have any ground to gain flashing to the LC bios? Heat not a factor since I have 2x 360 rads cooling my cpu/gpu. Hotspot temps will sit around 79-81, GPU/HBM temps in the low to mid 50's. I'm not really concerned with power draw either.


I got a 2700X and the V64 with a 360/240 in push-pull, and even with an ambient of around 25°C mine hasn't hit the mid 50s yet. Just now, streaming and playing Siege, I hit about 43 on the GPU and around 48 on the HBM, using Kryonaut on both. I've only OCed my memory, but depending on the game, on a totally stock bios I hit around 1550-1640MHz on the core, and I changed HBM to 1050 without issues. I absolutely love the EK block compared to the Alphacool I had on my 390. The LC bios, meh; unless you got a really good card I don't see a point. On water, temps shouldn't be an issue, not even hotspot, unless your case has seriously poor airflow.


----------



## Sickened1

VicsPC said:


> I got a 2700x and the v64 with a 360/240 in push pull and even with an ambient of around 25°C mine hasn't hit mid 50s yet, just now streaming and playing Siege i hit about 43 on the GPU and around 48 on the HBM, using Kryonaut as well on both. I've only OCed my memory but depending on the game on totally stock bios i hit around 1550-1640mhz on the core and i changed HBM to 1050 without issues. I absolutely love the ek block compared to the alphachool i had on my 390. The LC bios, meh, unless u got a really good card i don't see a point. On water temps shouldn't be an issue, not even hotspot unless your case has seriously poor airflow.


Well, mine seems like it could go higher; it just seems like it's running out of wattage. If I set my core any higher, it'll overboost and crash my drivers.


----------



## BradleyW

Has anyone got Metro Exodus? Here are my results using the in-game benchmark located in the game's directory. It dips into the 30's. Seems low for a Vega 64 at High Preset, 2560 x 1080.


----------



## tolis626

Sorry to bother you guys again, but out of curiosity, what would you say are the best Vega models out there? I mean, when it comes to cooling and overclocking capabilities. For now, my eye is on the Sapphire Nitro, Asus Strix and Powercolor Red Devil, but that's all in theory because I lack the funds right now. I hope the prices don't go up in the near future because these cards are already expensive for how old they are and I have yet to see any interesting listings for used ones in the EU.


----------



## Sickened1

tolis626 said:


> Sorry to bother you guys again, but out of curiosity, what would you say are the best Vega models out there? I mean, when it comes to cooling and overclocking capabilities. For now, my eye is on the Sapphire Nitro, Asus Strix and Powercolor Red Devil, but that's all in theory because I lack the funds right now. I hope the prices don't go up in the near future because these cards are already expensive for how old they are and I have yet to see any interesting listings for used ones in the EU.


The Sapphire Nitro+ has always been regarded as having the best cooler of all the Vega models, then the Red Devil; the Strix has been bad due to VRM cooling issues. Idk if those have been resolved.


----------



## 113802

tolis626 said:


> Sorry to bother you guys again, but out of curiosity, what would you say are the best Vega models out there? I mean, when it comes to cooling and overclocking capabilities. For now, my eye is on the Sapphire Nitro, Asus Strix and Powercolor Red Devil, but that's all in theory because I lack the funds right now. I hope the prices don't go up in the near future because these cards are already expensive for how old they are and I have yet to see any interesting listings for used ones in the EU.


The best model for overclocking is the reference card. The absolute best card is the liquid cooled reference card since it's binned.


----------



## tolis626

Sickened1 said:


> The Sapphire Nitro+ has always been regarded as the best cooler of all the Vega models. Then red devil, and the strix has been bad due to VRM cooling issues. Idk if those have been resolved.


Ok, gotcha. That's good for me because a) It's among the cheapest cards I can find and b) I kind of like the way it looks. Thanks for your input!


WannaBeOCer said:


> The best model for overclocking is the reference card. The absolute best card is the liquid cooled reference card since it's binned.


Well, that I know. But that costs a pretty penny more than the other cards and is super hard to find where I am. If I could land a deal on a used LC reference card I'd get it in a heartbeat for the better cooling alone, but that seems to be out of the question, the only card I've found is just over 550€.


----------



## BradleyW

Vega strix has lower clocks and bad cooling. I've had to completely mod my strix with new thermal pads and different fans along with hours of tweaking the clocks/voltages to get the clocks running as high as its counterparts.

Without tweaks in BFV, my Vega 64 strix will run at 1430mhz on the core. Diabolical! After tweaking it's 1560 to 1590.


----------



## Maracus

BradleyW said:


> Vega strix has lower clocks and bad cooling. I've had to completely mod my strix with new thermal pads and different fans along with hours of tweaking the clocks/voltages to get the clocks running as high as its counterparts.
> 
> Without tweaks in BFV, my Vega 64 strix will run at 1430mhz on the core. Diabolical! After tweaking it's 1560 to 1590.


Wow ouch dude even my Strix 56 clocks higher, I also replaced the pads and paste


----------



## tolis626

BradleyW said:


> Vega strix has lower clocks and bad cooling. I've had to completely mod my strix with new thermal pads and different fans along with hours of tweaking the clocks/voltages to get the clocks running as high as its counterparts.
> 
> Without tweaks in BFV, my Vega 64 strix will run at 1430mhz on the core. Diabolical! After tweaking it's 1560 to 1590.


Asus still hasn't gotten their act together with AMD cards? Jesus man, you'd think a huge company with such engineering talent could make a cooler that actually works. I remember back when I bought my 390x, when the Asus card was the worst along with the Gigabyte one. Both had reused coolers from the 780 and 780ti that didn't make proper contact with the core and none with the VRM. Gee...

Nitro+ it is then if I go Vega! Thanks!


----------



## snipernote

please add me to the owners club !
AXRX VEGA 56 8GBHBM2-2D2HD/OC
power color red dragon vega 56


----------



## snipernote

can you help me undervolt or flash my card to a Vega 64? (PowerColor Red Dragon Vega 56)
would love to get the most out of my card
on stock, without any change, I am getting 1450MHz @ 186W used
I tried a little undervolting at 1000mV on P5 and 1050mV on P7 at stock speeds. I have all P-states modifiable except P0, and the card reached 1540MHz at +50 power limit @ 212W
yesterday I wanted to reach the lowest undervolt possible and I got a very weird result: I set P1 to 800mV and then added 25mV per state until I reached 975 or 950mV at the end. The card's speed never got over 1350MHz and total power draw was only 150W

the HBM is Samsung memory; I undervolt it to 800mV as well



how can I make my card more efficient?

should I flash the Vega 64 bios instead?


the reason I am doing this is to gain more performance with less power consumption. As you can see, my PSU is not gold rated and maxes out at 576W
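Those numbers are roughly self-consistent: to first order, dynamic power scales linearly with clock and quadratically with voltage (P ≈ P0 · (f/f0) · (V/V0)²). A quick Python sketch using the figures from the post above; this ignores static leakage, HBM, fan and VRM losses, so treat it as a sanity check, not a measurement:

```python
def scaled_power(p0_w, f0_mhz, v0_mv, f_mhz, v_mv):
    """First-order CMOS dynamic-power estimate: P ~ f * V^2,
    scaled from a measured baseline (p0_w at f0_mhz / v0_mv)."""
    return p0_w * (f_mhz / f0_mhz) * (v_mv / v0_mv) ** 2

# Baseline: 212 W at 1540 MHz / 1050 mV (the +50 PL undervolt run above).
# Predict the deeper-undervolt run at 1350 MHz / 950 mV:
est = scaled_power(212, 1540, 1050, 1350, 950)
print(f"{est:.0f} W")  # 152 W, close to the observed 150 W
```

So the "weird" 150 W result is about what the physics predicts for that clock/voltage pair; the card dropped clocks to stay stable at 950 mV and the power followed.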


----------



## snipernote

this is even stranger behavior!!
I dropped all the voltages to 800mV, which is the lowest I can set for my Vega 56 through Wattman
the card runs at 1450MHz with 700MHz HBM memory at 120W (weird, because I did not change any clocks)


is Wattman accurate? 
@hellm can you please help me ?


----------



## snipernote

this is my current profile in wattman


----------



## hellm

Sorry, can't help you. I don't even have a Radeon card at the moment, so I can't test anything or retrace what you have done. Has to be someone with a lot more knowledge about the Vega 64/56. But I can say: everything that appears strange to you is happening for some reason. It's called the boost algorithm, and there is an arbitrator responsible for all the clock and voltage decisions. Tricky..


----------



## Xinoxide

snipernote said:


> this is even strange behavior !!
> i dropped all the volt to 800mv which is the lowest i can put for my vega 56 through wattman
> the card runs at 1450mhz with 700mhz hbm memory 120w ( weird because i did not change any clocks )
> 
> 
> is wattman accurate ?
> @hellm can you please help me ?


I recommend you do this a little more methodically. Reset back to defaults and change one pstate at a time until you see through testing where your problem occurs.


----------



## snipernote

hellm said:


> Sorry, can't help you. I don't even have a Radeon card at the moment, so i can't test anything or retrace what you have done. Has to be someone with a lot more knowledge about Vega64/56. But i can say, everything that appears strange for you is happening for some reason. It's called boost algorithm and there is a arbitrator responsable for all the clock and voltage decisions. Tricky..



thank you for the reply, I will search more about that boost algorithm. This boost thing, when it happened, was nice and working great at only 120W usage, and it was super quiet
I am currently using different settings to my taste. Having the card running at the lowest noise per performance gain is my current target, as my small case can get hot really fast





Xinoxide said:


> I recommend you do this a little more methodically. Reset back to defaults and change one pstate at a time until you see through testing where your problem occurs.


I changed back to my older setting with stock step speeds (P1 900mV, P2 925, P3 950, P4 975, P5 1000, P6 1025, P7 1050) and the card can only boost up to 1540MHz. Can I get this thing to 1592MHz, since that is the max frequency at stock?
How do I know exactly which P-state my card is running in now? All I can notice is that Vega likes to increase its speed 1MHz at a time, which is very unusual to me



ps: if the voltage goes under 1025mV on the P6 or P7 states, I notice the speed barely reaches 1500MHz, so it's basically stuck between P5 and P6 in the middle, adding +/-5MHz depending on the benchmark
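A ladder like that (900 mV on P1, rising 25 mV per state to 1050 mV on P7) can be generated instead of typed by hand, which avoids fat-finger errors when re-entering it after a Wattman reset. A small sketch, assuming the 900/25 figures from the post above:

```python
def build_ladder(base_mv=900, step_mv=25, first=1, last=7):
    """Generate an evenly stepped P-state voltage ladder,
    e.g. P1=900, P2=925, ..., P7=1050 (all in mV)."""
    return {f"P{i}": base_mv + step_mv * (i - first)
            for i in range(first, last + 1)}

ladder = build_ladder()
print(ladder["P5"])  # 1000
print(ladder["P7"])  # 1050
```

The same helper also makes it easy to try a different base or step (say, `build_ladder(base_mv=875)`) without retyping seven values.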


----------



## Xinoxide

I think you might be assuming the P7 setting is the actual clock speed? It's actually more like a max boost that you'll never see unless you're running something light, like I see when utilizing my install of SVP.

I run this; it nets decent temps on my Vega 64. Boost in games maxes at 1601MHz, but I rarely see that.

https://www.overclock.net/forum/attachment.php?attachmentid=256088&thumb=1

I have some Asetek parts on the way for a hybrid mod btw. I am hoping to see better boost with cooler temps.

Vega takes a lot of tweaking. I have seen the issue you first mentioned when I was matching P6 and P7: my card got stuck at P5 speeds while reporting P6 on the GPU tach lights.
This is why I recommend taking it one step at a time.

Also, the tach lights on the ref PCB are P0-P7.


----------



## Sickened1

Xinoxide said:


> I think you might be asuming P7 setting is the actual clock speed? Its actually like a max boost that you'll never see unless you're running something light, like I see when utilizing my install of SVP.
> 
> .


This is inaccurate. You can hit that P7 speed assuming your cooling/wattage/voltage lets you. I have my P7 set at 1687 and it will easily hit it playing BFV 1440P max settings. Same with benching. If I set it over 1687 though, such as 1692, it over boosts to 1737+ and crashes my drivers. No idea how to fix that though. Maybe powerplay mods could help me with that.


----------



## 113802

Sickened1 said:


> This is inaccurate. You can hit that P7 speed assuming your cooling/wattage/voltage lets you. I have my P7 set at 1687 and it will easily hit it playing BFV 1440P max settings. Same with benching. If I set it over 1687 though, such as 1692, it over boosts to 1737+ and crashes my drivers. No idea how to fix that though. Maybe powerplay mods could help me with that.


P7 on Vega 64 is a "max boost" that only runs at that speed for a few seconds to a minute. I've found a way to bug it into running at P7 sustained by flashing the FE BIOS and back to the XTX BIOS, but after the next reboot it runs normally again. When I get the card to run at P7 at the stock core clock of 1750/1105 MHz, it scores a GPU score of around 27,700 in Fire Strike, versus 27.2k when it's not bugged into running P7 all the time. Same with the Radeon VII: P8 is a max boost, and it rarely runs at that speed on Windows.





https://www.3dmark.com/fs/18270986


----------



## VicsPC

WannaBeOCer said:


> P7 on Vega 64 is a "max boost" which only runs at that speed for a minute or a few seconds. I've found a way to bug it to run at P7 sustained by flashing the FE bios and back to the XTX bios but the next reboot it runs normally again. When I get the card to run at P7 @ stock core clock 1750/1105Mhz it scores a GPU score around 27700 in FireStrike while when it's not bugged at running P7 all the time it scores 27.2k. Same as the Radeon VII P8 is a max boost and it rarely runs at that speed on Windows.
> 
> https://youtu.be/GXDced_nNPw
> https://www.3dmark.com/fs/18270986


I'm on water and I can hit between P6 and P7 constantly without issues, and I can hit P7 sustained in ETS2 in demanding areas, especially playing in multiplayer. It will stay there for a good while as well. This is on a stock BIOS with nothing changed except HBM at 1050.


----------



## 113802

VicsPC said:


> I'm on water and i can hit between p6 and p7 constantly without issues and i can hit p7 sustained in ets2 in demanding areas especially playing it in multi player. It will stay there for a good while as well. This is on a stock BIOS with nothing changed except HBM at 1050.


I've never been able to "sustain" P7 without bugging it. It will run between P6 and P7 at 1700-1730 MHz, but it will never sit at 1750 MHz all the time without bugging the BIOS, unless its power usage is at 49% of the target limit or below: for example, benchmarks like GeekBench, GPUPI, Blender and other tasks that don't use a lot of power.


----------



## snipernote

@*Xinoxide*
I managed to dial in these settings this week and I find them stable so far, but I cannot reach more than 1540 MHz without extra voltage.

Would you guide me to make it better?
HBCC is on at 16 GB total dedicated memory.

Profile 1 is for best performance: 4447 points so far in Superposition 1080p Extreme, with a custom fan curve and undervolted.


Profile 2 uses the same undervolt settings but with the power limit set to -30%, to keep it well under 130 W. I use it for demanding 2017 games to keep my PC cool and quiet; in Assassin's Creed Origins at ultra high settings 1080p I am getting about 60 fps average.

Profile 2 with the stock fan curve and zero-fan mode on gets 3459 in 1080p Extreme.


For comparison, stock settings net about 186 W with 4000 points.

Note: I will never use my Vega for mining, I just like it quiet!
If you can help me achieve a better undervolt I would appreciate it.

Question: does flashing my Vega to a 64 void my warranty?

Current system validated specs: https://valid.x86.fr/fpcgd4
PS: I sometimes OC my CPU to 3.8 GHz @ 1.368 V, which gives a nice boost in performance, but I am currently trying the XMP memory profile.


----------



## VicsPC

WannaBeOCer said:


> I've never been able to "sustain" P7 without bugging it. It will run between P6-P7 1700Mhz-1730Mhz but will never run at 1750Mhz all the time without bugging the bios unless it's power usage is 49% or below the target limit. For example benchmarks like GeekBench, GPUPI, Blender and other task that don't use a lot of power.


I'm playing some Frostpunk right now and can sustain a good 1620-ish on the stock BIOS and stock power limit as well. I'm assuming it's because the VRMs and HBM are well cooled, so it's not a problem: 42°C/47°C respectively. Ambient is around 26°C and water temp around 36°C in a demanding game.


----------



## Ne01 OnnA

Sickened1 said:


> This is inaccurate. You can hit that P7 speed assuming your cooling/wattage/voltage lets you. I have my P7 set at 1687 and it will easily hit it playing BFV 1440P max settings. Same with benching. If I set it over 1687 though, such as 1692, it over boosts to 1737+ and crashes my drivers. No idea how to fix that though. Maybe powerplay mods could help me with that.


Try lowering the Power slider to 0%, 5%, 8%, or 12%; a max of +25% is recommended unless you have a PMod and a PSU of at least 1000 W (75-83 A).


----------



## 113802

VicsPC said:


> I'm playing some Frostpunk right now and can sustain a good 1620ish on stock BIOS and stock power limit as well. I'm assuming the VRMs and HBM are well cooled so it's not a problem. 42°/47° respectively. Ambient is around 26°C water temp around 36°C, demanding game.


Are you using the LC BIOS or the Air BIOS? Not sure what kind of card you have. Mine is a liquid card on which I installed an EK waterblock, because the AIO is garbage. Just using the stock BIOS without undervolting, the card runs at around 1680 MHz, and with a 50 mV undervolt (1200 mV) it runs between 1700-1730 MHz. From all the videos I've seen, people on air required a UV/OC with +50% power limit to run between 1580-1620 MHz on the air BIOS, since the stock power limit is 220 W, and 330 W with +50%.


----------



## VicsPC

WannaBeOCer said:


> Are you using a LC bios or Air bios? Not sure what kind of card you have. My card is a liquid card that I installed an EK waterblock because the AIO is garbage. Just using the stock bios without undervolting the card runs at around 1680Mhz and with a 50mV undervolt (1200mV) it runs between 1700-1730Mhz. From all the videos I've seen people on air required an UV/OC +50% power limit to run between 1580-1620Mhz on the air bios since the stock power limit is 220w and 330w with +50%.


Reference card I bought on day one; Sapphire hasn't touched the BIOS or the voltages, lol. I have a 360/240 in push/pull with high-static-pressure fans. Playing Frostpunk, depending on what I'm looking at in a fully loaded city, it goes from 1550-1620 and stays around there. In Euro Truck Simulator 2 multiplayer, in some congested areas, I've seen it hit 1642 and stay there as well. EK block on mine as well; when playing Siege I'm at a steady 1560-1580 with maxed-out settings and uncapped. I have a FreeSync display so I cap my fps at 73; even capped, Frostpunk uses 8 GB of VRAM and maxes out core and HBM no problem. My card is mounted vertically with three 140mm intake fans right above it (Core X5 case). My hotspot temps never go past 70°C, usually low 60s, so I'm assuming I can hit higher clocks on water than most people on the air BIOS.

P.S. That's in a room at about 25°C case temp with no windows open, so no airflow; in cooler temps my core and HBM stay at or below 40°C.

Took a quick shot of HWiNFO: left Frostpunk running for 10 minutes and reset the measurements 5 minutes in to get a 5-minute average. Temps go a bit higher after 15-20 minutes. Left the camera in one spot without moving it.


----------



## Minotaurtoo

Here's a random question: my card's still under warranty... Gigabyte confirmed this, just not in my region... how does this happen, and how can I tell where exactly it's under warranty? Although I'm thinking I figured out the problem... turns out that if I close HWiNFO, it doesn't crash... or at least it hasn't in over a week now.



This card does have the memory bug though... I have to set the min memory speed to 500 MHz or it'll randomly crash at idle. Google did me a solid and pointed out that one.


----------



## LeadbyFaith21

Sickened1 said:


> This is inaccurate. You can hit that P7 speed assuming your cooling/wattage/voltage lets you. I have my P7 set at 1687 and it will easily hit it playing BFV 1440P max settings. Same with benching. If I set it over 1687 though, such as 1692, it over boosts to 1737+ and crashes my drivers. No idea how to fix that though. Maybe powerplay mods could help me with that.


I also have this issue (I think it's boosting to that same 1737, but it's close regardless) when I push the frequency above 1690. I'm currently using PowerPlay tables to allow a +150% power offset, and I still have this issue with the slider maxed out. If you find a fix for this, I would love to know, because I'm not hitting thermal (~50°C with ~75°C hotspot) or power limits (around 310 W reported in software); my card just spikes in frequency and the driver crashes.

I wonder if flashing the liquid BIOS would fix that problem...


----------



## MrPerforations

Hello forum,
I lost my mind last week and ordered two MSI Radeon Vega 56 Air Boost cards, as they were on sale at £250 each, to replace my ageing Gigabyte Radeon R9 280 CrossFire setup.
After doing some research I found that two Vega 56s should be comparable to a 1070/2070 SLI rig (I used Game Debate for this info), but after testing BF1 it seems they well overestimated the performance.
I've got issues with monitoring the GPUs. I have MSI Afterburner installed and ran Fallout 4 in CrossFire mode, and I got 300 MHz on both cards while running it, which made me realise the cards are doing nothing much in this game.
I tried the game again and got one GPU running at 300 MHz while the other says it's doing 1150 MHz; I tested with one card and it gave me 30 MHz for one reading and 1150 for the other.
When it's running 300/1150 MHz in CrossFire, the temps say the cards are hot the other way around, and the wattage is the same.

I'm confused as to why it's doing this, and wonder: is there a monitor that works, please?

Do you think I should keep the cards?
I read about CrossFire/mGPU being left to devs to add if they think it's a good idea, which is not promising, but the performance for the price has got me hooked.
My CPU is now totally lagging, as I have a Ryzen 1700 which just isn't putting out the frames. I intend to upgrade to a Ryzen 3600 at 4.8 when it comes out to power it.
I haven't started on overclocking yet, as I am going for waterblocks next month.


----------



## Xinoxide

Sickened1 said:


> This is inaccurate. You can hit that P7 speed assuming your cooling/wattage/voltage lets you. I have my P7 set at 1687 and it will easily hit it playing BFV 1440P max settings. Same with benching. If I set it over 1687 though, such as 1692, it over boosts to 1737+ and crashes my drivers. No idea how to fix that though. Maybe powerplay mods could help me with that.


I have a reference unmolded Vega 64. I haven't seen my card hit the actual P7 set speed, or overboost, even once under a 3D load.
I do use my Vega for GPU acceleration in SVP, and that's the only use case where I have seen the card sustain the P7 set speed.

I didn't mean to say it's not possible to stay within the P7 boosting range, just that hitting the set frequency isn't quite something to expect.

I run my card at 1060 mV with a 1630 boost set for P7 and see a 1601 MHz boost sustained quite regularly, hitting around 70°C/83°C hotspot with the reference cooler.
I've got an Asetek cooler inbound to play with strapped to my card soon. I am expecting to see better boost frequencies with the lower temps.

Will be damn nice to finally OC around power limits instead of temperature.


----------



## poisson21

@MrPerforations I have the same problem with my two Vega 64s in CrossFire; most games don't work with it, and sadly Fallout 4 is one of them. It is also CPU-sensitive in most areas.


----------



## snipernote

snipernote said:


> @*Xinoxide*
> i managed to make these settings in this week and i find them stable so far but i cannot reach more than 1540mhz without extra voltage
> 
> would you guide me to make it better ?
> hbcc is on at 16gb total dedicated memory
> 
> profile 1 for best performance so far 4447 points in superposition 1080p extreme custom fan curve and undervolted
> 
> 
> profile 2 same under-volt settings but power limit set to -30% to keep it well under 130w usage i am using for 2017 demanding games to keep my pc cool & quite like Assassins Creed Origins at ultra high settings 1080p i am getting about 60fps average
> 
> profile 2 stock fan curve and zero fan mode on gets 3459 on 1080p extreme
> 
> 
> for comparison , stock settings net about 186w with 4000 points
> 
> Note: I will never use my Vega for mining, I just like it quiet!
> if you can help me achieve better undervolting for now i would appreciate it
> 
> question : does flashing My Vega to 64 revoke my warranty ?
> 
> current system validated specs : https://valid.x86.fr/fpcgd4
> ps : i sometimes oc my cpu to 3.8ghz @ 1.368v which gives a nice boost in performance but i am currently trying the xmp memory profile



I had some extra help today on Reddit and I got the correct V64 BIOS that works for my card.

Now it's very interesting!
The stock Vega 64 BIOS gave me a 4253 score without undervolting.

I am getting better results: 4514 points in Superposition 1080p Extreme with 0% power limit @ 205 W usage, HBCC off,
undervolted and overclocked, with HBM memory at 1100 MHz. I will have to test it more, but I like it more now!
You need to read this Reddit reply to my questions to flash the card properly, if you are interested (use it at your own risk; my card works without issues after flashing the right BIOS):


https://www.reddit.com/r/radeon/comments/avcxow/powercolor_red_dragon_vega_56_bios_and/
Thanks to kagoromo, he is my hero for today!
I am so happy now!
Edit: updated settings because it was crashing in Assassin's Creed Origins. I had to replace the clock table with the default one from my Vega 56 OC BIOS, and the memory OC was reduced to 1050 MHz at 1025 mV. Now I can play the game without crashing.
I benchmarked the stable settings again and I am getting 4383 points.
Edit 2: updated settings again to get the best undervolt; now I score 4447 points in Superposition 1080p Extreme.
Attached the latest version 4 of my settings. Gonna try PowerPlay tables now.


Update: PowerPlay tables are used to keep the undervolt, plus a fan table with full RPM at a 75°C temp target. Because my PC case cannot cool the card properly at 2400 RPM, I had to implement a custom fan settings table between my original V56 and the flashed V64 BIOS.


Now my card runs the Superposition benchmark at only ~205 W, scoring 4487. Zero-fan mode works, and the Wattman settings overclock the HBM to 1050 MHz only. I'm getting 1560 MHz stable, which is great; if I ever feel like lowering the core clocks I can do it easily from Wattman. I think the 1560/1050 combination is working great now.
@*hellm* I tried many changes to memory speed and voltage levels in the PPT and all of them made my card run worse: higher power consumption, and the memory speed always fell back to 800 or 500 MHz. So I left all memory settings stock as in the V64 BIOS; overclocking in Wattman gave me the best result so far.


Update: thanks to some testing I discovered that my GPU needs more voltage to keep working without crashes. Updated the PowerPlay tables and XML files for the new settings (P6 1050 mV, P7 1075 mV, HBM 1075 mV and floor voltage 950 mV). I'm having fewer crashes so far, especially in Assassin's Creed Odyssey, and the GPU is reaching 1580-1600 MHz.


----------



## Alastair

*New Vega 64 owner here*

So, guys, I just pulled the trigger on a used Vega 64. I love my dual Furys, but CrossFire just isn't where it's at anymore. I held on for as long as I could, but alas, it wasn't to be, so I decided to side-grade. I'll definitely see improvements in single-GPU games, but will likely lose a bit in the few titles that do use mGPU.

As a new Vega owner, and also as someone who hasn't been on OCN for well over a year, I wanted to know if there is anything a first-time Vega owner should know.
I do plan on putting this thing under a custom water block.
I would like to know how Vega performs OC-wise, especially with maybe more volts thrown at it (not less) and higher target limits.
Also, there was a fair bit of support for Fiji BIOS modding. Is there some good stuff out there for Vega BIOS modding, modifying PowerPlay tables, etc.?

EDIT: Also can anyone give me an idea on the difference between the Sapphire Vega 64 21275-02 vs 21275-03?


----------



## BTViolence

Alastair said:


> So guys just pulled the trigger on a used Vega 64. I love my dual Fury's but crossfire just isn't where it is at anymore. I held on for as long as I could. But alas it wasn't to be. So I decided to side grade. Definitely will see improvements in single GPU only games but will likely loose a bit in the few titles that do use mGPU.
> 
> So as a new Vega owner and also as someone who hasn't been on OCN for well over a year I wanted to know if there are anythings a first time Vega owner should know.
> I do plan on putting this thing under a custom water block.
> I would like to know how does vega perform OC wise? Especially with maybe more volts thrown at it (not less) and higher target limits?
> Also there was a fair bit of support for Fiji bios modding. Is there some good stuff out there for Vega BIOS modding? Modifying PowerPlay tables etc?
> 
> EDIT: Also can anyone give me an idea on the difference between the Sapphire Vega 64 21275-02 vs 21275-03?



I've only had my VII for a week, but I've tuned a Frontier Edition and a 64 in other builds. This is what I've learned and can pass along; there are probably some more masterful 56/64 owners who can add to this, but it's a good starting point.


1.) Get some baseline benchmarks completely stock (Fire Strike, Time Spy, Superposition, etc.).
2.) Undervolt to the lowest stable voltage for stock clocks, raise the power limit to 50%, and create a fan curve that gets to around 2500-2700 RPM well before 80°C. Keep in mind that these cards will not boost to their full P7 clocks and maintain them; however, the closest you can get is by keeping the card as cool as possible, using the lowest voltage and a proper fan curve.
3.) Then maintain that 50% power limit with the more aggressive fan curve and inch the voltage up while taking the P6/P7 clocks up by about 25 on both (these should remain staggered, with P6 lower than P7, as it is a phased clock). Once you get closer to 1500/1580 on P6/P7, drop it down to about 10 between each test. Test memory by clocking it up about 10 at a time once you have FULLY stabilized the core clocks.



Note: Save any PowerPlay tables or BIOS flashing until you get water, since you'll most likely hit thermal throttling at some point. Make sure to validate each level of overclock (jump in volts/frequency) with performance, since the P7 clock can vary if you reach a point of diminishing returns with too high a voltage/heat.
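The inching routine in step 3 can be sketched as a simple generator of candidate settings to validate one at a time. All the numbers here are illustrative starting points, not recommendations; the key idea is the coarse-then-fine step size and the fixed P6/P7 stagger:

```python
def oc_steps(p6=1450, p7=1530, mv=1000,
             coarse=25, fine=10, switch_at=(1500, 1580),
             limit=(1560, 1640)):
    """Yield (P6 MHz, P7 MHz, mV) candidates to validate one at a time.
    Step clocks by 25 MHz until P6/P7 approach 1500/1580, then by 10,
    nudging voltage up 5 mV per step. Values are illustrative only."""
    while p6 < limit[0] and p7 < limit[1]:
        step = coarse if (p6 < switch_at[0] and p7 < switch_at[1]) else fine
        p6, p7, mv = p6 + step, p7 + step, mv + 5
        yield p6, p7, mv

# Bench/stress-test each candidate before moving to the next one.
for cand in oc_steps():
    print(cand)
```

Each yielded tuple is one "inch" to bench and stability-test before taking the next; the P6/P7 gap stays constant throughout, matching the staggered-clock advice above.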


Hopefully this helps; it might be a bit scatterbrained. But, to answer your question about performance: https://www.gamersnexus.net/guides/3382-rtx-2070-versus-power-modded-vega-56


Vega cards are great overclockers, and people should really only buy them if they intend to overclock, since that's where they shine the most.


----------



## m70b1jr

Hey guys!
Still doing some overclocking on my Gigabyte Gaming OC Vega 56. I currently have -75 mV core, a 1680 set core clock, and 880 HBM (Hynix) stable (99 looped runs of the Metro LL benchmark plus other programs), with my temps still in range (65-70°C on core and HBM).

So I'm trying to do PPT mods to increase my power limit to 100% or even 150%, and I'm using the tables found here:
https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios-26.html#post_26297003

I changed the 32 to 96, but I'm not seeing a boost in my power limit (still limited to 50%), and yes, I've rebooted my PC. I even went into regedit to make sure the regkey is in the proper folder and place.
I reinstalled Windows yesterday and have done a DDU and reinstall of the drivers. Can I get some help here?
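For anyone following along: the soft PowerPlay table is just a binary blob written to the `PP_PhmSoftPowerPlayTable` REG_BINARY value under the display adapter's class key, and the byte offsets of individual fields depend on the table version in your BIOS, so check the blob with a table editor before touching it. A minimal sketch of patching such a blob, with a purely hypothetical offset:

```python
import struct

# HYPOTHETICAL offset -- the real layout depends on the PowerPlay
# table version in your BIOS; verify with a table editor first.
TDP_OFFSET = 0x60  # assumed location of a 16-bit power-limit field (watts)

def patch_tdp(blob: bytes, new_tdp_w: int) -> bytes:
    """Return a copy of a soft-PowerPlay-table blob with the power-limit
    field rewritten (table fields are little-endian uint16)."""
    buf = bytearray(blob)
    struct.pack_into("<H", buf, TDP_OFFSET, new_tdp_w)
    return bytes(buf)

# The patched blob would then be written back as the REG_BINARY value
# PP_PhmSoftPowerPlayTable under the adapter's key, e.g.
# HKLM\SYSTEM\CurrentControlSet\Control\Class\
#   {4d36e968-e325-11ce-bfc1-08002be10318}\0000
# followed by a driver restart or reboot for it to take effect.
```

One common gotcha with these mods is the value landing under the wrong `00NN` subkey (each adapter, including disabled ones, gets its own), which would match the "changed it, rebooted, no effect" symptom described above.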


----------



## Alastair

BTViolence said:


> I've only had my VII for a week but I've tuned a Frontier Edition and a 64 in other builds. This is what I've learned that I can pass along, there are probably some more masterful 56/64 owners that can add onto this but this would be a good starting point.
> 
> 
> 1.) get some baseline benchmarks on completely stock (Fire Strike, Time Spy, Superposition, etc.)
> 2.) Undervolt to lowest stable voltage for stock clocks and raise power limit to 50% and create a fan curve that gets to around 2500-2700 rpm a good bit before 80C. Keep in mind that these cards will not boost to their full P7 clocks and maintain them. However, the closest you can get to them is by keeping the card as cool as possible by using the lowest voltage and a proper fan curve.
> 3.) And then just go ahead and maintain that 50% power limit with the more aggressive fan curve and inch the voltage up while taking the P6/P7 clocks up 25 or so on both (these should remain staggered, with P6 lower than P7 as it is a phased clock). Then once you get closer to 1500/1580 on P6/P7 drop it down to like 10 between each test. Test memory by clocking it up about 10 each time once you have FULLY stabilized core clocks.
> 
> 
> 
> Note: Save any power play tables or bios flashing until you get water since you'll most likely hit thermal throttling at a point. Make sure to validate each level of overclock (jump in volts/frequency) with performance since the P7 clock can vary if you get to a point of diminishing returns with too high of voltage/heat.
> 
> 
> Hopefully, this helps, it might be a bit scatterbrained though. But, to answer your question about performance: https://www.gamersnexus.net/guides/3382-rtx-2070-versus-power-modded-vega-56
> 
> 
> Vega are great overclockers and people should only be buying them if they intend to overclock them since that's where they really shine the most.


So there are some good BIOS tools? Because I plan to go balls to the wall on power limits. I've got a 500 (or is it 600?) watt BIOS on my Furys. I plan to do something similar with the Vega; with a full-cover block I don't think thermals will be a concern. So max power and max volts, because this is overclock.net! With my rig the way it is, it's going to be plumbed into the loop straight away, which means my stock results will be with better thermals from a block.

I hope to hit 1700 MHz core. Is that a reasonable goal? How do the guys with custom loops fare with their Vegas?


----------



## sinnedone

There are no BIOS mods as of yet. The only BIOS you can even flash is the liquid-cooled version, since custom BIOSes will not boot on Vega.

If you decide to flash a liquid BIOS onto your "reference" Vega 64, it will up the power limit and raise the max core voltage to 1.25 V, but it also lowers the throttling temperature a little. (Don't remember exactly what temperature.)


I personally have a reference Vega 64 flashed with the liquid BIOS. It's not a very good overclocker or undervolter: max overclock on the core is 1760 at 1.25 V, but I can undervolt to 1.23 V at 1750.


I very rarely see max clocks. The only game I currently play that hits max clock, and sometimes a little faster, is Forza Horizon 4; in that game it will fluctuate from 1720-1760. The Time Spy and Valley benchmarks will also hit 1750 MHz and pass with ease.

If you need more than the +50% power limit, you can play with the PowerPlay table mods to up the power limit a bit more.


----------



## BTViolence

sinnedone said:


> There are no Bios mods as of yet. Only Bios you can even flash is the Liquid cooled versions since custom Bios will not boot on Vega.
> 
> If you decide to flash a liquid Bios onto your "reference" Vega 64 it will up the power limit, up max core voltage to 1.25v, but also lowers the throttling temp a little. (Don't remember what temperature exactly)
> 
> 
> I personally have a Vega 64 reference flashed with Liquid Bios. It's not a very good overclocker or undervolter. Max overclock on core is 1760 at 1.25v but I can undervolt to 1.23v at 1750.
> 
> 
> Very rarely do I see max clocks. Only game I currently play that hits max clock and sometimes a little faster is Forza Horizon 4. In that game it will fluctuate from 1720-1760. Timespy and Valley benchmarks will also hit 1750mhz and pass with ease.
> 
> If you need more than +50% power limit you can play with the power play tables mods to up the power limit a bit more.



Are you actually on water with the 64? Or are you just running the water BIOS on air?


----------



## 113802

sinnedone said:


> There are no Bios mods as of yet. Only Bios you can even flash is the Liquid cooled versions since custom Bios will not boot on Vega.
> 
> If you decide to flash a liquid Bios onto your "reference" Vega 64 it will up the power limit, up max core voltage to 1.25v, but also lowers the throttling temp a little. (Don't remember what temperature exactly)
> 
> 
> I personally have a Vega 64 reference flashed with Liquid Bios. It's not a very good overclocker or undervolter. Max overclock on core is 1760 at 1.25v but I can undervolt to 1.23v at 1750.
> 
> 
> Very rarely do I see max clocks. Only game I currently play that hits max clock and sometimes a little faster is Forza Horizon 4. In that game it will fluctuate from 1720-1760. Timespy and Valley benchmarks will also hit 1750mhz and pass with ease.
> 
> If you need more than +50% power limit you can play with the power play tables mods to up the power limit a bit more.


First you have to understand that Vega 10/20 cards are advertised to run at their second-highest power state all the time, while the "peak boost" is their highest power state, which is 1750 MHz on the Vega 64. The throttling temp is 65°C on the liquid BIOS. The LC BIOS's advertised speed is 1668 MHz, so if you are getting higher than 1668 MHz then you are exceeding the advertised frequency.

I've only found one way to bug it into running P7 all the time without fluctuations; the performance gain is massive, but it's only temporary. The reason the cards can't run at P7 all the time is that they would use 364 W continuously even when undervolted, which the AIO wasn't designed for. As you can see from my video, the card is running at 1800 MHz @ 1.25 V with a single 360mm rad and sits at 46°C with Gentle Typhoon fans @ 2250 RPM. The same goes for the air-cooled version: it wasn't designed to run at 330 W all the time.

https://hwbot.org/submission/407152...___1080p_xtreme_radeon_rx_vega_64_5503_points
https://www.3dmark.com/fs/18270986


----------



## Alastair

WannaBeOCer said:


> First you have to understand that Vega 10/20 cards are advertised to run at their second highest power state all the time while the "peak boost" is their highest power state which is 1750Mhz on the Vega 64. The throttling temp is 65C on the liquid bios. The LC bios advertised speed is 1668Mhz so if you are getting higher than 1668Mhz than you are exceeding the advertised frequency.
> 
> I've only found 1 way to bug it to run P7 all the time without fluctuations and the performance gain is massive and it's only temporary. The reason the cards can't run at P7 all the time is since they will use 364w all the time even when undervolted which the AIO wasn't designed for. As you can see from my video the card is running at 1800Mhz @ 1.25v with a single 360mm rad and it's at 46c with the Gentle Typhoon fans @ 2250RPM. Same goes for the Air cooled version, it wasn't designed to run at 330w all the time.
> 
> https://hwbot.org/submission/407152...___1080p_xtreme_radeon_rx_vega_64_5503_points
> https://www.3dmark.com/fs/18270986
> 
> https://www.youtube.com/watch?v=GXDced_nNPw&t=7s


So you are running 1800 on the AIO? How long can you hold it for? Reckon it will sustain 1800 with upped power limits and cooler temps, say with a custom block? Can you set P6 equal to P7, or does there have to be a difference between the two? I am so excited for my Vega; I really hope I can hit 1800 in my loop.

My loop is a 360mm with Jetflos in push/pull and a 280mm with Corsair ML140s in push/pull, all hooked up to a Lamptron controller. The Furys maintain 45°C each under load at 1150 MHz. I reckon the single Vega should hit about the 40s, maybe less even with an OC, considering the lower heat load in the loop?


----------



## 113802

Alastair said:


> So you are running 1800 on the AIO? How long can you hold it for? Reckon it will sustain 1800 with upped power limits and cooler temps like say a custom block? Can you set P6 to = P7 or does there have to be a difference between the two? I am. So excited for my Vega. I really hope I can hit 1800 in my loop.
> 
> My loop is 360mm with Jetflo's in P/P and a 280mm with Corsair ML140s in P/P. All hooked up to a Lamptron controller. Fury's maintain 45C each underload at 1150MHz. I reckon the single Vega should hit about 40s maybe less even with OC considering the lower heat load in the loop?


My Vega 64 was in a custom loop with a single 360mm rad and an EK copper waterblock. The 1800 MHz is stable 24/7, but it's a bug: when flashing back from the FE 8GB BIOS, for some odd reason it runs at P7 all the time until the next reboot. I am able to replicate the bug 100% of the time. Don't expect to ever touch this frequency; you'll most likely see 1700-1730 MHz all the time.

I guess I wasn't clear in what I said: Vega 10 will never run P7 100% of the time, since the stock air/AIO coolers weren't designed for that high a TDP. Unfortunately, there isn't a way to get it to run at P7 all the time across reboots.


----------



## sinnedone

BTViolence said:


> Are you actually on water with the 64? Or are you just running the water BIOS on air?



I'm sorry, yes, I am water cooled, using a Heatkiller water block. The GPU and HBM stay in the 40s (°C) while the hotspot reaches the mid-to-high 50s.





WannaBeOCer said:


> First you have to understand that Vega 10/20 cards are advertised to run at their second highest power state all the time while the "peak boost" is their highest power state which is 1750Mhz on the Vega 64. The throttling temp is 65C on the liquid bios. The LC bios advertised speed is 1668Mhz so if you are getting higher than 1668Mhz than you are exceeding the advertised frequency.


Thank you for the clarification on the throttling temp. I think on air it's 80 or 85, correct?

Also, I've tried keying the P7 clocks and voltage into P6 so they're the same (I think I've tried all the way down to P2) and it doesn't seem to do anything on my rig. Something about AMD's GPU boost algorithm just says no. I've also noticed you sometimes achieve higher clocks if you don't touch anything but the power limit, versus putting in your own values for everything. And when you bump up the HBM clocks, it affects the core clocks and drops them some; I'm guessing it's a power limit thing.
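The HBM-eats-core-clocks effect is consistent with a shared power budget: the memory overclock takes watts out of a fixed board limit, so the boost algorithm settles on a lower core clock. A toy model with made-up coefficients (not measured values):

```python
# Toy model of a shared power budget on Vega: extra HBM wattage comes
# straight out of the fixed board power limit, lowering the sustained
# core clock. All coefficients are illustrative, not measured.
def core_clock(hbm_mhz, base_hbm=945, hbm_w_per_100mhz=8,
               core_mhz_per_w=2.2, base_core=1560):
    """Estimate the sustained core clock after the HBM overclock
    takes its share of the fixed board power budget."""
    extra_hbm_w = (hbm_mhz - base_hbm) / 100 * hbm_w_per_100mhz
    return round(base_core - extra_hbm_w * core_mhz_per_w)

print(core_clock(945), core_clock(1100))  # stock HBM vs. overclocked HBM
```

Under this model, any HBM overclock trades a proportional amount of core clock unless the power limit is raised, which matches the behaviour described above.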


----------



## Alastair

Guys, I know people on OCN don't really use Bykski or Barrow water cooling products, but XSPC and EKWB are all anyone stocks in South Africa, and no one has ANYTHING for Vega. I would love an EKWB block for my Vega like I have on my Furys, but all anyone ever stocks in South Africa are water parts for Nvidia. Heck, some retailers still have GeForce 900 series stuff, and one even has 8800 series water parts, but not a Vega block to be seen. Importing an EKWB block for 150 dollars excluding shipping and customs is going to be a hard pill to swallow, but I can get a Bykski or Barrow block for 100 US, or 90 US once I use my AliExpress coupon... So between the Barrow Vega block and the Bykski, which would you pick? I am leaning towards the Barrow; I have heard more about Barrow in recent years, and Bykski is a new name to me, so I'm not sure what to expect.


----------



## sinnedone

Bykski has been around for as long as Barrow, I believe.

Pick whichever looks best to you. I highly doubt their performance is more than a couple of degrees from each other. (I highly doubt they're more than 3-5°C from EK either.)


----------



## Alastair

sinnedone said:


> Bykski has been around for as long as Barrow I believe.
> 
> Pick whichever is aesthetically best to you. I highly doubt their performance is more than a couple of degrees apart. (Highly doubt they're more than 3-5C from EK either.)


Went for the Barrow. Looking at the pictures, the Barrow waterblock had better channels on the VRM section.


----------



## Particle

Holy crap. People aren't kidding when they talk about the paste job on Vega cards being crap. Check out this hot garbage that I found when I took the cooler off my own Vega 64.


----------



## Alastair

Its here!

And man it is heavy! I did not realize the entire cooler shroud was metal as well. It is plastic on my RX 480, so I figured it was the same on the V64. I am certainly impressed. Got a molded die on mine. Kinda disappointing, as I was hoping for an unmolded die. I am used to unmolded dies, as that is obviously what Fiji has, and I figured unmolded would be a tad bit cooler. Oh well. Now I just need to wait for my water block to arrive.


----------



## candasulas

Hi everyone.

I have an Asus Strix Vega 64 graphics card.
I would like to undervolt to lower the GPU temperature a little further and gain some performance. Can you help me with this?

I tried it myself. My values are:

P6: 1000mV
P7: 1060mV
HBM2: 1000mV

GPU: 1550MHz
RAM: 945MHz
GPU Temp: 74C
VRM GPU Temp: 103C
VRM SOC Temp: 103C
VRM MEM Temp: 88C
226W.


----------



## Alastair

candasulas said:


> Hi everyone.
> 
> I have an Asus Strix Vega 64 graphics card.
> I would like to undervolt to lower the GPU temperature a little further and gain some performance. Can you help me with this?
> 
> I tried it myself. My values are:
> 
> P6: 1000mV
> P7: 1060mV
> HBM2: 1000mV
> 
> GPU: 1550MHz
> RAM: 945MHz
> GPU Temp: 74C
> VRM GPU Temp: 103C
> VRM SOC Temp: 103C
> VRM MEM Temp: 88C
> 226W.


Set the power limit to +50%, then start undervolting from there.


----------



## candasulas

Alastair said:


> Set the power limit to +50%, then start undervolting from there.


Maximizing the power limit won't be a problem, will it?
I'll try to get the P6 and P7 values below 1000mV to reduce heat. I guess under 1000mV it won't be a problem.


----------



## Alastair

candasulas said:


> Maximizing the power limit won't be a problem, will it?
> I'll try to get the P6 and P7 values below 1000mV to reduce heat. I guess under 1000mV it won't be a problem.


Not as I understand it, as your card will be allowed to draw more power for a given voltage. Meaning you could lower volts a bit more before reaching instability, as the card has more room to draw power to compensate for the dropping volts. Most of the undervolt OCs I have seen in this thread have been with +50% power limits.


----------



## candasulas

Alastair said:


> Not as I understand it, as your card will be allowed to draw more power for a given voltage. Meaning you could lower volts a bit more before reaching instability, as the card has more room to draw power to compensate for the dropping volts. Most of the undervolt OCs I have seen in this thread have been with +50% power limits.


I understand. In order for the card to go down to lower mV values, the power it can draw from the PSU needs to be increased. I'm sharing a screenshot from a user undervolting on another site. Will you take a look? The power limit there is +20%, but P7 and P6 are low, and the system is stable.

Asus Strix Vega 64. The card is the same as mine, but when I set those values, my system crashes.


----------



## Alastair

candasulas said:


> I understand. In order for the card to go down to lower mV values, the power it can draw from the PSU needs to be increased. I'm sharing a screenshot from a user undervolting on another site. Will you take a look? The power limit there is +20%, but P7 and P6 are low, and the system is stable.
> 
> Asus Strix Vega 64. The card is the same as mine, but when I set those values, my system crashes.


That's the goal here, yes. However, I have never used Vega before, so I am merely going by what other users have posted. My Vega is still waiting for a water block before I can play with it.


----------



## candasulas

Alastair said:


> That's the goal here, yes. However, I have never used Vega before, so I am merely going by what other users have posted. My Vega is still waiting for a water block before I can play with it.


What happens if we pull the voltages down without a power limit?
In some forums they say the power limit will damage the card.


----------



## sinnedone

1. Lower sustained GPU clocks.
2. No such thing as the card being damaged from adding +50% power limit.



Also remember the silicon lottery.

Say card #1 can undervolt P7 at 1650MHz down to 1.000V; card #2 might only be able to do 1650MHz at 1.190V.

It's not an exact science and you need to run tests to make sure everything is stable.


----------



## Alastair

candasulas said:


> What happens if we pull the voltages down without a power limit?
> In some forums they say the power limit will damage the card.


Voltages down without increasing the power limit means your card would not be able to hit the same sustained clocks as a card that is free to pull more power.


----------



## candasulas

Alastair said:


> Voltages down without increasing the power limit means your card would not be able to hit the same sustained clocks as a card that is free to pull more power.


I'll write it out step by step:
1. I will set the power limit to +50%.
2. I will drop the P6 and P7 voltages.
3. The HBM memory voltage will be less than or equal to P6.
4. I will stop lowering P6 and P7 when the frequency starts to drop.
5. I will gradually reduce the power limit to the point where it does not affect the frequency.

The process is basically going to be that way, right?


----------



## candasulas

sinnedone said:


> 1. Lower sustained GPU clocks.
> 2. No such thing as the card being damaged from adding +50% power limit.
> 
> 
> 
> Also remember the silicon lottery.
> 
> Say card #1 can undervolt P7 at 1650MHz down to 1.000V; card #2 might only be able to do 1650MHz at 1.190V.
> 
> It's not an exact science and you need to run tests to make sure everything is stable.



I'll write it out step by step:
1. I will set the power limit to +50%.
2. I will drop the P6 and P7 voltages.
3. The HBM memory voltage will be less than or equal to P6.
4. I will stop lowering P6 and P7 when the frequency starts to drop.
5. I will gradually reduce the power limit to the point where it does not affect the frequency.

The process is basically going to be that way, right?
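For what it's worth, the iterate-down-until-unstable loop described in those steps can be sketched roughly like this. This is an illustrative sketch only, and `run_stability_test` is a hypothetical stand-in for a Valley or Tomb Raider benchmark pass, not a real API:

```python
def find_lowest_stable_mv(start_mv, step_mv, run_stability_test):
    """Walk the P-state voltage down in fixed steps and return the last
    value that still passed the stability test (illustrative sketch only)."""
    mv = start_mv
    while run_stability_test(mv - step_mv):
        mv -= step_mv
    return mv

# Illustration only: pretend this particular card happens to be
# stable down to exactly 1000 mV at its P7 clock.
lowest = find_lowest_stable_mv(1200, 10, lambda mv: mv >= 1000)
print(lowest)  # 1000
```

In practice you would raise the power limit first (step 1) so a power cap never masks a genuine voltage instability, then repeat this walk separately for P7, P6, and HBM.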


----------



## tolis626

candasulas said:


> I'll write it out step by step:
> 1. I will set the power limit to +50%.
> 2. I will drop the P6 and P7 voltages.
> 3. The HBM memory voltage will be less than or equal to P6.
> 4. I will stop lowering P6 and P7 when the frequency starts to drop.
> 5. I will gradually reduce the power limit to the point where it does not affect the frequency.
> 
> The process is basically going to be that way, right?


Why do you insist on lowering the power limit? Unless your PSU is insufficient, there's no other reason to do that IMO. +50% power limit just means that the card won't throttle due to power limitations that easily. It won't draw more power than spec, it won't overvolt, it won't do anything other than tell the card "Hey, just draw as much as you need". If we were talking about registry edits that would allow for +100% power limit, that's another story, but +50% is safe.


----------



## Alastair

candasulas said:


> I'll write it out step by step:
> 1. I will set the power limit to +50%.
> 2. I will drop the P6 and P7 voltages.
> 3. The HBM memory voltage will be less than or equal to P6.
> 4. I will stop lowering P6 and P7 when the frequency starts to drop.
> 
> The process is basically going to be that way, right?


Fixed it for you


----------



## 113802

tolis626 said:


> Why do you insist on lowering the power limit? Unless your PSU is insufficient, there's no other reason to do that IMO. +50% power limit just means that the card won't throttle due to power limitations that easily. It won't draw more power than spec, it won't overvolt, it won't do anything other than tell the card "Hey, just draw as much as you need". If we were talking about registry edits that would allow for +100% power limit, that's another story, but +50% is safe.


+50% power limit raises the power limit compared to the stock "Balanced" profile. +50% power limit will increase heat, which will thermally throttle unless the fan is set to around 80%. It depends whether they want the most performance or a mix with bearable fan acoustics.

Undervolted to 900mV, which doesn't require a PL increase since it's nowhere near the Balanced profile's 264W power limit.






Stock voltage with a +50% PL increase, with P7 bugged to run 100% of the time.






Balanced on air uses 220W
+50% power limit uses 330W

Balanced on liquid uses 264W
+50% power limit uses 396W
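Those +50% figures are just the stock board power multiplied by 1.5. A quick sanity check, using the wattages quoted above:

```python
# Stock "Balanced" board power limits (watts) quoted in the post above.
STOCK_LIMIT_W = {"air": 220, "liquid": 264}

def effective_limit(stock_w, offset_pct):
    """Board power limit after applying a WattMan power-limit offset (percent)."""
    return stock_w * (1 + offset_pct / 100)

print(effective_limit(STOCK_LIMIT_W["air"], 50))     # 330.0
print(effective_limit(STOCK_LIMIT_W["liquid"], 50))  # 396.0
```

The same formula explains why a +20% limit on the liquid bios still lands around the low 300s rather than near 400W.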


----------



## candasulas

I'm sharing my undervolt values.
These values are stable and smooth. (Asus Strix Vega 64)

GPU Frequency: 1630MHz set (runs between 1530-1575MHz)
HBM Frequency: 1100MHz (default 945MHz)
P7 Voltage: 1060mV
P6 Voltage: 1000mV
HBM Voltage: 1000mV
Power Limit: +20%
Full Load Power: 221-230W

Average GPU Temperature: 68-72C
GPU VRM Temp: 43C idle, 95C load
SOC VRM Temp: 43C idle, 95C load
Mem VRM Temp: 41C idle, 86C load

What do you think of these values?


----------



## BradleyW

Vega Strix 64 (VRM mod, aftermarket fans) 

HBM 1045mhz
P6/7 1662mhz
Vcore 1060mv
Floor v 1010mv


----------



## candasulas

BradleyW said:


> Vega Strix 64 (VRM mod, aftermarket fans)
> 
> HBM 1045mhz
> P6/7 1662mhz
> Vcore 1060mv
> Floor v 1010mv


Are your values like this?

P6: 1010mV
P7: 1060mV
HBM: 1010mV
P6: 1662MHz
P7: 1662MHz
HBM: 1045MHz

What percentage is your power limit?


----------



## PontiacGTX

Hey, I recently got an RX Vega 56 and benchmarked it against my R9 Fury, and the score isn't that different. I also notice the load clock isn't near P7. What do I need to change to get near a 1600MHz core clock in Superposition?


----------



## BradleyW

candasulas said:


> BradleyW said:
> 
> 
> 
> Vega Strix 64 (VRM mod, aftermarket fans)
> 
> HBM 1045mhz
> P6/7 1662mhz
> Vcore 1060mv
> Floor v 1010mv
> 
> 
> 
> Are your values like this?
> 
> P6: 1010mV
> P7: 1060mV
> HBM: 1010mV
> P6: 1662MHz
> P7: 1662MHz
> HBM: 1045MHz
> 
> What percentage is your power limit?

All of the above is correct except P6/7 use the same voltage value of 1060mv.

Power limit is always +50% when I make changes to a GPU.


----------



## Trender

candasulas said:


> I'm sharing my undervolt values.
> These values are stable and smooth. (Asus Strix Vega 64)
> 
> GPU Frequency: 1630MHz set (runs between 1530-1575MHz)
> HBM Frequency: 1100MHz (default 945MHz)
> P7 Voltage: 1060mV
> P6 Voltage: 1000mV
> HBM Voltage: 1000mV
> Power Limit: +20%
> Full Load Power: 221-230W
> 
> Average GPU Temperature: 68-72C
> GPU VRM Temp: 43C idle, 95C load
> SOC VRM Temp: 43C idle, 95C load
> Mem VRM Temp: 41C idle, 86C load
> 
> What do you think of these values?


Bro, I do P7 at 1000mV and 1597MHz, which actually runs at about 1550MHz.


----------



## Maracus

PontiacGTX said:


> Hey, I recently got an RX Vega 56 and benchmarked it against my R9 Fury, and the score isn't that different. I also notice the load clock isn't near P7. What do I need to change to get near a 1600MHz core clock in Superposition?


Undervolt and overclock it; your CPU may be holding you back also.


----------



## Maracus

PontiacGTX said:


> Hey, I recently got an RX Vega 56 and benchmarked it against my R9 Fury, and the score isn't that different. I also notice the load clock isn't near P7. What do I need to change to get near a 1600MHz core clock in Superposition?


Just did a run of Superposition at stock frequency and voltages and got a 9011 score at 1080p High settings, just so you can compare. With an undervolt and overclock I get 9692, so again it's probably your CPU holding you back.


----------



## candasulas

I've been undervolting my Asus Strix Vega 64 for a few days, but I haven't found stable values. I think I'm doing the process wrong, but I don't get why.

1. I enter the WattMan settings with the card on the Balanced profile.
2. I turn off the Chill feature.
3. I set the card to manual settings.
4. I set the power limit to +50%.
5. I drop the P7 value from 1200 to 1100.
6. I reduce the P6 value from 1150 to 1050.
7. I set the HBM memory voltage to the same as P6 (1050).
8. I then run the Valley test and the Tomb Raider benchmark.
9. Up to this point, no problem.
10. Then I continue to reduce the P7 and P6 values.
11. I bring the HBM memory voltage down to the same level as P6.
12. I reduce P6 and the HBM voltage to 1000mV. When I lower P7 to 1060mV, the card crashes and can't pass the test.

Can you help me?

I've been writing in forums for a few days but couldn't figure out this problem. I don't want to overclock my card. I just want to reduce the voltage and the temperature as much as possible.
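A small thing worth checking while juggling numbers like these is that the candidate values keep the orderings the posts in this thread follow (P7 voltage at or above P6, HBM voltage at or below P6). A tiny validator can catch a mistyped combination before a benchmark run; the checks here are illustrative, not an official rule set:

```python
def validate_undervolt(p6_mv, p7_mv, hbm_mv):
    """Check the orderings used in this thread: P7 >= P6 and HBM <= P6.
    Returns a list of complaints (an empty list means the combo is plausible)."""
    problems = []
    if p7_mv < p6_mv:
        problems.append("P7 voltage must not be below P6")
    if hbm_mv > p6_mv:
        problems.append("HBM voltage should be <= P6")
    return problems

print(validate_undervolt(1000, 1060, 1000))  # [] -- the combo used earlier in the thread
print(validate_undervolt(1050, 1000, 1100))  # two complaints
```

Passing the validator only means the numbers are self-consistent; whether the card is stable there is still down to the silicon lottery and actual testing.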


----------



## PontiacGTX

Maracus said:


> Just did a run of Superposition at stock frequency and voltages and got a 9011 score at 1080p High settings, just so you can compare. With an undervolt and overclock I get 9692, so again it's probably your CPU holding you back.


It isn't the CPU, it's the thermal throttling of the card. It reaches a certain temp and the clock speed throttles down to 950-1050MHz.


----------



## BTViolence

PontiacGTX said:


> Hey, I recently got an RX Vega 56 and benchmarked it against my R9 Fury, and the score isn't that different. I also notice the load clock isn't near P7. What do I need to change to get near a 1600MHz core clock in Superposition?



Crank the fans and see if it still happens and then work down from there.


----------



## Alastair

Is this a batch number on our Vegas? Week 17 of year '18? Do other Vega owners also have such markings on their GPUs?


----------



## Leons

PontiacGTX said:


> Hey, I recently got an RX Vega 56 and benchmarked it against my R9 Fury, and the score isn't that different. I also notice the load clock isn't near P7. What do I need to change to get near a 1600MHz core clock in Superposition?



Hello
Just for comparison, it's not a race.
My Vega 56 ref. (1500MHz actual)

(Google translator)


----------



## PontiacGTX

BTViolence said:


> Crank the fans and see if it still happens and then work down from there.


The fans were at 4800RPM and the score didn't improve at all.




Leons said:


> Hello
> Just for comparison, it's not a race.
> My Vega 56 ref. (1500MHz actual)
> 
> (Google translator)




Let me try your settings.

Well, I tried the same settings and it was the same. I think it must be my temperature.


----------



## Ne01 OnnA

Neon Noir by Crytek (this demo was rendered in real time using a single RX Vega 56)

Crytek has released a new video demonstrating the results of a CRYENGINE research and development project.
Neon Noir shows how real-time mesh ray-traced reflections and refractions can deliver highly realistic visuals for games.

According to the press release, the Neon Noir demo was created with the new advanced version of CRYENGINE’s Total Illumination showcasing real time ray tracing.
This feature will be added to CRYENGINE release roadmap in 2019, enabling developers around the world to build more immersive scenes, more easily, with a production-ready version of the feature.

Neon Noir follows the journey of a police drone investigating a crime scene.
As the drone descends into the streets of a futuristic city, illuminated by neon lights, we see its reflection accurately displayed in the windows it passes by,
or scattered across the shards of a broken mirror while it emits a red and blue lighting routine that will bounce off the different surfaces
utilizing CRYENGINE’s advanced Total Illumination feature. Demonstrating further how ray tracing can deliver a lifelike environment,
neon lights are reflected in the puddles below them, street lights flicker on wet surfaces, and windows reflect the scene opposite them accurately.

Neon Noir was developed on a bespoke version of CRYENGINE 5.5, and the experimental ray tracing feature based on CRYENGINE’s Total Illumination used to create the demo is both API and hardware agnostic,
enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this
new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

The experimental ray tracing feature simplifies and automates the rendering and content creation process to ensure that animated objects and changes in lighting are correctly reflected with a high level of detail in real-time.
This eliminates the known limitation of pre-baked cube maps and local screen space reflections when creating smooth surfaces like mirrors, and allows developers to create more realistic, consistent scenes.
To showcase the benefits of real time ray tracing, screen space reflections were not used in this demo.

PS.
IMHO the ACEs are the key here  (& compute is the magic word, because ray tracing is a form of compute).
So we can have the dedicated normal GPU workload + asynchronous ray-trace compute at no cost.
We have ACEs.


----------



## 113802

Ne01 OnnA said:


> Neon Noir by Crytek (This Demo was rendered by RX Vega 56)
> 
> Crytek has released a new video demonstrating the results of a CRYENGINE research and development project.
> Neon Noir shows how real-time mesh ray-traced reflections and refractions can deliver highly realistic visuals for games.
> 
> According to the press release, the Neon Noir demo was created with the new advanced version of CRYENGINE’s Total Illumination showcasing real time ray tracing.
> This feature will be added to CRYENGINE release roadmap in 2019, enabling developers around the world to build more immersive scenes, more easily, with a production-ready version of the feature.
> 
> Neon Noir follows the journey of a police drone investigating a crime scene.
> As the drone descends into the streets of a futuristic city, illuminated by neon lights, we see its reflection accurately displayed in the windows it passes by,
> or scattered across the shards of a broken mirror while it emits a red and blue lighting routine that will bounce off the different surfaces
> utilizing CRYENGINE’s advanced Total Illumination feature. Demonstrating further how ray tracing can deliver a lifelike environment,
> neon lights are reflected in the puddles below them, street lights flicker on wet surfaces, and windows reflect the scene opposite them accurately.
> 
> Neon Noir was developed on a bespoke version of CRYENGINE 5.5., and the experimental ray tracing feature based on CRYENGINE’s Total Illumination used to create the demo is both API and hardware agnostic,
> enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. *However, the future integration of this
> new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards* and supported APIs like Vulkan and DX12.
> 
> The experimental ray tracing feature simplifies and automates the rendering and content creation process to ensure that animated objects and changes in lighting are correctly reflected with a high level of detail in real-time.
> This eliminates the known limitation of pre-baked cube maps and local screen space reflections when creating smooth surfaces like mirrors, and allows developers to create more realistic, consistent scenes.
> To showcase the benefits of real time ray tracing, screen space reflections were not used in this demo.
> 
> https://youtu.be/1nqhkDm2_Tw


Can't wait to see the performance increase when RT cores are used.


----------



## BradleyW

Just shows you how good AMD is at ray tracing on current hardware.


----------



## Alastair

So we get ray tracing on Radeon?


----------



## Ipak

Mine reads 1728.

BTW, that's the reason my temps were bad: stock EK paste after 6 months. Replaced it with Thermal Grizzly Hydronaut and got over a 10C improvement, and the card now never throttles (the hotspot temps were probably bad; dunno, can't see them after some driver updates).


----------



## VicsPC

Ipak said:


> Mine reads 1728.
> 
> BTW, that's the reason my temps were bad: stock EK paste after 6 months. Replaced it with Thermal Grizzly Hydronaut and got over a 10C improvement, and the card now never throttles (the hotspot temps were probably bad; dunno, can't see them after some driver updates).


Jesus, that's runny, lol. I love Kryonaut; I didn't have luck with Hydronaut. Bought some and it was so thick it wouldn't spread right. Could be mine was just dry, though.


----------



## cg4200

So I am out of the loop with my old Vega 56 cards.
I have two that are flashed with the XFX 64 bios; the cards work fine, no problems.
Here is a pic. Game stable, one will do up to 1775 on core, the other only 1720 or so; both at least 1050 stable.
Question is, I had my best (third) card flashed with the XFX liquid bios. It ran great and would game at 1750/1200 back 8 months ago.
I am getting ready to sell them, so I am testing and taking off the waterblocks.
But in game this card would skip every 20 seconds or so, and I could see the core drop from 1750 to 1200-1400 and come back up. At the same time the SOC temp would go from 32 degrees to 0 and back to 32, at what looks like the same time as the game skips (using GPU-Z).
I flashed to the regular XFX 64 bios and the game runs fine. Is there a software problem with running liquid-cooled bioses now, or is my card just tweaking? Thanks.
Anyone still running 56 cards with the liquid-cooled bios with no issues? Thanks!


----------



## PontiacGTX

OK, I found that the only compatible cooler is the Raijintek Morpheus.


----------



## Xinoxide

PontiacGTX said:


> OK, I found that the only compatible cooler is the Raijintek Morpheus.


I strapped a $15 open-box Asetek cooler on mine. eBay surplus from... Origin PC, I think.

It's definitely nice to be able to OC the card instead of just undervolting it.



cg4200 said:


> So I am out of the loop with my old vega 56 cards.
> I have 2 that are flashed with 64 xfx bios .. cards work fine no problems..
> here is pic game stable one will do up to 1775 on core other one only 1720 or so both at least 1050 stable..
> Question is I had my best card third card flashed with xfx liquid bios ran great would game 1750 1200 back 8 months ago..
> I am getting ready to sell them so I am testing and taking off waterblocks..
> But this card when in game every 20 seconds or so would like skip and I could see core drop and come back up from 1750 to 1200-1400 back up same time soc temp would go from 32 degrees to 0 back to 32 looks like same time as game skips..using gpu-z
> I flashes to reg xfx 64 bios and the game runs fine is there a software problem with running liquid cooled cards now or is my card just tweaking.... Thanks
> anyone still running 56 cards with liquid cooled bios with no issues thanks..???


My Vega 64 does the same thing with any LC bios.
No idea why, but it performs quite fine on the stock bios with the 1200mV limit and a PPTable mod.


----------



## cplifj

WannaBeOCer said:


> Can't wait to see the performance increase when RT cores are used.


That's just it, this does not need RT cores at all; they're kinda obsolete before they were ever really a thing.


----------



## Alastair

Ipak said:


> Mine reads 1728.
> 
> BTW. that's the reason my temps were bad, stock EK paste after 6 months, replaced with thermal hydronaut and got over 10'C impovment, and card now never throttle (probably hotspot temps were bad, dunno can't see them after some driver updates )


Oh OK, so not batch numbers then. Was just curious. Nice to see your temps are good again. I'm putting on Cooler Master MasterGel Maker (what a bloody mouthful), which as I understand it is about as good as Kryonaut.


----------



## PontiacGTX

Xinoxide said:


> I strapped a $15 open box asetek cooler on mine. Ebay surplus from... Origin PC I think.
> 
> Its definitely nice to be able to OC the card instead of just under-volt it.
> 
> 
> 
> My Vega 64 Does the same thing with any LC Bios.
> No idea why, but it performs quite fine on stock bios with the 1200mv limit and a PPTable mod.


OK, can you tell me what kind of controller/driver and splitter you use for controlling the AIO pump and the fan speed, and what heatsinks I could use for the VRM?


----------



## Xinoxide

PontiacGTX said:


> OK, can you tell me what kind of controller/driver and splitter you use for controlling the AIO pump and the fan speed, and what heatsinks I could use for the VRM?


Pulled up my order history on the fleabay.

There's 10mm and change between the first ceramic cap and the choke on the VRM, and 22mm and change available for the side-by-side high- and low-side regs, so these sinks fit just right.

You will need to replace the crap pads on the little 10x10 heatsinks with something that actually works, though. They hold fine, but the VRM temp improvements with the stock pads are tiny.

I just keep the pump at full speed. It's the Delta AFB1212FHE I have to worry about controlling.


----------



## 113802

cplifj said:


> That's just it, this does not need RT cores at all, kinda obsolete before it ever were a thing really.


Guess you missed the part I balded, it was impressive seeing the RX Vega 56 run that demo at 4k 30 FPS but there isn't any action going on in those scenes. We've seen the performance up lift of using RT cores in the Port Royal demo when comparing any RTX card vs a Volta GPU. AMD will add their own cores dedicated for RT. RT cores increase ray/path tracing performance dramatically, we've seen it in games along with rendering programs that utilize RT cores. Just like how both AMD and nVidia have Tessellation Units for tessellation. I'm sure AMD learned from the 5000/6000 series since the GTX 480 outperformed both top end cards from both generations. 



> However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards


Three years ago we had an impressive demo from PowerVR mobile GPUs.


----------



## PontiacGTX

Xinoxide said:


> Pulled up my order history on the fleabay.
> 
> Theres 10mm and change between the first ceramic cap and choke on the VRM, 22mm and change available for the side by side high and low side regs. So these Sinks fit just right.
> 
> You will need to replace the crap pads on the little 10x10 heatsinks with something that actually works though. They hold fine but VRM temp improvements with the pads are ... tiny.
> 
> I just keep the pump at full speed. Its the Delta AFB1212FHE I have to worry about controlling.


So you just connect the AIO pump to the PSU and it stays at 100%? Or do you use the mobo to set the pump speed? The thing is, I don't have any fan headers left on the motherboard (I think?), and the fans I have are kinda noisy, so I would need some kind of fan controller. I could use the case's fan controller, but the cables from my Koolance 12025HBK aren't long enough. Also, is there some risk that the AIO stops working or springs a leak? I will try to see if I can't find a better option, then do what you just mentioned (the Morpheus II takes 5 slots with fans...).


----------



## Xinoxide

PontiacGTX said:


> so you just connect the AIO pump to the PSU and it stays at 100%? or you use the mobo to set the pump speed? the thing is I dont have any fan header left in the motherboard(I think?) and the fans I have are kinda noisy I would need some kind of fan controller I could use the case's fan controller but the cables from my Koolance 12025HBK arent long enough, also there is some risk that the AIO stop working or that have some leak ? I will try to see if I cant find a better option then do what you just mentioned(the morpheus II takes 5slot with fans..)


You can slap the radiator on an existing exhaust fan, but yes, my pump is just powered, not controlled.

I'll try to get an image of my setup next time I pull the card out. I have a thermal epoxy on the way.

I initially tested with just sticking them on, and then with standard thermal paste. Thermal paste seems to hold them on okay, but I bumped my desk last night and one of the heatsinks fell off.


----------



## PontiacGTX

Just wondering, how are you supposed to mount the waterblock on the card? You didn't use zip ties; what kind of bracket/screws did you use?


----------



## Xinoxide

PontiacGTX said:


> Just wondering, how are you supposed to mount the waterblock on the card? You didn't use zip ties; what kind of bracket/screws did you use?


I did skip that detail. I used 6-32 screws and the Intel bracket from my H100i v2, drilled new holes to match the pattern on the Vega, and bolted it up. I also trimmed the ends off the bracket.

I won't get better images until its next cleaning/repasting, possibly with an upgrade of the VRM heatsinks. These are great and all, but I want to be able to sustain higher voltages for synthetic workloads.

Zip ties work okay-ish, but I had issues with hotspot temps both times I mounted with zip ties for initial tests, so I won't recommend it.


----------



## VicsPC

New drivers; about to test on Siege. Ubisoft had to pass the buck to AMD to fix it, and AMD fixed it within 2 driver releases of it becoming a known issue. Well done.

https://www.amd.com/en/support/grap...eries/radeon-rx-vega-series/radeon-rx-vega-64


----------



## tolis626

VicsPC said:


> New drivers; about to test on Siege. Ubisoft had to pass the buck to AMD to fix it, and AMD fixed it within 2 driver releases of it becoming a known issue. Well done.
> 
> https://www.amd.com/en/support/grap...eries/radeon-rx-vega-series/radeon-rx-vega-64


"Pass the Buck". He he he... If that was intentional, GG. 

Just out of curiosity mate, what kind of performance are you getting in Siege? What resolution and graphics settings are you using? Thanks in advance!


----------



## VicsPC

tolis626 said:


> "Pass the Buck". He he he... If that was intentional, GG.
> 
> Just out of curiosity mate, what kind of performance are you getting in Siege? What resolution and graphics settings are you using? Thanks in advance!


Yeah, not too bad, huh? Ubi had an update that broke Siege for Vega owners, then didn't wanna fix it. I'd have to check my settings, but I'm running it pretty high up and get anywhere between 100-160FPS depending on the map. I think I even run FXAA or something.


----------



## tolis626

VicsPC said:


> Yea not too bad huh, Ubi had an update that broke Siege for VEGA owners then didn't wanna fix it. I'd have to check my settings but I'm running it pretty high up and get anywhere between 100-160fps depending on the map. I think i even run fxaa or something.


Ah, so the problem was Vega-specific. No wonder I never noticed anything on my 390X.

Well, I'd really appreciate it if you could check and report back. Right now I'm pretty much exclusively playing Siege and the 390x is begging for any mercy it can find. Right now, to maintain >100FPS I use a weird mix of settings. So, resolution is 1440p but scaling is 75%, everything is on low/off except for textures that are on ultra (no performance impact and I do have 8GB of VRAM, so meh, might as well have it look good), shadows, shading and LOD are on medium. That way I get about the same FPS as you. Usually 100-120FPS in maps and up to 150-160 outside. My other set of settings is everything on ultra, balls to the wall but resolution scaling on 50% and I get like 80-110 FPS on practically 1080p. That's still good, but might as well try to use most of these 144Hz I paid for. Vega would probably be more than adequate for this game on 1440p, even on ultra...


----------



## MrPerforations

hello forum,
so I got a pair of MSI Radeon Vega 56 Air Boost OC editions in the sale. Yes, I should have just bought a Radeon VII, but it's a bit too late now.
Main question is: they look like reference cards, but are they please?
I'm thinking of water blocking the pair with those cheap Bykski blocks from AliExpress and hoping they fit; it will be my first water-cooled GPU so far.
What's the heads up on thermal paste with Vega GPUs please?
I read that liquid metal could mess up the stuff around the die and wondered if it's true and what would be best to use.
I have XSPC K3, liquid metal copper and some Arctic Silver. The kits should come with some paste, but no idea what that will be.
The clocks on the MSI OCs go up to 1620MHz at stock, which looks good, but they're locked at 75°C and I can't change the max temp with Wattman.


----------



## VicsPC

tolis626 said:


> Ah, so the problem was for Vega. No wonder I never noticed anything on my 390x.
> 
> Well, I'd really appreciate it if you could check and report back. Right now I'm pretty much exclusively playing Siege and the 390x is begging for any mercy it can find. Right now, to maintain >100FPS I use a weird mix of settings. So, resolution is 1440p but scaling is 75%, everything is on low/off except for textures that are on ultra (no performance impact and I do have 8GB of VRAM, so meh, might as well have it look good), shadows, shading and LOD are on medium. That way I get about the same FPS as you. Usually 100-120FPS in maps and up to 150-160 outside. My other set of settings is everything on ultra, balls to the wall but resolution scaling on 50% and I get like 80-110 FPS on practically 1080p. That's still good, but might as well try to use most of these 144Hz I paid for. Vega would probably be more than adequate for this game on 1440p, even on ultra...


Yea, it was for all Vega owners; even the APUs had the issue.

I play on ultrawide (2560x1080), but Siege is VERY CPU demanding, so it's possible that's what's holding you back, although a 390X might not be enough for 144Hz at 1440p. The only thing I'd keep on high would be the shadows; they're pretty handy. PMed you a screenshot of my settings.


----------



## tolis626

VicsPC said:


> Yea was for all VEGA owners even the APUs had the issue.
> 
> I play on ultrawide (2560x1080) but Siege is VERY CPU demanding, its possible thats whats holding u back although a 390x might not be enough for 144hz at 1440p. Only thing id keep on high would be the shadows its pretty handy. PMed you a screenshot of my settings.


First off, thanks a lot!

Well, I think that the only difference between medium and high for shadows is the resolution of the shadows. Low disables dynamic shadows completely and could be a disadvantage when playing against other players.

Is it so CPU demanding that a 4.7GHz 4790k would bottleneck a 390x? I don't think so. Lowering my settings does give me higher FPS in a pretty predictable manner. I don't know if I could keep a steady 144Hz with a better GPU, but I don't think that >110FPS min would be unreasonable in most circumstances. Oh well... It IS an NVidia title...

PS : Just for my sanity, I did a quick search online and indeed, Siege isn't THAT demanding on the CPU. Basically it's my GPU that's the bottleneck, sadly. Oh well...


----------



## VicsPC

tolis626 said:


> First off, thanks a lot!
> 
> Well, I think that the only difference between medium and high for shadows is the resolution of the shadows. Low disables dynamic shadows completely and could be a disadvantage when playing against other players.
> 
> Is it so CPU demanding that a 4.7GHz 4790k would bottleneck a 390x? I don't think so. Lowering my settings does give me higher FPS in a pretty predictable manner. I don't know if I could keep a steady 144Hz with a better GPU, but I don't think that >110FPS min would be unreasonable in most circumstances. Oh well... It IS an NVidia title...
> 
> PS : Just for my sanity, I did a quick search online and indeed, Siege isn't THAT demanding on the CPU. Basically it's my GPU that's the bottleneck, sadly. Oh well...


That's interesting, cuz I hit 60% on my 2700X without any problems lol; a friend of mine with his 1600X hits 100%. I played it with my 4690 and R9 390 but don't remember at all what I was getting for FPS. It was definitely nowhere near what I'm getting now; I was probably around the 80s/90s.


----------



## tolis626

VicsPC said:


> That's interesting, cuz i hit 60% on my 2700x without any problems lol, a friend of mine with his 1600x hits 100%. I played it with my 4690 and r9 390 but don't remember at all what i was getting for fps but was def no where near what I'm getting now, i was probably around the 80s/90s.


As I said via PM, with a weird mix of low and ultra settings (basically, only ultra textures, medium LOD and shadows and everything else low) and 75% scaling, I manage to get 110-130FPS average on most maps. CPU doesn't seem to really be a factor. It will get stressed every now and then, but it doesn't seem to correlate with frame dips. Like, I get low fps sometimes on Hereford base, but during the time I get low fps (80-100) my CPU is practically doing nothing, so it's a GPU thing.

Mind you, my CPU is at 4.8GHz and my GPU is at 1160/1600MHz. The 4790k may be old, but it's no slouch at these speeds.


----------



## DDSZ

I recently bought a Strix V56, and there is a problem with it: as soon as I disable CSM in the motherboard BIOS and connect the monitor using DisplayPort, the monitor starts flashing (turning on and off every second), and it keeps happening until Windows starts up (driver loads?).
The same cable & monitor work just fine with my old R9 290 with CSM disabled.
Any ideas how to get that fixed?


----------



## faizreds

Joining the vega family with my Sapphire Vega56 Nitro+.


----------



## MrPerforations

hello,
got me two in the sale and just got two Bykski waterblocks for them, £130 for both.
But I'm not impressed with them performance-wise, as I went 4K and I already need better stuff, which is a stupid price.
I had a CrossFire setup and then I bought a CrossFire setup that was twice as powerful, but most games use only one card, so I got the same power.


----------



## RAZZTA01

*Vega 64 Nitro+ put under water!*

Hi, just wanted to share my current build. I finally put my Vega 64 under water. I also put a new rad in the build (external rad): a MO-RA3 with 4x 180mm fans @700rpm. Noise problems disappeared!! Very important to me.
Also, cooling performance improved a lot.
I'm uploading some pics. The one with the benchmark holds while playing games as well.
I am starting to tweak it now. Anyone on water who could share their max setup in Wattman?
Have a nice day!


----------



## sinnedone

Your GPU hotspot is hitting 87°C. Something is not right there if you are water cooling that card.
I would remount it, checking the thermal compound spread for high/low spots.


----------



## VicsPC

sinnedone said:


> Your GPU Hotspot is hitting 87C. Something is not right there if you are water cooling that card.
> I would re mount checking thermal compound spread or high/low spots.


It's because there doesn't seem to be much case airflow; we've been over this with other people who have high hotspot temps and try to repaste, and it doesn't help. Mine is mounted the same way as his, with 3x 140mm intake fans above the card, and my hotspot doesn't go past 70°C, usually staying in the mid 60s. I would love to know a definitive answer on where the hotspot measurement actually is, but I've yet to see one.


----------



## Alastair

VicsPC said:


> It's because they're doesnt seem to be much case flow, we've been over this with other people who have high hotspot temps and try to repaste, it doesn't help. Mine mounted the same way he has with 3 140mm intake fans above the card my hotspot doesnt go past 70°C, usually stay in the mid 60s. I would love to know a definitive answer on where the hotspot measurement actually is but I've yet to see one.


Hotspot temp is the hottest part of the GPU die. This has been documented extensively in Radeon VII reviews.


----------



## VicsPC

Alastair said:


> Hotspot temp is the hottest part on the GPU die. This has been documented extensively on the Radeon 7 and reviews on the R7


On the die or behind the die? We don't have Radeon VIIs, though, we have 64s. AMD themselves at one point told us not to worry about hotspot temps. Don't forget that it seems like his chip is drawing 315W; mine only draws 215W, so that could be the increase in hotspot temps. My temps are around 42-46°C core/HBM respectively, and my hotspot has never been anywhere near 90°C with a 240/360 in push/pull. I have 3 intake fans at the top of my case that blow ambient air right into and around the GPU. It does seem quite hot at 80°C, especially while running super, when his core and HBM are in the 30s. I'm starting to doubt the hotspot temp is on the die; maybe behind the die on the PCB, but with low temps like that, no way.


----------



## VicsPC

Actually, here you go for those thinking reseating will do anything. Scroll down to the thermals; those are your hotspot temps. https://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-22.html










Here's mine after 15 mins of Frostpunk; I left the camera at the spot that peaks my core and HBM, ~1620/1050. Left is idle, right was after 15 mins, ambient around 23-24°C.


----------



## MrPerforations

hello,
so I ordered cheap water blocks for the MSI™ Radeon™ Vega 56 Air Boost OC graphics cards. I have been fiddling with Wattman and Afterburner and I can't get anything worthwhile out of them. I watched some YouTube videos but can't find how to destroy the cards in a way the manufacturers would not know.
From what I can tell, you just add water blocks and then they work better, and you can't really do anything else to improve them; is that the case please?
Do I have to try to overclock them please? Is adding the 50% power offset all you do, and then wait and pray for them to die please?
How long can I expect them to live? Will it be within the warranty period please?


----------



## Alastair

VicsPC said:


> On the die or behind the die. We don't have Radeon VIIs though we have 64s. AMD themselves at one point told us not to worry about hotspot temps. Don't forget that it seems like his chip is drawing 315w, mine only draws 215w so could be the increase in hotspot temps. My temps are around 42-46°C core/hbm respectively, my hotspot has never ever been anywhere near 90°C with a 240/360 in push/pull. I have 3 intake fans at the top of my case that blow ambient air into the case right into and around the gpu. Does seem quite hot at 80°C especially while running super when his core and hbm are in the 30s. I'm starting to doubt hotspot temp is on the die, maybe behind the die on the pcb but with low temps like that no way.


Radeon VII is Vega, just shrunk. So most likely it's the exact same.


----------



## VicsPC

Alastair said:


> Radeon 7 is Vega. Just shrunk. So most likely rather than no its the exact same.


The pic I posted shows exactly where it's over 90°C, which is what most people have for hotspot temps. I even have the same case he does, so I know for a fact more fans can be mounted. I installed my EKWB block without using the mounting clips from the AMD cooler; I went by the instructions and used Kryonaut.


----------



## BTViolence

MrPerforations said:


> hello,
> so I ordered cheap water blocks for the MSI™ Radeon™ Vega 56 Air boost oc graphics cards. I have been fiddling with wattman and afterburner and I cant get nothing out of it worth while. I watched some you tube videos but cant find how to destroy the cards in a way that the manufactures would not know.
> from what I can tell, you just add water blocks and then they work better and you cant do nothing really to improve them, is that the case please?
> do I have to try to overclock them please?, is just adding the 50% power off-set all you do and them wait and prey for them to die please?
> how long can I expect them to live?, will it be in the warrantied period please?



You probably need to flash a new BIOS and do PowerPlay tables in addition to the power limit increase. Pumping the extra power into them will most definitely decrease their life. I've run extreme OCs for a couple of years and the cards still kept on chugging without issues. As long as your PSU can handle the wattage, it should generally be okay if your loop keeps the cards cool.


----------



## Xinoxide

BTViolence said:


> You probably need to flash a new BIOs and do power play tables in addition to the power limit increase. Pumping the extra power into them will most definitely decrease their life. I've ran extreme OCs for a couple of years and the cards still kept on chugging without issues. As long as your PSU can handle the wattage it should generally be okay if your loop keeps the cards cool.


The cards seem pretty resilient. I have had mine with a CLC pushing ~440 watts as measured by GPU-Z, at 1840/1170 with 1200mV, for a few weeks now.

Just working on finding a way to blow some air over my VRM sinks before I decide to do my gaming in this manner.

Div2 is pushing my VRM temps to 115°C without good airflow. With my 890mV undervolt, though, the VRM doesn't go over 60°C, ~80°C without heatsinks.


----------



## MrPerforations

BTViolence said:


> You probably need to flash a new BIOs and do power play tables in addition to the power limit increase. Pumping the extra power into them will most definitely decrease their life. I've ran extreme OCs for a couple of years and the cards still kept on chugging without issues. As long as your PSU can handle the wattage it should generally be okay if your loop keeps the cards cool.


thanks for the info. I found that the auto overclocking system had a very good try at killing them, but it appears AMD™ have failed the customer again in this area.
The BIOS idea is good, but unlocking that voltage without editing the BIOS is what customers really need.



Xinoxide said:


> The cards seem pretty resilient. I have had mine with a CLC pushing 440~ watts measured by GPUz at 1840/1170 with 1200mv for a few weeks now.
> 
> Just working on finding a way to blow some air over my VRM sinks before I decide to go my gaming in this manner.
> 
> Div2 is pushing my VRM temps to 115C without good airflow. with my 890mv undervolt though the VRM doesnt go over 60c, 80~ without heatsinks.


it's sad to hear that you didn't win the silicon lottery when it came to your AMD™ Vega™ device; I hope your device fails you soon.
all the best,
Mr Perforations.

p.s. I just managed to overclock my AMD™ MSI™ Radeon™ Vega™ 56 Crossfire™...


----------



## Naeem

I have a dream that one day someone will open his liquid-cooled Vega reference cooler and upload the pictures here. I want to see the cooler and its contact with the chip.


----------



## jearly410

Naeem said:


> i have a dream , that one day someone will open his vega liquid cooled refrance cooler and upload the pictures here , i want to see the cooler and it's contact with chip


You know, I’ve been thinking about opening mine...


----------



## VicsPC

Naeem said:


> i have a dream , that one day someone will open his vega liquid cooled refrance cooler and upload the pictures here , i want to see the cooler and it's contact with chip


Didn't reviewers do that when the card came out? I bet Nexus has done it; they do it to every card. I remember seeing lots of pics online of the LC edition being taken apart, unless I'm thinking of the Fury.


----------



## 113802

Naeem said:


> i have a dream , that one day someone will open his vega liquid cooled refrance cooler and upload the pictures here , i want to see the cooler and it's contact with chip


I posted it a while back when you asked.


----------



## nolive721

I'm running a 1080 Ti but would still like to give AMD a chance. I have a 1080 Hybrid that I kept in case the Ti dies on me. It's a pretty good overclocker on core and memory, and is of course really quiet and not so power hungry.

Since I have a triple-screen setup that is 100% FreeSync, I am thinking of selling the 1080 Hybrid to buy a Vega 64 LC.

Of course, asking in this thread might be biased, but are the two cards comparable from a performance point of view (read: FPS in games, both DX11 and DX12) now that AMD drivers have matured?


----------



## Naeem

nolive721 said:


> running a 1080Ti but still would like to give AMD a chance. I have a 1080Hybrid that I kept in case the Ti dies on me.Its a pretty good overclocker on core, memory and is of course really quiet and not so power hungry
> 
> since I have a triple screen set-up 100% Freesync, I am thinking to sell the 1080Hybrid to buy a VEGA 64 LC.
> 
> Of course asking in this thread might be biased but are the 2 cards comparable from a Performance point of view (read FPS in Games being DX11 and DX12) now that AMD drivers have matured?




The Vega 64 LC is a good card. I've been running it for more than a year and a half, and the drivers are good.


----------



## Ne01 OnnA

nolive721 said:


> running a 1080Ti but still would like to give AMD a chance. I have a 1080Hybrid that I kept in case the Ti dies on me.Its a pretty good overclocker on core, memory and is of course really quiet and not so power hungry
> 
> since I have a triple screen set-up 100% Freesync, I am thinking to sell the 1080Hybrid to buy a VEGA 64 LC.
> 
> Of course asking in this thread might be biased but are the 2 cards comparable from a Performance point of view (read FPS in Games being DX11 and DX12) now that AMD drivers have matured?


Looking at averages and 0.1% & 0.01% lows, the Vega XTX is a very good GPU.

Here's the site that has it
-> https://gamegpu.com/

If you wanna play games at the FreeSync cap, then you should be happy.

Read some tests on the site, then decide.
Also, UV + HBM OC and HBCC can add more to this.


==
Here's the latest Hitman 2 in DX12


----------



## sinnedone

nolive721 said:


> running a 1080Ti but still would like to give AMD a chance. I have a 1080Hybrid that I kept in case the Ti dies on me.Its a pretty good overclocker on core, memory and is of course really quiet and not so power hungry
> 
> since I have a triple screen set-up 100% Freesync, I am thinking to sell the 1080Hybrid to buy a VEGA 64 LC.
> 
> Of course asking in this thread might be biased but are the 2 cards comparable from a Performance point of view (read FPS in Games being DX11 and DX12) now that AMD drivers have matured?


Check what games you want to play. Generally, in DX12 games the Vega 64 LC does pretty well and usually falls between the 1080 and 1080 Ti in performance; in a few games it reaches 1080 Ti levels or more. DX11 games can be all over the place, from below 1080 (almost 1070) levels to almost 1080 Ti.


----------



## MrPerforations

hello's,
What I would like to know is: what is the best water-cooled AMD™ Radeon™ RX Vega 56 overclock you have seen, please?

The MSI™ Radeon™ RX Vega 56 cards I have use Hynix™ HBM, and I can't use an AMD™ Radeon™ RX Vega 64 BIOS as those only use Samsung™ HBM, but the cards are clocked at 1620MHz, which is the highest-clocked RX Vega 56 anyway.

I hope that cooling the cards will bring me a good overclock, as I have the Bykski™ Vega waterblocks coming in the next few days. I have new red tubing and compression fittings instead of barbs, plus another Cooler Master™ Blade Master fan to make it six of them on my two EX360 radiators, along with a set of PWM 1-to-4 headers. I got some Cooler Master™ MasterGel Maker diamond paste, as the XSPC™ Raystorm block I have is copper and I am unsure if liquid metal is a good idea. This is a lot of money to spend on the chance that it might improve the clocks of my two MSI™ Radeon™ RX Vega 56s.

I hope the rebuild of my system I have in mind will look and work much better, and I might even take some photos to post in another thread when completed. The fan headers are going to take a month to get here, so I have a wait yet.

Best regards,
MrPerforations


----------



## nolive721

nolive721 said:


> running a 1080Ti but still would like to give AMD a chance. I have a 1080Hybrid that I kept in case the Ti dies on me.Its a pretty good overclocker on core, memory and is of course really quiet and not so power hungry
> 
> since I have a triple screen set-up 100% Freesync, I am thinking to sell the 1080Hybrid to buy a VEGA 64 LC.
> 
> Of course asking in this thread might be biased but are the 2 cards comparable from a Performance point of view (read FPS in Games being DX11 and DX12) now that AMD drivers have matured?


Decided to buy the Vega; can't wait for this baby to arrive.

What do you recommend for playing with clocks on core and memory: Wattman or OverdriveNTool? Is the latter still working on the latest Adrenalin drivers? I have seen some issues reported by a few people.

thanks

Olivier


----------



## Ne01 OnnA

nolive721 said:


> decided to buy the VEGA, cant wait for this baby to arrive
> 
> What do you recommend to play with clocks on core and memory? Wattman or overdriventool? is the latter still working on the latest adrenalin drivers I have seen some issues reported by few people?
> 
> thanks
> 
> Olivier


Try to be patient 

Core: 1600MHz and upwards; if you can, stabilize it at 1650-1692MHz @ 1075mV-->1150mV.
HBM2 (set HBCC to min. 12GB in Adrenalin): 1050MHz up to 1125MHz @ 950mV-->1000mV.

Set power to 0%, up to 12-20-25% or more if needed.
Set Custom in Wattman.

Use -> OverdriveNTool 0.2.8 Beta 11 + Wattman

Here:
https://www.dropbox.com/s/f3j6m0ca7icyzdd/OverdriveNTool 0.2.8beta11.7z?dl=1


----------



## nolive721

I am with you about being patient with AMD cards.

I had an RX 480 when it launched 2 years ago and enjoyed my Polaris editor a lot, getting a hell of a performance out of it; it was a really fun overclock lol.

I should have mentioned that I had a Vega 64 experience last year. It was a Nitro, the standard version, but it was running really hot and loud despite undervolting the thing, to an extent obviously.

When you suggest using that OverdriveNTool beta, are you saying in combination with Wattman? I guess the clocks and voltage in OverdriveNTool would override anything set in Wattman, hence your suggestion to set "Custom"?


----------



## Ne01 OnnA

nolive721 said:


> I am with you about being patient with AMD cards.
> 
> I had a RX480 when it launched 2yrs ago and enjoyed my Polaris editor a lot to get a hell of a performance out of it, was really fun overclock lol)
> 
> I should have mentioned that I had a VEGA64 experience last year, it was a NITRO the Std version but it was running really hot and loud despite undervolting the thing, to an extent obviously
> 
> when you suggest to use that overdriventool beta, are you saying in combination with Wattman ? I guess the clocks and voltage in overdriventool would override anything set in Wattman hence your suggestion to set "Custom"?


Yup, with Wattman.
I've been using this combo since last summer -> it just works.


----------



## nolive721

OK, makes sense. The card left the US yesterday, so it should reach me in Japan sometime next week, and I will swap my 1080 Ti for the Vega over the next weekend.

Let's see how it goes, but I'm really excited. If it can perform between my 1080 Hybrid and the 1080 Ti Hybrid in the sim racing games and FPS I play the most, then I will be happy.


----------



## sinnedone

Ne01 OnnA said:


> (Set HBCC to min. 12GB in Adrenalin)




Have you found this useful? Last I read and saw it didn't do very much if anything.


----------



## Ne01 OnnA

sinnedone said:


> Have you found this useful? Last I read and saw it didn't do very much if anything.


I have 16GB of HBCC  (32GB RAM on my side)
It did help a lot, in averages and 1% & 0.1% lows.
Also, HBM2 is a lot faster than an HDD/SSD or even RAM.


----------



## diggiddi

I thought the HBCC was using ram


----------



## TheQuentincc

Hi, I have an RX Vega 64 which is not displaying any image. It seems to be detected as a video card, since my Maximus V Gene disables the HD 4000 of my i7 and I get the normal single beep. The GPU and HBM are warming up, as are all the MOSFETs; there doesn't seem to be any hot point. I checked with my finger and nothing is warming up very much.
I checked multiple things on the card, first some resistances (named from the picture attached):
GPU : 0~0.2 ohm (I don't have a milliohm-range multimeter)
HBM : 10 ohm
Vddci "core" (the part of the chip powered by this voltage) : 24 ohm
Vpp "core" (between C383 or C1583 as well) : 72 ohm
1.8V "core" : 636 ohm (no PCIe connected nor video output)
0.8V "core" : 17.5 ohm
So everything seems coherent and the chip seems not shorted at all. Then I measured some voltages and got:
Vcore (vddc) : 0.9V
Vhbm (mvdd) : 1.2V (isn't it 1.35V normally on a Vega 64?)
Vddci : 0.9V
Vpp : 2.55V (this is the one I think is not good; I read in this thread that someone got 1.8V)
The "1.8V" : 1.8V
The "0.8V" : 0.8V

So I think the Vpp is too high, but this should still give an image, right? And is it normal to see the vHBM at 1.2V instead of the more common 1.35V?
Thanks for any answers and help.


----------



## sinnedone

Ne01 OnnA said:


> I have 16GB of HBCC  (32GB RAM on my side)
> It did Help a lot, in Averages and 1% & 0.1% LOWs
> Also HBCC is a lot faster than HDD/SDD or even RAM.




Are you playing at 4k?

What situation (or game) causes the use of more than 16GB HBM to be dumped to system ram?


----------



## nolive721

I had not looked at the HBCC potential when I got my first Vega 64 a year ago, but I am now that my new GPU is landing in Japan soon.

I have 16GB of RAM in my PC, but 2GB are dedicated to the FuzeDrive I am using for my games (a 7200rpm HDD linked to fast NVMe storage).

Would it make any sense to use HBCC on my machine? What are the drawbacks of using HBCC mode? I am guessing the Ryzen memory controller will need to work harder, so the CPU gets warmer, but I have a 240 AIO cooler and it's working great, so I am not too worried there.

My 1500X CPU runs Corsair LPX 3200MHz RAM at 3066MHz. That's the best I could do even after numerous BIOS upgrades and tinkering with RAM settings in the BIOS, so I guess my memory controller is a bit on the weak side, and I am not overly excited to put more load on it and get my PC to crash frequently as a consequence of HBCC usage.


----------



## Ne01 OnnA

diggiddi said:


> I thought the HBCC was using ram


Yup; the Pro cards' HBCC can use an SSD.
Our HBCC uses RAM, but thanks to this we can have faster texture loads etc.
The GPU doesn't need to rely solely on the SSD/HDD....


----------



## Ne01 OnnA

sinnedone said:


> Are you playing at 4k?
> 
> What situation (or game) causes the use of more than 16GB HBM to be dumped to system ram?


1440p -> HBCC uses what it needs at any given moment.
Very good feature, IMHO.


----------



## Ne01 OnnA

nolive721 said:


> I had not looked at the HBCC potential when I got my 1st VEGA64 a year ago but I am now that my new GPU is landing soon to Japan.
> 
> I have 16Gb of RAM on my PC but 2Gb are dedicated to the FUZE drive I am using for my Games (a 7200rpm HDD linked to a fast NVME storage)
> 
> Would that make any sense to use HBCC on my machine?what are the drawbacks of using HBCC mode? I am guessing the Ryzen memory controller will need to work harder so CPU getting warmer but I have a 240 AIO cooler and its working great so I am not too much worried here
> 
> My 1500X cpu runs Corsair LPX 3200Mhz RAM at 3066Mhz.Thats the best I could do even after numerous Bios upgrades and tinkering with RAM settings in the BIOS so I guess my Memory controller is a bit on the weak side so I am not overly excited to put more loads on it and get my PC to crash frequently as a consequence of HBCC usage


You don't lose any RAM when it's not needed 
Many games are fine with 8GB of HBM2 
For 'brutal HD textures' gaming it is better to have it on (12GB set in Adrenalin).

I have PrimoCache: HDD + L2 SSD + L1 RAM.
When gaming, the game always has ~2GB to 6144MB of RAM as cache.


----------



## Ne01 OnnA

My advice is to use:

HBCC always (with 12GB set)
Enhanced Sync
FreeSync
and, if the game is OK with it, Shader Cache ON


----------



## TheQuentincc

Can nobody tell me if having Vpp at 2.5V is normal or not? Same for HBM at 1.2V instead of 1.35V?
Thanks


----------



## MrPerforations

wow, just idling after a rebuild of my cooling system. I did have a leak at first and managed to stall the whole system; I changed the layout and it's blazing through now.
I'm rolling GPU1 at 25°C, CPU at 24°C, GPU2 at 22°C.
Waiting for the bubbles to finish.


----------



## Ne01 OnnA

*Radeon Software Adrenalin 2019 Edition 19.4.3 driver*

*Support For*

Mortal Kombat XI

*Fixed Issues*

With AMD Link connected to Radeon Settings the update notifications feature may list incorrect installed versions.

*Package Contents*

The Radeon Software Adrenalin 2019 Edition 19.4.3 installation package contains the following:

Radeon Software Adrenalin 2019 Edition 19.4.3 Driver Version 18.50.31.09 (Windows Driver Store Version 25.20.15031.9002)

-> https://www.amd.com/en/support/grap...amd-radeon-2nd-generation-vega/amd-radeon-vii


----------



## MrPhilo

Can someone confirm that using DisplayPort 1.2 uses less power than HDMI 2.0 (by quite a bit)?

Time Spy Graphics Test 1, 2560x1440, loop test:

DisplayPort 1.2 : 189-197 W
HDMI 2.0 (no FreeSync) : 240 W+
HDMI 2.0 (FreeSync) : 240 W+

I used AMD's own overlay to capture the wattage, and my GPU gets around 10-15°C hotter using HDMI.


----------



## Ne01 OnnA

MrPhilo said:


> Can someone confirm that using Display Port 1.2 it uses less power than HDMI 2.0 (by quite a bit).
> 
> Time Spy Graphic 1 2560 x 1440p Loop Test
> 
> Display Port 1.2 : 189-197 watt
> HDMI 2.0 (No Freesync): 240 watt+
> HDMI 2.0 (Freesync): 240 watt+
> 
> I used AMD own Overlay to capture the watt and my GPU does get around 10-15c hotter using HDMI.


Nice. I've been using DP from the start.
IMHO, DP is a far better solution than HDMI.


----------



## miklkit

Ne01 OnnA said:


> *Radeon Software Adrenalin 2019 Edition 19.4.3 driver*
> 
> *Support For*
> 
> Mortal Kombat XI
> 
> *Fixed Issues*
> 
> With AMD Link connected to Radeon Settings the update notifications feature may list incorrect installed versions.
> 
> *Package Contents*
> 
> The Radeon Software Adrenalin 2019 Edition 19.4.3 installation package contains the following:
> 
> Radeon Software Adrenalin 2019 Edition 19.4.3 Driver Version 18.50.31.09 (Windows Driver Store Version 25.20.15031.9002)
> 
> -> https://www.amd.com/en/support/grap...amd-radeon-2nd-generation-vega/amd-radeon-vii



I just installed this driver, I think. In 2 attempts, something called CNext failed to install properly, but it seems to run OK anyway.


----------



## hvora70

Ne01 OnnA said:


> My Advise is to use:
> 
> HBCC always (with 12GB set)
> Enchanced Sync
> FreeSync
> and If game is Ok with, then Shader cache ON





I was curious and decided to check how HBCC would improve performance. This article is quite negative about it --> https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/


Hope I am not breaking any rules by linking that..


----------



## Ne01 OnnA

hvora70 said:


> I was curious and decided to check how HBCC will improve performance. This article is quite negative about it --> https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
> 
> 
> Hope I am not breaking any rules with linking that ..


Set it, play & have fun 

Features are there to be used.... This one really helps with better 0.1% & 1% lows.


----------



## diggiddi

Hi guys, can anyone with CrossFire please test Crysis 3 and Project Cars 1/2, plus Assetto Corsa Competizione if you have it? Thx.
Edit: 2560x1080 and 3440x1440 resolutions would be most helpful, but 2560 and 4K are cool too.


----------



## miklkit

It seems my Vega 64 is a hog. When I first got it, it came bundled with some games, one of which is Strange Brigade. This Vega 64 was always plagued with bad textures in that game, and I finally decided to see if anything could be done about it.

It turned out that reducing the clocks and bumping up the volts is the answer. At one time it seemed to be OK in other games at 1680/1100 and -90mV. Now it is running well at 1630/980 and -50mV, and it can run ultra settings in SB. Other games seem to do a little better too. Oh well.


----------



## BeetleatWar1977

diggiddi said:


> Hi guys can anyone with crossfire please test Crysis 3 and Project Cars/2 Assetto Corsa Competizione if you have it thx
> edit 2560x1080 and 3440x1440 resolutions would be most helpful but 2560 and 4k are cool too


Crysis 3 I can do......
2x Vega 56 @ Ryzen 2700X
(1x MSI reference @ 64 BIOS, 1x Gigabyte custom 56), both cards -100mV and PT +10%

Preset very high, SMAA (1x), benched with Fraps over the first 112 seconds of the map Swamp

2160p:
Avg: 63.072 - Min: 48 - Max: 76

1440p:
Avg: 113.721 - Min: 69 - Max: 149

1080p:
Avg: 115.865 - Min: 55 - Max: 155

720p:
Avg: 116.901 - Min: 66 - Max: 152


Edit: bringing them to their knees

2160p 8xMSAA
Avg: 29.432 - Min: 22 - Max: 41


btw: Firestrike https://www.3dmark.com/fs/19181587 first place so far with 2x Vega 56 and an AMD CPU......



Edit: Project CARS
2 Laps on Hockenheim GP
30-04-2019, 05:54:14 pCARS64.exe benchmark completed, 26706 frames rendered in 254.563 s
Average framerate : 104.9 FPS
Minimum framerate : 82.8 FPS
Maximum framerate : 152.3 FPS
1% low framerate : 78.2 FPS
0.1% low framerate : 69.0 FPS

with these settings:
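As an aside, the 1% / 0.1% low figures quoted in benchmarks like the one above can be derived from raw per-frame times. A minimal sketch follows; the frame-time data is invented for illustration, and note that tools differ on whether "1% low" means the 99th-percentile frame or the average FPS over the slowest 1% of frames (this sketch uses the latter):

```python
# Sketch of deriving "1% low" / "0.1% low" framerates from frame times.
# The sample data below is made up for illustration only.

def low_percentile_fps(frame_times_ms, percent):
    """Average FPS over the slowest `percent`% of frames."""
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, round(len(worst) * percent / 100))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# Mostly 120 FPS frames with a handful of 60 FPS spikes:
frame_times = [8.3] * 990 + [16.7] * 10
print(round(low_percentile_fps(frame_times, 1), 1))    # 1% low
print(round(low_percentile_fps(frame_times, 0.1), 1))  # 0.1% low
```

This is why a card can show a healthy average FPS while still feeling stuttery: the lows are dominated by the few slowest frames.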


----------



## diggiddi

Thx repped up !


----------



## Ne01 OnnA

*AMD Memory Tweak - Read/Modify Timings on the fly!*

What is this?

Well, with this tool you are able to change memory timings on the fly.
Yes, you read that correctly: you can modify almost any value while the GPU is running.

-> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/

-> https://github.com/Eliovp/amdmemorytweak/releases/tag/0.2

REP goes to -> @Eliovp


----------



## Alastair

Ne01 OnnA said:


> *AMD Memory Tweak - Read/Modify Timings on the fly!*
> 
> What is this?
> 
> Well, with this tool, you are able to change memory timings on the fly.
> Yes, you read that correct, you can modify almost any value while the GPU is running.
> 
> -> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/
> 
> -> https://github.com/Eliovp/amdmemorytweak/releases/tag/0.2
> 
> REP goes to -> @*Eliovp*


 And have you tried it yet?


----------



## Ne01 OnnA

Alastair said:


> And have you tried it yet?


Yup, if you go too far, artifacts appear
Great tool; read my post on Guru3D.... (no need to log in)

IMHO it's better to pick looser timings and a higher HBM2 OC... try inputting the HBM2 timings from the Vega VII..


----------



## VicsPC

Ne01 OnnA said:


> Yup, if You go too far -> Artifacts came
> Great tool, read my post in Guru3D.... (don't need to log in)
> 
> IMhO better is to pick loosen timings and better HBM2 OC... try to input HBM2 Timings from Vega VII..


Very cool. You should try changing the memory timings in WattMan (the 2/3 presets you have haha) and see if it even changes anything in the timings. I finally had the balls to go from 1050 to 1100 without changing anything else, and my V64 reference seems to have no issues with 1100 HBM.


----------



## Alastair

Ne01 OnnA said:


> Yup, if You go too far -> Artifacts came
> Great tool, read my post in Guru3D.... (don't need to log in)
> 
> IMhO better is to pick loosen timings and better HBM2 OC... try to input HBM2 Timings from Vega VII..


I'll have a look sometime. I am looking for a decent forum that isn't OCN, but yeah, haven't found a new home yet.

So you reckon clocks > timings, hey? Are your scores backing up these changes?


----------



## Ne01 OnnA

Alastair said:


> I'll have a look sometime. I am looking for a decent forum that isn't OCN. but yeah. Haven't found a new home yet.
> 
> So you reckon clocks > timings hey? Your scores backing up these changes?


Not benched in FS or TS yet (no time for it now).
First I need to find the sweet spot.

Tested only in Vampyr (UE4 is sensitive to OC) & BFV.
1175MHz with CL22 and 955mV? (or similar).


----------



## Alastair

Ne01 OnnA said:


> Not benched in FS or TS (no time for it now)
> First i need to find sweet spot.
> 
> Tested only in Vampyr (UE4 is sensitive to OC) & BFV.
> 1175MHz with CL22 and 955mV? (or similar).


What are the stock timings?


----------



## Ne01 OnnA

Alastair said:


> What are the stock timings?


Here are all 3 timings for Vega (Auto, T1 & T2):

-> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/


----------



## jearly410

@Ne01 OnnA

What values are you getting in OCLMembench?

For reference here are mine


----------



## Ne01 OnnA

jearly410 said:


> @Ne01 OnnA
> 
> What values are you getting in OCLMembench?
> 
> For reference here are mine


Up to 550?
1200MHz w/My settings

==
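For context, the theoretical ceiling a memory benchmark like OCLMemBench is chasing can be computed from the HBM2 clock. A rough sketch, assuming the figure discussed above is in GB/s and using Vega's 2048-bit memory bus:

```python
# Theoretical HBM2 bandwidth for Vega's 2048-bit memory bus.
# Assumption: the benchmark figure discussed above is in GB/s.

def hbm2_bandwidth_gbs(mem_clock_mhz, bus_width_bits=2048):
    # HBM2 is double data rate: 2 transfers per memory clock
    bytes_per_transfer = bus_width_bits / 8
    return mem_clock_mhz * 2 * bytes_per_transfer / 1000

print(round(hbm2_bandwidth_gbs(945)))   # Vega 64 stock HBM2: ~484 GB/s
print(round(hbm2_bandwidth_gbs(1200)))  # at 1200MHz: ~614 GB/s
```

Measured numbers always land below the theoretical peak because of refresh cycles, command overhead, and imperfect access patterns, which is exactly what tightening timings tries to claw back.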


----------



## jearly410

Ne01 OnnA said:


> jearly410 said:
> 
> 
> 
> @Ne01 OnnA
> 
> What values are you getting in OCLMembench?
> 
> For reference here are mine
> 
> 
> 
> Up to 550?
> 1200MHz w/My settings
> 
> ==

Thanks! It’s helpful having a comparison before going whole hog :thumb:


----------



## Ne01 OnnA

jearly410 said:


> Thanks! It’s helpful having a comparison before going whole hog:thumb:


Try my settings, those are fully stable for me in any game so far.
Try not to surpass 1150MHz at 950mV (for benching, 1200MHz is OK).
I'm using 1120 @ 937mV for gaming.


----------



## Ne01 OnnA

My new stable HBM2 OC (now it's blazing fast & stable)

UPD. A new version is out:
-> https://github.com/Eliovp/amdmemorytweak/releases/tag/0.2.1
==

Try it (good for Vega 64 & 56 w/Samsung HBM2)

Note:
Here are the tests for HBM2 OC (with my settings the gain will be even better)

Translated w/Google
-> http://translate.google.com/transla...ega-64-ethereum-50-8-mh-s-i-vliyanie-na-igry/

Org.
-> https://hardwarepc.ru/modifikatsiya...ega-64-ethereum-50-8-mh-s-i-vliyanie-na-igry/


----------



## Wuest3nFuchs

Hi all!

Would you guys buy a Vega 56 Nitro these days, or should I wait for Navi?
Currently I'm on an R9 Fury Nitro. I just had a 1070 Ti for 3 months and sold it today. And I also got my first FreeSync monitor, I'm so happy with it.
AOC AGON AG21QX

Sent from my SM-G950F using Tapatalk


----------



## Hwgeek

Big thread on bitcointalk too:
https://bitcointalk.org/index.php?topic=5123724.0


----------



## Alastair

Wuest3nFuchs said:


> hi all!
> 
> Would you guys buy a vega56 nitro these days or should i wait for navi?
> Currently im on a r9 fury nitro .just had a 1070ti FOR 3 MONTHS and sold it today. AND I ALSO GOT MY FIRST FREESYNC MONITOR,IM SO HAPPY WITH IT.
> AOC AGON AG21QX
> 
> Sent from my SM-G950F using Tapatalk


I would say go with Vega; the most recent rumour coming through AdoredTV says Navi isn't turning out that great.


----------



## Minotaurtoo

Alastair said:


> I would say go with Vega, most recent rumour coming through adored TV says Navi isnt turning out that great.


I thought I was the only one on here who watches him regularly... and if he's not that excited about it, you can pretty much bet it's going to be another Vega-type card: great in compute, but mediocre in games.


----------



## Alastair

Anyone broken 7K superposition 1080P extreme on a single card?


----------



## 99belle99

Alastair said:


> Anyone broken 7K superposition 1080P extreme on a single card?


I didn't think that was possible.


----------



## Alastair

99belle99 said:


> I didn't think that was possible.


If I could just view the thread gallery I could find out if this was possible... OH WAIT, never mind, cause OCN is still a mess.
Typo: I meant a 6K score.


----------



## Alastair

Minotaurtoo said:


> I thought I was the only one on here who watched him regularly... and if he's not that excited about it, you can pretty much bet it's going to be another Vega type card that does great in compute but mediocrity is the rule in games


Yeah. While initial results were promising, it hasn't gone far from there. Along with the re-tape, power draw and thermal issues, as he says it's turning out to be an engineering nightmare and they can't wait to move on. I really didn't think GCN had any legs left in it after Vega, never mind going on till Navi.... It's a shame.


----------



## 99belle99

They should have dumped GCN after Vega, but it takes years of planning and testing to develop a new architecture. They obviously kept GCN for Navi as it will be used in the new PlayStation and Xbox once they are released next year.


----------



## Ne01 OnnA

Can somebody beat 30k on a Vega 64 already?
Use the timing tweak + OC on the new WinX 1903


@mtrai just beat 29k


----------



## Alastair

99belle99 said:


> They should have dumped GCN after Vega, but it takes years of planning and testing for a new architecture. They obviously kept GCN for Navi as it will be used in the new Playstaion and Xbox once they are released next year.


Even if it's an engineering nightmare from a power and heat perspective: if it can match NV's offerings like Adored says, even at substantially more power and heat than they wanted, and they beat them on price, I'll see that as a win in my eyes. Cause that's kinda what Vega did. It can get really close to a 1080 Ti once it's been messed around with a bit, and it's a whole lot cheaper.


----------



## Ne01 OnnA

Alastair said:


> Even if it's an engineering nightmare from a power and heat perspective. If it can match NVs offerings like Adored says, even if it's at substantially more power and heat than they wanted, and they beat them at price. I'll see that as a win in my eyes. Cause that's kinda what Vega did. It can get really close to a 1080ti once it's been messed around with a bit. And it's a hang load cheaper.


Power? You mean watts?
Vega can be (and is) a very power-efficient uArch IMHO.


Look: my latest DOOM session (1440p Nightmare, 70Hz FreeSync)
and AC:O missions (not desert; HU settings, mixed Ultra/High + AA)

Note:
When one uses +50% power limit, that's considered extreme OC, so it can't be called efficient.

I'm using 0%, +1% or -5%/-8% for gaming (cool & quiet),
+12% up to +25% for 3DMark or benches.

Note no. 2:
In the end it depends on what kind of Vega/Vega 2 the user has.
It can be power efficient or a good overclocker.
I like the middle ground: some good UV + tweaks to maintain good 1% & 0.1% lows when gaming.
=


----------



## Ne01 OnnA

Here Mem Tweak used for AC:O (Stable)
Very Playable at Ultra (FreeSync, Chill 65-70FPS)

==


----------



## sinnedone

When using that program, must the timings be loaded after every start up?


----------



## Ne01 OnnA

sinnedone said:


> When using that program, must the timings be loaded after every start up?


Yup, and make sure you always have the same profile active.
I'm making changes on my desktop profile (850MHz lol).

Then changing the OC for gaming in OverdriveNTool (WattMan set to Custom with manual voltage).


----------



## mtrai

@Ne01_OnnA

Firestrike 29K+ graphics score with Windows 1903. Also please note there is a huge change in Windows 10 1903 with the Windows scheduler: notice that both the physics and combined scores also saw a big uplift. It seems Microsoft made some back-end changes no one really noticed while it was still in the Insider program, and the "AMD CPU penalty" is highly mitigated. We are gonna need more testing to be sure.

https://i.imgur.com/fjtdGBJ.jpg


----------



## VicsPC

mtrai said:


> @Ne01_OnnA
> 
> Firestrike 29K+ Graphics score with Windows 1903. Also please note there is also huge change with Windows 10 1903 with the "windows scheduler" Notice that both the physics and combined score also saw a big uplift in scores. Seems that Microsoft did some back end changes no one really noticed while it was still in the Insider program. It seems the "AMD cpu penalty" is highly mitigated. We are gonna need more testing, to be sure.
> 
> https://i.imgur.com/fjtdGBJ.jpg


Very nice, seems quite helpful. More specs on your 2700X and V64 would be handy: speeds and all that. This is mine watercooled, 2700X left alone and HBM2 at 1100MHz. You can see it's a good jump in performance with higher HBM2 speeds.

https://www.3dmark.com/compare/fs/19322337/fs/15978275#


----------



## mtrai

VicsPC said:


> Very nice, seems quite helpful. More specs on ur 2700x and v64 would be handy. Speeds and all that. This is mine watercooled, 2700x left alone and hbm2 at 1100mhz. You can see its a good jump in performance with higher hbm2 speeds.
> 
> https://www.3dmark.com/compare/fs/19322337/fs/15978275#


Standard PBO overclocking on my 2700X; nothing about that has changed in months. I actually dropped my RAM speed just to ensure stability with my GPU overclock... to eliminate issues there. I am using the new GPU timing tool. I can post some screenshots if needed.


----------



## VicsPC

mtrai said:


> Standard PBO overclocking on my 2700X; nothing about that has changed in months. I actually dropped my RAM speed just to ensure stability with my GPU overclock... to eliminate issues there. I am using the new GPU timing tool. I can post some screenshots if needed.


I'm trying out PE2 on my 2700x, seems to increase my clocks ~50mhz on all cores, from 3.90-3.95 to 4.05-4.1. Not sure it even does much haha.


----------



## 113802

mtrai said:


> @Ne01_OnnA
> 
> Firestrike 29K+ Graphics score with Windows 1903. Also please note there is also huge change with Windows 10 1903 with the "windows scheduler" Notice that both the physics and combined score also saw a big uplift in scores. Seems that Microsoft did some back end changes no one really noticed while it was still in the Insider program. It seems the "AMD cpu penalty" is highly mitigated. We are gonna need more testing, to be sure.
> 
> https://i.imgur.com/fjtdGBJ.jpg


Nice GPU score, you were close to beating my score.

https://www.3dmark.com/fs/18270986


----------



## Wuest3nFuchs

Alastair said:


> I would say go with Vega, most recent rumour coming through adored TV says Navi isnt turning out that great.


Maybe I'll get one after May 27th; looking forward to the Navi presentation.
Does anyone here know if a Sapphire Nitro would do the 64 CU unlock? Or which cards do and which don't? And are there any issues when unlocking a Vega 56 to a Vega 64? Back in 2016 I tried that on my R9 Fury with no luck... hardlock.

cheers 


Gesendet von meinem SM-G950F mit Tapatalk


----------



## Ne01 OnnA

New driver tested: 19.5.1 pre-WHQL, WDDM 2.6.
Even on WinX 1809, Combined is a lot better.

-> https://www.3dmark.com/compare/fs/19264565/fs/19328287

1732/1175 (1.094V/993mV) +1%, CPU @ 4GHz (38.5x, FSB 104)
*Tess x8 (as always, it's my default for gaming)


----------



## mtrai

WannaBeOCer said:


> Nice GPU score, you were close to beating my score.
> 
> https://www.3dmark.com/fs/18270986


Yeah but I am on air...you are liquid cooled.


----------



## mtrai

Okay, now my 29k on air is valid with the release of yesterday's driver. https://www.3dmark.com/fs/18270986


----------



## Ne01 OnnA

Here is my working HBM2 mem tweak.
So far it's stable (you can test it on a Vega 64).
Tested in 3DMark at 1175MHz, 993mV.

1185-1190MHz is max for this tweak.
==


----------



## mtrai

Ne01 OnnA said:


> Here is my working HBM2 Mem Tweak
> So far is stable (You can test is on Vega64)
> Test in 3Dmark at 1175MHz 993mV
> 
> 1185-1190MHz is max for this tweak
> ==


Remember you no longer need to use both OverdriveNTool and the timing tool; that will simplify things. You are showing different clocks for no reason, which will be confusing for people. Also, people, remember that one person's settings may not work for another... each GPU is different. So take it with a grain of salt when he says it will work for everyone; this is not true. His timings do not work for me, yet he claims they work for all. The timings are a good starting point if you have Samsung HBM (his or mine) and you are using a Vega 64 with Samsung HBM; if not, these or his timings will not work at all. All HBM is not the same... nor can you use timings from the VII on a Vega 64. They are different, and the timings are very different. I will not state this again. I understand a lot of how you do things, but enough is enough on this.


----------



## Naeem

Ne01 OnnA said:


> Here is my working HBM2 Mem Tweak
> So far is stable (You can test is on Vega64)
> Test in 3Dmark at 1175MHz 993mV
> 
> 1185-1190MHz is max for this tweak
> ==




Didn't work on my Vega 64 LC. I think I got a dud chip; mine never goes above a 1750 clock.


----------



## 113802

mtrai said:


> Okay now my 29k on Air are valid with the release of yesterday's driver. https://www.3dmark.com/fs/18270986


Cool but you posted my verification link.


----------



## LicSqualo

Ne01 OnnA said:


> Here is my working HBM2 Mem Tweak
> So far is stable (You can test is on Vega64)
> Test in 3Dmark at 1175MHz 993mV
> 
> 1185-1190MHz is max for this tweak
> ==


Hi Ne01, I tried your timings, without luck for me either (I've got an MSI Vega 64 LC).
But one value is very strange, can you explain?
Stock the value is 3900 (it's timing no. 12: tREF), but you have increased it to 25000 (!!! that's more than a 6x value!)...
Perhaps you wanted 2500, because all your other new timings are low compared to the stock ones...
Just to understand.


----------



## Ne01 OnnA

When you have like 8GB+, this value can be longer (in µsec).
Thus the GPU doesn't need to refresh the VRAM so often, and that's why you'll get a good speed boost.

So 4000 and lower values are good for Fury w/HBM1.
*The amount of time it takes before a charge is refreshed so it does not lose its charge and corrupt. Measured in micro-seconds (µsec).*

Here -> http://alexanderhuzar.angelfire.com/files/ram_timings.htm
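Out of curiosity, here's a rough sketch of what raising tREF means in wall-clock terms. The unit assumption is mine (the tool's docs don't spell it out): if tREF counts memory-clock cycles, the time between refresh commands scales with it:

```python
# Sketch: convert a tREF value to the time between refresh commands.
# Assumption: tREF counts memory-clock cycles (the unit isn't documented).

def refresh_interval_us(tref_cycles, mem_clock_mhz):
    """Microseconds between refreshes: cycles / (cycles per microsecond)."""
    return tref_cycles / mem_clock_mhz

# Stock vs. the raised value discussed above, at 1100MHz HBM2:
print(round(refresh_interval_us(3900, 1100), 2))   # stock: ~3.55 us
print(round(refresh_interval_us(25000, 1100), 2))  # raised: ~22.73 us
```

Fewer refreshes leave more cycles free for real transfers, which is where the speed boost comes from; but refresh too rarely and the cells risk exactly the charge loss and corruption the quoted definition warns about.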

Note:
Do note that every GPU is a little different; make sure you go (as I did) down the trial & error path.
Our users' settings are just starting points.

Try:
tCL 18 30 14 10
tRCAB 44 43 14 14
tRRDS 3 4 5
tREF 24000

As a starting point.

Remember that not every mem tweak will allow a memory OC boost.
I'm trying to get stable at 1175-1190MHz HBM2.
What is stable at 1100MHz can be unstable at 1150MHz etc. (try to check for stability when HBM is at 1120-1150MHz if you can).

===
UPD.
I'm testing this OC now (so far it's stable for me in BFV MP & other games).


----------



## LicSqualo

Thank you! Much appreciated. I'm testing to find my config!


----------



## Ne01 OnnA

Our user @sideeffect (on Guru3D)

made some benches of the mem tweak using a V56:

-> Org. 
https://forums.guru3d.com/threads/a...timings-on-the-fly.426435/page-8#post-5670646

==


----------



## 99belle99

I've a 56 and tried his settings, and it crashed. It also crashed when I tried my own settings. And I can't be bothered to keep trying until I find the sweet spot, as I have no idea what any setting does.


----------



## Ne01 OnnA

99belle99 said:


> I've a 56 and tried his settings and it crashed. Crashed also when I tried my own settings also. And cannot be bothered to keep trying until I find the sweet spot as I have no idea what any setting does.


First adjust the 1st row by -1 -> test; then if OK, go on to the next row.


----------



## 98uk

Hey guys, quick question. I have one of those ex-mining reference Vega 64 cards. Works fine, but I want to replace the thermal paste and thermal pads on it... to try and reduce the hotspot temps and memory/vrm. 

This is what I've bought to do it:

https://www.amazon.co.uk/gp/product/B011F7W3LU/
https://www.amazon.co.uk/gp/product/B00ZJSBRE6/

Do you know if one pack of the thermal pads is enough for a single reference card? Secondly, slightly stupid question: one side of the pad is adhesive... I'm guessing this side sticks to the heatsink (as opposed to the chips), right?

Cheers


----------



## VicsPC

98uk said:


> Hey guys, quick question. I have one of those ex-mining reference Vega 64 cards. Works fine, but I want to replace the thermal paste and thermal pads on it... to try and reduce the hotspot temps and memory/vrm.
> 
> This is what i've bought to do it:
> 
> https://www.amazon.co.uk/gp/product/B011F7W3LU/
> https://www.amazon.co.uk/gp/product/B00ZJSBRE6/
> 
> Do you know if one pack of the thermal pads is enough for a single reference card? Secondly, slightly stupid question, one side of the pad is adhesive... i'm guessing this side stick to the heatsink (as opposed to the chips) right?
> 
> Cheers


You can put the sticky side onto either the chip or the heatsink, it's fine either way; once the pressure is on, it's gonna stick wherever anyways haha. I'd order 2 just in case 1 isn't enough; if one is, then just keep the spare. Thermal pads are good to have around.


----------



## 98uk

VicsPC said:


> You can put the sticky side onto either the chip or the heatsink its fine either way, after the pressure is put on it its gonna stick wherever anyways haha. I'd order 2 just to have if 1 isn't enough, if one is then just keep it. Thermal pads are good to have around.


Yeah, true. It would suck if I got halfway through and ran out, ha!


----------



## Ne01 OnnA

The next big update for gamers!
Make sure you have all updates installed, then upgrade to WinX 1903.

-> https://www.microsoft.com/en-us/sof...d=_ns6icib920kfryfqkk0sohz30m2xmeo3htgo0y1y00


----------



## sinnedone

Ne01 OnnA said:


> Next Big Update for Gamers !
> Make sure You have all updates installed then Upgrade to WinX 1903
> 
> -> https://www.microsoft.com/en-us/sof...d=_ns6icib920kfryfqkk0sohz30m2xmeo3htgo0y1y00



Does it improve system performance for games, or just add more Intel patches that kill CPU performance?


----------



## Ne01 OnnA

sinnedone said:


> Does it improve system performance for games or just add more intel patches that kill CPU performance?


Combined in 3DMark is better, and the overall score is also up by ~2k!

-> https://www.3dmark.com/compare/fs/19393826/fs/19328287

Needs more testing, but AFAIK it is way better for gaming.
Tested only Rage 2 (Vulkan).

Gaming OC (always the same, to check driver uplift) -> 1732MHz 1.094V | 1175MHz HBM2 993mV | +1% POW | Zen at 4GHz | 3466 CL14-15-15-14 1T


----------



## Naeem

Ne01 OnnA said:


> Next Big Update for Gamers !
> Make sure You have all updates installed then Upgrade to WinX 1903
> 
> -> https://www.microsoft.com/en-us/sof...d=_ns6icib920kfryfqkk0sohz30m2xmeo3htgo0y1y00



Your link takes me to the Oct 2018 update. Is this page region locked or something? I see no update inside Windows either.


----------



## LicSqualo

Naeem said:


> your link take me to oct 2018 update is this page region locked or something i see no update inside windows either


I've followed the same link, and for me the update is present:

https://www.microsoft.com/en-us/sof...d=_ekmohujz30kfrgea0er6am96sn2xmeqgacuqhkzn00

In any case, thank you Ne01.
Yesterday I reached my highest Firestrike score before the update, to verify against.


----------



## VicsPC

Here we go guys: 1809 on the left, 1903 in the middle, and 1903 with best-performance balanced power on the right. Didn't see as big a jump as I thought, but 8% on combined is still pretty good. Wondering if I should change my core parking to 100%; I might get a higher score, but this is a direct comparison.
Edit: So I redid it. There's now an option in power options for performance and energy, and I set it to max performance. Gained another 2% on combined and physics.



https://www.3dmark.com/compare/fs/19328812/fs/19397165/fs/19397353


----------



## LicSqualo

FANTASTIC! 9296 points, up from 6450!!!! This is a huge jump for me!

https://www.3dmark.com/3dm/36285789?

Unluckily I've got a problem now with my HBM speed. The first test ran entirely at 800MHz, like the idle state. The set speed is 1080MHz, my 24/7 clock.
I will do some tests to investigate how to solve this issue.


----------



## Minotaurtoo

I still haven't got the update yet... for some reason or another it's not showing up when I check for updates.


----------



## geronimo

I'm thinking of flashing a Vega 64 BIOS onto my MSI RX Vega 56 Air Boost 8G OC to unlock higher mem frequency. I have Samsung memory.

How dangerous is this actually, taking into consideration I have dual BIOS?

Are both BIOSes unlocked? Cos I have seen some info on the web that sometimes one of them is locked.

I'm willing to take the chance if I'm 100% sure that the other BIOS will remain OK if something goes wrong.
I've seen it mentioned that some people lose the DP connection and stuff like that.
Have any of you actually flashed an MSI Vega 56?
Thx.


----------



## Minotaurtoo

geronimo said:


> I'm thinking of flashing vega 64 bios to my MSI RX Vega 56 Air Boost 8G OC to unlock higher mem freq. I have samsung memory.
> 
> how dangerous this actually is, taking into consideration I have dual bios?
> 
> are both bios unlocked cos I have seen some info on the web that sometimes one of them is locked?
> 
> I'm willing to take a chance if I'm 100% sure that the other BIOS will remain ok if something goes wrong.
> I've seen mentioned that some people loose DP connection and stuff like that.
> any of you actually flashed MSI vega 56?
> thx.


With a dual BIOS you are pretty safe. I'd still make a backup of the original just in case... also test the other BIOS first to make sure it's good. Other than that, you should be fine.


----------



## geronimo

I backed them up already; both of them are working properly. One is 150W, the other 165W.
I'm thinking of maybe going the PowerPlay tables route, but I have to read up on that cos I have no idea about it.
Thx.


----------



## 98uk

Ne01 OnnA said:


> Next Big Update for Gamers !
> Make sure You have all updates installed then Upgrade to WinX 1903
> 
> -> https://www.microsoft.com/en-us/sof...d=_ns6icib920kfryfqkk0sohz30m2xmeo3htgo0y1y00



Weird, I get:

"This PC can't be upgraded to Windows 10"

"Your PC has a driver or service that isn't ready for this version of Windows 10"


----------



## LicSqualo

Minotaurtoo said:


> I still haven't got the update yet... for some reason or another it's not showing up when I check for updates.


It's not present via the standard, usual Windows Update; it's only available on request. You have to download the installer.



98uk said:


> Weird, I get:
> 
> "This PC can't be upgraded to Windows 10"
> 
> "Your PC has a driver or service that isn't ready for this version of Windows 10"


Same for me. I did a backup, restored Windows (1803) and updated to 1903. All OK. Reinstalled all the drivers without issues.


----------



## 98uk

LicSqualo said:


> Same for me. I've done a backup, restored windows (1803) and updated to 1903. All ok. Reinstalled all the driver without issues.


Do you know which driver caused it?

I'm tempted just to leave it be and let it update in its own time. No point fixing what ain't broke.


----------



## 98uk

Meh... tried removing chipset drivers and GPU drivers and doing the update, but to no avail. 

Shame it doesn't tell you exactly what service or driver isn't ready...


----------



## VicsPC

98uk said:


> Meh... tried removing chipset drivers and GPU drivers and doing the update, but to no avail.
> 
> Shame it doesn't tell you exactly what service or driver isn't ready...


I went from 1809 to 1903 without issues, didn't have to delete anything. I did have the latest video card drivers for the may update so that might be why it's not letting you, not sure. Seems to be a decent update so far.


----------



## LicSqualo

98uk said:


> Do you know what driver caused it?
> 
> I'm tempted just to leave it be and let it update in it's own time. No point fixing what ain't broke.


Yes, it was probably the GOverlay USB driver. But it was a service/DLL, so really difficult to search for and find; I tried for 4 hours.
But finally I solved it simply by backing up my files, restoring my Windows copy to its original state (version 1803), and after this I updated without error.


----------



## Naeem

Did anyone else notice the Firestrike graphics score drop and the combined score go up with Windows 10 version 1903, on a Vega 64?


----------



## VicsPC

Naeem said:


> did anyone else notice firestrike graphics score drop with windows 10 version 1903 and combined score going up ? with vega 64 ?


If it did it's probably within margin of error. I posted a comparison between the 2 but it was very minute. 

https://www.3dmark.com/compare/fs/19399561/fs/19328812#


----------



## Naeem

VicsPC said:


> If it did it's probably within margin of error. I posted a comparison between the 2 but it was very minute.
> 
> https://www.3dmark.com/compare/fs/19399561/fs/19328812#


Here is my score with Ryzen. I was hitting 26000+ with a 50% power target on the Vega 64 LC with 1100 HBM2 and -18 voltage, but I always got a bad combined score; now I get 8000+ vs the 6000-7000 before.

https://www.3dmark.com/compare/fs/19402627/fs/18502136


----------



## VicsPC

Naeem said:


> Here is my score with Ryzen i was hitting 26000+ with 50% power target on Vega 64 LC with 1100 HBM2 and -18 voltage but i always got bad combined score now i get 8000+ vs 6000 to 7000 before
> 
> https://www.3dmark.com/compare/fs/19402627/fs/18502136


Yea, not a massive difference in the GPU scores, so I wouldn't worry too much. You guys with first-gen Ryzen are getting massive bumps. Your physics score is even higher than my 2700X's, so idk what's going on there. Maybe I need to use PBO or something.


----------



## 98uk

Many people reporting that BattlEye has been causing the upgrade error for 1903.

Need to check that later, gpu is currently out for new thermal paste.


----------



## VicsPC

98uk said:


> Many people reporting that BattlEye has been causing the upgrade error for 1903.
> 
> Need to check that later, gpu is currently out for new thermal paste.


That's interesting. I have Siege installed on my HDD but not sure if battleye installs anything on my SSD OS drive, mine upgraded just fine.


----------



## 98uk

VicsPC said:


> 98uk said:
> 
> 
> 
> Many people reporting that BattlEye has been causing the upgrade error for 1903.
> 
> Need to check that later, gpu is currently out for new thermal paste.
> 
> 
> 
> That's interesting. I have Siege installed on my HDD but not sure if battleye installs anything on my SSD OS drive, mine upgraded just fine.

Apparently it's installed to /Program Files (x86)/Common Files/

You can check to see where it is on your PC.

It looks like BattlEye also had crashing issues during the Insider release of 1903, but this was resolved.


----------



## mtrai

Naeem said:


> did anyone else notice firestrike graphics score drop with windows 10 version 1903 and combined score going up ? with vega 64 ?


Actually I saw all 3 scores increase. Here is the comparison, 1903 on the left and 1803 on the right. Note the GPU was set to the exact same clocks despite how 3DMark reads it.

https://www.3dmark.com/compare/fs/19304391/fs/19293021


----------



## ZealotKi11er

Naeem said:


> Here is my score with Ryzen i was hitting 26000+ with 50% power target on Vega 64 LC with 1100 HBM2 and -18 voltage but i always got bad combined score now i get 8000+ vs 6000 to 7000 before
> 
> https://www.3dmark.com/compare/fs/19402627/fs/18502136


Looks like they finally fixed the Ryzen issue with Combined score.


----------



## VicsPC

98uk said:


> Apparently it's installed to /Program Files (x86)/Common Files/
> 
> You can check to see where it is on your PC.
> 
> It looks like BattleEye also had crashing issues during the "Insider" release for 1903, but this was resolved.


Yup, I have it installed there without issues. The only game that uses it for me is Siege, and that updated without any issues. Haven't seen it crash in Siege, but that game has crazy lag and frame-rate drops for me all of a sudden.


----------



## tolis626

98uk said:


> Many people reporting that BattlEye has been causing the upgrade error for 1903.
> 
> Need to check that later, gpu is currently out for new thermal paste.





VicsPC said:


> Yup i have it installed there without issues, only game that uses it for me is Siege but updated without any issues. Haven't seen it crash in Siege but that game has crazy lag and frame rate drop for me all of a sudden.


Well, I have Siege and BattleEye installed on my OS SSD and I have none of these issues. Upgrading to 1903 even fixed a weird issue I had with Afterburner, where it would just shut down when I opened Siege, so I had to Alt+Tab out of Siege and restart it, otherwise my fan curve wouldn't apply and my card would quickly reach 80C due to it being overclocked. Granted, I'm not using a Vega card, but I don't think there should be a difference regarding this.


----------



## VicsPC

tolis626 said:


> Well, I have Siege and BattleEye installed on my OS SSD and I have none of these issues. Upgrading to 1903 even fixed a weird issue I had with Afterburner, where it would just shut down when I opened Siege, so I had to Alt+Tab out of Siege and restart it, otherwise my fan curve wouldn't apply and my card would quickly reach 80C due to it being overclocked. Granted, I'm not using a Vega card, but I don't think there should be a difference regarding this.


I have the same issue with ab and siege. I just launch siege then launch afterburner anyways, ill try it now and see if it did fix it.

Edit: Yup seems fixed for me as well.


----------



## 98uk

I replaced the TIM and thermal pads on my reference Vega 64 for Thermal Grizzly stuff. This is my totally unscientific back of a cigarette box before/after data, tested on BFV on toughest graphical map (Twisted Steel) at roughly same ambient temps:

*Stock TIM/thermal pads:*

Max GPU Diode: 68c (Average: 59.6c)
Max GPU Hotspot: 83c (Average: 74.8c)
Max GPU Memory: 75c (Average: 67.3c)

*With TG Kryonaut + TG Minus Pad 8:*

Max GPU Diode: 62c (Average: 54.4c)
Max GPU Hotspot: 81c (Average: 70.5c)
Max GPU Memory: 68c (Average: 61c)

So, there was a decent drop across the board, give or take a bit for variations in my totally unscientific testing. However, the hotspot remains relatively high, which is what I was aiming to reduce.

Thoughts? I don't have enough stuff left to reseat, and I think that's just the limitation of the chip's heatspreader; there isn't much more TIM can do (in terms of the high hot spot). Then again, perhaps I'm overstating how bad the temps are?


EDIT: The grand question, was it worth it? 

*No.* :thumb:
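For anyone skimming the numbers, here's a quick back-of-the-envelope sketch tallying the reported drops (values copied straight from the post; this is just arithmetic, not a claim about methodology):

```python
# Before/after temps from the repaste above, as (max, average) in C.
before = {"diode": (68, 59.6), "hotspot": (83, 74.8), "memory": (75, 67.3)}
after  = {"diode": (62, 54.4), "hotspot": (81, 70.5), "memory": (68, 61.0)}

for sensor in before:
    d_max = before[sensor][0] - after[sensor][0]
    d_avg = round(before[sensor][1] - after[sensor][1], 1)
    print(f"{sensor}: max -{d_max}C, avg -{d_avg}C")
```

Which makes the pattern obvious: averages and memory improved the most, while the hotspot max barely moved.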


----------



## Dhoulmagus

That's a very good improvement, though I would have gone with a benchmark for a more reliable load to compare. The changes in memory and average GPU temps are significant.

Those are good hotspot numbers. All you can do is attempt to reapply the paste repeatedly until you find a lucky fit, but it's not likely.


----------



## 98uk

Serious_Don said:


> Thats a very good improvement, though I would have gone with a benchmark for a more reliable load to compare.
> 
> Those are good hotspot numbers. All you can do is attempt to reapply the paste repeatedly until you find a lucky fit, but it's not likely.


Thanks for confirming. I suppose I didn't really know what to expect, so if that is good, I'm happy. You're right, benchmark figures probably would have been better, but it was a bit of a spur-of-the-moment thing as summer temps dawn!

I think I'll leave it as is; taking it apart again would probably mean needing more thermal pads since they tend to rip... and with no guarantee the hot spot would get better. It's likely just the limit of the chip's heatspreader.


----------



## VicsPC

Serious_Don said:


> Thats a very good improvement, though I would have gone with a benchmark for a more reliable load to compare. The changes in memory and average gpu temps are significant.
> 
> Those are good hotspot numbers. All you can do is attempt to reapply the paste repeatedly until you find a lucky fit, but it's not likely.


That might help, but case airflow will help hotspot temps way more. Mine reaches around 60°C or so on water with an ambient of around 23°C. I'd love to try different pads and see how far I can get temps down; I think next time I refill my loop I might give it a go.


----------



## 98uk

VicsPC said:


> That might help but case airflow will help hotspot temps way more. Mine reaches around 60°C or so on water with ambient of around 23°C. Id love to try dif pads and see what i can get temps down too, i think next time i refill my loop i might give it a go.


That would make sense. I'm using a really old Corsair 800D case which has pretty poor airflow; it only has a single intake at the bottom.

Also, regarding Windows 1903, deleting BattlEye from services and Common Files fixed it and I was able to upgrade without an issue.


----------



## Dhoulmagus

VicsPC said:


> That might help but case airflow will help hotspot temps way more. Mine reaches around 60°C or so on water with ambient of around 23°C. Id love to try dif pads and see what i can get temps down too, i think next time i refill my loop i might give it a go.


I had similar temps under water (probably faster heat transfer and overall lower temps around the hot spots), but once I swapped back to the reference cooler my hot spot temps were getting back to 80-90. Re-applying paste would vary it a bit, but I absolutely cannot get those numbers to come down anymore without undervolting. This is in a Corsair 900D with enough fan power to make it levitate. I've seen so many people report hot spots over 100-110C where nothing besides undervolting would fix it; maybe it's caused by variations in HBM height on the cards without the resin leveling it all out... who knows.

I'm sure 80C hot spots are nothing to worry about though, just saying these temps are on the good side of normal for air cooled.


----------



## candasulas

Hi guys. 

I applied an undervolt to my video card and was using it smoothly. But my card is no longer responding to the UV; I can't maintain stability when I change the P6, P7 and HBM settings.

My settings were as follows:

P6 1450 MHz / 960 mV
P7 1550 MHz / 1050 mV
HBM 945 MHz / 960 mV
Power Limit +50

I have used these values for months with no problems. But now my card is not stable. In the Valley test, the GPU speed drops to 1200 MHz.
My HBM speed is fixed at 945 MHz and sometimes falls to 800 MHz.
What should I do for a stable UV value?

By the way:

My card runs smoothly in the Power Saving, Balanced and Turbo modes in Radeon Settings.
My goal is to maintain speed and reduce voltages.
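As a side note for anyone building a similar profile: a common rule of thumb when undervolting Vega is that each higher P-state should keep both frequency and voltage at or above the state below it, since the driver clamps states against each other. This is just an illustrative sketch of the profile above as data, with that sanity check, not the poster's actual tool:

```python
# Hypothetical representation of the undervolt profile described above.
# Rule of thumb: higher P-states should not undercut lower ones in
# frequency or voltage, or the driver may clamp/ignore the values.
core_pstates = [
    ("P6", 1450, 960),   # (name, MHz, mV)
    ("P7", 1550, 1050),
]

for (n1, f1, v1), (n2, f2, v2) in zip(core_pstates, core_pstates[1:]):
    assert f2 > f1,  f"{n2} frequency must exceed {n1}"
    assert v2 >= v1, f"{n2} voltage must not undercut {n1}"

print("profile is internally consistent")
```

Of course, a profile can pass this check and still be unstable if the silicon has degraded or ambient temps have risen, which fits the "worked for months, now crashes" symptom.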


----------



## 98uk

Anyone heard of or had specific DisplayPort out ports gone bad? 

I just had a sudden failure (I was playing a game and then it began flickering), after which one port will not work. When connected, I get flicker across all monitors and occasionally it will lock up/freeze. It just about runs at desktop, but in games it flickers. Sometimes the monitor on that port will come up in all kinds of weird and wonderful colours (see below).

When nothing is plugged into that specific port, it's all fine. Everything works as expected and overclock is fine. I've tried different monitors and cables in that one port, but nothing seems to help. I also get the error below when that port is in use!


----------



## sinnedone

Bad cable or poor quality cable. 

Are you using an adapter of any kind?


----------



## 98uk

sinnedone said:


> Bad cable or poor quality cable.
> 
> Are you using an adapter of any kind?


Unfortunately I've tried different cables and different monitors. The only factor that remains the same is the port :s

I tried two different monitors/cables which are just DP, no adapter. Tried one with DVI to DP, same behaviour.

As you can see, the weird colours occur on both monitors with different cables, on the same DP port on the graphics card. Again, both monitors work fine when plugged into the other DP ports on the card.


----------



## sinnedone

Honestly, other than a damaged port or maybe foreign material in the port, the only things I can think of to try are different drivers, or maybe flipping the BIOS switch to see if it's a strange software issue.


----------



## Ne01 OnnA

98uk said:


> Unfortunately i've tried different cables and different monitors. The only factor that remains the same is the port :s
> 
> I tried two different monitors/cables which are just DP, no adaptor. Tried one with DVI to DP, same behaviour.
> 
> As you can see, the weird colours occurr on both monitors with different cables, same DP port on the graphics card. Again, both monitors work fine when plugged into the other DP ports on the graphics card.


It's a known Windows 1903 bug; some guys have these anomalies as well.

To solve this:
1. After boot/restart, go into Display settings once (right click on desktop, bottom settings)
2. Done, the bug is gone for good.

Not your fault


----------



## creatron

98uk said:


> Unfortunately i've tried different cables and different monitors. The only factor that remains the same is the port :s
> 
> I tried two different monitors/cables which are just DP, no adaptor. Tried one with DVI to DP, same behaviour.
> 
> As you can see, the weird colours occurr on both monitors with different cables, same DP port on the graphics card. Again, both monitors work fine when plugged into the other DP ports on the graphics card.


Install the new Adrenalin 19.5.2 (June 3)!


----------



## 98uk

creatron said:


> Install the new adrenalin 19.5.2 june3 !


Ah does this fix the bug described below?



Ne01 OnnA said:


> It's WIndows 1903 known bug, some guys have this anomalies as well.
> 
> To solve this:
> 1. After boot/restart go into Display settings once (right click on desktop, bottom settings)
> 2. Done, Bug is gone for good.
> 
> Not Your fault


----------



## Ne01 OnnA

98uk said:


> Ah does this fix the bug described below?


And? Did it work?
After a restart, go once into Windows -> right click on desktop -> Display settings


----------



## 98uk

Ne01 OnnA said:


> And? It worked?
> After restart Once go into Windows -> Right click on desktop - Display settings


Tried that, didn't do anything i'm afraid.


----------



## Ne01 OnnA

98uk said:


> Tried that, didn't do anything i'm afraid.


Check this thread (need to do after each restart BTW)

-> https://community.amd.com/thread/239781


----------



## 98uk

Ne01 OnnA said:


> Check this thread (need to do after each restart BTW)
> 
> -> https://community.amd.com/thread/239781


Hmm, strange. It seems similar, but also different. I don't have a FreeSync or G-Sync screen, so I don't see the option for variable refresh.

Also, I get screen tearing, freezes and crashing, which doesn't seem to be an issue in that thread. It only started yesterday, a week or two after I installed 1903, and I hadn't touched the drivers since.


----------



## Ne01 OnnA

98uk said:


> hmm, strange. It seems similar, but also different. I don't have a freesync or gsync screen, so I don't see the option for variable refresh.
> 
> Also, I get screen tearing, freezes and crashing which doesn't seem to be an issue in that thread. It only started yesterday, a week or two after I installed 1903 and hadn't touched the drivers since.


Make a thread on the AMD/ATI driver or GPU page.
Let them know there are some issues.


----------



## dagget3450

98uk said:


> Anyone heard of or had specific DisplayPort out ports gone bad?
> 
> I just had a sudden failure (was playing a game and then it began flickering) after which one port will not work. When connected, I get flicker across all monitors and occassionally it will lock up/freeze. It just about runs at desktop, but in games it flickers. Sometimes the monitor on that port will come up all kinds of weird and wonderful colours (see below).
> 
> When nothing is plugged into that specific port, it's all fine. Everything works as expected and overclock is fine. I've tried different monitors and cables in that one port, but nothing seems to help. I also get the error below when that port is in use!



I was getting this on my Vega FE the other day, but it was temporary so I just said oh well. I think I also got it on my Vega 64 once recently as well (colour issues, not stability).


----------



## pbiernik

Hey guys! Does anyone know how to undervolt a Vega 56 with the latest version of MSI Afterburner?
There is now the possibility of undervolting in each P-state, but although I adjust the curve and then press the apply button, the changes are not saved.
You can try it by pressing Ctrl+F.


----------



## jearly410

dagget3450 said:


> 98uk said:
> 
> 
> 
> Anyone heard of or had specific DisplayPort out ports gone bad?
> 
> I just had a sudden failure (was playing a game and then it began flickering) after which one port will not work. When connected, I get flicker across all monitors and occassionally it will lock up/freeze. It just about runs at desktop, but in games it flickers. Sometimes the monitor on that port will come up all kinds of weird and wonderful colours (see below).
> 
> When nothing is plugged into that specific port, it's all fine. Everything works as expected and overclock is fine. I've tried different monitors and cables in that one port, but nothing seems to help. I also get the error below when that port is in use!
> 
> i was getting this on my Vega FE the other day but it was temporary so i just said oh well, i think i also got it on my vega 64 once recently as well(color issues not stability)
Click to expand...

I’m getting this too. Been happening for a long while.


----------



## Wuest3nFuchs

The RX 5700 XT now has a bunch of nice features, but I fear Navi 20 could really be awesome. In the real world, though, I need to choose wisely between a Vega 56 Pulse or Nitro!? Nitro or Pulse...?

Gesendet von meinem SM-G950F mit Tapatalk


----------



## Ne01 OnnA

Wuest3nFuchs said:


> Now rx5700xt has a bunxh of nice features, but i fear of navi 20 and it could really be awesome .But in real world i tend to choose wisely between a vega56 pulse or nitro!? Nitro or Pulse...?
> 
> Gesendet von meinem SM-G950F mit Tapatalk


HBCC at 12GB or 16GB (if You have 32GB)
-> https://www.mindfactory.de/product_...Aktiv-PCIe-3-0-x16--Full-Retail-_1235216.html


----------



## Wuest3nFuchs

Ne01 OnnA said:


> HBCC at 12GB or 16GB (if You have 32GB)
> 
> -> https://www.mindfactory.de/product_...Aktiv-PCIe-3-0-x16--Full-Retail-_1235216.html


thx onna

Gesendet von meinem SM-G950F mit Tapatalk


----------



## Ipak

Today this weird colour shifting happened to me too. The screen was flickering with red, green and blue colours every couple of seconds, but it disappeared after a restart. I'm on 19.6.1, and for a screen at the moment I use an LG 4K HDR TV over HDMI (4K, 60 Hz, RGB 4:4:4, HDR disabled).


----------



## Ne01 OnnA

Ipak said:


> Today this wierd color shifting happend to me too. Screen was flickering with red green and blue colors every couple of seconds, but it dissapiered after restart. I'm on 19.6.1 and for screen atm i use LG 4k HDR TV over HDMI ([email protected] 60hz rgb 4:4:4 hdr disabled)


After a restart, once Windows is loaded, go into: right click on desktop -> Display settings.

Every day, every time, until MS fixes this issue.

Note:
This issue occurs only in Windows 10 1903.


----------



## ht_addict

Have you upgraded to 1903? I have a feeling it's that. I get the same issue on my 65" OLED.


----------



## 98uk

Anyone know why my card occasionally gets stuck clocks?

I'll be playing and then the memory gets stuck at 167MHz until I reboot. It's like everything freezes, but it always sticks at 167MHz.


----------



## Xinoxide

98uk said:


> Anyone know why my card occassionally gets stuck clocks?
> 
> I'll be playing and then memory always gets stuck at 167mhz until I reboot. It's like everything freezes, but it's always 167mhz that it sticks at.


What have you tried? What card do you have? What BIOS is on it?

We need information to formulate suggestions.


----------



## 98uk

Xinoxide said:


> What have you tried? what card do you have? What bios is on it?
> 
> We need information to formulate suggestions.


I think it's driver related, to be honest. Anything after 19.4.1 does it for me. I've tried the latest, 19.5.2, and the same behaviour occurs; it's strange.

I have rolled back again and will have to wait and see.


----------



## cplifj

Do you use HBCC when the clocks get stuck? I get stuck clocks when quitting some games with HBCC on; it doesn't happen when HBCC is off.


----------



## 98uk

cplifj said:


> do you use HBCC when clocks get stuck ? I get clocks stuck when quiting some games when hbcc is on, it doesn't happen when hbcc is off.


Nah, not used HBCC. Haven't edited anything bar an overclock on my memory and custom fan profile, all via Wattman.


----------



## Xinoxide

98uk said:


> I think it's driver related to he honest. Anything after 19.4.1 does it for me. I've tried the latest, 19.5.2 and the same behaviour occurs, it's strange.
> 
> I have rolled back again and will have to wait and see.


I had this issue when I first got my Vega 64. I had issues with just about every driver until I found an updated BIOS.

I might recommend seeing if you can get a newer BIOS flashed to the card.

These days, the newer drivers and that same BIOS I had found all play quite nicely with each other.

I now push my card to 400W on the reg.


----------



## 113802

98uk said:


> Anyone know why my card occassionally gets stuck clocks?
> 
> I'll be playing and then memory always gets stuck at 167mhz until I reboot. It's like everything freezes, but it's always 167mhz that it sticks at.


It's due to browsers and any other program whose GUI uses the GPU, not the GPU drivers. Close out all background tasks when gaming and you'll see what I mean.

It might be related to how Windows 1903 utilizes the GPU. I tried older drivers and they exhibited the same issue.


----------



## nolive721

hello

So, I received my Vega 64 LC last month after some posts in this thread. Really nice looking card, even compared to my 1080 Ti EVGA Hybrid.

Performance-wise, focusing on undervolting, it seems to perform slightly better than the 1080 Hybrid I had, but it falls behind the 1080 Ti by a good 15% in FPS.

In a synthetic benchmark like Firestrike, the Vega manages to score around 25,200 points, but my Ti easily reaches 30,000 points.

I'll do some more overclocking over the weekend, but I was hoping for something closer to 27,000 points; that might be a bit unrealistic.

I have managed to get my three 1080p monitors to output at their max refresh rate of 75 Hz (all three are FreeSync) and will test FreeSync in games, which was one of the main drivers to go back and try AMD.

One thing I have noticed, and it's annoying: after setting the Eyefinity resolution at 5760x1080 over the 3 screens, I can't find a way to toggle to a single screen unless I discard the Eyefinity setup, and then I have to set it up again.

I thought I could use Windows key+P to quickly toggle the projected display between 3 screens and 1, like NV Surround does, but it doesn't work.
Does anybody here know a fix for that?


----------



## 98uk

Xinoxide said:


> I had this issue when I first got my vega 64. I had issues with like every driver until I found an updated Bios.
> 
> I might recommend seeing if you can get a newer bios flashed to the card.
> 
> These days, the newer drivers and that same bios I had found all play quite nicely with each other.
> 
> I now push my card to 400W on the reg.


Hmm, okay. I'll reach out to Sapphire and see what they can suggest. They don't seem to make their BIOSes public, only available when you contact them.




WannaBeOCer said:


> It's due to browsers and any other program GUI that uses the GPU, not GPU drivers. Close out all background task when gaming and you'll see what I mean.
> 
> It might be related to how Windows 1903 utilizes the GPU. I tried older drivers and they exhibited the same issue.


I hadn't thought of that. That said, I don't really do too much on my PC, so there isn't anything specific I can think of that would be open and cause this.


----------



## 113802

98uk said:


> I hadn't thought of that, though that said, I don't really do too much on my PC, so there isn't anything specific that I can think would be open and would cause this.:thumbsdow


Doesn't have to be GPU intensive; just having Chrome or Edge open in the background and minimized also causes it. I'll check if Firefox also causes it; if not, I'll be using it. I can't even listen to Google Music anymore while playing.


----------



## VicsPC

WannaBeOCer said:


> Doesn't have to be GPU intensive, just having Chrome or Edge opened in the background and minimized also causes it. I'll check if FireFox also causes it if not I'll be using it. Can't even listen to Google Music anymore while playing.


I think turning off hardware acceleration fixes the issue. I don't have this problem on my V64 when I play games and use YouTube Looper. Don't have it using YouTube either.


----------



## Xinoxide

VicsPC said:


> I think turning off hardware acceleration fixes the issue. I don't have this problem on my v64 when i play games and use youtube looper. Don't have it using youtube as well.


I have hardware acceleration enabled because I play Hole.io. 

I constantly have chrome open in the background and never run into this issue as described.

I DO use FRC in most all games, so I do see memory clocks drop to 800 when boost clocks are low due to the FRC.


----------



## VicsPC

Xinoxide said:


> I have hardware acceleration enabled because I play Hole.io.
> 
> I constantly have chrome open in the background and never run into this issue as described.
> 
> I DO use FRC in most all games, so I do see memory clocks drop to 800 when boost clocks are low due to the FRC.


Same, because of a FreeSync monitor. I don't think I've ever had it get stuck at 167MHz while playing games. I do have my memory at 1100, but even at 945, where I've had it for the past year, I don't think I've ever seen it get stuck at 167.


----------



## nolive721

nolive721 said:


> hello
> 
> so received my VEGA 64 LC last month after some post on this thread here.really nice looking card even compared to my 1080Ti EVGA HYbrid
> 
> but performance wise, focusing on Undervolting, it seems performing slightly better than the 1080Hybrid I had but falls behind by a good 15% on FPS vs the 1080Ti
> 
> In synthetic benchmark like Firestrike, the VEGA manages to score around 25,200points but my Ti reaches easily 30,000points
> 
> Will do some more overclocking over the weekend but I was hoping for something closer to 27,000points, it might be now a bit unrealistic here.
> 
> I have managed to get my 3 1080p monitors to output at their max refersh rate of 75hZ(3 of them are freesync) and will test the Freesync feature in Games, which was one the main drivers to go back to try AMD
> 
> One thing I have noticed and its annoying, after setting EYEFINITY res at 5760x1080 over the 3 screens, I cant find a way to toggle to single screen unless I discard the Eyefinity set-up but then have to set it again
> 
> I thought I could make use of Windows key+P to toggle projection display 3 vs 1 screen quickly, like it does with NV Surround, but it doesn't work.
> Anybody here knows a fix for that?


Looking at the lack of traction, I guess it's not possible to achieve this. I will switch off one of my monitors and keep only 2 on when I am not gaming, then.


I did some more undervolting/overclocking runs over the weekend, and it's really nice to have the 3-monitor FreeSync feature working.

I bumped into something a bit weird though; maybe you guys here have experienced something similar.

Using synthetic benchmarks like Heaven or 3DMark, as well as games like AC or Crysis 3, I have no problem running my RAM overclocked at 3066 MHz (Corsair Vengeance, Hynix die) in parallel with GPU testing.

But PCARS2 constantly crashes with whatever UV/OC I apply, even ones I consider minor.

Any idea why? This game was perfectly fine with my 1080 Ti, so I am puzzled at why this is happening with the Vega.


Thanks so much


----------



## Blackops_2

So there are some quality deals on Vega 56 right now at Newegg, mainly the MSI Airboost for 260 after MIR. But I can't find a block under 150. Does anyone know where to get one? That's what's holding me back; I'm not about to drop $175 on a block.


----------



## sinnedone

Blackops_2 said:


> So there are some quality deals on vega 56 right now at newegg. Mainly MSI ariboost for 260 after MIR. But i can't find a block under 150. Does anyone know where to get one? That's what's holding me back i'm not about to drop $175 on a block.


Someone was selling a Vega 64 with a block for 400 in the marketplace.

Blocks are expensive unless you go used.

You can check eBay for a Barrow or Bykski block too.


----------



## Ne01 OnnA

Some F1 2019 test with Zen @ 1440p

==


----------



## Blackops_2

Found a reference 56 with Samsung memory on eBay for $250. Probably mined, but meh, it opens up block compatibility. Bykski/Barrow GTG? If so, I'm about to order a block for it.


----------



## Bruizer

Blackops_2 said:


> So there are some quality deals on vega 56 right now at newegg. Mainly MSI ariboost for 260 after MIR. But i can't find a block under 150. Does anyone know where to get one? That's what's holding me back i'm not about to drop $175 on a block.


Just pulled the trigger on the MSI Airboost this past weekend. I've enjoyed my R9 Nano and its small size (wish the Vega 56 Nano had been readily available at these prices, but oh well), but I'm looking forward to joining the Vega club and doing some undervolting! Personally, I like the idea of a blower (despite the general increase in noise). I miss HIS' custom blowers. Owned a HIS 5770 and 7950 and those blowers were great.


----------



## Blackops_2

Bruizer said:


> Just pulled the trigger on the MSI Airboost this past weekend. I've enjoyed my R9 Nano and it's small size (wish the Vega 56 Nano would have been readily available at these prices but oh well), but I'm looking forward to joining the Vega club and doing some undervolting! Personally, I like the idea of a blower (despite general increase in noise). I miss HIS' custom blowers. Owned a HIS 5770 and 7950 and those blowers were great.


I probably should've gotten a new one, but what bothered me with Newegg was the tax, which basically meant it was only $30 off. Then I started reading about Hynix and Samsung memory and was thinking that if I could get one with Samsung memory I'd be better off. I came across a guy with good reviews who says it was "for professional use only"; I messaged him asking what that entailed, but $250 to my door was hard to turn down.

I should be able to put a Bykski water block on it along with the EK backplate, right? I mean, I found the EK block for like $150, but that's still pricey for a block IMO.


----------



## Loladinas

Blackops_2 said:


> I should be able to put an EK backplate on it with a Bykski water block on it with the EK backplate right? I mean i found EK for like $150 but still pricey for a block IMO.


I mean, the holes are all standard, there should be no reason for it not to fit.

I'm in the same boat as you. Vega 56s were pretty cheap on the local second-hand market just before the Navi announcement. I saw a couple go as low as 150€, but they seemed a bit suspicious. Grabbed one for 180€, with Samsung memory. I haven't tried overclocking it properly because the card's just begging for better cooling, but it undervolts alright: it holds stock boost at 1V, and memory is stable at 950MHz. Now I'm waiting for my Barrow block to be delivered. All in all, for ~250€ total I don't think I can complain.


----------



## Bruizer

Blackops_2 said:


> I probably should've gotten a new one but what bothered me with newegg was the tax which basically only had it at $30 off. Then started reading on Hynix and Samsung memory and was thinking if i could get one with Samsung memory i'd be better off. Came across a guy that has good reviews, says it was "for professional use only" i messaged him asking what that entailed, but $250 to my door was hard to turn down.
> 
> I should be able to put an EK backplate on it with a Bykski water block on it with the EK backplate right? I mean i found EK for like $150 but still pricey for a block IMO.


Definitely hard to turn down, and yeah, the tax thing is never cool. BUT... the peace of mind of knowing that if there is an issue I can return it relatively easily and get taken care of, plus a warranty, is also hard to pass on. Haha. Granted, I'm playing the Hynix/Samsung lottery, but I don't plan to do much other than undervolt and maximize boost. I wasn't planning on flashing the V64 BIOS.

But I feel ya!


----------



## Blackops_2

Loladinas said:


> I mean, the holes are all standard, there should be no reason for it not to fit.
> 
> I'm in the same boat as you. Vega 56's were pretty cheap on the local second hand market just before Navi announcement. I saw a couple go as low as 150€, but they seemed a bit suspicious. Grabbed one for 180€. With Samsung memory. Haven't tried overclocking it properly because the card's just begging for better cooling, but it undervolts alright. Holds stock boost at 1v, memory is stable at 950Mhz. Waiting for now is my Barrow block to get delivered. All in all, for ~250€ total I don't think I can complain.


That's what I was thinking; I don't see why it wouldn't fit. Reviews for Bykski's stuff seem legit, and it's $25 cheaper than the EK. Though what is $25 in the long run?



Bruizer said:


> Definitely hard to turn down, and, yeah, the tax thing is never cool. BUT...for peace of mind knowing that if there is an issue I can return it relatively easy and get taken care of, plus a warranty, is also hard to pass on. Haha Granted I'm playing the Hynix-Samsung lottery, I don't plan to do too much other than undervolt and maximize boost. Wasn't planning on flashing the V64 bios.
> 
> But I feel ya!


Yeah, I just wanted to maximize performance and be sitting at 1080 levels of performance. My thing was that the MSI Airboost had pretty terrible reviews too; granted, not that many reviews, but it worried me, and then block compatibility was another concern. I knew the EK fits, but I wasn't sure about the Bykski. Warranty is a huge plus, and now I'm kind of feeling dumb for not spending $30 more for the warranty after saying "what is $25 in the long run" above, lol. I will say, though, that in the past I've had far better results sticking with reference PCBs from AMD than AIB, unless it's a specialty card like a Lightning. Going to see what it will hit.

The guy said it was used professionally, so I take it content creation?


----------



## Bruizer

Blackops_2 said:


> Yeah i just wanted to maximize performance and be sitting at 1080 levels of performance. My thing was the MSI airboost had pretty terrible reviews too granted not that many reviews but it worried me, then block compatibility was another thing. I know the EK fits wasn't sure about the Bykski. Warranty is a huge plus and now kind of feeling dumb for not spending $30 more in the long run for warranty and saying above what is $25 in the long run lol. I will say though in the past i've had far better results sticking reference PCB with AMD than AIB, less it's a specialty card like a lightening. Going to see what it will hit.
> 
> Guy said used professionally so i take it content creation?


"Pretty terrible reviews"!!! Don't make me start second-guessing myself, too! Hahaha JK! My take away from the negative reviews were that it's not plug-n-play friendly but shines when undervolted. And whether its Hynix or Samsung is about 50/50.

I also got the free Division 2 Gold Edition and World War Z, but I'm probably never going to play those games. Apparently I have to subscribe to Uplay if I want Division 2, and you can't just sell the key due to some form of hardware validation. I didn't even realize I was getting them until I was checking out, so it's whatever. Haha


----------



## Blackops_2

Bruizer said:


> "Pretty terrible reviews"!!! Don't make me start second-guessing myself, too! Hahaha JK! My takeaway from the negative reviews was that it's not plug-n-play friendly but shines when undervolted. And whether it's Hynix or Samsung is about 50/50.
> 
> I also got the free Division 2 Gold Edition and World War Z, but I'm probably never going to play those games. Apparently I have to subscribe to Uplay if I want Division 2, and you can't just sell the key due to some form of hardware validation. I didn't even realize I was getting them until I was checking out, so it's whatever. Haha


Yeah, I figured the same, but idk, I went through XFX 7970s so I'm always a little wary. Given the reputation of Vega you're probably right that it's nothing to worry about; MSI is a good company as well. I'll take Division 2 off your hands if you don't want it haha. I think it only works if you have a Vega or something like that?


----------



## Bruizer

Blackops_2 said:


> Yeah, I figured the same, but idk, I went through XFX 7970s so I'm always a little wary. Given the reputation of Vega you're probably right that it's nothing to worry about; MSI is a good company as well. I'll take Division 2 off your hands if you don't want it haha. I think it only works if you have a Vega or something like that?


It looks like all new AMD cards (at least rx580/590 and Vegas). I'll keep you in mind.


----------



## Blackops_2

Bruizer said:


> It looks like all new AMD cards (at least rx580/590 and Vegas). I'll keep you in mind.


Please do, I just ordered the Bykski block. 

What's a good goal to hope for with a Vega 56 on water? 1600/1000?


----------



## diggiddi

nolive721 said:


> Hello,
> 
> So I received my Vega 64 LC last month after some posts in this thread. Really nice-looking card, even compared to my 1080 Ti EVGA Hybrid.
> 
> Performance-wise, focusing on undervolting, it seems to perform slightly better than the 1080 Hybrid I had, but it falls behind the 1080 Ti by a good 15% on FPS.
> 
> In synthetic benchmarks like Firestrike, the Vega manages to score around 25,200 points, but my Ti easily reaches 30,000 points.
> 
> Will do some more overclocking over the weekend, but I was hoping for something closer to 27,000 points; that might be a bit unrealistic now.
> 
> I have managed to get my 3 1080p monitors to output at their max refresh rate of 75Hz (all 3 are FreeSync) and will test the FreeSync feature in games, which was one of the main drivers to go back and try AMD.
> 
> One thing I have noticed, and it's annoying: after setting an Eyefinity res of 5760x1080 over the 3 screens, I can't find a way to toggle to a single screen unless I discard the Eyefinity setup, but then I have to set it up again.
> 
> I thought I could use Windows key+P to quickly toggle the projection between 3 and 1 screens, like it does with NV Surround, but it doesn't work.
> Anybody here know a fix for that?


I think your best bet is to submit a complaint to AMD and suggest they add this feature.


----------



## Ne01 OnnA

Blackops_2 said:


> Please do just ordered the Bykski block.
> 
> What's a good goal to hope for for Vega 56 on water? 1600/1000?


IMHO

You can expect (as a range):

1600-1668 or even 1700MHz on the core (with good cooling),
and with Samsung HBM up to 1120MHz (use the Mem Timing Tweak tool instead of pushing past 1150MHz)

-> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/
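For context on what those HBM clocks buy you, raw bandwidth scales linearly with the memory clock. A quick sketch (my own numbers, assuming Vega 56/64's full 2048-bit HBM2 bus):

```python
# HBM2 bandwidth on Vega's 2048-bit bus (assumption: both Vega 56
# and 64 keep the full bus width; only stock clocks differ).
BUS_WIDTH_BITS = 2048

def hbm_bandwidth_gbs(mclk_mhz: float) -> float:
    # DDR = 2 transfers per clock; /8 bits -> bytes; /1000 MB -> GB
    return mclk_mhz * 2 * BUS_WIDTH_BITS / 8 / 1000

for mclk in (800, 945, 1100):
    print(f"{mclk} MHz -> {hbm_bandwidth_gbs(mclk):.0f} GB/s")
```

So going from Vega 56's stock 800MHz to 1100MHz is roughly a 37% bandwidth bump, which is why memory OC pays off so well on these cards.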


----------



## diggiddi

Ne01 OnnA said:


> IMHO
> 
> You can expect (as a range):
> 
> 1600-1668 or even 1700MHz on the core (with good cooling),
> and with Samsung HBM up to 1120MHz (use the Mem Timing Tweak tool instead of pushing past 1150MHz)
> 
> -> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/


At those speeds how does it compare to a 64 LC?


----------



## Ne01 OnnA

diggiddi said:


> At those speeds how does it compare to a 64 LC?


I have an LC (Vega 64 XTX by AMD).
It can do much more, but I don't need that speed (1700MHz+).

I saw 1766MHz at 1137mV, 
so it should do ~1800MHz at 1150-1175mV (IMHO only good for Firestrike; 29k+ or even 30k is possible).


----------



## LicSqualo

Hi guys, has anyone tested the 3DMark PCI Express feature test?
This is my result. Seems a bit low for a PCIe 3.0 x16 interface, no?


----------



## Bruizer

My MSI Air Boost came in yesterday. Had scuffs/scratches brand new. The box was in its original shrink wrap, so it's not like I was accidentally sent an open-box unit. Anywho... I decided to give it a run regardless. Ran well, but had Hynix memory (the most recent batches probably all do). So, I returned it for being scuffed/scratched out of the box, and we'll see if I get lucky and land one with Samsung memory. Not holding my breath though.

ALSO... Feel like an idiot. That "Free Gift" game combo with purchase isn't free. Haha. If for some reason they run out of stock, they will only refund me $171.00, because the "Free Gift" is not returnable/refundable; instead, they deduct its $119 value from the graphics card price. So everyone send good vibes my way that the next one I get just looks good and works at this point. Haha. I'd rather have a working card than a refund of $171 instead of the $300 I spent.


----------



## doritos93

Crashing with a solid color on the screen (a random color, it seems), then to black. The machine stays on and doesn't reboot. Seems to happen when I up the power target (even only by 20%); the card uses around 280W per Afterburner.

I'm reading a lot of people RMA cards for this? Is this a power issue? I've got an EVGA 800 B3.


----------



## Drake87

doritos93 said:


> Crashing with a solid color on the screen (a random color, it seems), then to black. The machine stays on and doesn't reboot. Seems to happen when I up the power target (even only by 20%); the card uses around 280W per Afterburner.
> 
> I'm reading a lot of people RMA cards for this? Is this a power issue? I've got an EVGA 800 B3.


I had a Vega 64 that acted similarly. RMA it.


----------



## nolive721

Nearly there, about to break my 27k FS barrier!

Any advice on core/memory UV/OC and memory timing tweaking appreciated.


----------



## VicsPC

LicSqualo said:


> Hi guys, someone has tested the 3dMark PCI-Ex test?
> This is my result. Seems a bit lower for a 3.0 x16 lane bus interface, or not?


Gave mine a quick test, getting about 6.5 GB/s. In the BIOS my C7H shows x8 native, but GPU-Z and HWiNFO both show it running at x16, so not sure what's going on there.


----------



## LicSqualo

VicsPC said:


> Gave mine a quick test, getting about 6.5 GB/s. In the BIOS my C7H shows x8 native, but GPU-Z and HWiNFO both show it running at x16, so not sure what's going on there.


Same for me. All the software shows I'm running at x16, while the BIOS shows x8.
But I have two NVMe disks, so x16 would be really strange.
I'm re-running the test, but the max reached so far is 7.26 GB/s. Per Wikipedia, that's about the speed for x8: https://en.wikipedia.org/wiki/PCI_Express
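For anyone following along, the theoretical ceilings are easy to work out. Here's a rough sketch (my own math, not 3DMark's) that only accounts for line encoding, so real test results will land well below these numbers due to packet and protocol overhead:

```python
# Theoretical one-direction PCIe bandwidth per generation.
# Assumption: only the line encoding is modeled; real-world tests
# (like 3DMark's) lose more to packet/protocol overhead.
RAW_GT_S = {1: 2.5, 2: 5.0, 3: 8.0}            # gigatransfers/s per lane
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # line-code efficiency

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Theoretical bandwidth in GB/s for a given generation and lane count."""
    return RAW_GT_S[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

print(f"Gen3 x8:  {pcie_bandwidth_gbs(3, 8):.2f} GB/s")   # ~7.88
print(f"Gen3 x16: {pcie_bandwidth_gbs(3, 16):.2f} GB/s")  # ~15.75
```

By that math, 7.26 GB/s is just under the x8 ceiling, which is why the result looks suspicious for a supposedly x16 link.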


----------



## VicsPC

LicSqualo said:


> Same for me. All the software shows I'm running at x16, while the BIOS shows x8.
> But I have two NVMe disks, so x16 would be really strange.
> I'm re-running the test, but the max reached so far is 7.26 GB/s. Per Wikipedia, that's about the speed for x8: https://en.wikipedia.org/wiki/PCI_Express


If you have 2 M.2 drives and both slots are populated, it's supposed to be x8; the top M.2 uses CPU PCIe lanes, the bottom doesn't. I took my GPU out, cleaned the contacts with alcohol, and my BIOS now shows x16, but I'm still only getting about 7.5 GB/s in the PCIe test, so not sure what's up with that.


----------



## LicSqualo

VicsPC said:


> If you have 2 M.2 drives and both slots are populated, it's supposed to be x8; the top M.2 uses CPU PCIe lanes, the bottom doesn't. I took my GPU out, cleaned the contacts with alcohol, and my BIOS now shows x16, but I'm still only getting about 7.5 GB/s in the PCIe test, so not sure what's up with that.


My card runs at x16; not a single piece of software shows x8. Only the BIOS shows x8, but that's not relevant in this case.
My C6H doesn't have two NVMe slots; one of the drives is on a PCIe card in the 2nd slot (the electrical x8).
Just to be clear: AIDA64, HWiNFO, SIV, GPU-Z, HWMonitor, 3DMark, and all the games I play show my card running at x16.
But I have two NVMe drives and BOTH are running at x4 (PCIe 3.0).
So I need to compare my 3DMark result against other systems with a confirmed x16 card to understand what is true in my configuration.

I'm simply looking for a 3DMark result that is more than 8 GB/s.


----------



## VicsPC

LicSqualo said:


> My card runs at x16; not a single piece of software shows x8. Only the BIOS shows x8, but that's not relevant in this case.
> My C6H doesn't have two NVMe slots; one of the drives is on a PCIe card in the 2nd slot (the electrical x8).
> Just to be clear: AIDA64, HWiNFO, SIV, GPU-Z, HWMonitor, 3DMark, and all the games I play show my card running at x16.
> But I have two NVMe drives and BOTH are running at x4 (PCIe 3.0).
> So I need to compare my 3DMark result against other systems with a confirmed x16 card to understand what is true in my configuration.
> 
> I'm simply looking for a 3DMark result that is more than 8 GB/s.


Gotcha. If you have the second slot populated, the first slot will run at x8. And just for reference, all software showed my card running at x16, but software can be and usually is wrong in some instances; I think it only shows the native link speed, not the actual speed used. Like I said, I cleaned the contacts, put my card back in, and the BIOS now shows x16, and that also changed my 3DMark result from 6.5-ish to 7.5-ish.


----------



## LicSqualo

VicsPC said:


> Gotcha. If you have the second slot populated, the first slot will run at x8. And just for reference, all software showed my card running at x16, but software can be and usually is wrong in some instances; I think it only shows the native link speed, not the actual speed used. Like I said, I cleaned the contacts, put my card back in, and the BIOS now shows x16, and that also changed my 3DMark result from 6.5-ish to 7.5-ish.


I agree with you and the logic you explained. 
But for some reason GPU-Z, the most reliable tool for this "investigation" (PCIe speed/lanes), shows me x16.
Not a single piece of software shows x8 for my card... not ONE, sorry to repeat myself. That's why I'm asking.


----------



## nolive721

Is there any benefit to running the LC rad in push-pull?

With my 1080Ti, I could save 3-4°C on the core temp, allowing the card to maintain its max boost longer.

My 64 LC can reach the low 50s in heavy synthetic benchmarks or gaming, so I am wondering if it's worth the effort?


----------



## LicSqualo

nolive721 said:


> Is there any benefit to running the LC rad in push-pull?
> 
> With my 1080Ti, I could save 3-4°C on the core temp, allowing the card to maintain its max boost longer.
> 
> My 64 LC can reach the low 50s in heavy synthetic benchmarks or gaming, so I am wondering if it's worth the effort?


Yes, of course: the boost limit (and not just boost, all the clocks) is temperature related, as on your 1080Ti.
In GPU-Z you can find the "Hot Spot" temperature that causes the clocks to drop.


----------



## VicsPC

LicSqualo said:


> I agree with you and the logic you explained.
> But for some reason GPU-Z, the most reliable tool for this "investigation" (PCIe speed/lanes), shows me x16.
> Not a single piece of software shows x8 for my card... not ONE, sorry to repeat myself. That's why I'm asking.


Yup, and that's why the BIOS is far more reliable. Again, I think software just shows the link speed and not the speed actually being used. If the BIOS were wrong, then both my tests would be showing the same link speed in 3DMark.


----------



## LicSqualo

VicsPC said:


> Yup, and that's why the BIOS is far more reliable. Again, I think software just shows the link speed and not the speed actually being used. If the BIOS were wrong, then both my tests would be showing the same link speed in 3DMark.


I'm a bit confused: is this result x8 or x16 speed?


----------



## VicsPC

LicSqualo said:


> I'm a bit confused: is this result x8 or x16 speed?


I'd have to test it again, but that seems to be x16; I was getting about 6.2 on x8 in the BIOS and 7.2 on x16 in the BIOS. So the BIOS seems to be reading correctly in that regard, on my board anyways. But I've seen people getting 13 GB/s+ on X470, so I have no idea.


----------



## LicSqualo

VicsPC said:


> I'd have to test it again, but that seems to be x16; I was getting about 6.2 on x8 in the BIOS and 7.2 on x16 in the BIOS. So the BIOS seems to be reading correctly in that regard, on my board anyways. But I've seen people getting 13 GB/s+ on X470, so I have no idea.


Thank you.


----------



## doritos93

Drake87 said:


> I had a Vega 64 that acted similarly. RMA it.


Man, that sucks. Any ideas on how I could attenuate the issue a little? Lowering clocks might help, yeah?

Dud overclocker, and now it's unstable. Going to stay away from Strix cards when it comes to AMD from now on.


----------



## Drake87

doritos93 said:


> Man, that sucks. Any ideas on how I could attenuate the issue a little? Lowering clocks might help, yeah?
> 
> Dud overclocker, and now it's unstable. Going to stay away from Strix cards when it comes to AMD from now on.


Try the low power mode. Should be a switch on the card.


----------



## Wuest3nFuchs

VicsPC said:


> I'd have to test it again, but that seems to be x16; I was getting about 6.2 on x8 in the BIOS and 7.2 on x16 in the BIOS. So the BIOS seems to be reading correctly in that regard, on my board anyways. But I've seen people getting 13 GB/s+ on X470, so I have no idea.


I don't have a Vega yet; maybe I'll order the Vega 56 Pulse today, which is in stock in the EU. Yesterday I tested my R9 Fury and got ~13 GB/s on my CH7.

Sent from my SM-G950F using Tapatalk


----------



## VicsPC

Wuest3nFuchs said:


> I don't have a Vega yet; maybe I'll order the Vega 56 Pulse today, which is in stock in the EU. Yesterday I tested my R9 Fury and got ~13 GB/s on my CH7.
> 
> Sent from my SM-G950F using Tapatalk


Yeah, that's what I mean lol. Not sure why I'm only getting 7 GB/s. Maybe I have too many SATA devices plugged in, but I doubt that would cause it. The BIOS still says I'm at native x16, and I have it set to Gen 3.


----------



## Wuest3nFuchs

VicsPC said:


> Yeah, that's what I mean lol. Not sure why I'm only getting 7 GB/s. Maybe I have too many SATA devices plugged in, but I doubt that would cause it. The BIOS still says I'm at native x16, and I have it set to Gen 3.


I guess you've got the latest drivers already.
I stayed on 19.4.3 because newer drivers are buggy on my end (Win 10 1803). So I really would like to know why it only gives you 7 GB/s...
How many SATA drives do you have connected?

Sent from my SM-G950F using Tapatalk


----------



## doritos93

Drake87 said:


> Try the low power mode. Should be a switch on the card.


Thanks man. I'll try that. I tried backing off core clocks by -10% for the moment; haven't had a crash yet.

I started getting something new within the last day or so lol... when I woke the machine up this morning, my primary monitor was tinted green while the other two were fine. I turned off the monitor and the "green problem" moved to the new primary monitor. Turning the original primary back on caused the problem to transfer back to it. After all the monitors went to sleep and woke up again, the problem went away. Windows is usable in this state.

Happened again a couple hours later but with a different colour; looked like grey.

Man alive, the card is slowly deteriorating.

I already filled out the RMA. I just have to find a stopgap before I send this in.


----------



## VicsPC

Wuest3nFuchs said:


> I guess you've got the latest drivers already.
> I stayed on 19.4.3 because newer drivers are buggy on my end (Win 10 1803). So I really would like to know why it only gives you 7 GB/s...
> How many SATA drives do you have connected?
> 
> Sent from my SM-G950F using Tapatalk


Got 2 SSDs, 1 HDD and one optical drive. From what I read, it was ports 5-6 that might give lower PCIe bandwidth or something, but I'm not sure. Could try unplugging a few to see what it does.


----------



## Wuest3nFuchs

VicsPC said:


> Got 2 SSDs, 1 HDD and one optical drive. From what I read, it was ports 5-6 that might give lower PCIe bandwidth or something, but I'm not sure. Could try unplugging a few to see what it does.


I've also got 2 SSDs, one HDD and one optical drive... so this issue shouldn't depend on the SATA ports in any way. You can try testing all the HDDs/SSDs with GSmartControl and also try CrystalDiskInfo. Since I moved away from Intel a year ago I haven't had any HDD/SSD issues like I had before with Intel, and the cause of the issue I did have turned out to be the SATA cables! One thing I don't have, and also deactivated in the BIOS, is the NVMe option!

Sent from my SM-G950F using Tapatalk


----------



## VicsPC

Wuest3nFuchs said:


> I've also got 2 SSDs, one HDD and one optical drive... so this issue shouldn't depend on the SATA ports in any way. You can try testing all the HDDs/SSDs with GSmartControl and also try CrystalDiskInfo. Since I moved away from Intel a year ago I haven't had any HDD/SSD issues like I had before with Intel, and the cause of the issue I did have turned out to be the SATA cables! One thing I don't have, and also deactivated in the BIOS, is the NVMe option!
> 
> Sent from my SM-G950F using Tapatalk


Yeah, I think I tried that, but I might try it again. I am on an old BIOS for my mobo as well, but I honestly doubt that would be the issue.


----------



## Ne01 OnnA

History:
29.48m Things get interesting


----------



## nolive721

Are people here running Vega 64 CrossFire planning to buy a new X570 board with PCIe Gen 4?

I am toying with the idea of moving from my B350 board to an X570 to build a CrossFire setup with a second cheap Vega 64 in addition to my LC.

Maybe the launch of the 5700 and 5700 XT is reducing the appeal for people here though; I would understand.

thanks


----------



## Minotaurtoo

Not sure if this is really needed for us, but here you go... new drivers: https://drivers.amd.com/drivers/bet...tware-adrenalin-2019-edition-19.7.1-july7.exe


----------



## PontiacGTX

That link doesn't seem to work.
Guru3D direct link:


----------



## THUMPer1

hmm 5700xt in my future?


----------



## miklkit

In the last week or so I have updated to Win10 1903, installed those chipset drivers, and installed the 19.7.1 drivers. Now the CPU is running as well as it ever has, RAM is running better as well, and the Vega 64 is also running better than ever. The biggest jump in performance has been in the I/O, which is almost as good now as it was before all the Intel mitigations trashed overall performance.

This all has also convinced me that there is a problem with my V64's VRAM. It cannot be OCed past 1000 without causing CTDs and BSODs related to running out of video memory. That 5700 XT is suddenly looking pretty good.


----------



## VicsPC

miklkit said:


> In the last week or so I have updated to Win10 1903, installed those chipset drivers, and installed the 19.7.1 drivers. Now the CPU is running as well as it ever has, RAM is running better as well, and the Vega 64 is also running better than ever. The biggest jump in performance has been in the I/O, which is almost as good now as it was before all the Intel mitigations trashed overall performance.
> 
> This all has also convinced me that there is a problem with my V64's VRAM. It cannot be OCed past 1000 without causing CTDs and BSODs related to running out of video memory. That 5700 XT is suddenly looking pretty good.


Really? I've run mine at 1100MHz without any issues; I've even played a few games that use all 8GB of HBM.


----------



## Trender

VicsPC said:


> Really? I've run mine at 1100MHz without any issues; I've even played a few games that use all 8GB of HBM.


Depends on your temps; I can do 1100MHz easily, but not with the reference blower. With the blower it's 1000MHz...


----------



## VicsPC

Trender said:


> Depends on your temps; I can do 1100MHz easily, but not with the reference blower. With the blower it's 1000MHz...


Ah, alright, I wasn't sure what cooler he was on. 1100MHz without any tweaks on water is quite easy; I have a reference card as well, so it doesn't seem to be an issue. I think the reference cooler is barely adequate, to be honest; not sure how GPU manufacturers get away with such crap designs.


----------



## Bruizer

After returning my first MSI Vega 56 Air Boost OC (which had some scuffs out of the box and Hynix memory), my replacement came today scuff free and with Samsung memory!

Popped the memory to 925MHz, upped the power limit to 50%, and undervolted to 1050mV (seems to be the sweet spot between power consumption and clock speeds). So far I am quite pleased with the upgrade from my R9 Fury Nano. :]


----------



## miklkit

VicsPC said:


> Really? I've run mine at 1100MHz without any issues; I've even played a few games that use all 8GB of HBM.



That is why I think mine is defective. Temperatures are well within spec, with it running around 55C and the hot spot around 75C at peak. It has used over 6GB of VRAM a few times but usually stays under 6GB.

The GPU seems to be fine; it's currently at 1660 and could go higher, but then temps start getting high.

EDIT: It's the Sapphire Nitro+ in my sig rig.


----------



## VicsPC

miklkit said:


> That is why I think mine is defective. Temperatures are well within spec, with it running around 55C and the hot spot around 75C at peak. It has used over 6GB of VRAM a few times but usually stays under 6GB.
> 
> The GPU seems to be fine; it's currently at 1660 and could go higher, but then temps start getting high.
> 
> EDIT: It's the Sapphire Nitro+ in my sig rig.


Yeah, seems a bit off; not being able to get 1000MHz out of 945 base HBM is a bit odd. I have a few games that use 8GB of VRAM; I think Frostpunk and STTR are probably the main 2.


----------



## miklkit

Heh. Just to muddy the waters some more: last night I messed with other drivers and system memory settings, then plopped in an old profile. 1660 GPU, 1040 VRAM, +50 power, -70 undervolt. Been playing games for hours with no crashes at all. The hot spot hit 76C and averaged 69C.

It's still not there though. In Strange Brigade, my only DX12 game, it still falls back to DX7-looking textures. The menu says GPUs with low VRAM will struggle, and it is struggling. The Fury it replaced could run SB just fine on ultra, but this V64 cannot.


----------



## miklkit

Well, it looks like I waited almost long enough before bad-mouthing this thing.

It has been getting a little better with all the recent changes but was still not there. Then this morning, in the Windows power section, I found something called "Samsung High Performance". It seems to be the last piece of the puzzle, as performance is noticeably better and is now acceptable.

Look what it just did in Below Zero, a Unity-based early access game. Ambients are a little higher than normal at 77F/25C, but overall temps are up across the board, as well as power draw. It was pulling as high as 580 watts from the wall, where it had previously never gone over 480 watts.


----------



## Wuest3nFuchs

Since my Vega 56 Pulse arrived, I'm really impressed by the chip architecture compared to the 1070 Ti I sold 2 months ago (I had put my R9 Fury back in until midweek).

The HBM2 is from Samsung. I haven't done any OC yet, but at stock the highest core clock was ~1610MHz.
The small PCB is so cute, but it's a devil inside in terms of performance; not on temps so far.

Is there a tool to check whether I could unlock cores, like there was with atomtool/cuinfo?


----------



## Loladinas

Right, so my Chinese waterblock arrived last Tuesday, but I only got around to installing it yesterday. I didn't have much time to play around with it, so I don't have much to say, but, visuals notwithstanding, I think it performs quite well.


----------



## Ne01 OnnA

Wuest3nFuchs said:


> Since my Vega 56 Pulse arrived, I'm really impressed by the chip architecture compared to the 1070 Ti I sold 2 months ago (I had put my R9 Fury back in until midweek).
> 
> The HBM2 is from Samsung. I haven't done any OC yet, but at stock the highest core clock was ~1610MHz.
> The small PCB is so cute, but it's a devil inside in terms of performance; not on temps so far.
> 
> Is there a tool to check whether I could unlock cores, like there was with atomtool/cuinfo?


Try:

1620MHz with 1.100v
HBM2 at 950 with 918mV

Test


----------



## Wuest3nFuchs

Ne01 OnnA said:


> Try:
> 
> 1620MHz with 1.100v
> HBM2 at 950 with 918mV
> 
> Test


Thanks for the tip, OnnA! Will try it out ASAP!

What are the MHz steps on core and HBM when OCing Vega?
I want to start with low overclocking first.


----------



## Ne01 OnnA

Voltage steps for GCN & Vega uArch

==
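To put rough numbers on the steps question: the figure usually quoted for GCN/Vega voltage regulation is 6.25 mV per step (treat that value as an assumption; verify against your own card's tables). A small sketch of what requested voltages actually snap to:

```python
# Assumption: GCN/Vega voltage planes step in 6.25 mV increments
# (the commonly quoted SVI2 VID granularity -- verify on your card).
STEP_MV = 6.25

def snap(mv: float) -> float:
    """Snap a requested voltage to the nearest regulator step."""
    return round(mv / STEP_MV) * STEP_MV

def steps_between(lo_mv: float, hi_mv: float) -> list[float]:
    """All step-aligned voltages from lo_mv to hi_mv inclusive."""
    out, v = [], snap(lo_mv)
    if v < lo_mv:
        v += STEP_MV
    while v <= hi_mv:
        out.append(v)
        v += STEP_MV
    return out

print(snap(1100))  # 1100.0 (already aligned)
print(snap(1137))  # 1137.5 (what a requested 1137 mV becomes)
```

So typing, say, 1137 mV into a tool doesn't necessarily give you exactly that; the regulator lands on the nearest step.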


----------



## Wuest3nFuchs

Ne01 OnnA said:


> Voltage steps for GCN & Vega uArch
> 
> ==


THANKS, OnnA, that's very nice of you!
+rep


----------



## Irev

Anyone thinking of moving to the 5700 XT? It's only about 15% faster than a Vega 64... I really hope a 5800 comes out. IMO, unless the upgrade is at least 30%, it's a lot of $$ for such a little gain. It also depends on whether you're totally fine playing games at lower settings to achieve the same frame rate.


----------



## VicsPC

Irev said:


> Anyone thinking of moving to the 5700 XT? It's only about 15% faster than a Vega 64... I really hope a 5800 comes out. IMO, unless the upgrade is at least 30%, it's a lot of $$ for such a little gain. It also depends on whether you're totally fine playing games at lower settings to achieve the same frame rate.


Oh its coming. https://wccftech.com/amd-radeon-rx-5950-5900-5850-5800-graphics-cards-leaked/


----------



## ZealotKi11er

Irev said:


> Anyone thinking of moving to the 5700 XT? It's only about 15% faster than a Vega 64... I really hope a 5800 comes out. IMO, unless the upgrade is at least 30%, it's a lot of $$ for such a little gain. It also depends on whether you're totally fine playing games at lower settings to achieve the same frame rate.


It's more than 15%; it's more like 25-30%. That is like going from a 2080 to a 2080 Ti. Also, Vega will only get slower as AMD shifts to RDNA.


----------



## Wuest3nFuchs

Hello!

Thinking about a waterblock for my Vega 56 Pulse, but are there any for this small PCB?
The PCB looks more like it's a Vega Nano card, or am I wrong here?

*EDIT:*
Only found this one: https://geizhals.eu/alphacool-eisbl...1646-a1878324.html?t=alle&plz=&va=b&vl=de&v=l

But it's nowhere in stock!


----------



## Ne01 OnnA

Proper Test w/UV: 5700 XT vs Radeon VII vs Vega 64 vs Vega 56 UV/OC | 3900X | 1440P Benchmarks


----------



## PontiacGTX

In that video it doesn't look that impressive vs an RX Vega 64. So the Vega 64 wasn't as slow compared to a 2070 as some reviews suggested?


----------



## S.M.

PontiacGTX said:


> In that video it doesn't look that impressive vs an RX Vega 64. So the Vega 64 wasn't as slow compared to a 2070 as some reviews suggested?


Apples to apples: reviewers don't massage Vega, and Nvidia cards have excellent boost algorithms in stock form.


----------



## Ne01 OnnA

PontiacGTX said:


> In that video it doesn't look that impressive vs an RX Vega 64. So the Vega 64 wasn't as slow compared to a 2070 as some reviews suggested?


Stock 1.2V on Vega (and other GPUs) is a bit too much (that's why Vega looks weak in reviews; it only looks that way).
But 1.1V is fine for almost every Vega out there.
As for Navi, it's good to have a UV of 975mV to 1.025V.
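To put a number on why dropping from 1.2V to 1.1V helps so much: at a fixed clock, dynamic power scales roughly with V². This is a first-order rule of thumb (it ignores leakage and board losses), not a measured figure:

```python
# First-order rule of thumb: dynamic power ~ V^2 * f.
# Ignores leakage and VRM/board losses, so treat it as a sketch.
def rel_power(v_new: float, v_old: float,
              f_new: float = 1.0, f_old: float = 1.0) -> float:
    """Relative dynamic power after a voltage/clock change."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# A Vega undervolted from stock 1.20 V to 1.10 V at the same clock:
print(f"{rel_power(1.10, 1.20):.0%} of stock dynamic power")  # 84%
```

Roughly a sixth of the power budget back for free, which is headroom the boost algorithm can then spend on sustaining clocks.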


----------



## PontiacGTX

Well, that wasn't what I was referring to. I don't know what the undervolt voltages are for Navi, but for Vega most reviews show it being quite a bit slower, and here that isn't the case.


----------



## dagget3450

S.M. said:


> Apples to apples: reviewers don't massage Vega, and Nvidia cards have excellent boost algorithms in stock form.


If you look in the comments of said video you will see the OP post:

"The Radeon VII, Vega 64 and Vega 56 were all recorded using a capture card. Multi Display/ Capture devices don't currently work with the RX 5700XT. The 5700XT was recorded using AMD Relive so there was a small drop in fps so just keep that in mind. Hopefully this issue is fixed in future drivers."

So they weren't exactly apples to apples.


----------



## Wuest3nFuchs

VicsPC said:


> Yea that's what i mean lol. Not sure why I'm only getting 7gb/s. Maybe i have 2 many satas plugged in but doubt that would cause it. BIOS still says im at native x16, and i have it set to gen 3.


@VicsPC

I took a closer look at GPU utilization while benching and testing my Vega 56 Pulse, and compared to the R9 Fury it doesn't run over 5% utilization, measured with the RTSS OSD.

Used the 19.6.3 driver.
FreeSync was deactivated.

*3DMark PCI-E tests*

*Fury*
https://www.3dmark.com/3dm/37395372









*Vega56*
https://www.3dmark.com/3dm/37840815?


----------



## VicsPC

Wuest3nFuchs said:


> @VicsPC
> 
> I took a closer look at GPU utilization while benching and testing my Vega 56 Pulse, and compared to the R9 Fury it doesn't run over 5% utilization, measured with the RTSS OSD.
> 
> Used the 19.6.3 driver.
> FreeSync was deactivated.
> 
> *3DMark PCI-E tests*
> 
> *Fury*
> https://www.3dmark.com/3dm/37395372
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Vega56*
> https://www.3dmark.com/3dm/37840815?


Interesting, I knew it couldn't just be me haha. I'm not bothered anyway; I get the same 3DMark scores as reviews and others, so it's not a worry.


----------



## Wuest3nFuchs

Hey guys, I have a few questions regarding hot spot temp...

What is the GPU Hot Spot temperature for? Is it really just the hottest part of the GPU?
Should I monitor it?
What's the maximum hot spot temperature for a Vega 56?


----------



## PontiacGTX

Wuest3nFuchs said:


> Hey guys, I have a few questions regarding hot spot temp...
> 
> What is the GPU Hot Spot temperature for? Is it really just the hottest part of the GPU?
> Should I monitor it?
> What's the maximum hot spot temperature for a Vega 56?


The reference model can reach 110C.


----------



## S.M.

I just repasted my reference PowerColor RX Vega 64, and the stock thermal paste was like cement/chalk, impervious to alcohol and acetone.

I had to carefully chip away at it with a plastic spudger/credit card. Did anyone else have this experience?

My die is unmolded and my HBM temperatures are the same, so I don't recommend it.


----------



## Wuest3nFuchs

PontiacGTX said:


> The reference model can reach 110C.


Thank you


----------



## Bruizer

Anyone else with a Vega 56 (non-flashed) and Samsung HBM have trouble getting 950MHz on the memory? I've gotten to 925. I may be stable at 935, as I always pass Furmark and Superposition with no artifacting/crashing, etc., but I get random CTDs once in a blue moon in BFV, which has issues of its own, so I can't conclude whether it's just BFV crashing as it normally would, a driver issue, or 935 being the issue.

Slightly bummed, but it may just be the silicon lottery.


----------



## Wuest3nFuchs

Hi there!
RX Vega 56 Pulse user here, testing HBM2 OC for 2 weeks now (I'm new to Vega). 950MHz @ 950mV has pretty much worked out in PUBG, BFV, GTA V and Insurgency Sandstorm.
Also tried lowering the voltage to 940mV; that has held up over 2 days of testing.
Using OverdriveNTool 2.8.

You might try a power target of up to +50%. I tried that yesterday and also lowered core clocks [email protected] , [email protected] , [email protected] MHz. Tested the 3DMark stress tests at 4K and 1440p and it got really hot at 75C; the hot spot was ~103C with core clocks maxing out at ~1500MHz.
So yes, I think I finally figured out how overclocking works on my Vega, because at stock my Vega 56 sits around ~1100-1350MHz on the core most of the time, not producing as much heat as at 1500.

I also noticed that PUBG is very sensitive when it comes to overclocking the HBM.
I was lowering the HBM voltage down to 910mV, still @ 950MHz,
and it wasn't stable; the game crashed!!!

The core @ 1500MHz I only tested in 3DMark.

Sent from my SM-G950F using Tapatalk


----------



## PontiacGTX

Wuest3nFuchs said:


> hi there!
> RX Vega56 Pulse User here and hbm2 oc testing since 2 weeks (im new to vega)testing 950MHz @950mV kinda worked out on pubg ,BfV,gtav,insurgency sandstorm.
> Also tried to lower voltage to 940mV works since 2 days of testing.
> Using overdriventool 2.8 .
> 
> may you try out a powertarget up to +50%,i tried that yesterday and also lowered coteckocks [email protected] , [email protected] , [email protected] MHz. Tested 3dmark Stresstests 4k and 1440p and it was really hot with 75°Cel. Hotspot was ~103°Cel. with Coreclocks max. @ ~1500MHz.
> So yes i think i finally figured out how overclocking is working on my vega. Cause @stock most of the time my vega56 is around ~1100-1350MHz on the core Not producing so much heat as with 1500 .
> 
> I also noticed that pubg is very sensitive when it comes to overclocking hbm.
> As i was lowering hbm voltage down to 910mV ,still @950MHz.
> And it wasnt stable ,game crashed !!!
> 
> The core @1500MHz i only tested on 3dmark.
> 
> 
> 
> Gesendet von meinem SM-G950F mit Tapatalk


Are you sure those temperatures are OK? My RX Vega 56 Red Dragon reaches 65-75°C. Maybe try repasting (don't tighten the cooler too much)?


----------



## nolive721

Enjoying the memory tweaking tool recommended here. These are my settings so far, for core and then memory. A nice bump in 3DMark, but also in demanding games like Crysis 3.


----------



## Wuest3nFuchs

PontiacGTX said:


> are you sure the temperatures are ok? I have seen my RX vega 56 red dragon and reaches 75 c to 65c. maybe try repasting (dont tighten the cooler too much)?
> 
> to


I only got the higher temps because I downclocked the core on P5, P6, P7. I haven't gamed on those clocks yet, just 3DMark stress tests at 4K and 1440p.

I also ordered this Bykski waterblock, https://de.aliexpress.com/item/32912568591.html , with the POM top.

I hope it's good in terms of quality,
since I inspected my Barrow fittings from the shop I ordered from before on AliExpress and they're not great: I inspected 8 of the 20, and none of those 8 looked good.



Sent from my SM-G950F using Tapatalk


----------



## jak234

Jackalito said:


> Hi everyone!
> 
> I grabbed a Sapphire Pulse Vega 56 during the Black Friday sales and I was lucky enough to get one with Samsung HBM2 memory. I spent some time browsing around the web looking for a good Vega 64 vBIOS candidate, but it was hard as this model from Sapphire uses the nano PCB instead of the reference one. Finally though, and thanks to a thread on Reddit, I learned there is indeed an XFX Vega 64 that uses that same nano PCB. Especifically, this one:
> https://www.techpowerup.com/vgabios/199111/199111
> 
> I did manage to flash my Sapphire Pulse Vega 56 with Samsung HBM2 successfully. But, it was not as straight forward as I thought it would be. Last time I'd flashed a graphics card vBIOS was back when I had a 290x. So, AtiFlash for DOS is not compatible with Vega, which I didn't know. And in order to flash my new card I had to use the version for Windows. However, it could not be done through the GUI exe, ATIWinFlash. So, here's what I did in order to flash it, in case this may be helpful for someone else.
> 
> 
> 
> Download & extract atiflash v2.84
> Place the vBIOS rom file from XFX in the same directory for convenience
> Open Command Prompt with admin privileges and go to the directory where atiflash has been extracted
> Run atiflash -i (and here make sure your adapter is 0, it should be if there's only one card installed on your system, and totally ignore the dID field - trust me on this one; this was the reason it took me so long to flash it)
> Run atiflash -f -p 0 xxx.rom (where xxx is obviously the name of the ROM file you're trying to flash) (And yes you must use zero instead of the dID of your graphics card)
> Wait until done and restart your system
> That's it. Now, I've got the XFX Vega 64 vBIOS flashed into my Pulse from Saphire and I can push the HBM frequencies beyond 950MHz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did run into a minor issue after flashing, though. Apparenty that XFX Vega card lack the Zero RPM functionality that my Sapphire one came with. When I went looking for it on WattMan (18.12.2 drivers), it was nowhere to be found. Reinstalling the driver didn't help, so the walkaround method I've been using is ignore WattMan completely and rely on OverdriveNTool to manage the fan setup of the card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Still in the testing/finetuning process, as Vega is a whole new beast to me (my previous card was an RX 580).
> Feel free to ask me anything if you have any questions
> 
> 
> And Happy New Year, everyone



Hi Jackalito,

I also want to flash my 56 Pulse to 64, but I'm a bit concerned about the zero-RPM fan issue.
Is it working for you now, maybe with newer drivers? Or can you still not enable zero RPM without OverdriveNTool?
Did I get it right that your workaround is to keep the fan at 0% until a specific temperature is reached (but you can't do this in WattMan)?
This only works with OverdriveNTool, which I don't know; I would love to just use WattMan. Is OverdriveNTool difficult to install?
I will only use the card for DaVinci Resolve.
Thanks for your help!


----------



## PontiacGTX

Wuest3nFuchs said:


> the higher temps i only got cause i downlcocked core on P5,P6,P7 .I didnt played with those clocks yet, just 3dmark stresstests in 4k and 1440p.
> 
> I also ordered this bykski waterblock here https://de.aliexpress.com/item/32912568591.html with the POM topping.
> 
> I hope it's good in terms of quality.
> Since i inspected my barrow fittings and their not great in quality from the shop ordered before on aliexpress. I inspected 8/20 and all 8 were not looking good.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gesendet von meinem SM-G950F mit Tapatalk


Something must be wrong, because I have undervolted and underclocked and mine doesn't get hotter; it gets cooler by 5-7°C.


----------



## Wuest3nFuchs

PontiacGTX said:


> some should be wrong because I have undervolted and underclocked and mine doesnt get hotter it gets cooler by 5-7c


OK, that sounds nice. How did you do that? Can you post your settings, please, so I have a starting point?

Sent from my SM-G950F using Tapatalk


----------



## PontiacGTX

Wuest3nFuchs said:


> ok that sounds nice how did you do that? Can you post your settings ,please so i have a standpoint where i should begin.
> 
> Gesendet von meinem SM-G950F mit Tapatalk


Copy this into Notepad and save it as whatever_you_want.xml (or save as .txt and change the file extension to .xml): https://pastebin.com/BadKjk6B

Alternatively, you can set the power state values from the driver:



> Clocks
> <STATE ID="0" Enabled="True" Value="852"/>
> <STATE ID="1" Enabled="True" Value="991"/>
> <STATE ID="2" Enabled="True" Value="1138"/>
> <STATE ID="3" Enabled="True" Value="1252"/>
> <STATE ID="4" Enabled="True" Value="1302"/>
> <STATE ID="5" Enabled="True" Value="1452"/>
> <STATE ID="6" Enabled="True" Value="1477"/>
> <STATE ID="7" Enabled="True" Value="1527"/>
> Voltages
> <STATE ID="0" Enabled="True" Value="800"/>
> <STATE ID="1" Enabled="True" Value="900"/>
> <STATE ID="2" Enabled="True" Value="950"/>
> <STATE ID="3" Enabled="True" Value="1015"/>
> <STATE ID="4" Enabled="True" Value="1035"/>
> <STATE ID="5" Enabled="True" Value="1050"/>
> <STATE ID="6" Enabled="True" Value="1075"/>
> <STATE ID="7" Enabled="True" Value="1115"/>


----------



## Wuest3nFuchs

PontiacGTX said:


> copy this in notepad and paste, save as whatever_you_want.xml(or save as .txt and replace file extension to .xml) https://pastebin.com/BadKjk6B
> 
> 
> 
> alternatively the power state values you can set from driver


THANK YOU SO MUCH!
Next step: testing.










Pubg for 10minutes


Sent from my SM-G950F using Tapatalk


----------



## PontiacGTX

Wuest3nFuchs said:


> THANK YOU SO MUCH!
> next step testing
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pubg for 10minutes
> 
> 
> Gesendet von meinem SM-G950F mit Tapatalk


Well, maybe you can overclock with those voltages and raise the voltage a bit if required, but I don't know why your hotspot is at 79°C; maybe too much TIM, or bad airflow.


----------



## Jackalito

jak234 said:


> Hi Jackalito,
> 
> i also want to flash my 56 pulse to 64 but i am a bit concerned because of the 0%rpm fan issue.
> Is it now working for you with maybe newer drivers? Or still, without NT Tool, you cant activate the 0%rpm?
> Did i get it right that you just put fan rpm to 0% until a specific temperature is reached as a solution (but you cant do this in wattman?)?
> But this only works with this nttool, which i dont know.
> I would love to just use the wattmann. Is nttool difficult to install?
> i will only use the card for davinci resolve
> Thanks for your help!



I'm afraid I still have to rely on OverdriveNTool to keep my fans stopped at low temperatures. I guess it's just as I thought when I first flashed it: that model from XFX does not have the Zero Fan feature present in other models.
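
The workaround described here boils down to a fan curve whose lowest points are 0 RPM, so the fan only spins up past a threshold temperature. A rough sketch of that logic in Python; the curve points are invented for illustration and are not values from OverdriveNTool or any particular card:

```python
# Hypothetical zero-RPM fan curve: (temp °C, fan RPM) points.
# Everything at or below 50°C targets 0 RPM, i.e. the fans stay stopped.
CURVE = [(0, 0), (50, 0), (60, 1400), (75, 2400), (90, 3400)]

def fan_rpm(temp_c, curve=CURVE):
    """Linearly interpolate a target RPM from the curve; 0 in the zero-fan region."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, r0), (t1, r1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # Linear interpolation between the two surrounding curve points.
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]
```

Below 50°C the target stays at 0 RPM (the zero-fan region); between points the target is interpolated, so for example `fan_rpm(67.5)` lands halfway between the 60°C and 75°C points at 1900 RPM.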


----------



## Wuest3nFuchs

PontiacGTX said:


> well maybe you can overclock with thoses voltages and increase a bit the voltage if required but I dont know why your hotspot is on 79c maybe too much TIM, or bad airflow


When I come home later I'll test it with the case open. The be quiet! Dark Base Pro 900 rev. 2 doesn't have the best airflow when an AIO is already in there.

Sent from my SM-G950F using Tapatalk


----------



## PontiacGTX

Wuest3nFuchs said:


> when i come home later i'll test it with the case open. bequiet dark base pro 900 rev.2 hasn't the best airflow when a aio is already in there.
> 
> Gesendet von meinem SM-G950F mit Tapatalk


Never mind, it seems mine reaches about the same temperature in Metro Exodus.


----------



## Wuest3nFuchs

PontiacGTX said:


> nevermind, It seems mine reaches about the same temperature in metro exodus


Oops... but when I touch the tempered glass I can feel the heat where the GPU and the rear fan are installed,

so I could also have a hot spot there.


"The heat is on"...
anyone remember that song from the 80s?









The PC and monitor had been idling for half an hour and I got a white screen.

Turning the monitor off and on fixes it... but what is this, guys?

Sent from my SM-G950F using Tapatalk








You can still see some symbols, but wth...
I get this about once a week, but never on the Fury.


----------



## Eliovp

Got a new release which you Vega users will like a lot 

Full PowerPlay control (directly)
Full strap control (can inject directly)

link: Here


Screenshot











Have fun!


----------



## jak234

Then maybe flash the Nitro+ 64 BIOS and not rely on OverdriveNTool?
I can live with one dead DisplayPort, but not without zero RPM in desktop mode.


----------



## Satanello

Eliovp said:


> Got a new release which you Vega users will like a lot
> 
> 
> 
> Full PowerPlay control (directly)
> 
> Full strap control (can inject directly)
> 
> 
> 
> link: Here
> 
> 
> 
> 
> 
> Screenshot
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Have fun!


Thank you! I'll play with RAM timings soon! What can I do to apply all the settings at Windows 10 startup?

Sent from my MI 8 using Tapatalk


----------



## jimpsar

*Asus Strix Vega 64*

Hello,
I'm trying UV/OC with the Asus Vega 64; looking good so far, temps max out @ 67, hotspot @ 80, [email protected] celsius.
However, only in AC: Odyssey do I see around ~1600 on the core; usually I game at 1520-1550 max.
I used to have a Nitro+ and a Vega LC, with better results.
Is there a way to improve the core, or can anyone with the same card share settings?
Thank you.

https://www.3dmark.com/fs/19940874

https://www.3dmark.com/fs/19973675

https://www.3dmark.com/spy/7865676


----------



## sinnedone

Try upping the voltage slightly and see if the core clocks rise or fall.

AMD's boost algorithm is odd; sometimes when you take away too much voltage, the clocks fall along with it.


----------



## dagget3450

jimpsar said:


> Hello ,
> trying UV / OC with Asus Vega 64 , looking good so far, temps are maxed @ 67 hotspot @80 [email protected] celsius.
> However only in AC: Odyssey I see around ~1600 core Usually I game at 1520-1550 max.
> Used to have Nitro+ and Vega LC with better results .
> Is there a way to improve the core or anyone with the same card can share settings?
> Thank you.
> 
> https://www.3dmark.com/fs/19940874
> 
> https://www.3dmark.com/fs/19973675
> 
> https://www.3dmark.com/spy/7865676


I'm just now getting my R5 3600 set up; I still need to tweak RAM and then the GPU, but I did a quick test run with Fire Strike, and compared to yours it's not too far behind. That makes me feel a little better about rolling with an R5 3600 to hold off and see what else is coming on AMD's CPU side. The GPU is still stock, on air with the blower.

Edit: updated with a slight undervolt and OC on the GPU: https://www.3dmark.com/3dm/38574022?

Going to keep pushing, just doing it in small increments.


----------



## geriatricpollywog

My Vega stutters every 8-10 seconds with any drivers newer than 18.2. This problem has been going on for over a year. I've tried reinstalling Windows. Any advice?


----------



## 113802

0451 said:


> My Vega stutters every 8-10 seconds with any drivers newer than 18.2. This problem has been going on for over a year. I've tried reinstalling Windows. Any advice?


What's stuttering? Is it a specific game you play? Or all games?


----------



## geriatricpollywog

WannaBeOCer said:


> What's stuttering? Is it a specific game you play? Or all games?


All games


----------



## Loladinas

0451 said:


> My Vega stutters every 8-10 seconds with any drivers newer than 18.2. This problem has been going on for over a year. I've tried reinstalling Windows. Any advice?


Try turning HDCP off in the driver settings. I've had the same problem with some odd GPU/cable/monitor combos, but not others.


----------



## geriatricpollywog

Loladinas said:


> Try turning HDCP off in the driver settings. I've had the same problem with some odd GPU/cable/monitor combos, but not other.


I don't see an option to turn off HDCP. Do you mean HBCC?


----------



## 98uk

Wondering if anyone can help with this really weird issue.

I am working with a fresh copy of Windows 10: when I try to install any AMD drivers, the installation gets about halfway, then the monitor loses signal and the computer appears to freeze.

After that, boot fails. It gets to the Windows logo and spinning icon, then loses signal and locks up.

The only way around it is to install Windows fresh again or delete the drivers via safe mode.

I have tried the newest (19.8.x) and older (19.5.x) drivers; both show the same behaviour.

The card is a reference Sapphire Vega 64, and I'm using a Dell S2719DGF monitor.


----------



## Ne01 OnnA

0451 said:


> I don't see an option to turn off HDCP. Do you mean HBCC?


Relive -> Display -> Specs -> Override


----------



## PontiacGTX

98uk said:


> Wondering if anyone can help with this really weird issue.
> 
> I am working with a fresh copy of Windows 10, when I try and install any AMD drivers, installation gets about half way and then the monitor loses signal and computer appears to freeze.
> 
> After that, boot fails. It gets to the windows logo and spinning icon and then loses signal and locks up.
> 
> Only way around is to install Windows fresh again or delete drivers via safe mode.
> 
> I have newest (19.8.x) and older (19.5.x) drivers, both display same behaviour.
> 
> Card is a reference Sapphire Vega 64 and using a Dell S2719DGF monitor.


I don't think the card is stable; I think you should try an RMA.



0451 said:


> My Vega stutters every 8-10 seconds with any drivers newer than 18.2. This problem has been going on for over a year. I've tried reinstalling Windows. Any advice?


Repaste the card, or try Windows 10 1809/1903.


----------



## PontiacGTX

Just wondering: how is it possible that a card of the same brand but with a different memory type performs worse, even though both have the same clocks? Is it down to the VRAM? Why does the Samsung underperform so badly?


----------



## 98uk

PontiacGTX said:


> I think the card isnt stable I think you should try RMA


Can't get an RMA; it's an ex-mining card and was used when I bought it. It seems weird that it'd work one second and be gone the next... but I cannot get it to function for love nor money.

Bought a 2070 Super instead, as it pissed me off that much, and I see that the 5700 XT already has big driver issues...



PontiacGTX said:


> just wondering how could be posible a card with a different memory type but same brand performs worse even though both have same clocks? is it due to the VRAM? why Samsung underperforms so bad?


Perhaps power delivery? If one can deliver power more efficiently, perhaps it can better sustain boost clocks.

Same for aftermarket cooling.


----------



## FragZero

Recently my R9 290 died. The AIB 5700s were not available yet, and the reference cards were, but expensive; I needed a new card, so I bought an MSI Vega 56 Air Boost for cheap. Awesome performance: I run it at 1100mV, ~1600MHz core, 1050MHz HBM, +50% power limit.

But it runs hot and really noisy; it seems I underestimated this when I bought the card.

Can anyone advise on an economical way to fix/improve this? Spending lots of money on custom watercooling makes no sense; it would make more sense to resell the card and buy a third-party 5700 or even a 5700 XT.

I'm not a big fan of the standard Morpheus mod, so I see two options right now:

1) Purchase the Morpheus and modify the stock baseplate

2) Purchase a cheap 2x 120mm AIO, drill the Intel bracket, and modify the stock baseplate

Both would cost me around 65 euro, and that amount makes sense.

Can anyone comment on my idea? Anyone using a modded AIO? Are there any specific models that are recommended?


----------



## PontiacGTX

98uk said:


> Can't get RMA, ex mining card and used when i bought it. Seems weird it'd work one second and gone the next... But i cannot get it to function for love nor money.
> 
> Bought a 2070 Super instead as it pissed me off that much and I see that the 5700xt already has big driver issues...
> 
> 
> 
> Perhaps power delivery? If one can deliver power more efficiently, perhaps it can better boost clocks.
> 
> Same for aftermarket cooling.


Well, both are exactly the Red Dragon; the only difference is the memory, one Samsung and the other Hynix. Samsung, never again.


----------



## 99belle99

I wish I had Samsung HBM. My Hynix only clocks up to 955MHz, or 960MHz in some benchmarks.


----------



## 98uk

99belle99 said:


> I wish I had Samsung HBM. My Hynix only clocks up to 955 and some benchmarks 960MHz.


My reference card with Samsung was running memory at 1080mhz!


----------



## Bruizer

When I see people complaining about what their memory "only" clocks to, when the Samsung memory on my MSI Vega 56 Air Boost OC only goes to 925MHz stable... Hahaha

Appreciate what you've got! Some of us are less fortunate!


----------



## 113802

Bruizer said:


> When I see people complaining about what their memory ONLY clocks to when the Samsung memory on my MSI Vega 56 Air Boost OC only goes to 925mhz stable... Hahaha
> 
> Appreciate what you got! Others of us are less fortunate!


Flash the RX Vega 64 bios and enjoy your memory overclocking.


----------



## VicsPC

Bruizer said:


> When I see people complaining about what their memory ONLY clocks to when the Samsung memory on my MSI Vega 56 Air Boost OC only goes to 925mhz stable... Hahaha
> 
> Appreciate what you got! Others of us are less fortunate!


Yup agreed. I think mine is from the first batch, bought on release date. My Samsung does about 1100mhz without touching anything else, of course I'm also on water so I'm sure that helps.


----------



## PontiacGTX

Bruizer said:


> When I see people complaining about what their memory ONLY clocks to when the Samsung memory on my MSI Vega 56 Air Boost OC only goes to 925mhz stable... Hahaha
> 
> Appreciate what you got! Others of us are less fortunate!


at least yours isnt underperforming, sigh


----------



## 113802

VicsPC said:


> Yup agreed. I think mine is from the first batch, bought on release date. My Samsung does about 1100mhz without touching anything else, of course I'm also on water so I'm sure that helps.


The RX Vega 56 is just capped by AMD, much like the RX 5700. Most users wouldn't have bought an RX Vega 64 if they could overclock the HBM: https://hothardware.com/news/amd-radeon-rx-vega-56-unlocked-vega-64-bios-flash

Even Hynix memory benefited from the BIOS flash.


----------



## Bruizer

WannaBeOCer said:


> Flash the RX Vega 64 bios and enjoy your memory overclocking.


If only you knew my luck... Which if you did, you would know nothing. Because that is how much there is to know about my luck. There has to be luck in the first place to know it.

You picking up what I'm putting down?


----------



## PontiacGTX

Bruizer said:


> If only you knew my luck... Which if you did, you would know nothing. Because that is how much there is to know about my luck. There has to be luck in the first place to know it.
> 
> You picking up what I'm putting down?


Well, the Vega 56 has limited OC headroom; that was true for the reference cards, so there is a chance you can get a somewhat higher OC if you flash the BIOS, but it might not be much.


----------



## 99belle99

I flashed a Vega 64 BIOS onto a Red Devil Hynix 56 earlier today and got 980MHz on the HBM, where on the 56 BIOS I got 960. It didn't help much in Time Spy, plus a few niggles in Windows 10, so I flashed back.


----------



## 99belle99

I can confirm I can only do 960MHz now that I'm back on the stock Red Devil 56 BIOS; I tried pushing it up, but it crashes Time Spy. The extra voltage of the 64 BIOS helps.


----------



## By-Tor

While gaming on my PowerColor Vega 64, I have noticed the tachometer lights cycling from all on, rolling down to just one, then back up to all on again, over and over. My frame rates also tank while this is happening.

Anyone know what may be going on?

Thank you


----------



## PontiacGTX

Does anyone here have a Vega 10 56CU with Samsung KHA843801B? If so, have you modded the timings?


----------



## 113802

By-Tor said:


> While gaming my Power Color Vega 64 I have noticed the tachometer lights cycling from all on to rolling down to just one then back up to all on again and keeps happening. My frame rates also tank while this is happening.
> 
> Anyone know what may be going on?
> 
> Thank you


All games? If not, can you list the games?


----------



## By-Tor

WannaBeOCer said:


> All games if not can you list the games?


Guess I should have said, all the games I'm playing right now.

BF4
BF1
BFV
Rise of the tomb raider


----------



## LionS7

Hello everyone. Im wondering did somebody have an experience with flashing RX Vega 56 Nitro to Vega 64 Nitro ? Are there any problems to look for about that ?


----------



## Loladinas

LionS7 said:


> Hello everyone. Im wondering did somebody have an experience with flashing RX Vega 56 Nitro to Vega 64 Nitro ? Are there any problems to look for about that ?


Supposedly it only works with V56 cards that have Samsung memory chips. I flashed mine, but it's barely worth it; all I got is about 100MHz more on the HBM, thanks to the higher voltage.

EDIT: nvm, I missed the "Nitro" part; mine's the reference card.


----------



## 99belle99

I flashed a Hynix Red Devil 56 with a Samsung Red Devil 64 BIOS and got 20MHz more on the HBM, but there wasn't much difference in Time Spy scores, and I also had some odd issues in Windows 10, so I flashed back.

Big question: do you have Hynix or Samsung HBM? That's a big factor in whether the system is stable or unstable after the flash.


----------



## erase

Wow is all I can say about the Sapphire Pulse Vega 56. I picked one up secondhand, barely used and barely over a year old; I paid a third of what the original purchaser paid.

I already have 2x Vega 56 reference cards: one of them is just OK, and the other is an overheating, unstable mess that no amount of tweaking could fix. The OK reference card would drop clocks very low, and the fan noise would start to annoy me after a while.

This Pulse version is what the card should have been from the beginning, and that goes for most other aftermarket cards. The heatsink design says it all; the back fan just cruises along and pushes hot air up and away from the card entirely.

I read that you cannot flash just any Vega 64 BIOS to the Pulse cards, due to some kind of block on flashing back to stock. Reportedly, once flashed it will take the 64 BIOS but is then more or less permanently stuck with it, and some of the display outputs may stop working too.

This card runs so damn cool. Playing Doom at 2560x1600 Ultra, the card only gets to 55°C max. Running the FFXV benchmark loop, I only see a max temp of 65°C.

The only cheap part of the card is the plastic Pulse-themed shroud; other than that, it looks high quality.

Best of all, I got Samsung HBM2 memory on mine; it does half a TB/s, clocking to 980MHz.

https://www.overclock.net/forum/attachment.php?attachmentid=293780&thumb=1
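
The "half a TB/s" figure is easy to verify: HBM2 on Vega 56/64 sits on a 2048-bit bus and transfers twice per clock, so peak bandwidth scales directly with the memory clock. A quick back-of-the-envelope check in Python:

```python
def hbm2_bandwidth_gbs(mem_clock_mhz, bus_width_bits=2048, transfers_per_clock=2):
    """Peak bandwidth in GB/s: clock (MHz) x transfers per clock x bus width in bytes."""
    return mem_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# At the 980MHz reported above: 980e6 * 2 * 256 bytes = ~501.76 GB/s,
# i.e. roughly half a terabyte per second. Stock 800MHz gives 409.6 GB/s.
```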


----------



## Bruizer

Anybody updated to the newest drivers? I finally got everything stable with 19.5.2 (no crashing in BFV) on my V56. The cautious side of me is like: "You know what? All is well. Leave it alone." The tinkerer in me, though, is like: "But what if you are leaving frames on the table?... Could be fun! Completely ignore your previous frustrations with instability!"


----------



## PontiacGTX

Bruizer said:


> Anybody updated to the newest drivers? Finally got everything stable with 19.5.2 (no crashing in BFV) for my V56. The cautious side of me is like: "You know what? All is well. Leave it alone." The tinkerer though in me is like: "But what if you are leaving frames on the table?... Could be fun! Completely ignore your previous frustrations with instability!"


Why not just use a different API? Instead of DX12, use DX11 (or vice versa), because the problem might be that; I'm not sure why you would get instability across 4-5+ releases in a row.


----------



## LionS7

Loladinas said:


> Supposedly only works with V56 that have Samsung memory chips. I flashed mine, but it's barely worth it, all I got is about 100MHz on HBM, due to higher voltage.
> 
> EDIT: nvm, I missed the "Nitro" part, mine's the ref card


OK, thanks for that, but maybe I'll get a 56 Pulse for a good price. Then I'll figure out what to do with the BIOS.


----------



## maddangerous

Just curious... is anyone using GPU-Z and not seeing perfcap at all?


----------



## SystemTech

What are you guys with full watercooling loops seeing for temps?
My V64 is, umm, rather cool. Literally.

I have it OC'd at 1800 core and 1050 mem, and it sits idle at 25°C (ambient is about 20-22°C); under load, after 15 min of benchmark loops, it sits at 37°C.
My loop is: 450ml reservoir > pump > EK PE 360 radiator > EK Vega 64 waterblock > reservoir

So it's a decent bit of cooling capacity, but here is the really freaky bit...


The rad fans are only set to come on when the water hits 40°C.
Yes, ladies and gentlemen, my loop is currently running and cooling an OC'd Vega 64 passively (other than the pump).
Is anyone else seeing anything similar?


----------



## doritos93

Should be getting mine back from the ASUS RMA today.

Should I change the paste right away? Will it be a newish card? What should I expect?

Haven't had to RMA in years...


----------



## Minotaurtoo

SystemTech said:


> What are you guys seeing for temps who have full watercooling loops?
> My V64 is Ummm rather cool. Literally.
> 
> I have it OCd at 1800 core and 1050mem and its sitting idle at 25*C (Ambient is about 20-22*C) and under load, after 15min of benchmark loops, its sitting at 37*C.
> My loop is : 450ml Reservoir > Pump > EK PE 360 Radiator > EK Vega 64 Waterblock > Reservoir
> 
> So its a decent bit of cooling capacity but here is the really freaky bit...
> 
> 
> The Rad fans are only set to come on when the water hits 40*C
> Yes ladies and gentleman, my loop is currently running and cooling a Vega 64 OCd Passively (other than the pump)
> Is anyone else seeing anything similar?



no idea about your temps... but I have two questions that may lead to an answer...


1. what is your power draw at idle vs full gpu load


2. what is your timespy graphics score

...if they are both in the expected range, I've got nothing and you've gotten lucky; if not, the GPU may be playing tricks on you. Vega is bad about "lying" to people about what it's doing: it'll act like all is fine and it's doing what you say, while doing something completely different...



I'd also like to see your UserBenchmark scores, just for comparison with mine. People are always talking down Vega, but I've found that even on air it can be tuned to do much better than stock, and I'd like to see what it's really capable of. I've actually thought about buying a used waterblock for mine since I already have a CPU loop, but I'm not sure the gains would be worth it.
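
One way to quantify that "lying" is to log clock samples over a run and compare them against the requested P-state, rather than trusting an instantaneous readout. A minimal sketch in Python; the sample values and the 5% threshold are made up for illustration, not from any particular logging tool:

```python
# Hypothetical core-clock samples (MHz), e.g. exported from a monitoring log.
samples_mhz = [1527, 1519, 1488, 1352, 1290, 1501, 1498, 1310]
target_mhz = 1527  # the P7 clock that was requested

def throttle_report(samples, target, tolerance=0.05):
    """Return (average clock, fraction of samples more than 5% below target)."""
    avg = sum(samples) / len(samples)
    below = sum(1 for s in samples if s < target * (1 - tolerance))
    return avg, below / len(samples)

avg, frac = throttle_report(samples_mhz, target_mhz)
# If a large fraction of samples sit well below the requested clock,
# the card is throttling no matter what the "current clock" readout says.
```

On the made-up samples above, the average works out to about 1436MHz with 37.5% of samples well under target, the kind of gap that explains a benchmark score a few hundred points shy of expectations.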


----------



## SystemTech

Minotaurtoo said:


> no idea about your temps... but I have two questions that may lead to an answer...
> 
> 
> 1. what is your power draw at idle vs full gpu load
> 
> 
> 2. what is your timespy graphics score
> 
> ... if they are both in expected range, I've got nothing and you've gotten lucky... if not, gpu may be playing tricks on you. Vega is bad about "lying" to people about what it's doing... it'll act like all is fine, and it's doing what you say... but be doing something completely different...
> 
> 
> 
> I'd also like to see your userbench scores too just for comparison to mine... people always talking down vega, but I've found that even on air it can be tuned to do much better than stock... I'd like to see what it's really capable of... I've actually thought of buying a used water block for mine since I already have a cpu loop, but not sure the gains would be worth it.


Ahh, all valid questions haha. I can see it's lying to me a bit... My Time Spy scores are about 250-500 points lower than I would expect at these clocks, which is not huge but definitely still leaves performance on the table. I'm also about 2k points shy on Sky Diver.
I guess some more tweaking is required haha. I will check the power draw tonight and see what it is doing.
I am also using about 1.3 meters of PCIe extenders, but I wouldn't think that has a major effect.
I will also run UserBenchmark so that we can compare.


----------



## Minotaurtoo

SystemTech said:


> Ahh all valid questions haha. I can see its lying to me a bit... My Time Spy scores are about 250-500 points lower than I would expect at these clocks which is not HUGE but definitely still leaving performance on the table. Im also about 2k points shy on Sky Diver.
> I guess some more tweaking is required haha. I will check the power draw tonight and see what it is doing.
> I am also using about 1.3 meters of PCI-E Extenders but that should not have a major effect I would think.
> I will also run userbench so that we can compare.



I'll go ahead and toss mine out here: https://www.userbenchmark.com/UserRun/19959444


Timespy: https://www.3dmark.com/spy/5682393
Firestrike: https://www.3dmark.com/fs/19854936


----------



## m70b1jr

SystemTech said:


> What are you guys seeing for temps who have full watercooling loops?
> My V64 is Ummm rather cool. Literally.
> 
> I have it OCd at 1800 core and 1050mem and its sitting idle at 25*C (Ambient is about 20-22*C) and under load, after 15min of benchmark loops, its sitting at 37*C.
> My loop is : 450ml Reservoir > Pump > EK PE 360 Radiator > EK Vega 64 Waterblock > Reservoir
> 
> So its a decent bit of cooling capacity but here is the really freaky bit...
> 
> 
> The Rad fans are only set to come on when the water hits 40*C
> Yes ladies and gentleman, my loop is currently running and cooling a Vega 64 OCd Passively (other than the pump)
> Is anyone else seeing anything similar?


I'm on water; I get to about 60C MAX during long periods of gaming. I'm on a Vega 56 with PPT mods, sitting around 1725MHz, but I'm also using cheap Chinese rads, so I ordered a Black Ice Nemesis 240 GTS rad off Amazon to see if my temps drop.


----------



## Ne01 OnnA

Vega owners, please cast your vote (RIS for Vega)

-> https://www.feedback.amd.com/se/5A1E27D203B57D32

From here:
https://twitter.com/CatalystMaker/status/1172526559368028161?s=20


----------



## Ne01 OnnA

19.9.1 WHQL Test (Sept 10 edition, new download at amd.com)

24 019 pts. in FS
(1795MHz at 1.162v HBM2 at 1150MHz CL17 +25% POW)

-> https://www.3dmark.com/3dm/39388778?

8 792 in TS
(1750MHz 1.118v 1150HBM2 CL17 +1%POW)
-> https://www.3dmark.com/spy/8523950


----------



## LicSqualo

Ne01 OnnA said:


> 19.9.1 WHQL Test (Sept 10 edition, new download at amd.com)
> 
> 24 019 pts. in FS
> (1795MHz at 1.162v HBM2 at 1150MHz CL17 +25% POW)
> 
> -> https://www.3dmark.com/3dm/39388778?
> 
> 8 792 in TS
> (1750MHz 1.118v 1150HBM2 CL17 +1%POW)
> -> https://www.3dmark.com/spy/8523950


Mine, with a Ryzen 1700 and a valid result on 19.9.2, WITHOUT power target (0%), 264W max power (it's too hot these days).

https://www.3dmark.com/3dm/39396774?


----------



## PontiacGTX

Ne01 OnnA said:


> Vega owners Please cast Your Vote (RIS for Vega)
> 
> -> https://www.feedback.amd.com/se/5A1E27D203B57D32
> 
> From here:
> https://twitter.com/CatalystMaker/status/1172526559368028161?s=20


You mean integer scaling*? CAS already lets people use the sharpening from RIS.


----------



## Ne01 OnnA

PontiacGTX said:


> you mean integer scaling* CAS already let people use sharpening from RIS


I'm using ReShade with RIS (DX11/10/9)


----------



## PontiacGTX

Ne01 OnnA said:


> Im using Reshade with RIS (Dx11/10/9)


Yes, exactly. I don't see why we need a feature that's already available in ReShade; we need a feature that doesn't exist. Also, integer scaling will improve image quality in many games. I would like to try CAS + integer scaling.


----------



## SystemTech

Minotaurtoo said:


> I'll go ahead and toss mine out here: https://www.userbenchmark.com/UserRun/19959444
> 
> 
> Timespy: https://www.3dmark.com/spy/5682393
> Firestrike: https://www.3dmark.com/fs/19854936


Here is a basic firestrike run of mine : https://www.3dmark.com/3dm/39466547?

This Time Spy run of mine was at lower settings, but the graphics score is not that much lower than yours : https://www.3dmark.com/spy/8513480

And my GPU maxes out at 41C during a run, and water temps still max out in the mid 30s it seems... so still, the fans on my radiator are not engaging :Snorkle:


----------



## jearly410

Here's my TS: https://www.3dmark.com/spy/8489553

-100mV undervolt, 1085MHz mem, +50% power target


----------



## LicSqualo

This is mine with +50% Power 

https://www.3dmark.com/3dm/39492975?

And here my settings (from OverdriveNTTool 0.2.8):

[Profile_6]
Name=Luglio 2019
GPU_P0=852;800
GPU_P1=1138;900
GPU_P2=1302;950
GPU_P3=1348;1000
GPU_P4=1408;1050
GPU_P5=1560;1100
GPU_P6=1668;1130
GPU_P7=1760;1175
Mem_P0=167;800
Mem_P1=500;800
Mem_P2=800;900
Mem_P3=1080;980
Mem_TimingLevel=0
Fan_P0=30;30
Fan_P1=50;40
Fan_P2=54;50
Fan_P3=58;60
Fan_P4=65;70
Fan_ZeroRPM=0
Fan_Acoustic=1500
Power_Target=50
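For anyone reading LicSqualo's profile above: OverdriveNTool stores each P-state as a plain `clock;voltage` pair (MHz and mV for the GPU/HBM entries, percent pairs for the fan curve). As a quick illustration of that layout, here is a minimal Python sketch that pulls the GPU and HBM P-states out of such a profile; the parsing rules are inferred from the posted file itself, not from any official spec.

```python
# Minimal sketch: parse an OverdriveNTool-style profile (format as posted
# above) into GPU/HBM P-state tables. Inferred from the example profile;
# not an official format reference.

def parse_odnt_profile(text):
    """Return dicts mapping P-state index -> (clock_mhz, voltage_mv)."""
    gpu, mem = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if "=" not in line or line.startswith("["):
            continue  # skip section headers and blank lines
        key, _, value = line.partition("=")
        if ";" not in value:
            continue  # skip Name=, Power_Target=, etc.
        clock, _, volt = value.partition(";")
        if key.startswith("GPU_P"):
            gpu[int(key[5:])] = (int(clock), int(volt))
        elif key.startswith("Mem_P"):
            mem[int(key[5:])] = (int(clock), int(volt))
    return gpu, mem

profile = """\
[Profile_6]
GPU_P6=1668;1130
GPU_P7=1760;1175
Mem_P3=1080;980
Power_Target=50
"""

gpu, mem = parse_odnt_profile(profile)
print(gpu[7])  # top P-state: (1760, 1175), i.e. 1760MHz at 1175mV
```

Handy if you want to compare several people's profiles side by side without opening the tool.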


----------



## Ipak

My daily OC TS score; the GPU is not happy with voltage lower than 1.175V, unfortunately.

https://www.3dmark.com/spy/8556130


----------



## cephelix

Hey guys, been a long time since I've posted, currently running a v56 flashed to a v64.

Running an OC in Wattman with the latest version of the software. 

OC is as follows, 

Core P6: 1536MHz, 1100mV
P7: 1652MHz, 1150mV

HBM P3: 1090MHz, 1000mV

Power Limit: +50%

I'm experiencing random crashes in non-GPU-intensive scenarios. The screen starts flickering and my whole system crashes, not even a BSOD, just a white screen, after which I have to push the reset button to restart my system. When it loads back to the desktop, Radeon states that it has crashed/experienced a fault and settings have been reset. Thing is, it never happens when I'm gaming.

Can anyone help?


----------



## Minotaurtoo

SystemTech said:


> Here is a basic firestrike run of mine : https://www.3dmark.com/3dm/39466547?
> 
> This Time Spy run of mine was at lower settings, but the graphics score is not that much lower than yours : https://www.3dmark.com/spy/8513480
> 
> And my GPU maxes out at 41*C during a run, and water temps still max out at mid 30s it seems...so still, the fans on my radiator are not engaging :Snorkle:


 very interesting... your firestrike graphics score is higher than mine as expected: https://www.3dmark.com/compare/fs/19535559/fs/20459780


but your timespy graphic score is lower for some reason: https://www.3dmark.com/compare/spy/8513480/spy/6064953

Now I understand why the overall is lower due to CPU differences, but your graphics score should be higher on both... my GPU clocks are set lower and I'm on air... At least we can conclude that your card isn't underperforming, or at least not by much, though I would have expected a bigger gap on FS in your favor and a similar gap in your favor in Time Spy.


----------



## Ne01 OnnA

Here's 24k FS,
Ryzen 2 at 4300MHz

-> https://www.3dmark.com/fs/20435620


----------



## SystemTech

Minotaurtoo said:


> very interesting... your firestrike graphics score is higher than mine as expected: https://www.3dmark.com/compare/fs/19535559/fs/20459780
> 
> 
> but your timespy graphic score is lower for some reason: https://www.3dmark.com/compare/spy/8513480/spy/6064953
> 
> Now I understand why the overall is lower due to cpu differences, but your graphics score should be higher on both... my gpu clocks are set lower and I'm on air... At least though we can conclude that your card isn't under performing though, at least not by much, however I would have expected to have seen a bigger gap on FS in your favor and a similar gap in your favor in timespy.


Well, I think part of the difference is that my clocks were a bit different between the two runs. I need to spend more time tweaking my card. A LOT more time haha. I've just been doing cheap and nasty Afterburner OCs so far, but I'm going to get into powerplay tables when I have a bit of time, which should bring some consistency, as I can see the max clocks are not going as high as I have set them and temps are NOT an issue. I've now switched on the fans so they are always on their lowest, and idle temps are about ambient +2 to 4C. Load temps just go over 30C but not much past that haha.


----------



## m70b1jr

Does anyone know the max Amps these cards can / should take?


----------



## Minotaurtoo

SystemTech said:


> Well, I think part of the difference, is my clocks were a bit different between the 2 runs. I need to spend more time tweaking my card. ALOT more time haha. I've just been doing cheap and nasty afterburner OC's so far but am going to get into powerplay tables when I have a bit of time and go that way so that should bring some consistency as I can see the max Clocks are not going as high as I have set them to go and temps are NOT an issue. I've now switched on the fans, so they are always on their lowest, and idle temps are about ambient +2 to 4*C. Load temps just go over 30*C but not much past that haha.


Nothing wrong with Afterburner... I've gone even nastier lol... all I've used is Wattman... if you haven't looked at it, it's actually pretty decent for what it is... I've managed to get what I've gotten out of my card with no powerplay tricks or any special mods... just straight normal Wattman profiles.


----------



## PontiacGTX

Minotaurtoo said:


> nothing wrong with afterburner... I've went even nastier lol.. all I've used is wattman... if you haven't looked at it, it's actually pretty decent for what it is... I've managed to get what I've gotten out of my card with no powerplay tricks or any special mods... just straight normal wattman profiles.


When you open Afterburner, the clock profiles change. It would be good if the program couldn't override the clock speed and core voltage.


----------



## Minotaurtoo

PontiacGTX said:


> when you open afterburner then the clock profiles change it would be good if the program couldnt override the clock speed and core voltage


The thing I hate about wattman, that actually is a life saver at times, is that every time my power blinks or xxx error occurs it resets the profile back to standard.... that's great an all, except my vega 64 is a bit of a flake... at stock I get completely random errors, like not being able to come back from sleep... black screens etc... not very often and honestly it's not often enough to bother with shipping it back for warranty... and I've found that most of the issue is with the memory clocks... so in my wattman profile I have min ram speed of 500mhz... also drop voltage a bit on the core as well as a minor OC... this seems to knock out all the issues I have but one... there is this one game that can cause it to still give me a black screen with fans spinning up... even then it's really rare, hasn't happened in over a month actually... thing is, I forget to re-instate my profile after it... then poof *random black screen* with everything still happening in the background... still can hear videos playing and can even pause them and restart... wish there was a way I could get it to default back to my known good profile each time.

I've been told that the issue has to do with my 3 monitors one of which is 4k... no idea... I think the black screen with fans spinning up is unrelated and likely due to bad paste/pad...


----------



## Ipak

Hibernation and sleep in Windows are what cause most of my problems with big Vega. After I disabled hybrid Windows hibernation (aka Fast Startup) and stopped hibernating altogether, all of my random crashes and instabilities were gone, unless I apply too big an overclock.


----------



## doritos93

gdmfsob got my card back from RMA and I still have the crashing during gaming with a solid color on the monitor

2700x
Prime X470-PRO
Ripjaws V DDR4 3200 C16 16-18-18-36-58
Strix Vega 64 (fresh from RMA)
Corsair RM750 (was EVGA 850 B3)

Tried loosening timings, thinking about turning off PBO to see if that helps. Any other ideas?


----------



## Deadboy90

doritos93 said:


> gdmfsob got my card back from RMA and I still have the crashing during gaming with a solid color on the monitor
> 
> 2700x
> Prime X470-PRO
> Ripjaws V DDR4 3200 C16 16-18-18-36-58
> Strix Vega 64 (fresh from RMA)
> Corsair RM750 (was EVGA 850 B3)
> 
> Tried loosening timings, thinking about turning off PBO to see if that helps. Any other ideas?


have you tried lowering the GPU and memory clocks?


----------



## doritos93

Deadboy90 said:


> have you tried lowering the GPU and memory clocks?


Lower than stock? That's where I'm at. The only thing I changed was a more aggressive fan curve, nothing with clocks, voltage, or power.


----------



## Bruizer

doritos93 said:


> To lower than stock? That's where I'm at. The only thing I changed was to a more aggressive fan curve but nothing with clocks, voltage or power


Lower the voltage and increase the power limit to +50%. I'd start with the core voltage at 1050mV, and if that's not stable, go up to 1075, then 1100 if more is needed.

Also, what drivers are you on? The recent driver support has not been great.


----------



## sinnedone

There comes a point where undervolting will lower max clocks regardless of maxed out power limit.

It becomes a balancing act as to how you want your card to perform.
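To put a rough number on that balancing act: dynamic power in CMOS scales roughly with frequency times voltage squared, which is why a modest undervolt can buy more sustained clock under a fixed power limit than a straight clock bump. A first-order Python illustration; the operating points below are made up for the example, not measured Vega figures.

```python
# Rough illustration of the undervolting trade-off: board power scales
# roughly with frequency * voltage^2 (first-order CMOS dynamic power),
# so an undervolt frees power-limit headroom for higher sustained clocks.
# All numbers are illustrative, not measured Vega values.

def relative_power(freq_mhz, volt_mv, ref_freq=1600, ref_volt=1200):
    """Power relative to a reference operating point (P ~ f * V^2)."""
    return (freq_mhz / ref_freq) * (volt_mv / ref_volt) ** 2

stock = relative_power(1600, 1200)        # 1.0 by definition
undervolted = relative_power(1700, 1100)  # higher clock, lower voltage

print(f"stock: {stock:.2f}, undervolted: {undervolted:.2f}")
# The undervolted point draws less relative power despite the higher
# clock, which is why it throttles less under the same power limit.
```

Push the voltage too low, though, and the card crashes or clock-stretches before you ever hit the power limit, which is the other side of the balancing act.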


----------



## Bruizer

sinnedone said:


> There comes a point where undervolting will lower max clocks regardless of maxed out power limit.
> 
> It becomes a balancing act as to how you want your card to perform.


Precisely.


----------



## PontiacGTX

doritos93 said:


> gdmfsob got my card back from RMA and I still have the crashing during gaming with a solid color on the monitor
> 
> 2700x
> Prime X470-PRO
> Ripjaws V DDR4 3200 C16 16-18-18-36-58
> Strix Vega 64 (fresh from RMA)
> Corsair RM750 (was EVGA 850 B3)
> 
> Tried loosening timings, thinking about turning off PBO to see if that helps. Any other ideas?


RMA it again unless new drivers change anything... a solid color is sometimes memory related (I recall having this when I changed the timings on the HD 7950 and it wasn't stable); maybe the reball job they did wasn't good enough. This is assuming it's at stock clocks.


----------



## Worldwin

I wonder if I should RMA my V64 Nitro+. I've been getting random BSODs whilst gaming for "KERNEL_MODE_HEAP_CORRUPTION." I've tried every driver since 1903 released and had crashes prior. According to WhoCrashed it happens due to "ATI Radeon Kernel Mode Driver" and "NT Kernel & System." Downside is I have no other GPU to test with, to see whether the GPU itself is the problem or it's something else in my system.


----------



## mtrai

Well I know I have had my head in messing with my 5700 XT but here were my last timespy and firestrike scores on my vega 64

8 669 with AMD Radeon RX Vega 64(1x) and AMD Ryzen 7 2700X
Graphics Score 8430
CPU Score 10331

https://www.3dmark.com/spy/7188720

23 665 with AMD Radeon RX Vega 64(1x) and AMD Ryzen 7 2700X
Graphics Score 29093
Physics Score 23473
Combined Score 9915

https://www.3dmark.com/fs/19304391

Though that FS score was not stable for actual use... for daily use it was around 27.5k graphics to stay stable.


----------



## miklkit

I have a Sapphire Nitro Vega 64 and had been using Trixx v6.6.0 happily until recently, when it stopped loading with windoze. So I just installed Trixx v7.0.0 and... it is a monitoring utility only, with no way to OC or undervolt. What is going on?


----------



## Alastair

What software do you guys recommend for this GPU. I don't think afterburner will be as good since you guys are talking about P States a lot. And wattool seems to lock up my system when changing settings. And Wattman wont go above the 1200mv setting in driver. 



But the good news is its in. And 36c on the barrow block at stock clocks (balanced driver settings). And damn it looks good. On the vertical GPU mount.


----------



## 113802

Alastair said:


> What software do you guys recommend for this GPU. I don't think afterburner will be as good since you guys are talking about P States a lot. And wattool seems to lock up my system when changing settings. And Wattman wont go above the 1200mv setting in driver.
> 
> 
> 
> But the good news is its in. And 36c on the barrow block at stock clocks (balanced driver settings). And damn it looks good. On the vertical GPU mount.


MSI Afterburner works fine, but you can't lock P-states on Vega on Windows; run Linux if you want to lock a P-state. You won't be able to go above 1200mV unless you change your PP table or flash the Vega 64 LC BIOS (if you have a reference card, I suggest you do). You shouldn't go above 1200mV anyway, or else you'll just throttle your card.

My RX Vega 64 ran at 1200mV, 1750/1105MHz, and it would run between 1700-1730MHz

I'd probably crush HWBot if I re-installed my Vega 64 LC: https://www.3dmark.com/fs/18270986

GPU Score: 29358


----------



## Greenland

So I was just gifted a code by a co-worker with his purchase of the 3800X; unfortunately I don't have the appropriate hardware to redeem the code. Anyone want to help me out? Thanks.


----------



## PontiacGTX

Alastair said:


> What software do you guys recommend for this GPU. I don't think afterburner will be as good since you guys are talking about P States a lot. And wattool seems to lock up my system when changing settings. And Wattman wont go above the 1200mv setting in driver.
> 
> 
> 
> But the good news is its in. And 36c on the barrow block at stock clocks (balanced driver settings). And damn it looks good. On the vertical GPU mount.


Afterburner if you have any brand; if Sapphire, then TriXX, but only to get the Boost feature. Otherwise, for overclocking and 'modifying' the P-states, MSI is the best bet. You can even use Wattman if you don't care about RivaTuner.


----------



## Alastair

WannaBeOCer said:


> MSI afterburner works fine, you can't lock P States on Vega on Windows. Run Linux if you want to lock a P State. You won't be able to go above 1200mV unless you change your PP or flash the Vega 64 LC bios if you have a reference card I suggest you do. You shouldn't go above 1200mV anyway or else you'll just throttle your card.
> 
> My RX Vega 64 ran at 1200mV 1750/1105Mhz and it would run between 1700-1730Mhz
> 
> I'd probably crush HWBot if I re-installed my Vega 64 LC: https://www.3dmark.com/fs/18270986
> 
> GPU Score: 29358


Well, it reports 1150mV. But when I go to unlock Wattman's voltage control, 1200 is already entered?





PontiacGTX said:


> afterburner if you have any brand, if sapphire then TRI-XX,only to get the Boost feature.. otherwise for overclocking and ' modifying ' the P-States MSI is the best bet. even you can use wattman if you dont care about riva tuner


At the moment I just want AB for the overlay. But it's a Sapphire reference PCB, so probably no benefit to TriXX in that regard.


----------



## Alastair

I've managed 1670MHz in Afterburner, which is giving me about 1630-1660MHz in Heaven at 1150mV. After about 20 mins it settles at 40C core with my rad fans at a measly 1200rpm. I try applying voltage changes in Afterburner but I am not seeing the voltage of my GPU change; tried all the way to +50mV. I'm on 19.9.3.


----------



## 113802

Alastair said:


> I've managed 1670Mhz in afterburner which is giving me about 1630-1660MHz in Heaven at 1150mv. Which after about 20mins settles at 40C core with my rad fans at a measly 1200rpm. I try applying voltage changes in afterburner but I am not seeing the voltage of my gpu change. Tried all the way to +50mv. I'm on 19.9.3.


Flash the RX Vega 64 LC bios if you're using a block.


----------



## Alastair

WannaBeOCer said:


> Flash the RX Vega 64 LC bios if you're using a block.


Shwepps I forgot about the LC BIOS.


----------



## Naeem

Anyone here with a Vega 64/LC/56, a first-gen Ryzen, Battlefield V, and fully updated Windows: can you check whether your GPU is taking full load after the latest Battlefield V update? Mine just shows 50-70% load and the clock hovers around 1000-1200MHz with about 75W load on the chip; it used to go 1680+MHz with 90-100% load before the last update.


----------



## Bruizer

Naeem said:


> Anyone here with Vega 64/LC/56 and Ryzen first gen and Battlefield V and fully updated windows can you check if your gpu is taking full load after latest Battlefield V update mine just show like 50%-70% load and clock just hover around 1000 1200 mhz with like 75w load on chip it used to go 1680+mhz with 90 to 100 load before last update ?


My Vega 56 only runs full load on BFV if I enable DX12.


----------



## Naeem

Bruizer said:


> My Vega 56 only runs full load on BFV if I enable DX12.




Can you confirm that it was after this update? Also, what FPS do you get on the Operation Underground map with both DX11 and DX12?


----------



## bill1971

I want to OC my Vega 56; I use a waterblock. What's better: stock BIOS with undervolt/OC, or flashing the 64 BIOS? Is there a power mod or reg key?


----------



## 113802

bill1971 said:


> I want to make oc to my Vega 56,i use waterblock, whats better stock bios uv, oc? Flash 64 bios? Is there a power mod, reg key?


Flashing the Vega 64 bios: https://amp.hothardware.com/news/amd-radeon-rx-vega-56-unlocked-vega-64-bios-flash


----------



## PontiacGTX

WannaBeOCer said:


> Flashing the Vega 64 bios: https://amp.hothardware.com/news/amd-radeon-rx-vega-56-unlocked-vega-64-bios-flash


Tempted to do this, but I need a good cooler; otherwise the flashing is pointless.


----------



## Satanello

Hi, I'm using a Vega 64 Nitro+ with excellent performance on stock cooling (undervolt and +50% PL). Soon I'll probably switch to custom liquid cooling on the CPU; do you think I'll get a decent performance boost with a full-cover waterblock (a Bykski full-cover costs 90€) on the card?

Sent from my LYA-L29 using Tapatalk


----------



## bill1971

Satanello said:


> Hi, I'm using a Vega 64 Nitro+ with excellent performance on stock cooling (undervolt and +50%PL). Soon I'll probably switch to custom liquid cooling on cpu, you think I'll have decent performance boost with a full cover wb (Byksky fullcover cost 90€) on vga card?
> 
> Sent from my LYA-L29 using Tapatalk


No, your card runs cool; you don't need to spend money on extra cooling.


----------



## Satanello

bill1971 said:


> No your card is cool you don't need extra cooling and spend your money.


Thanks for your advice. I will raise the clocks to see how much the gpu can hold, keeping the stock cooling.

Sent from my LYA-L29 using Tapatalk


----------



## miklkit

I have a V64 Nitro also. It is in a case modded for good airflow. In Trixx ver. 6.6 the fans are set to hit 100% @ 60C. They always wait longer, but it does run cool, with most games running in the 50-60C range and two games pushing it into the 60-66C range.



It is set to run @ 1660/1100MHz, +50 PL, and -81mV. The RAM is barely stable there, and in some games the GPU will overclock itself as high as 1690MHz. So go for it; you have room to improve it.


----------



## Alastair

Can anyone post a copy of the 265W Liquid edition BIOS? All I can seem to find in the TPU BIOS database are the 220W BIOSes.


----------



## LicSqualo

Alastair said:


> Can anyone post a copy of the 265 watt Liquid edition bios? All I can seem to find on the TPU bios database are the 220 watt bios'es.


This is my Vega 64 LC edition BIOS.


----------



## colorfuel

Hi,

I'm having issues with shutdowns at idle since a few driver iterations, namely 19.9.1, .2, .3 and 19.10.1. I've also experienced the same kind of shutdowns while playing Gothic 2 (DX11). The game only needs 600MHz on the GPU, but after 5 minutes or so it freezes and the PC just shuts down. On shutdown, only the status LED stays alive and I need to pull the plug for a few seconds to actually be able to boot. The graphics subsystem crashes, since after reboot the Wattman settings are reset.

I use DDU for all the driver installations and tried with and without OC/UV.

The crazy part is that 19.5.2 WHQL works and doesn't cause any of these issues (neither idle shutdowns nor Gothic 2 shutdowns), while the new drivers work without crashing when playing GPU-heavy titles like Remedy's Control.

The crashing only seems to happen at low usage or when idling in Windows on drivers newer than 19.5.2.

It's a weird issue, I know; I couldn't find anything through Google.

I suspect the newer drivers could create unusual voltage spikes at low states or idle, causing the PSU to trip. But that is just a hypothesis, and I don't want to buy a new PSU just to test it, since everything seems to work well in high-load scenarios.

edit: I just noticed my signature is gone. I'll update that. Until then: Vega 64/Morpheus - Gigabyte Gaming K7 AX370 - bequiet E9 CM 580W - Ryzen [email protected], 16GB Flare-X [email protected] optimized.


----------



## VicsPC

colorfuel said:


> Hi,
> 
> I'm having issues with shutdowns on idle since a few driver iterations. Namely 19.9.1, 2, 3 and 19.10.1. I've also experienced the same kind of shut-downs while playing Gothic 2 DX11. The game only needs 600Mhz GPU, but after 5 Minutes or so freezes and the PC just shuts down. On shut-down, only the status LED stays alive and I need to pull the plug for a few seconds to actually be able to boot. The Graphic subsystem crashes, since after reboot, Wattman settings are reset.
> 
> I use DDU for all the driver installations and tried with and without OC/UV.
> 
> The crazy part is where 19.5.2 WHQL works and doesn't cause any of these issues, neither idle shut-downs or Gothic 2 shut downs and the new drivers work without crashingwhen playing GPU-heavy titles like Remedy Control.
> 
> The crashing only seems to happen in low usage or idlind in Windows on drivers newer than 19.5.2.
> 
> Its a weird issue, I know, I couldn't find anything through google.
> 
> I suspect the newer drivers could create unusual spikes in voltage on low states or idle, causing the PSU to crash. But that is just a hypothesis and I dont want to buy a new PSU just to test that, since it seems to work well in high-load scenarios.
> 
> edit: I just see my signature is gone. I'll update that. Until then: Vega 64/Morpheus - Gigabyte Gaming K7 AX370 - Bequiet E9 CM 580W - Ryzen [email protected], 16GB Flare-X [email protected] optimized.


I haven't had that issue at all. I did have issues with 19.1 where Windows would take forever to load, and had a BSOD while launching Trine 4 for review; I uninstalled, went to 19.3, and Windows now boots properly. Stop using DDU as well: the AMD uninstaller does the same thing and even does it in Safe Mode now. I had nothing but issues with DDU in the past when using it.


----------



## Deadboy90

So I've got my Vega 56 under a custom loop, but I still want more performance than what I'm getting with the power target cranked and the 64 BIOS flashed to it. I've heard about editing a powerplay table or something to pump more voltage into the card and was wondering:

1: Is that safe for 24/7 usage?
2: Is there a good guide for how to do it?

I've seen some of Steve's videos messing with it, but I didn't see one showing precisely how to do it.

Thanks
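For what it's worth, on Windows the route people usually mean by this is a "soft" powerplay table: a binary registry value the driver reads at startup in place of the table baked into the BIOS (tools like OverdriveNTool can write it for you). It is relatively safe in the sense that it never touches the card's flash, since deleting the value and rebooting reverts to stock; whether the extra voltage itself is safe for 24/7 use is a separate question. A sketch of where the value lives, assuming your card is adapter subkey 0000 (check that subkey's DriverDesc first, since the index is an assumption here), with the hex payload coming from a table editor rather than hand-typing:

```
; Windows Registry fragment (sketch, not a complete .reg file).
; {4d36e968-...} is the standard display-adapter device class key;
; "0000" is an assumed adapter index -- verify via DriverDesc first.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"PP_PhmSoftPowerPlayTable"=hex:<binary table exported by your table editor>
```

Because it's just a registry value, this is generally preferred over a voltage-modded BIOS flash for experimentation: worst case, you boot into Safe Mode, delete the value, and you're back to the BIOS defaults.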


----------



## PontiacGTX

LicSqualo said:


> This is mine Vega64 LC edition bios .


Samsung?


----------



## LicSqualo

PontiacGTX said:


> Samsung?


 Yes.


----------



## Satanello

miklkit said:


> I have a V64 Nitro also. It is in a case modded for good air flow. In Trixx ver. 6.6 the fans are set to hit 100% @ 60C. They always wait longer but it does run cool with most games running in the 50-60C range, and 2 games pushing it into the 60-66C range.
> 
> 
> 
> It is set to run @ 1660/1100 mhz, +50PL, and -81 voltage. The ram is barely stable there and in some games the GPU will overclock itself as high as 1690mhz. So go for it as you have room to improve it.


Thanks, I installed Trixx 6.8 and after some testing I've actually reached a good result with the fans on Auto. I think I still have room to go up a little.


----------



## chris89

Are you guys able to mod the bios on these cards yet or no?


----------



## Alastair

Satanello said:


> Thanks, I installed Trixx 6.8 and after some testing I've actually reached a good result with fan on Auto settings. I think I still have room to go up a little


I can't even get Time Spy to run. I just get an instant black screen as it finishes loading graphics test 1, and I haven't even started overclocking yet. I'm just going to give up on 3DMark, because everything else works.


----------



## Loladinas

Alastair said:


> I can't even get timespy to run. I just get an instant black screen as it finishes loading graphics test 1. I haven't even started overclocking yet. I'm just going to give up with 3D mark. Cause everything else works.


It's a known issue with the SystemInfo component of the benchmark, although I don't remember where I read about it; it's been a while. I had the same issue, then with some driver version it went away (19.7.1 and whatever the latest SystemInfo version was around July-August?).


----------



## PontiacGTX

Alastair said:


> I can't even get timespy to run. I just get an instant black screen as it finishes loading graphics test 1. I haven't even started overclocking yet. I'm just going to give up with 3D mark. Cause everything else works.


Were you using his BIOS?


----------



## Alastair

PontiacGTX said:


> were you using his bios?


No, I am still stock, trying to get a baseline. I seem to be having CPU-related issues.


----------



## YellowBlackGod

Hello everybody...

I am about to upgrade from my RX 590 to a Vega 64 (Sapphire Nitro+) for 100+ FPS full-HD gaming. I have a big case (Storm Stryker), a good stable 910W PSU, and can find the Vegas at a great price. Is the upgrade worth it?


----------



## Wuest3nFuchs

Hey all! 

All of a sudden...
I don't know why the hell Radeon ReLive won't let me connect to YouTube.
I'm not getting any form of notification on my Google account either. Tested with the 19.4.3 and 19.10.1 drivers.

Sent from my SM-G950F using Tapatalk


----------



## Alastair

I can't seem to get my system to do anything; I just get an instant black screen under any sort of 3D load. At first I thought it was my CPU OC, so I set it to stock. Then I thought I might have damaged my delidded CPU in the mounting process; took it out and it turns out it's A-OK. Now I am 100% stock on everything and I still get a hard black screen. Restart the system and it says the display driver stopped responding and was reset.



So whenever I apply a 3d load. The system black screens and I have to reset or power down the system to get it to run again. 



I HAVE TRIED THE FOLLOWING. 

Clean install of windows
DDU driver and reinstall
Driver roll back to 19.5.2 , 19.6.2, 19.9.2 and 19.10.1
Reset CPU to stock clocks
Reset GPU to stock clocks.
Lowered power limit to -20%
Lowered clocks to 1200MHz core
Set 945MHz as HBM minimum state.
Set HBM to 800MHz. 

Assuming the little switch near the IO is the BIOS switch I have tried the switch in both positions. 



I have a Delta based Antec HCP-1300 so I doubt power is the issue here. 


Anyone else got any help for me?


----------



## miklkit

That has happened to me in the past and it was the PSU. It was not defective but just could not put out the needed voltage. It got so hot it burned my fingers when I touched it.


----------



## Minotaurtoo

Alastair said:


> I cant seem to get my system to do anything. I just get an instant blackscreen under any sort of 3D load. At first I thought it was my cpu OC. So I set it to stock. Then I thought I might of damaged my Delidded cpu in the mounting process. Took it out and it turns out its A ok. Now I am 100% stock on everything and I still get a hard black screen. Restart the system and it says display driver stopped responding and was reset.
> 
> 
> 
> So whenever I apply a 3d load. The system black screens and I have to reset or power down the system to get it to run again.
> 
> 
> 
> I HAVE TRIED THE FOLLOWING.
> 
> Clean install of windows
> DDU driver and reinstall
> Driver roll back to 19.5.2 , 19.6.2, 19.9.2 and 19.10.1
> Reset CPU to stock clocks
> Reset GPU to stock clocks.
> Lowered power limit to -20%
> Lowered clocks to 1200MHz core
> Set 945MHz as HBM minimum state.
> Set HBM to 800MHz.
> 
> Assuming the little switch near the IO is the BIOS switch I have tried the switch in both positions.
> 
> 
> 
> I have a Delta based Antec HCP-1300 so I doubt power is the issue here.
> 
> 
> Anyone else got any help for me?


Was PSU for me too last time it happened to me...


----------



## Alastair

Minotaurtoo said:


> Was PSU for me too last time it happened to me...


But I mean, this PSU is still new... although, that being said, this is the third HCP-1300 I am on. Maybe they are flawed in their design somehow?

I thought it was the CPU, but I can have it doing RealBench @ 5050MHz for several hours without issue.

But I apply a combined load and BLACKSCREEN.

I tried adding 50mV in Afterburner and BLACKSCREEN. So I don't know; it looks like it's a power issue. Maybe I am sending this PSU back AGAIN on RMA.


----------



## Minotaurtoo

I'm guessing you don't have a tester psu then... too bad you aren't closer by, I'd loan you mine to test the theory... I also thought my PSU was good... but what I've found was that some PSU's are or get too sensitive to the sudden draws that Vega can have... seems it triggers the "short circuit" protection in it... I had a PSU that should have been strong enough since I could run 2 7950's highly overclocked on it but it wasn't ready for Vega... got a Rosewill 750w and all was well... I did a lot of googling before I ran across someone who was having the same problems, they even had a 1200w psu fail to work with Vega... Fortunately for me I had a few PSU's around here to test with, and strangely, the cheaper ones worked better than the expensive one I had before because they lacked a good overcurrent protection...


----------



## Alastair

Minotaurtoo said:


> I'm guessing you don't have a tester psu then... too bad you aren't closer by, I'd loan you mine to test the theory... I also thought my PSU was good... but what I've found was that some PSU's are or get too sensitive to the sudden draws that Vega can have... seems it triggers the "short circuit" protection in it... I had a PSU that should have been strong enough since I could run 2 7950's highly overclocked on it but it wasn't ready for Vega... got a Rosewill 750w and all was well... I did a lot of googling before I ran across someone who was having the same problems, they even had a 1200w psu fail to work with Vega... Fortunately for me I had a few PSU's around here to test with, and strangely, the cheaper ones worked better than the expensive one I had before because they lacked a good overcurrent protection...


I do have a spare unit which I just installed. A Gigabyte XP1200M out of my Rure Penthe miner. And what do you know. :thumb:
Guess this HCP-1300 is going back. I looked at some Newegg reviews for the HCP-1300, and it seems this unit is not without its issues. Which is sad; I figured Delta quality would really be something, but I guess not in this particular unit. Yet it used to run 2 Furys unlocked to 60 CUs, both at 1150MHz, which is more power than a single Vega, and it seemed to do fine. It seems these units hit about a year old and then go rapidly downhill.


----------



## Alastair

I managed to break through 4K physics in Timespy. In fact I pretty much hit 4200 physics at 5050MHz. So that is cool. Here is a TS run at 5050MHz / 2660 NB / 2925 HTT with DDR3 @ 2127MHz 9-10-9-27 1T, with the V64 at stock. https://www.3dmark.com/spy/8996782


----------



## Ipak

3x more threads 3x the score https://www.3dmark.com/compare/spy/8813251/spy/8996782#


----------



## Alastair

Ipak said:


> 3x more threads 3x the score https://www.3dmark.com/compare/spy/8813251/spy/8996782#


that's a lot of threads


----------



## Alastair

Ipak said:


> 3x more threads 3x the score https://www.3dmark.com/compare/spy/8813251/spy/8996782#


Is your Vega OC'd? Cause I am at stock out-of-the-box settings. No UVing, no OCing. No nufink!


----------



## Ne01 OnnA

Alastair said:


> that's a lot of threads


If you can, save up and go for a Zen 3700X on a used mobo (Hero VI or VII will do the job)
+ 16GB 2x8 (for starters) Predator 4000MHz+ (4133 CL19 is the best perf/price).


----------



## Alastair

Ne01 OnnA said:


> If you can, save up and go for a Zen 3700X on a used mobo (Hero VI or VII will do the job)
> + 16GB 2x8 (for starters) Predator 4000MHz+ (4133 CL19 is the best perf/price).


Nah the FX is sticking for now.


----------



## Ipak

Alastair said:


> Is your Vega OC'd? Cause I am at stock out-of-the-box settings. No UVing, no OCing. No nufink!


https://www.3dmark.com/compare/spy/8813251/spy/8996782/spy/8999738/spy/8999694#

Yeah its oced, added stock 0% and 50% power limit.


----------



## Alastair

Ipak said:


> https://www.3dmark.com/compare/spy/8813251/spy/8996782/spy/8999738/spy/8999694#
> 
> Yeah its oced, added stock 0% and 50% power limit.


Sweet, my GFX score seems to be holding up then, as I haven't done any tweaking yet. Still on the stock power limit of 220W. CPU score seems to be comparable to a 1500X from what I can see.


----------



## Alastair

Ipak said:


> https://www.3dmark.com/compare/spy/8813251/spy/8996782/spy/8999738/spy/8999694#
> 
> Yeah its oced, added stock 0% and 50% power limit.


 Here is a +50% power limit / No OC run @5050MHz CPU. https://www.3dmark.com/spy/9002697
Looks like my GFX score is holding on.


----------



## Minotaurtoo

Alastair said:


> Here is a +50% power limit / No OC run @5050MHz CPU. https://www.3dmark.com/spy/9002697
> Looks like my GFX score is holding on.


 Glad you got it going, hate you have to go through the RMA process... Kinda tickles me that you are sticking with FX, my son has my old 9590 and sometimes I miss it... I used to run up to 5.217ghz for daily use clocks... and yes it was actually prime stable... could hit 5.427 on it for brief times... but in his rig with his cooling, it barely can hold 4.7ghz in the summer... FX rules of overclocking: if you can cool it, you can clock it!.... and boy was that true... I didn't realize how much clock speed my exotic cooling really bought me until he put it under a Corsair AIO cooler.... just not enough to handle that beast... 



On this Ryzen chip I feel my custom loop is wasted, I haven't tried a stock cooler on it, but based on reviews, I'm betting it's only getting me 50mhz at best.


----------



## Deadboy90

Anyone? I picked up a thicc 360 rad that I'm drooling over and want to install when I get home next week.



Deadboy90 said:


> So I've got my Vega 56 under a custom loop, but I still want more performance than what I'm getting with the power target cranked and the 64 bios flashed to it. I've heard about editing a powerplay table or something to pump more voltage into the card and was wondering
> 
> 1: is that safe for 24/7 usage?
> 2: is there a good guide for how to do it?
> 
> I've seen some of Steve's videos messing with it, but I didn't see one showing precisely how to do it.
> 
> thanks


----------



## Alastair

Minotaurtoo said:


> Glad you got it going, hate you have to go through the RMA process... Kinda tickles me that you are sticking with FX, my son has my old 9590 and sometimes I miss it... I used to run up to 5.217ghz for daily use clocks... and yes it was actually prime stable... could hit 5.427 on it for brief times... but in his rig with his cooling, it barely can hold 4.7ghz in the summer... FX rules of overclocking: if you can cool it, you can clock it!.... and boy was that true... I didn't realize how much clock speed my exotic cooling really bought me until he put it under a Corsair AIO cooler.... just not enough to handle that beast...
> 
> 
> 
> On this Ryzen chip I feel my custom loop is wasted, I haven't tried a stock cooler on it, but based on reviews, I'm betting it's only getting me 50mhz at best.


There are a number of reasons I stuck with my FX.

1. Finances. I was planning some upgrades for Ghost, which included an R5 3600, but that went belly up shortly after I got the Vega.

2. Nothing out there seems to OC really well, which is no fun for me. A lot of the enjoyment I get from PCs is seeing how far and how hard I can push. For this reason I have been considering a Xeon 1680 V2 as an upgrade, simply because it would land me in Ryzen-ish territory but I can still have some fun with overclocking.

3. Once it was clear that finances were going to be in the way of further upgrades after the Vega, I made it my personal mission to see how much more I could extend the life of this platform, to see if an upgrade really was that necessary, and mainly to stick it to those who kept saying FX sucked. I delidded it to give me some extra thermal headroom, which unlocked an additional 50MHz of core clock. Not much, I know, but this motherboard can't handle more. Man, 5.2 sounds nice; hitting that would be awesome. I don't think even 5.1 will happen for me, even once I have the AC installed and ambients down (South African summer is coming and it's killing me).

So far it seems to be managing. I have seen no major frame rate issues. It takes a heck of a lot of clockspeed, but she gets the job done.


----------



## Alastair

So I installed the LC bios on my V64, and at the default settings, trying to run Heaven is proving to be unsuccessful. I get a lock-up pretty quickly and the driver has to reset. Any ideas?


----------



## LicSqualo

Have you "installed" OverdriveNTool? Just to check the settings.
Mine starts at 1.250V, which is really "overvolted" for my clocks.
The times I've had this behaviour I see ~1810MHz on my card before the driver reset.
I have GOverlay running in the background (a little LCD monitor with some parameters displayed), and when the driver resets itself and comes back with the default settings (I've noted this happens when my undervolt is too low for my overclock), I see a really high GPU clock, as mentioned before.


----------



## LicSqualo

Also, the Adrenalin drivers give different results for my Vega (clocks and voltages).
With every driver upgrade (today, 19.10.2) I have to recheck whether my settings are still stable.


----------



## Alastair

LicSqualo said:


> Have you "installed" OverdriveNTool? Just to check the settings.
> Mine starts at 1.250V, which is really "overvolted" for my clocks.
> The times I've had this behaviour I see ~1810MHz on my card before the driver reset.
> I have GOverlay running in the background (a little LCD monitor with some parameters displayed), and when the driver resets itself and comes back with the default settings (I've noted this happens when my undervolt is too low for my overclock), I see a really high GPU clock, as mentioned before.


Brilliant suggestion, thank you. It looks like the LC bios was being very aggressive with P6 out of the gate, setting it to 1668MHz at 1100mV, which it didn't seem to like. I upped the voltage on P6 to 1200mV and it seems happy.


----------



## LicSqualo

Alastair said:


> Brilliant suggestion, thank you. It looks like the LC bios was being very aggressive with P6 out of the gate, setting it to 1668MHz at 1100mV, which it didn't seem to like. I upped the voltage on P6 to 1200mV and it seems happy.


I'm very glad I helped you out.


----------



## Alastair

LicSqualo said:


> I'm very glad I helped you out.


Nope, didn't work. I thought it did, but it didn't. Here is a screenshot of OverdriveNTool.


----------



## LicSqualo

Sorry! I'll try to recommend some settings.
You don't have the liquid-cooled version (it seems to me), so maybe these clocks are a bit high for your card.
I would try lowering P6 and P7 as well as their voltages.
These might be suitable values (or something similar, to begin with):
P6 at 1632MHz with 1135mV
P7 at 1702MHz with 1170mV
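If it helps, here is a rough way to sanity-check a curve like this before applying it. This is illustrative Python only; the table layout and field names are mine, not OverdriveNTool's actual format, and the numbers are just the values suggested above.

```python
# Illustrative only: dict layout is mine, not OverdriveNTool's file format.
# These are the P6/P7 values suggested above.
suggested = {
    6: {"mhz": 1632, "mv": 1135},
    7: {"mhz": 1702, "mv": 1170},
}

def monotonic(pstates):
    """Clocks and voltages should both rise as the P-state number rises."""
    ordered = [pstates[k] for k in sorted(pstates)]
    return all(lo["mhz"] < hi["mhz"] and lo["mv"] < hi["mv"]
               for lo, hi in zip(ordered, ordered[1:]))

def mv_per_mhz(a, b):
    """Voltage cost of the extra clock between two adjacent states."""
    return (b["mv"] - a["mv"]) / (b["mhz"] - a["mhz"])

assert monotonic(suggested)
print(round(mv_per_mhz(suggested[6], suggested[7]), 2))  # 0.5 mV per MHz
```

A curve that fails the monotonic check (or has a huge mV-per-MHz jump into P7) is usually the first thing to fix when a card black-screens at the top state.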


----------



## Alastair

LicSqualo said:


> Sorry! I'll try to recommend some settings.
> You don't have the liquid-cooled version (it seems to me), so maybe these clocks are a bit high for your card.
> I would try lowering P6 and P7 as well as their voltages.
> These might be suitable values (or something similar, to begin with):
> P6 at 1632MHz with 1135mV
> P7 at 1702MHz with 1170mV


It isn't the liquid cooled version. It's a reference card. But it's under a full cover waterblock.


----------



## LicSqualo

Alastair said:


> It isn't the liquid cooled version. It's a reference card. But it's under a full cover waterblock.


Well, thank you for the clarification; it lets me reason about other possibilities. Temperature should not be a problem.
I would start with a comparison between the initial settings of the original bios and those of the new bios. TechPowerUp helps in this case: https://www.techpowerup.com/gpu-specs/amd-vega-10.g800.
So I imagine a clock/voltage problem. Unfortunately I cannot think of an obvious cause, but I would start by lowering the clocks to the original settings and then go up from there, testing and verifying what happens.
As I also wrote above, I've noticed that the settings used with one driver are not valid with the next and I have to recalibrate my clocks; maybe that could be a path for you to follow as well.


----------



## DrEVILish

I have Vega Frontier Edition, how can I get Global Wattman working with this card with the new 2019 drivers?

I've tried installing 19.Q1 Feb but I was unable to update or switch driver modes. Check for Updates returns
"Unable to connect to the AMD server and check for updates".

Anyone got any suggestions?


----------



## Wuest3nFuchs

https://www.amd.com/en/support/kb/release-notes/rn-rad-win-19-10-2

Gesendet von meinem SM-G950F mit Tapatalk


----------



## Alastair

So I tried the LC bios again, and at lower clocks and voltages. And it runs. But I get these weird fluctuations in clocks and voltages. It tanks HARD, which obviously makes for choppy frame rates and is quite jarring. Here is a screenshot of a Timespy stress test on the LC bios set to -50mV, +50% power, asking for 1650MHz on the core.

Edit: I will add that it does not do this on the standard air bios.


----------



## Ipak

Core temps or hotspot temps causing it to throttle, probably. My card won't run the LC bios at all, crashing even while browsing the web.


----------



## Alastair

Ipak said:


> Core temps or hotspot temps causing it to throttle, probably. My card won't run the LC bios at all, crashing even while browsing the web.


No it definitely isn't core or hot-spot Temps. This card is under water.


----------



## LicSqualo

This is my TimeSpy run to compare.
In my experience, a bit of clock throttling is always present with Vega.


----------



## Alastair

LicSqualo said:


> This is my TimeSpy run to compare.
> In my experience, a bit of clock throttling is always present with Vega.


No, but the LC bios is giving extreme fluctuations, to the point that it causes jarring stutter.


----------



## LicSqualo

Perhaps the bios is too "aggressive" for the hardware?


----------



## Alastair

LicSqualo said:


> Perhaps the bios is too "aggressive" for the hardware?


It would be disappointing if it is. After letting Heaven run for about 30 minutes set at 1650MHz, +50% power, I hit 45C core and 72C hotspot. What is the max for hotspot?


----------



## Minotaurtoo

I can't help but wonder if the VRMs aren't handling it well... I am not well informed about the reference VRM design, but that'd be my guess.


----------



## Alastair

Minotaurtoo said:


> I can't help but to wonder if the vrm's aren't handling it well... I am not well informed about the reference vrm design, but that'd be my guess.


VRMs maxed around 48 and 50C respectively; full cover block, remember. Does the LC bios have a ~75C limit on the hotspot?


----------



## Minotaurtoo

I really don't know, but it's the amperage limit I was thinking of... some vrm's have a hardware set amp limit to keep them safe... FuryX's were bad about that... its the only thing we could figure on them to explain how there was such a bad negative performance impact from increasing voltage anyway... I've seen similar things with vega... .05 voltage reduction helped me gain more performance than increasing clocks or power limit did... a mix of all worked best... I'm pretty sure water wouldn't help my card because of that... temps have never been an issue for me either.


----------



## Alastair

Minotaurtoo said:


> I really don't know, but it's the amperage limit I was thinking of... some vrm's have a hardware set amp limit to keep them safe... FuryX's were bad about that... its the only thing we could figure on them to explain how there was such a bad negative performance impact from increasing voltage anyway... I've seen similar things with vega... .05 voltage reduction helped me gain more performance than increasing clocks or power limit did... a mix of all worked best... I'm pretty sure water wouldn't help my card because of that... temps have never been an issue for me either.


We never did figure out what was causing the negative scaling above 1250mV. I don't think it was an amperage thing, as the core clocks never dropped. And remember, with Fiji we weren't limited by on-card bios checks, so we were free to use custom BIOSes. I used a 400A bios with a 400W power limit and +50% power limit on top of that, and I still got the negative scaling. So meh. Whatever, Fiji.

Now, as for the Vega, I did the powerplay mod. So surely that mods the amperage limits as well? Cause amps × volts = power. So surely upping the power limit as a whole increases the amp limit as well?
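That reasoning can be put as a quick back-of-the-envelope check. Numbers are illustrative: 264W is the LC power target discussed in this thread, and 1.2V is an assumed load voltage, not a measurement.

```python
# Back-of-the-envelope for the "amps x volts = power" argument above.
# 264W = LC power target from the thread; 1.2V = assumed load voltage.
def implied_current_a(power_w, voltage_v):
    """I = P / V: the current a given power target allows at a fixed voltage."""
    return power_w / voltage_v

stock = implied_current_a(264, 1.2)         # ~220A at the stock target
modded = implied_current_a(264 * 1.5, 1.2)  # ~330A with +50% power

# At a fixed voltage, scaling the power limit scales the implied current
# limit by the same factor -- a separate hardware amp limit would not move.
assert abs(modded / stock - 1.5) < 1e-9
```

Which is the open question: if the throttling came from a fixed hardware amp limit rather than the bios power target, raising the powerplay limit wouldn't shift it.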


----------



## miklkit

My air cooled Nitro has about a 20C difference between the gpu and the hot spot normally but when working at max it is a bit less. For instance if the gpu hits 67C the hot spot will hit 83C. Stock I have seen it get into the 90s. 

I had an 8800GTS that always ran in the 85C range and it never died. That was the hottest gpu I've ever had.


----------



## Alastair

miklkit said:


> My air cooled Nitro has about a 20C difference between the gpu and the hot spot normally but when working at max it is a bit less. For instance if the gpu hits 67C the hot spot will hit 83C. Stock I have seen it get into the 90s.
> 
> I had an 8800GTS that always ran in the 85C range and it never died. That was the hottest gpu I've ever had.


Well I seem to be running about 45C on the core (at lowish fanspeeds) and I run around 72C on HS.


----------



## Minotaurtoo

Alastair said:


> We never did figure out what was causing the negative scaling above 1250mv. I dont think it was an amperage thing as the core clocks never dropped. And remember with Fiji we weren't limited by on card BIOS checks so we were free to use custom BIOSes. I used a 400amp BIOS with a 400 watt power limit and +50% power limit on top of that and I still got the negative scaling. So meh. Whatever Fiji.
> 
> 
> Now as for the Vega I did the power play mod. SO surely that mods the amperage limits as well? Caues amps x volts = power. So surely upping the power limit as a whole increases the amp limit as well?


I'm actually referring to a hardware limit, not bios controlled... Although clocks didn't show they were dropping, me and a few others on here noticed that Fury had the ability to lie to you... like the memory speeds lied... you could only tell when it was doing what it said it was doing by observing performance...even then it was just "is it better yes/no" not really sure if it hit exactly what it claims it did.


----------



## Alastair

Minotaurtoo said:


> I'm actually referring to a hardware limit, not bios controlled... Although clocks didn't show they were dropping, me and a few others on here noticed that Fury had the ability to lie to you... like the memory speeds lied... you could only tell when it was doing what it said it was doing by observing performance...even then it was just "is it better yes/no" not really sure if it hit exactly what it claims it did.


But was there really a hardware limit? I seem to remember the Furys having a rather overbuilt VRM?


----------



## Minotaurtoo

Alastair said:


> But was there really a hardware limit? I seem to remember the Fury's having a rather overbuilt VRM?


it was supposed to be overbuilt, but I noticed that it seemed to never draw more than certain wattage no matter how much headroom we gave the bios... can't remember exactly what it was, but it was significantly less than vega can pull...especially in spikes.


----------



## Alastair

What IS the default boost clock for Vega 64? I see reports saying 1545. MSI, if I reset it, reports 1630. So what is the default boost clock?


----------



## LicSqualo

Alastair said:


> What IS the default boost clock for Vega 64. I see reports saying 1545. MSi if I reset it reports 1630. So what is the default boost clock?


https://www.techpowerup.com/gpu-specs/amd-vega-10.g800

For the air-cooled it is 1536MHz.
For the LC it is 1668MHz.


----------



## Alastair

LicSqualo said:


> https://www.techpowerup.com/gpu-specs/amd-vega-10.g800
> 
> For the Air cooled is 1536MHz
> For the LC is 1668MHz


So why are Afterburner and GPU-Z telling me my default boost clock on the air bios is 1630, and on the LC bios 1750?


----------



## LicSqualo

Alastair said:


> So why is afterburner and GPU-z telling me my default boost clock on air bios is 1630 and on the LC bios 1750?


Good question; my GPU-Z tells me the same (1750).


----------



## Alastair

So I did some research into the clock fluctuations with the LC bios. It appears to be a bug with standard air cards that were modded with the LC edition bios. Anyone else experienced this? And has anyone with an air card under a custom loop managed to find a way around the issue?


----------



## 113802

Alastair said:


> So why is afterburner and GPU-z telling me my default boost clock on air bios is 1630 and on the LC bios 1750?


Those are peak boost clocks; I'm shocked AMD removed those clocks from their website. They had them on their site on launch day. Probably removed them before people complained, like they did with Zen. You'll only see those clocks with a very light load at stock settings.

I know I was able to see those clocks with GPUPI benchmark.



Alastair said:


> So I did some research into the clock fluctuations with the LC Bios. It appears to be a bug with the standard air cards that were modded with LC edition bios'es. Anyone else experienced this? And has anyone that has their air cards under a custom loop managed to find a way around the issue?


That's not a bug; even the stock RX Vega 64 LC cards' frequency fluctuates due to reaching the power target. If you're using an air cooler, the default thermal target on the LC bios is lower, so your card might even thermal throttle, since its max target temp is 70C.

There are two LC BIOSes: the normal one with a power target of 264W, and a power-saving bios with a power target of 220W; you can increase both by 50%. Make sure you flashed the 264W bios and increase the power target by 50%. Even then, at the stock 1.25V of the LC bios, it will still power throttle. You'll need to UV your card, which you might not be able to do with the LC bios, since the LC cards are binned.

Here's my old RX Vega 64 LC under water with a OC/UV:


----------



## Alastair

WannaBeOCer said:


> Those are peak boost clocks, shocked AMD removed those clocks off of their website. They had them on their site on launch day. Probably removed them before people complained like they did with Zen. You'll only see those clocks with very light load at stock settings.
> 
> I know I was able to see those clocks with GPUPI benchmark.
> 
> 
> 
> That's not a bug, even the stock RX Vega 64 LC cards frequency fluctuates due to reaching the power target. If you're using an air cooler the default thermal target on the LC bios is lower so your card might even thermal throttle since its max target temp is 70C.
> 
> There are two LC bios the normal one with a power target of 264w and a power saving bios with a power target of 220w and you can increase both by 50%. Make sure you flashed the 264w bios and increase the power target by 50%. Even than at the stock 1.25v of the LC bios it will still power throttle. You'll need to UV your card which you might not be able to do with the LC bios since the LC cards are binned.
> 
> Here's my old RX Vega 64 LC under water with a OC/UV:
> 
> https://www.youtube.com/watch?v=7I8SxpajvLo&t=25s


:doh: I feel it's becoming a habit on OCN for people to answer posts without fully reading them these days. I have already stated in a few posts that my card is under a full cover block; in fact, I think I already mentioned the custom loop in my last post.

Anyway, in all their launch material AMD states 1546MHz for air and 1668 for LC, so :eh-smiley I don't ever remember seeing 1630 or 1750 advertised anywhere.

And it most certainly is a bug. I have done enough digging to find that out. It is affecting air card users (air as in reference air board, not necessarily cooling) who are using the LC bios on their cards on any of the 19.X series drivers. I am most certainly NOT hitting power limits, as I have modded the soft powerplay tables for 150% power/current. It ONLY seems to affect air card users on the LC bios.

It is not a heat issue, as I only hit around 45C core, and HS sits around 70ish, which as I understand it is perfectly reasonable for HS.

If all LC edition cards fluctuated as badly as this bug is making mine and others fluctuate, they would never have sold, as people don't like their gaming sessions being interrupted by slideshow FPS for a couple of seconds every 35-50 seconds.

The bug is confirmed. I am now looking for a solution. Hence why I asked whether anyone had found one.


----------



## 113802

Alastair said:


> :doh:I feel its becoming habit on OCN for people to answer posts without fully reading them these days. I have already stated in a few posts that my card is under a full cover block, in fact I think I already mentioned custom loop in my last post.
> 
> 
> 
> Any way. In all their launch material AMD states 1546MHz for air and 1668 for LC so :eh-smiley I don't ever remember seeing 1630 or 1750 advertised anywhere.
> 
> 
> 
> And it most certainly is a bug. I have done enough digging to find that out. It is effecting AIR card (air AS in reference air board not necessarily cooling) users who are using LC bios on their cards on any of the 19.X series drivers. I am most certainly NOT hitting power limits as I have used modded the soft power play tables for 150% power/current. It ONLY seems to effect air card users on LC bios.
> 
> 
> 
> It is not a heat issue as I only hit around 45C core. And HS sits around 70ish. Which as I understand it is perfectly reasonable for HS.
> 
> 
> 
> If all LC edition cards were fluctuating as badly as this bug is making mine and others fluctuate then they would have never of sold as people don't like their gaming sessions being interrupted by slideshow FPS for a couple of seconds every 35-50 seconds.
> 
> 
> 
> The bug is confirmed. I am now looking for a solution. Hence why I asked had anyone found a solution.


You're absolutely right, I didn't go back to re-read all your troubleshooting; that is why I provided all the info I know about the RX Vega 64, since I've owned one since launch day. 1750MHz was listed on their site as peak boost on launch day, and pretty much every seller advertised the 1750MHz LC card. It's the peak boost, just like the Radeon VII's peak boost is 1801MHz even though it never touches it. They used to have a page dedicated just to the RX Vega 64 LC; now they've combined them. Again, I want to confirm: did you flash the 264W LC bios, not the 220W LC bios?

https://www.amd.com/en/products/graphics/amd-radeon-vii

Base Frequency: 1400 MHz
Boost Frequency: Up to 1750 MHz
Peak Frequency: Up to 1800 MHz

Here is Anandtech referring to the peak boost of the Frontier Edition. Again I'm not sure why AMD went back and removed the peak boost advertising from all the RX Vega cards.

https://www.anandtech.com/show/1158...hes-air-cooled-for-999-liquid-cooled-for-1499

Here is Sapphire advertising 1750MHz as the peak frequency: https://sapphirenation.net/sapphire-radeon-vega/



> | | RX Vega 64 | RX Vega 64 Liquid |
> | --- | --- | --- |
> | Base Clock | 1247 MHz | 1406 MHz |
> | Boost Clock | 1546 MHz | 1677 MHz |
> | Max Clock (DPM7) | 1630 MHz | 1750 MHz |


I just want to point out that I never had success with the PP power mods, and this is coming from the person with the highest RX Vega 64 Fire Strike GPU score, which I got after bugging my RX Vega 64 with the 8GB Frontier Edition bios when flashing back to the RX Vega 64 LC bios. I can get it to do this every single time, and it gets stuck at DPM7.

https://www.3dmark.com/fs/18270986 - 1827/1140

Here's me gaming at 1800Mhz sustained: 




Edit: Here are my Gigabyte RX Vega 64 LC bios


----------



## Alastair

WannaBeOCer said:


> You're absolutely right I didn't go back to re-read all your troubleshooting. That is why I provided you all the info I know about the RX Vega 64 since I owned it since launch day. 1750Mhz was listed on their site as peak boost on launch day and pretty much every seller advertised the 1750Mhz LC card. It's the peak boost just like the Radeon VII's peak boost is 1801Mhz even though it never touches it. They used to have a page dedicated just for the RX Vega 64 LC now they combined them. Again I want to confirm you flashed the 264w LC bios not the 220w LC bios?
> 
> https://www.amd.com/en/products/graphics/amd-radeon-vii
> 
> Base Frequency: 1400 MHz
> Boost Frequency: Up to 1750 MHz
> Peak Frequency: Up to 1800 MHz
> 
> Here is Anandtech referring to the peak boost of the Frontier Edition. Again I'm not sure why AMD went back and removed the peak boost advertising from all the RX Vega cards.
> 
> https://www.anandtech.com/show/1158...hes-air-cooled-for-999-liquid-cooled-for-1499
> 
> Here is Sapphire advertising 1750Mhz is the peak frequency: https://sapphirenation.net/sapphire-radeon-vega/
> 
> 
> 
> I just want to also point out I never had success with the PP power mods and this is coming from the person with the highest RX Vega 64 Fire Strike GPU score which I got after bugging my RX Vega 64 with the 8GB Frontier Edition bios when flashing back to the RX Vega 64 LC bios. I can get it to do this every single time and it gets stuck at DPM7.
> 
> https://www.3dmark.com/fs/18270986 - 1827/1140
> 
> Here's me gaming at 1800Mhz sustained: https://www.youtube.com/watch?v=GXDced_nNPw&t=301s
> 
> Edit: Here are my Gigabyte RX Vega 64 LC bios


It's very interesting, the boost/base clock thing.

As for the LC bios, I am not 100% sure which power version I have. I ASSUME it's the 264W version, because in between the massive fluctuations I do see it trying to maintain ~265 watts. But even if I were on the normal 220W one, 150% power would still be 500 watts or more?
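Quick arithmetic on those two targets, assuming the modded headroom is applied as a percentage on top of the stock target:

```python
# Stock LC power targets from the thread, with powerplay-limit headroom.
def target_w(stock_w, extra_pct):
    """Effective power target when the limit adds extra_pct percent on top."""
    return stock_w * (1 + extra_pct / 100)

assert target_w(220, 150) == 550.0  # power-saving LC bios with +150%
assert target_w(264, 150) == 660.0  # normal LC bios with +150%
assert target_w(264, 50) == 396.0   # the usual +50% slider
```

Either way, the modded headroom sits well above anything the card should be drawing, so a power cap shouldn't explain the throttling.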

From what I can tell, it is just standard cards running the LC bios on 19.X drivers.

Some people seem to reckon it's a fan control issue. Others seem to think it's a sensor probe that drops off; the card freaks out and throttles, and then the sensor comes back.

Here are a few posts about it. 
https://community.amd.com/thread/237233
https://www.google.com/url?sa=t&sou...BhAB&usg=AOvVaw1a5JPxyMRB-Uus6T8HqNNO&ampcf=1
https://forums.overclockers.co.uk/threads/the-amd-driver-thread.18643923/page-853


At this point I don't know what to do. I'm quite disappointed. I was looking forward to chasing 1800MHz with my loop. But I'm limited by the stock air bios.

At this point I am considering trading my standard card for an LC card and paying the difference.


----------



## 113802

Alastair said:


> It's very interesting the boost base clock thing.
> 
> As for the LC BIOS. I am not 100% sure which power version I have. I ASSUME its the 260 watt version. Because in between the massive fluctuations I do see it trying to maintain 265 watts. So I think it's the 265. But I mean even if I was on the normal 220 one
> 150% power would still be 500 watts or more?
> 
> From what I can tell it is just standard cards running LC bios running 19.X drivers.
> 
> Some people seem to recon its a fan control issue. Others seem to think it's a sensor probe that is dropping off and then the card freaks out and throttles and then the sensor comes back.
> 
> Here are a few posts about it.
> https://community.amd.com/thread/237233
> https://www.google.com/url?sa=t&sou...gQIBhAB&usg=AOvVaw1a5JPxyMRB-Uus6T8HqNNO&cf=1
> https://forums.overclockers.co.uk/threads/the-amd-driver-thread.18643923/page-853
> 
> 
> At this point I don't know what to do. I'm quite disappointed. I was looking forward to chasing 1800MHz with my loop. But I'm limited by the stock air bios.
> 
> At this point I am considering trading my standard card for an LC card and paying the difference.


I see, very odd. Not sure if you already tried, but I suggest flashing every single 264w LC bios and testing them. The one I provided in my link was the Gigabyte LC launch bios, 8709.

https://hwbot.org/submission/407152...___1080p_xtreme_radeon_rx_vega_64_5503_points


----------



## Alastair

WannaBeOCer said:


> I see, very odd. Not sure if you already tried but I suggest flashing every single 264w LC bios and testing them. The one I provided in my link was the Gigabyte LC launch bios 8709.
> 
> https://hwbot.org/submission/407152...___1080p_xtreme_radeon_rx_vega_64_5503_points


Can you send me the BIOS? Looking at the VGA BIOS database I can't tell which BIOS is which.


----------



## 113802

Alastair said:


> Can you send me the BIOS? Looking at the VGA BIOS database I can't tell which BIOS is which.


I uploaded both the LC 264w and 220w bios in post 7871.

It's bios version 016.001.001.000.008709

https://www.techpowerup.com/vgabios/194720/sapphire-rxvega64-8176-170719


----------



## Alastair

WannaBeOCer said:


> I uploaded it on post 7871 both the LC 264w and 220w bios.
> 
> It's bios version 016.001.001.000.008709
> 
> https://www.techpowerup.com/vgabios/194720/sapphire-rxvega64-8176-170719


OK, found it. The LC BIOS I have is 8774. It is a 265W bios, and the HS limit is 105C, so I can't see it being a limit thing. I am installing 19.1.2, which was the last known good driver, and I will get back.



https://www.techpowerup.com/vgabios/195143/sapphire-rxvega64-8176-170811

That's the BIOS I am using.


----------



## 113802

Alastair said:


> OK found it. The LC Bios I have is 8774. It is a 265W bios. And the HS limit is 105C. So I cant see it being a limit thing. I am installing 19.1.2 which was the last known good driver and I will get back.
> 
> 
> 
> https://www.techpowerup.com/vgabios/195143/sapphire-rxvega64-8176-170811
> 
> Thats the BIOS I am using


I still suggest flashing the older bios, to see if whatever check they added relies on the new bios. August was around the time people first became able to flash the bios.


----------



## Alastair

Well, it looks like my Vega is a dud for the most part. I managed to get the LC bios working smoothly (no wonky FPS and clocks) using the 19.1.2 driver and the BIOS version you posted @WannaBeOCer.

But it hasn't helped much, if at all. At stock settings the card will insta-crash. I won't bother with soft powerplay tables, as the settings I am at won't pull more than 350 watts.

I set 
DPM 6: 1652MHz @1250mv
DPM 7: 1662MHz @1250mv

I applied more volts to DPM6 because my card didn't want to do ~1650 at 1200mv, let alone less.

Anything more than that on DPM 7 and it falls on its face. That gives me around 1630MHz in Heaven. I don't know if I am missing something, but it seems this card just won't OC.

I've just realised I have an extra 50mv on hand (1250mv in wattman gives me 1200 under load), and I am BARELY able to better my clocks. Surely I am doing something wrong?
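The set-vs-delivered gap described above can be sketched as a fixed offset (the ~50 mV figure is just what was observed here; real droop varies with load, card, and VRM):

```python
OBSERVED_OFFSET_MV = 50  # from "1250mv in wattman gives me 1200"

def load_voltage_mv(set_mv: int, offset_mv: int = OBSERVED_OFFSET_MV) -> int:
    """Approximate voltage under load for a given WattMan set-point,
    modeling the offset/droop as a constant (a simplification)."""
    return set_mv - offset_mv

print(load_voltage_mv(1250))  # 1200
print(load_voltage_mv(1150))  # 1100
```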


----------



## 113802

Alastair said:


> Well it looks like my Vega is a dud for the most part. I managed to get LC bios working smoothly (no wonky FPS and clocks) using 19.1.2 driver and using the BIOS version you posted @WannaBeOCer.
> 
> But it hasn't helped much if any at all. At the stock settings. The card will insta crash. I won't bother with soft power play tables as the setting I am at won't put more than 350 watts out.
> 
> I set
> DPM 6: 1652MHz @1250mv
> DPM 7: 1662MHz @1250mv
> 
> I applied more volts to DPM6 as 1650ish my card didn't want to do it at 1200mv let alone
> 
> Anything more than that on DPM 7 and it falls on its face. That gives me around 1630MHz in Heaven. I don't know if I am missing something. But it seems this card just won't OC.


I'm not shocked; AMD binned their Vega 64 chips just like the 9900KS, and the best ones went to the LC cards, which is why I bought one. The reason I wanted you to try the 8709 bios was to see if the frequency still fluctuates with newer drivers.


----------



## Alastair

WannaBeOCer said:


> I'm not shocked, AMD binned their Vega 64 cards just like the 9900KS and those went to the LC cards which was why I bought one. The reason I wanted you to try the 8709 bios was to see if the frequency still fluctuates with newer drivers.


I haven't tried the newer drivers; will try later. But at this point the LC bios doesn't matter for me anyway. I'm settling for an average gaming clock of around 1600MHz @ 1.1V, which in wattman terms is 1652MHz @ 1150mv. At least I get the HBM voltage out of it, so I just set 1100MHz out of the box and it worked. Which is nice.



I really hoped that I could get at least 1700 out of my card. Maybe I am missing something. This new method of boost clocks and base clocks really confuses me. Coming from Fiji and older cards, you just asked for a clock and you either got it or you didn't. None of this in-between boost and base rubbish.



At this point I am contemplating swapping this for an LC. But the guy who wants to do the swap wants like R2000 (+-130 USD) as the difference. Going to have to talk him down.


----------



## Minotaurtoo

According to AMD's own software, Gigabyte has set the boost clock on this one to 1632MHz... funny bit is, with only a slight undervolt along with +35% power limit, it'll hit 1670 peaks and hold over 1600MHz in most games I play that actually demand the performance... and in things like folding at 100% usage it holds around the 1632 mark, occasionally getting over 1700. The problem there is, it's not stable for long past 1700... I honestly think the AMD defaults are more like suggestions even on their own cards... seems like the boost tech overrides them and does whatever it needs to within the specified limits. If it weren't for this card's flaky behavior with the random (once or twice a week) crashes, it'd be pretty good.


----------



## Falkentyne

Alastair said:


> I havent tried the newer drivers. Will try later. But at this point the LC bios doesnt matter for me anyway. Im settling for an average gaming clock of around 1600MHz @ 1.1V. Which in wattman terms is 1652MHz @ 1150mv. At least I get the HBM voltage out of it. So I just set 1100MHz out the box and it worked. Which is nice.
> 
> 
> 
> I really hoped that I could get at least 1700 out of my card. Maybe I am missing something. This new method of boost clocks and base clocks really confuses me. Coming from Fiji and older cards, you just asked for a clock and you either got it or you didn't. Non of this in between boost and base rubbish.
> 
> 
> 
> At this point I am contemplating swapping this for an LC. But the guy who wants to do the swap wants like R2000 (+-130USD) as ther difference. Going to have to talk him down.


When did you get your Vega 64?
What HBM speed were you running when you tried >1663 mhz on the core?


----------



## Alastair

Falkentyne said:


> When did you get your Vega 64?
> What HBM speed were you running when you tried >1663 mhz on the core?


I got my V64 in March. It was a used card, so I don't know how old it is. Maybe it's an early card running on immature silicon?

I left HBM at default for OC tests so that it did not cause instability.


----------



## Falkentyne

Alastair said:


> I got my v64 march. It was a used card. I don't know how old it is. Maybe its an early card running on immature silicone?
> 
> I left HBM at default for OC tests so that it did not cause instability.


That's a pretty bad sample then.
My card can run at 1700 mhz core/1050 HBM with a -23mv undervolt on P7, or 1650 mhz core/1100 mhz HBM with a 100-150 mv undervolt on P7 (huh?). A -150mv undervolt on P7 causes a greater clock speed reduction as temps rise than a -100mv undervolt, even though I'm not power limited anywhere (softpowerplay editor)! No crashes though.


----------



## miklkit

My Nitro is set to 1630 stock but never touches it. What it actually runs at depends on the game; it might hit 1690 but average 580. :kookoo: And yes, fps is bad in that game.



My settings in Trixx.


----------



## VicsPC

miklkit said:


> My Nitro is set to 1630 stock but never touches it. What it actually runs at depends on the games I play where it might hit 1690 but average 580. :kookoo: And yes fps is bad in that game.
> 
> 
> 
> My settings in Trixx.


Yea so is mine, I've seen a few games hit that, particularly ETS2 in DX9 mode or packed areas.


----------



## PontiacGTX

Biggest problem on Vega is how the power state work the P7 almost never stays 100% on load I think that might affect performance slighly I saw a tool called clockblocker but It didnt work for some reason in metro exodus maybe I didnt set it correctly basically what it does it creates an OpenCl context/load (maybe loop it) to force higher clock speed just wonder if the drivers or Vega architecture detect that kind of small load and avoid altogether the P7 to save power


----------



## LicSqualo

PontiacGTX said:


> Biggest problem on Vega is how the power state work the P7 almost never stays 100% on load I think that might affect performance slighly I saw a tool called clockblocker but It didnt work for some reason in metro exodus maybe I didnt set it correctly basically what it does it creates an OpenCl context/load (maybe loop it) to force higher clock speed just wonder if the drivers or Vega architecture detect that kind of small load and avoid altogether the P7 to save power


Better:

Biggest problem on Vega is how the power state work
the P7 almost never stays 100% on load 
I think that might affect performance slighly 
I saw a tool called clockblocker 
but It didnt work for some reason in metro exodus 
maybe I didnt set it correctly 
basically what it does it creates an OpenCl context/load (maybe loop it) to force higher clock speed 
just wonder if the drivers or Vega architecture detect that kind of small load and avoid altogether the P7 to save power


----------



## Alastair

Falkentyne said:


> That's a pretty bad sample then
> My card can run at 1700 mhz core/1050 HBM with a -23mv undervolt on P7, or 1650 mhz core/ 1100 mhz HBM, with a 100-150 mhz undervolt on P7 (huh?). (-150 mhz undervolt on P7 causes a greater clock speed reduction as temps rise, than -100mv undervolt, even though I'm not power limited anywhere (softpowerplay editor)! No crashes though.


Yeah, it seems I got a real dud.

I can hold ~1610MHz/1100MHz at 1125mv (set to 1660MHz, -75mv) in Afterburner (on the LC bios).

The LC bios doesn't seem to give me much beyond what I would manage with the standard bios. I can't even get it to hold 1650MHz without crashing, and 1650 at 1200mv is what the default 1260mv LC bios gives me. I can't even get a measly 100MHz over the stock air clocks using the LC bios.

Surely I am doing something wrong? My card can't be THAT bad, can it?


----------



## Falkentyne

Alastair said:


> yeah it seems I got a real dud.
> 
> I can hold ~1610MHz/1100mhz at 1125mv (set to 1660MHz -75mv) in afterburner (on LC bios).
> 
> LC bios doesn't seem to give me much beyond what I would manage with standard bios. I can't even get it to hold 1650MHz without crashing. 1650 at 1200mv (which is what the default 1260mv LC bios gives me) I can't even get a measly 100MHz over the stock air clocks using the LC bios.
> 
> Surely I am doing something wrong? My card can't be THAT bad, can it?


Are you talking about sustained clocks? Because I'm not.

I'm talking about the clocks you set in MSI Afterburner (or wattman, but I'm not using wattman except to set the "floor" voltage, the HBM voltage slider that everyone says is not HBM voltage but the core floor).
My card doesn't hold 1650 mhz sustained. It barely holds 1650 mhz at 1700 core set/1050 HBM set and -23mv P7.

At 1650 mhz core set/1100 mhz HBM set and -100mv P7, it holds 1613-1607.
For some reason, at 1650/1100 set and -150mv P7, it only holds 1580 mhz. I have no idea why.


----------



## Alastair

Falkentyne said:


> Are you talking about sustained clocks? Because I'm not.
> 
> I'm talking about the clocks you set in MSI Afterburner (or wattman but I'm not using wattman except to set the "floor" voltage (HBM voltage which everyone says is not HBM Voltage but core floor).
> My card doesn't hold 1650 mhz. It barely holds 1650 mhz at 1700 core set/1050 HBM set and -23mv P7.
> 
> At 1650 mhz core set/1100 mhz hbm set and -100mv P7, it holds 1613-1607.
> For some reason, at 1650/1100 set and -150mv p7, it only holds 1580 mhz. I have no idea why.


I am holding around ~1610MHz / 1100MHz at 1125mv SUSTAINED. This is SET to 1660MHz / -75mv in Afterburner. I have set the floor voltage under HBM to 1050mv.


Leaving the voltage at the stock 1250mv setting the LC bios gives you results in 1200mv under load. AT THAT setting (so 0mv in afterburner) I set 1680MHz in afterburner to try to get 1650ish SUSTAINED and it locks up, crashes, and the driver resets.


----------



## Falkentyne

Alastair said:


> I am holding around ~1610MHz / 1100MHz at 1125mv SUSTAINED. This is SET to 1660MHz / -75mv in Afterburner. I have set floor voltage under HBM to 1050mv
> 
> 
> Leaving the voltage at the stock 1250mv setting the LC bios gives you, gives me 1200mv underload. AT THAT setting (so 0mv in afterburner) I set 1680MHz in afterburner to try get 1650ish SUSTAINED and it locks up. Crashes. And the driver resets.


Ok, that makes more sense. Thank you.
That doesn't mean your chip is bad; it's maybe slightly worse than mine. I'm on the air bios with those -mv offsets.

I'm curious.
What happens if you flash the air bios and set these settings:
1) 1650 / 1100 / -100mv p7
2) 1700 / 1050 / 0mv P7 ?


----------



## Alastair

My card is artifacting.


----------



## Alastair

Falkentyne said:


> Ok that makes more sense. Thank you.
> That doesn't mean your chip is bad. It's maybe slightly worse than mine. I'm on air bios with those -mv offsets.
> 
> I'm curious.
> What happens if you flash the air bios and set these settings:
> 1) 1650 / 1100 / -100mv p7
> 2) 1700 / 1050 / 0mv P7 ?


1700 at 0mv: insta-crash.
1650 at -100mv: insta-crash as well.

And I am getting artifacting on siege. 

I think I am going to RMA my card.


----------



## Alastair

NVM, the artifacting is only in Siege using older drivers. Will update to newer drivers.


----------



## VicsPC

Alastair said:


> NVM the artifact is only siege using older drivers. will update to newer drivers.


Yes, that was a MASSIVE Ubisoft screw-up. They had an update that broke Siege for Vega users, and then passed the problem to AMD to fix. I had to wait a couple of weeks or so for it to even be playable. Posted a few videos on YouTube, shameful. Glad AMD got a fix for it though.


----------



## miklkit

Yeah, how it performs depends on the game. I only have one game that pushes it hard; in all the others it throttles down, from a little to hardly running at all. One game runs the exact opposite of how you would expect: it throttles down under a heavy load, which causes frame rates in the teens, and throttles up under a light load, which causes frame rates over 100fps.



In one game I recently bought, I'm finding that adding mods to put a load on it actually gives better frame rates, because the mods make it work properly and power up to accept the load. I blame AMD's power-saving program in the bios for this.


----------



## PontiacGTX

LicSqualo said:


> Better:
> 
> Biggest problem on Vega is how the power state work
> the P7 almost never stays 100% on load
> I think that might affect performance slighly
> I saw a tool called clockblocker
> but It didnt work for some reason in metro exodus
> maybe I didnt set it correctly
> basically what it does it creates an OpenCl context/load (maybe loop it) to force higher clock speed
> just wonder if the drivers or Vega architecture detect that kind of small load and avoid altogether the P7 to save power


Commas could also work, but I can't be bothered for an opinion post on a forum lol.

Still, I would like to get something that works to force all CUs to be locked to the P7 state. I could try to write a simple OpenCL kernel and see if it works as expected.



Alastair said:


> yeah it seems I got a real dud.
> 
> I can hold ~1610MHz/1100mhz at 1125mv (set to 1660MHz -75mv) in afterburner (on LC bios).
> 
> LC bios doesn't seem to give me much beyond what I would manage with standard bios. I can't even get it to hold 1650MHz without crashing. 1650 at 1200mv (which is what the default 1260mv LC bios gives me) I can't even get a measly 100MHz over the stock air clocks using the LC bios.
> 
> Surely I am doing something wrong? My card can't be THAT bad, can it?


Try This https://www.guru3d.com/files-details/clockblocker-download.html



Alastair said:


> NVM the artifact is only siege using older drivers. will update to newer drivers.


Artifacts in Siege with an overclock are a bad sign; increase voltage or reduce clock speed... Siege rarely produces artifacts with an OC.


----------



## LicSqualo

Honestly, I found it interesting as a post, and I certainly didn't want to disturb you. 
I also thought I was kind enough to put the pauses in the speech back together... and yes, you're right, even the commas would have improved the text...


----------



## VicsPC

PontiacGTX said:


> Commas also could work but I cant be bothered for an opinion post on a forum lol.
> 
> Still I would like to get something that works to force all CU to be locked to P7 state, I could try to write a simple opencl kernel and see if it works as expected
> 
> 
> 
> Try This https://www.guru3d.com/files-details/clockblocker-download.html
> 
> 
> 
> artifacts in siege with overclock is a bad signal increase voltage or reduce clock speed... Siege rarely produce artifacts with OC


Or as he said since it was all over the internet, old drivers lol.


----------



## Alastair

So @WannaBeOCer I tried the oldest LC bios, I think the one you supplied. It was fine under 19.2.1, broken under 19.10.1: I get the extreme throttling again and I had to revert to the air BIOS. At this point I am very annoyed with AMD, as this is the second time a new flagship has come around and a driver update has basically broken overclocking on the ex-flagship. I feel at this point that AMD are pulling an nVidia and purposefully hurting the performance of their old flagships. When Vega came around they broke HBM overclocking on Fiji. Now Radeon VII has come around, and they've broken overclocking on V56s and V64s that make full use of upgraded cooling via the LC BIOS. And it's very convenient for them to hide behind the veil of "not officially supported". They did that when they broke HBM overclocking on Fiji, and now it will probably be the same with Vega owners complaining that the LC bios isn't working on the standard cards.



It seems that neither Red nor Green will be making any sales off of me. I'll just stick to buying off the second-hand market.


----------



## Wuest3nFuchs

Alastair said:


> So @WannaBeOCer I tried the oldest LC bios. I think that was the one you supplied. And It was fine under 19.2.1. Broken under 19.10.1. I get the extreme throttling again and I had to revert to Air BIOS. At this point I am very annoyed with AMD. As this is the second time a new flagship has come around that a driver update basically broke overclocking on their ex-flagship. I feel at this point that AMD are pulling an nVidia and they are purposefully hurting the performance of their old flagships. Because when Vega came around they broke HBM overclocking on Fiji. And now Radeon VII came around and they broke overclocking on V56's and 64s that are making full use of upgraded cooling by using the LC Bios. And its very convenient for them to hide behind the veil of its not "officialy supported". They did that when they broke HBM overclocking on Fiji. And now it will probably be the same with Vega owners complaining that the LC bios isn't working on the standard cards.
> 
> 
> 
> It seems that neither Red or Green will be making any sales off of me. Ill just stick to buying off the second hand market.


Believe it or not, I was thinking the same when I was on Fiji. Right now I'm on a Vega 56 Pulse, and it annoys me when I read you cannot overclock anymore because of the drivers... did AMD get some crippling programmers from nVidia?




Sent from my SM-G950F using Tapatalk


----------



## Alastair

Wuest3nFuchs said:


> Believe it or not ,but i was thinking the same when i was on fiji,right now im on a vega56 pulse and it nervs me when i read you cannot overclock anymore cause of the drivers...did amd got some crippling programers from nvidia ?
> 
> 
> 
> 
> Sent from my SM-G950F using Tapatalk


I don't know. But I feel not enough people are making noise about it. I have been a massive AMD fan since HD5XXX, but if nVidia gets hung out to dry for single-digit % losses in FPS with a driver update, then AMD needs to be brought to task about these things too.


----------



## Wuest3nFuchs

Alastair said:


> I don't know. But I feel not enough people are making noise about it. I have been a massive AMD fan since HD5XXX, but if nVidia gets hung out to dry for single-digit % losses in FPS with a driver update, then AMD needs to be brought to task about these things too.


...the reason why I moved away from nVidia... crippled drivers for older hardware... let's write them an open love letter... frustrating times

Sent from my SM-G950F using Tapatalk


----------



## PontiacGTX

Alastair said:


> So @WannaBeOCer I tried the oldest LC bios. I think that was the one you supplied. And It was fine under 19.2.1. Broken under 19.10.1. I get the extreme throttling again and I had to revert to Air BIOS. At this point I am very annoyed with AMD. As this is the second time a new flagship has come around that a driver update basically broke overclocking on their ex-flagship. I feel at this point that AMD are pulling an nVidia and they are purposefully hurting the performance of their old flagships. Because when Vega came around they broke HBM overclocking on Fiji. And now Radeon VII came around and they broke overclocking on V56's and 64s that are making full use of upgraded cooling by using the LC Bios. And its very convenient for them to hide behind the veil of its not "officialy supported". They did that when they broke HBM overclocking on Fiji. And now it will probably be the same with Vega owners complaining that the LC bios isn't working on the standard cards.
> 
> 
> 
> It seems that neither Red or Green will be making any sales off of me. Ill just stick to buying off the second hand market.


The power draw on the liquid-cooled bios is 350w, no? And you are running a heatsink which barely handles 250w?


----------



## Alastair

PontiacGTX said:


> power draw on the Liquid cooled bios is 350w no? And you are running a heatsink which barely handles 250w?


 MY GOD PEOPLE FOR THE 50TH FREAKING TIME. Dammit people READ. 



FULL


COVER 



WATER


BLOCK. 



ITS IN THE DAMN SIG RIG


----------



## Alastair

PontiacGTX said:


> power draw on the Liquid cooled bios is 350w no? And you are running a heatsink which barely handles 250w?


 You even quoted me. Where I said. 



LC BIOS = FINE ON DRIVER 19.1.2
LC BIOS = BROKEN ON 19.10.1 (and in this case its all drivers SINCE 19.1.2)


----------



## 113802

Wuest3nFuchs said:


> ...the reason why i moved away from nvidia...crippled drivers for older hardware...lets write them a open loveletter...frustrating Times
> 
> Sent from my SM-G950F using Tapatalk


nVidia never crippled drivers for older hardware. Where did this myth come from? AMD fanboys?

Even Fermi cards support DX12, and they got support for nVidia Freestyle when it launched in driver 390.65, while AMD's Radeon Image Sharpening required users to complain for support.


----------



## miklkit

So it is the liquid-cooled bios that is gimped? I have been running the same OC for 6-8 months now with no overclocking issues, on the 19.10.2 drivers now. The only issues I have are black-screen flickers on startup and Trixx no longer starting at boot. Heh. I forgot to start Trixx the other day and got concerned about games crashing. This V64 at stock sucks.


----------



## Alastair

miklkit said:


> So it is the liquid cooled bios that is gimped? I have been running the same OC for 6-8 months now with no issues as to overclocking. On 19.10.2 drivers now. The only issue I have is black screen flickers on startup and Trixx no longer starting at bootup. Heh. I forgot to start up Trixx the other day and got concerned about games crashing. This V64 at stock sucks.


Yup. The liquid bios flashed onto a reference air card plus any driver after 19.1.2 causes HUGE throttling. Doesn't matter what voltage or power limits you use; it just turns into a stutter fest. And it isn't limited to a single bios either, I have tried 3 different revisions of the LC bios.


----------



## Minotaurtoo

Am I the only one in here who runs Folding@Home on cold days just for the heat?... I mean, I'm sitting here with the computer on anyway. The only other heat in this room is an electric heater... and I don't think it works as well for heat as Vega does lol


edit: just for giggles I checked the temp in here vs the living room where the actual heater is (propane wall heater)... it's 68 in there and 73 in here lol... 5 deg warmer in the room on the windward side of the house... it was 64 in here when I turned it on a couple hours ago.


----------



## Alastair

Minotaurtoo said:


> Am I the only one in here who runs Folding@Home on cold days just for the heat?... I mean, I'm sitting here with the computer on anyway. The only other heat in this room is an electric heater... and I don't think it works as well for heat as Vega does lol
> 
> 
> edit: just for giggles I checked the temp in here vs temp in the living room where the actual heater is (propane wall heater)... it's 68 in there and it's 73 in here lol... 5 deg warmer in the room on the windward side of the house... was 64 in here when I turned it on a couple hours ago.


It's turning over to summer here in South Africa. This morning at about 9am, when I turned GHOST on, the ambient room temp was 28C.

Internal case temp gets to 36C during gaming. And since the 360mm draws from inside the case, even my loop can get a bit warm.


----------



## miklkit

I used to get that with the FX + Fury rig. Now with Zen + V64 I need to turn on the old plasma tv to get more heat into this room. It's the least insulated room in the house so gets hotter in the summer and colder in the winter. There can be an 8C difference between the front and the rear of this house.


I'm confused. This is from last July on the only game I have that pushes the V64 hard. Is it hitting 207w or 314w?


----------



## Ne01 OnnA

Always look at the averages.
They matter most when gaming.
~100-200W


----------



## Alastair

miklkit said:


> I used to get that with the FX + Fury rig. Now with Zen + V64 I need to turn on the old plasma tv to get more heat into this room. It's the least insulated room in the house so gets hotter in the summer and colder in the winter. There can be an 8C difference between the front and the rear of this house.
> 
> 
> I'm confused. This is from last July on the only game I have that pushes the V64 hard. Is it hitting 207w or 314w?


Others can correct me if I am wrong, but I think core power is just the power consumed by the core, while the higher one might be total board power?
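If that guess is right, the gap between the two readings can be modeled roughly like this (the overhead and efficiency numbers below are illustrative assumptions, not measured values):

```python
def board_power_w(core_w: float, hbm_w: float = 25.0,
                  misc_w: float = 15.0, vrm_eff: float = 0.85) -> float:
    """Rough total-board-power estimate from the core-power sensor:
    core + HBM pushed through the VRMs (with conversion losses),
    plus fans/aux rails. All constants are illustrative assumptions."""
    return (core_w + hbm_w) / vrm_eff + misc_w

print(round(board_power_w(207)))  # ~288 W from a 207 W core reading
```

Under these assumptions a 207 W core reading lands in the same ballpark as the ~300 W board figure, so the two sensors are not necessarily contradicting each other.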


----------



## Minotaurtoo

Alastair said:


> Others can correct me if I am wrong. But I think core power is just the power consumed by the core. While the higher one might be total board power?


My ASIC power rarely goes over 250w, but I've found that it seems to track the power draw increase I see at the wall more closely than the core power does... so I'm going to assume you are right in your assessment... or HWiNFO64 is wrong on the core power. Either way, that's a lot of power for no more clocks than I saw... must have been a stressful load you were running there. Here is what power/usage/clocks look like for me while folding.


----------



## miklkit

If I am reading Trixx correctly, it is the higher numbers that are the GPU power draw. It was only a spike in a poorly optimized game with major problems, but the numbers conform to what I've seen in HWiNFO64.


----------



## Minotaurtoo

An interesting spike for sure... I will have to watch mine to see if I get spikes like that, but I don't recall seeing any yet. I do know HWiNFO can sometimes read very off; here is an example from today lol, 1918W... must be a super modded bios... it's absurdly off. And I noticed that with the latest drivers my dang card is auto-overclocking itself past 1700 again even when set to 1652... ugh... it will randomly fail if I let it go over 1700MHz, for whatever reason, no matter the voltage... 1700 = crash most of the time. Maybe with the new drivers it won't. Will have to see.


----------



## miklkit

Yes, HWiNFO64 does have its problems, so I only use it rarely and for short periods. For instance, it will somehow lock up the VRMs on this motherboard if left running very long. Then fans come and go at random. And yes, one time I saw huge numbers like you show, and it was running very poorly that day. But what else is there?


----------



## Rabit

I bought a used Sapphire Pulse Vega 56 on eBay. I hear that the corners of the HBM modules crack easily when remounting the cooler; does anyone have advice?
My initial idea is to place some thermal pads near the HBM corners to help distribute the pressure when mounting the cooler back.
Any advice for undervolting?


----------



## Alastair

Rabit said:


> I bought used Vega 56 o ebay Sapphire Pulse, I hear that hbm modules corners are easily to crack when re mounting cooler, I wonder if anyone have advices ?
> My initial idea is to use so thermal pads and place them near to HBM corners to help distribute pressure when mountain back cooller back.
> Any advice for undervolt ?


Between my Furys and now my Vegas I have owned 4 HBM cards, with a 5th on its way, and I have never managed to crack an HBM module. Provided you are careful and tighten the screws down in a star pattern, a bit of a turn each time on each screw, you should be OK.


----------



## Rabit

Alastair said:


> Between my Furys and now my Vega's I have owned 4 HBM cards. And a 5th on its way. I have never managed to crack an HBM module. I think provided you are careful. Tighten the screws down in the star pattern going a bit of a turn each time on each screw you should be ok.


I revived my Vega 56 today. Visual inspection showed only an insignificant amount of dust on the inside of the fan blades. Temperatures: max GPU 60C / junction 65C at 1592MHz / 950mV. For now I will leave it like that.

Upgrade from RX 580 https://www.3dmark.com/spy/9289510 to RX Vega 56 https://www.3dmark.com/3dm/41169116 (undervolted, GPU-Z power draw 150W). PC powered by a mighty Seasonic 450 Watt FOCUS Gold.

Power draw measured with a Voltcraft LOGGER 4000 during the 3DMark Time Spy test:

Test _______ Stock _ 950mV
Demo _______ 378W __ 320W
Graphic 1 __ 346W __ 290W
Graphic 2 __ 349W __ 285W
CPU Test ___ 176W __ 159W
Idle _______ 70W
GPU-Z ______ 182W __ 145W
Temp J _____ 68C ___ 65C

I also encountered power spikes at the start of tests: 678W before each test at stock, and only once (620W) at 950mV.
Additional info: voltage at the wall socket during the test was 237V.
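For what it's worth, the percentage the undervolt saves in each sub-test works out as follows (wall-meter numbers taken from the table above):

```python
# Wall-meter readings from the table above (watts)
stock = {"Demo": 378, "Graphic 1": 346, "Graphic 2": 349, "CPU Test": 176}
undervolted = {"Demo": 320, "Graphic 1": 290, "Graphic 2": 285, "CPU Test": 159}

for test, w in stock.items():
    saved_pct = (w - undervolted[test]) / w * 100
    print(f"{test}: {saved_pct:.1f}% less power at 950 mV")
```

Roughly a 15-18% reduction in the GPU-bound tests for no clock change, which is why the 450 W PSU copes.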


----------



## PontiacGTX

Rabit said:


> I bought used Vega 56 o ebay Sapphire Pulse, I hear that hbm modules corners are easily to crack when re mounting cooler, I wonder if anyone have advices ?
> My initial idea is to use so thermal pads and place them near to HBM corners to help distribute pressure when mountain back cooller back.
> Any advice for undervolt ?


Molded dies shouldn't have such problems. Is it molded? If so I wouldn't worry; the only risk is that if you drop some electrically conductive TIM into the gap between the die and the HBM stack, it might damage the card.

Also, are you OK with a 3770 in games? I was thinking about upgrading my SB CPU, but for now it seems to suffice for some games.


----------



## Rabit

PontiacGTX said:


> Molded dies shouldn't have such problems. Is it molded? If so, I wouldn't worry; the only thing is that if you drop some electrically conductive TIM into the gap between the die and the HBM stack, it might damage the card.
> 
> Also, are you OK with a 3770 in games? I was thinking about upgrading a Sandy Bridge CPU, but for now it seems to suffice for some games.


It's OCed (https://valid.x86.fr/1maw2b), and for games it's fine, but I'm already waiting for the Ryzen 5 3600.


----------



## Alastair

Forgive me lord for I have sinned.


----------



## Minotaurtoo

Alastair said:


> Forgive me lord for I have sinned.


as penance you have to mail that to me... and in return I shall mail you an old trusty 6850 to complete your punishment.


----------



## Alastair

Minotaurtoo said:


> as penance you have to mail that to me... and in return I shall mail you an old trusty 6850 to complete your punishment.


I think it will cost you more than the value of the 6850 to ship it halfway around the world to me. I still have my two 6850s living their best retired life in my mom's PC. :drunken:
Now I can sell my poor sample of a V64, install this LC edition, and pile on the clocks! Now to just wait for my 3800X and mobo, and GHOST's transformation will be complete!


----------



## Ne01 OnnA

Alastair said:


> Forgive me lord for I have sinned.


I have the same one.
It's a great GPU.

Try 1668MHz at 1.081V (or 1.093V), HBM2 at 1120MHz / 918mV, +1% power.
Don't forget the memory tweaks:

-> https://github.com/Eliovp/amdmemorytweak/releases/download/0.2.3/x64.zip
Main -> https://github.com/Eliovp/amdmemorytweak

==


----------



## Falkentyne

Ne01 OnnA said:


> I have the same one.
> It's a great GPU.
> 
> Try 1668MHz at 1.081V (or 1.093V), HBM2 at 1120MHz / 918mV, +1% power.
> Don't forget the memory tweaks.
> 
> ==


AMD memory tweak?
What? Where did that program come from? I didn't know that was available.
Just great, time for more black screens... I thought I was done with permanent black screens after the R9 290X, when I was trying to push 1500 RAM without increasing GPU voltage. I gave that up after I (apparently) modded a BIOS incorrectly to set 1500 by default with +25mV; it didn't work even though the same settings worked manually in MSI Afterburner, so I gave up.

Guess I need to mess with my vega 64 now too...


----------



## Ne01 OnnA

Falkentyne said:


> AMD memory tweak?
> What? Where did that program come from? I didn't know that was available.
> Just great, time for more black screens... I thought I was done with permanent black screens after the R9 290X, when I was trying to push 1500 RAM without increasing GPU voltage. I gave that up after I (apparently) modded a BIOS incorrectly to set 1500 by default with +25mV; it didn't work even though the same settings worked manually in MSI Afterburner, so I gave up.
> 
> Guess I need to mess with my vega 64 now too...



Here:
-> https://github.com/Eliovp/amdmemorytweak/releases/download/0.2.3/x64.zip


----------



## Falkentyne

Ne01 OnnA said:


> Here:
> -> https://github.com/Eliovp/amdmemorytweak/releases/download/0.2.3/x64.zip


Thank you.


----------



## XxxfriezaxxX

Good people, I have a question about the temperatures on my custom Gigabyte Vega 56. I have no instability problems, but I don't know if it is healthy to use it this way...
At 1080p it consumes much less and does not reach these excessive temperatures, but playing at 2880x1620 the picture changes. If I keep using it this way, will nothing serious happen in the future, or should I reduce the voltage?
Thank you very much.

https://ibb.co/K0Pb4GB


----------



## Wuest3nFuchs

Is anyone playing Battlefield V with DX12 enabled on Vega?
I only get stuttering when using DX12.

Sent from my SM-G950F using Tapatalk


----------



## diggiddi

Wuest3nFuchs said:


> Is anyone playing Battlefield V with DX12 enabled on Vega?
> I only get stuttering when using DX12.
> 
> Sent from my SM-G950F using Tapatalk


I don't think that is unique to Vega; during the free-weekend trial it kept crashing on my system (290X) in DX12, so I used DX11 exclusively.


----------



## Wuest3nFuchs

diggiddi said:


> I don't think that is unique to Vega; during the free-weekend trial it kept crashing on my system (290X) in DX12, so I used DX11 exclusively.


Yeah, but for example The Division 2 runs better with DX12 on, and DICE haven't been able to fix their DX12 since BF1.

Sent from my SM-G950F using Tapatalk


----------



## psl3

I also have a Sapphire Vega 56 Pulse with Samsung memory, and I have read everything I can find on the Vega 64 BIOS update. Jackolito's short guide looks like the one to use, along with the XFX Vega 64 Double Edition BIOS and fan edit.


However, I am still very confused about where the BIOS switch should be. I have read so much about BIOS 1 or 2, primary or secondary, left or right, but none of it is very clear. I have worked out that on my default setup, if the switch is towards the backplate and outputs, the power limit is around 160W (in Wattman), and if the switch is in the position away from the outputs, the power limit is 180W, so away from the outputs is the performance setting (which I currently use).


For flashing, though, I have heard of some people having issues after flashing the wrong BIOS, or finding it write-protected.


*Please can someone give me a really clear answer for the Sapphire Vega 56 Pulse: to flash the BIOS to the mentioned Vega 64 version, do I need the switch moved towards the outputs, or away from the output connectors?*


Thanks very much.


----------



## MehlstaubtheCat

Hi @all!

I have also downloaded "AMD Memory Tweak" and want to do some experiments with it.
I have a Sapphire Radeon Vega 56 Pulse with Hynix HBM2.

Now i have seen this video:





In this video there are some pre-made RAM-timing profiles for "mining" with these cards.

Where can I find these? A mining forum?

Maybe some advice or a link?

Greetings


----------



## diggiddi

Wuest3nFuchs said:


> yeah but for example The Division2 runs better with dx12 on and dice cant fix their dx12 since bf1 .
> 
> Gesendet von meinem SM-G950F mit Tapatalk


I don't know about other titles, only BFV's DX12 issues


----------



## Wuest3nFuchs

MehlstaubtheCat said:


> Hi @all!
> 
> 
> 
> I have also downloaded "AMD Memory Tweak" and want to do some experiments with it.
> 
> I have a Sapphire Radeon Vega 56 Pulse with Hynix HBM2.
> 
> 
> 
> Now i have seen this video:
> 
> https://www.youtube.com/watch?v=PrkNpl2Y3h4
> 
> 
> 
> In this video there are some pre-made RAM-timing profiles for "mining" with these cards.
> 
> Where can I find these? A mining forum?
> 
> Maybe some advice or a link?
> 
> Greetings


https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/

Sent from my SM-G950F using Tapatalk


----------



## sinnedone

Anyone try this new driver with Vega? Any issues?


----------



## VicsPC

sinnedone said:


> Anyone try this new driver with Vega? Any issues?


No real issues here; RadeonSettings.exe has crashed on me twice since yesterday. I only noticed because there were two Radeon icons on the taskbar, but the screen didn't flicker or anything, so I'm not sure. Seems OK.


----------



## Wuest3nFuchs

First I had the 1604 error, but I got it fixed after an hour, and it works great! I like the look and feel of the 2020 Adrenalin driver.
Games I've tested with the game preset: BFV, GTA V, Insurgency Sandstorm and Chernobylite. Games look and feel a bunch better right now.

BTW, Vega 56 Pulse.

Sent from my SM-G950F using Tapatalk


----------



## King Lycan

xD


----------



## Minotaurtoo

sinnedone said:


> Anyone try this new driver with Vega? Any issues?


I had one insta-crash on boot right after installing... turned it off and back on... it was fine.


----------



## Falkentyne

Darksiders: Genesis was confirmed by the developer to stutter badly with this driver.


----------



## nolive721

King Lycan said:


> xD


Turn off the lights and power the card on to do this beast justice!

It's a beauty; honestly the best-looking GPU I've ever had.


----------



## Worldwin

New drivers are questionable. The new UI has plenty of stuff I don't care about, namely the streaming and social-media features. Also got a SYSTEM_SERVICE_EXCEPTION BSOD.


----------



## miklkit

Using the 19.12.1 drivers with Trixx and nothing else; no Wattman; installed while unplugged from the internet. This gives multiple black screens on boot, and if the V64 is run with Trixx off it crashes in games. OCed it is fine.


----------



## Minotaurtoo

On the new drivers mine clocks higher than it used to with the same profiles I used before... they seem to have fixed my "over 1700MHz bug", since it flew right past 1700 with no crash; I was able to get nearly to 1800 now.


On another topic, my son wanted my Vega, so I'm moving on to a 5700 XT, hoping I'm not taking a step backwards. I didn't really feel like paying nearly the same price for a new Vega as for a new Navi card... well, I did pay a little more. The good news is I have a 30-day "I hate this" return window.


----------



## nolive721

posted this in the Drivers thread

Tried the new drivers on my Vega 64 LC. Very nice UI, but a bit overwhelming.

My only, but serious, concern is that my aggressive OC profile, which was working fine on 19.10.1, now crashes in both benchmarks and gaming. Are other Vega owners seeing the same?
I haven't tried to copy my UV settings yet, so I can't comment on whether that fails as well; hopefully not!

Another thing: with Eyefinity I could center the Windows taskbar on my center screen (I run triples), but I can't find this option in the new drivers, so if somebody could provide guidance here it's much appreciated.

thanks


----------



## PontiacGTX

It was the UV.


----------



## Nighthog

A proud new owner of a MSI VEGA 64 8GB model from today.

Running it alongside my old card until my water block arrives.

There were some glitches and issues getting the RX 480 and Vega 64 to cooperate. It wasn't entirely problem-free, as the system couldn't always decide which card it wanted to use, but after some troubleshooting and persistence it settled down once I swapped the HDMI cable, due to Freesync issues on the new card versus the older RX 480. It didn't like my HDMI 2.1-spec cable for some reason, even though these cards are only 2.0b capable; a regular Premium 2.0-spec cable worked better with the Vega.

I'm not running CF or DX12 multi-GPU. Overall the system gets glitchy with two cards, but it is serviceable; Windows 10 causes the most havoc.
I noticed a nice boost in the troublesome games that the RX 480 couldn't handle too well in 4K.


----------



## Wuest3nFuchs

Hi all! Is it true that the Vega GPU P2 voltage is coupled with the memory P3 voltage?

Mysterious.

And if it's true, I did my OC+UV wrong.

Sent from my SM-G950F using Tapatalk


----------



## Nighthog

Installed my water block on the Vega 64, but I had to retry three times and my hotspot temperature still isn't great. It's better than the first two tries, but it's still reaching 100C+ with only 1100mV on the core.

I didn't expect it to be such an issue to get tamed. The first mount was awful, the second a little better but not usable, and the third has been better but still not what I expected. It's usable now, but it throttles if it reaches such high temperatures, and that is the limit on my OC/stock runs so far. The power limit could also cause throttling, but with it maxed it's not the first limit hit, though it's close: it looks like ~330W ASIC power at the highest, but I hit my temperature limit around 300W / ~1075mV anyway.

I've run Superposition to bench and test temperatures, as 3DMark refuses to run for some reason; it gets stuck on "scanning system".

Anyway, I can reach a ~6650 score in 4K Optimized at best, with ~1560MHz core and 1090MHz HBM (1100MHz HBM crashes).

Is this good or bad? The hotspot runs 50-55C hotter than the normal GPU temperature, which hasn't even reached 50C yet; the highest I've noted is 47C GPU temperature.
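As a rough sanity check on the mount, the edge-to-hotspot spread can be compared against a threshold. A minimal sketch; the 25 C cutoff is illustrative (water-cooled cards later in the thread report roughly 10-20 C spreads), not an AMD specification:

```python
def mount_check(edge_c: float, hotspot_c: float, max_delta_c: float = 25.0) -> str:
    """Flag a suspect block mount from the GPU edge vs hotspot spread.

    max_delta_c is an illustrative threshold, not an AMD spec: a
    well-mounted water block typically shows a small spread, while a
    50 C+ gap suggests poor die contact or uneven mounting pressure.
    """
    delta = hotspot_c - edge_c
    if delta > max_delta_c:
        return f"delta {delta:.0f} C: check mounting pressure / paste spread"
    return f"delta {delta:.0f} C: contact looks OK"

print(mount_check(47, 100))  # the 47 C edge / ~100 C hotspot case described above
```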


----------



## sinnedone

How much radiator do you have?

What water block?

My Vega 64 hotspot usually stays around 60C while the GPU is 50C or less.


----------



## Wuest3nFuchs

Maybe you have to try this method from Igor's Lab:

"The ominous hotspot"

https://translate.googleusercontent...700283&usg=ALkJrhiyaznCua2MtUVOVrZNxdCgCbdm7g



Sent from my SM-G950F using Tapatalk


----------



## Nighthog

1x 360mm and 2x 120mm radiators, 45mm thick.

I used an Alphacool GPX VEGA M01.

I've figured out it's probably a mounting-pressure issue coupled with the amount of paste used: no proper contact with the GPU die. After each disassembly I noticed the paste was still quite thick overall, but the screws can't really be turned any further; I think I already started stripping the threads on one on the side away from the die.

I think I need to add more paste and hope for the best.

The HBM doesn't really go above 40C and the VRMs aren't even reaching 60C yet; only the GPU hotspot causes issues with throttling when trying a mild OC.
The last attempt at least made it stock-stable (80-92C hotspot), so I can actually do something compared to before.

I was just adding more and more paste with each try.

Using MX-4.

EDIT: The 4th retry didn't improve things significantly, only 3-4C better at most at the top edge.

A picture to illustrate the contact issue: only one side of the HBM makes proper contact with the water-block surface. This has presumably been the case with each try, so the block surface or the spacers aren't optimal.


----------



## Ipak

I had similar contact issues with my EK Fluid Gaming water block. I filed down the standoffs near the GPU and improved my hotspot temps by over 20°C.


----------



## VicsPC

Ipak said:


> I had similar contact issues with my EK Fluid Gaming water block. I filed down the standoffs near the GPU and improved my hotspot temps by over 20°C.


I have no issues with mine. Then again, I always tighten the GPU screws before the rest of the block. I think I stay around 60-70°C on hotspot temps; I haven't checked in forever, as I know it's really good and it doesn't bother me.

For example, here are my temps at about 28°C case ambient, no A/C or windows open, memory at 1100MHz, all voltages at factory. The GPU temperature is reading wrong in GPU-Z; AB reads it at max 45°C.


----------



## Nighthog

Ipak said:


> I had similar contact issues with my EK Fluid Gaming water block. I filed down the standoffs near the GPU and improved my hotspot temps by over 20°C.


Thanks for the suggestion.

I was thinking it was either that, or the chokes on the card pressing onto the block through the 0.5mm-thick thermal pads.

After your suggestion, I might get a better result if I file down the standoffs near the chokes. I'll wait and see when I try that; I'm on the 4th day with this, so I want to give it a rest for now, as I've basically forced the screws down as hard as I can, half destroying the heads. They are no longer black as they should be, but shine silver.

Sitting between 90-100C on the hotspot now, getting around 6750-6780 and the occasional 6800 score in the Superposition 4K Optimized benchmark with the case open. Temps increase more with the case closed.

I could game TW: Warhammer 2 yesterday evening, but I noticed it was throttling. After some hours I checked what temps it had reached: 112C on the hotspot. Before I gamed, it was just around 100-103C in the Superposition 4K bench.


----------



## Notbn

I have a Sapphire Nitro+ V64, not the limited edition. I have my P6 and P7 at 1100mv at stock clocks (1630mhz) and it achieves around 1575mhz on the core that way which I'm fine with. My issue is my memory OC isn't all that great. 



Memory set at 1000mhz with a 1100mv core voltage floor.


Superposition 1080p extreme passes with no issues, even when looped for 20+ mins, but the card artifacts randomly in Witcher 3. Green flashing spots and black squares, usually only in cut scenes.


How could Witcher 3 push the card to artifact when superposition doesn't?


Anything above 1000MHz on the memory starts artifacting pretty badly.


Did I just lose the lottery on this card? Or is my UV/OC not the best optimized? Any help would be appreciated!


----------



## Nighthog

Notbn said:


> I have a Sapphire Nitro+ V64, not the limited edition. I have my P6 and P7 at 1100mv at stock clocks (1630mhz) and it achieves around 1575mhz on the core that way which I'm fine with. My issue is my memory OC isn't all that great.
> 
> 
> 
> Memory set at 1000mhz with a 1100mv core voltage floor.
> 
> 
> Superposition 1080p extreme passes with no issues, even when looped for 20+ mins, but the card artifacts randomly in Witcher 3. Green flashing spots and black squares, usually only in cut scenes.
> 
> 
> How could Witcher 3 push the card to artifact when superposition doesn't?
> 
> 
> Memory anything above 1000mhz starts artifacting pretty bad.
> 
> 
> Did I just lose the lottery on this card? Or is my UV/OC not the best optimized? Any help would be appreciated!


A heat issue with the HBM? I presume you're running air cooling?

Try turning up the fan speed as a preliminary test.

Games are more taxing than Superposition; you don't get long-term stability validated by it.


----------



## Notbn

Nighthog said:


> A heat issue with the HBM? I presume you're running air cooling?
> 
> Try turning up the fan speed as a preliminary test.
> 
> Games are more taxing than Superposition; you don't get long-term stability validated by it.



Repasted the card last night; it's an unmolded die, and the card is on air.



After roughly an hour of witcher 3:


Core max: 76c mostly averaging ~70c
Mem: 85c max
Hotspot: 95c max


Card did the same weird artifacting before the repaste as well. Temps really didn't change all that much. I'm kind of concerned about the hotspot temps but I can't remember what they were before the repaste.


The card doesn't crash at all, just weird little artifacts every now and then.


Will try custom fan profile when I get home to see if it helps.


----------



## miklkit

Yeah, it can get hot. Turn up those fans. I have mine set to hit 100% at 60C, but the HBM can still average 60C. I use Trixx; here are my OC settings and how it runs in my most demanding game.
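A fan profile like that (100% duty at 60 C) is just linear interpolation between points. A minimal sketch; the lower 40 C / 30% point is assumed for illustration and is not from the post:

```python
def fan_duty(temp_c: float, curve=((40, 30), (60, 100))) -> float:
    """Fan duty (%) from temperature via piecewise-linear interpolation.

    The 60 C -> 100% point matches the profile described above; the
    40 C -> 30% point is an assumed idle baseline, not from the post.
    """
    pts = sorted(curve)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(50))  # halfway between the two points -> 65.0
```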


----------



## LicSqualo

Notbn said:


> I have a Sapphire Nitro+ V64, not the limited edition. I have my P6 and P7 at 1100mv at stock clocks (1630mhz) and it achieves around 1575mhz on the core that way which I'm fine with. My issue is my memory OC isn't all that great.
> Memory set at 1000mhz with a 1100mv core voltage floor.
> Superposition 1080p extreme passes with no issues, even when looped for 20+ mins, but the card artifacts randomly in Witcher 3. Green flashing spots and black squares, usually only in cut scenes.
> How could Witcher 3 push the card to artifact when superposition doesn't?
> Memory anything above 1000mhz starts artifacting pretty bad.
> Did I just lose the lottery on this card? Or is my UV/OC not the best optimized? Any help would be appreciated!


TW3 has always been my HBM OC test. Artifacts show up much faster in this game than in any GPU test; I ended up at 1080MHz on my LC version to be really stable.

Besides, every driver upgrade has reduced the OC capability (for both GPU and HBM), but on the other hand we have received a more stable card.

These are my personal conclusions after two years with Vega.


----------



## VicsPC

Cleaned my loop today; I must have dropped another 4-5°C once it was clean. Feels so nice.


----------



## Notbn

Well, I played an hour and a half of TW3 today with slightly modified settings. No artifacts that I saw. The only other thing I did was apply the settings with AMD Memory Tweak this time instead of Wattman; not sure if that makes a difference. Will continue to keep an eye on things. Here are my temps from the session.


----------



## Nighthog

Anyone having trouble with HBCC?

I can't use it with TW: Warhammer 2; it crashes instantly when loading the campaign.

I've just disabled it; it's a no-go, and I tried various things.
I also noticed the card draws more power with it enabled, and I can't sustain as high a clock or score when running Superposition 4K either.


----------



## b0uncyfr0

Do we have some fresh benchmarks of how the Vega cards are performing in newer games from last year? I'm thinking of selling my 1070 and going for a second hand 64. I saw RDR2 benches and those were impressive.

Problem is, my 1070 OCs really well, around 2.1GHz, which puts it above a 1070 Ti.


----------



## nolive721

b0uncyfr0 said:


> Do we have some fresh benchmarks of how the Vega cards are performing in newer games from last year? I'm thinking of selling my 1070 and going for a second hand 64. I saw RDR2 benches and those were impressive.
> 
> Problem is my 1070 OC's really well. Around 2.1Ghz, which puts it above a 1070 Ti.


To me (and I'm not sure about your expectations) this is a sideways move: above a 1070 Ti means GTX 1080 level.

I have a 1080 and a 1080 Ti as well as a Vega 64 LC that clocks well on core and very well on memory, but in 2019 games the card is still closer to the standard 1080 than the Ti, so you would be making a sideways move here. And that's only if you pick up a good Vega; I'm not sure where you'd get it from, so there is a risk you end up with a poor performer. Don't get me wrong, I really love mine, but you will also need a lot of time tweaking to get the most out of it.
My 2p.
PS: this thread gives recent, detailed benchmarks:
https://forums.guru3d.com/threads/r...-bios-tweaks-cont.426001/page-50#post-5747627


----------



## fcchin

Alastair said:


> Tighten the screws down in the star pattern


not recommended by this guy 

https://translate.google.com/transl...,15700259,15700262,15700265,15700271,15700283


----------



## Ne01 OnnA

*2019 Benchmark Suite*

Games tested:

RESIDENT EVIL 2
Metro Exodus
A Plague Tale Innocence
RAGE 2
Total War THREE KINGDOMS
Control
Borderlands 3
Call of Duty Modern Warfare
Star Wars Jedi: Fallen Order
Red Dead Redemption 2


----------



## LicSqualo

Really happy with my VEGA 64 LC.


----------



## miklkit

Yeah. With the latest drivers there is a big jump in performance in everything except Unity-based games; those are slightly better, but lag behind the others I've tried. I just might keep it around a while longer.


----------



## prom

I've been trying my hand at optimizing clocks and voltage on my V64 Strix, and I can't for the life of me get more clock speed out of the damn thing.
If I'm really pushing it, I'll get around 1540-1550 for a lot more power. I've settled for ~1530 with a 20% power-target bump.

I've attached my OverdriveNTool setup for criticism.

Firestrike: 24724 Graphics Score
Firestrike Stress Test: Pass
Time Spy: 7608 Graphics Score
Superposition 1080p Extreme: 4775 give or take a couple points

Temperatures are solid, and well under control.
Hotspot doesn't go above 90
VRM temps don't go above 90
Core & HBM are usually 20 below the hotspot temp

The really annoying thing is that I KNOW this thing has hit 1600mhz before.
I don't know if it's a driver thing or what.

Also, I'll occasionally *overboost*, where P7 is maxed and I crash. I don't know what causes it, and more importantly I don't know how to fix it.

Any input would be greatly appreciated.

Note: Fan settings can be disregarded, unless you think they would have an effect.

As a side note, could it just be down to reporting software? I find HWiNFO and GPU-Z give different results; not by much, but different.


----------



## Wuest3nFuchs

prom said:


> I've been trying my hand and optimizing clocks & voltage on my V64 Strix, and I can't for the life of me get more clock speed out of the damn thing.
> 
> If I'm really pushing it, I'll get around 1540-1550 for a lot more power. I've settled for ~1530 with a 20% target bump.
> 
> 
> 
> I've attached my OverdriveNTool setup for criticism.
> 
> 
> 
> Firestrike: 24724 Graphics Score
> 
> Firestrike Stress Test: Pass
> 
> Time Spy: 7608 Graphics Score
> 
> Superposition 1080p Extreme: 4775 give or take a couple points
> 
> 
> 
> Temperatures are solid, and well under control.
> 
> Hotspot doesn't go above 90
> 
> VRM temps don't go above 90
> 
> Core & HBM are usually 20 below the hotspot temp
> 
> 
> 
> The really annoying thing is that I KNOW this thing has hit 1600mhz before.
> 
> I don't know if it's a driver thing or what.
> 
> 
> 
> Also, I'll occasionally *overboost* where P7 is maxxed and I crash. I don't know what causes it, and more importantly I don't know how to fix it.
> 
> 
> 
> Any input would be greatly appreciated.
> 
> 
> 
> Note: Fan settings can be disregarded, unless you think they would have an effect.
> 
> 
> 
> As a side note, could it just be down to reporting software? I find HWInfo & GPU-Z have different results. Not by much, but different.


Hello, this will be a fast reply, because I'm on my way to work.
Why do you use such an old version of OverdriveNTool?
The thing is, and I also learned this the hard way: the GPU P2 voltage and the memory P3 voltage need to be the same. After I did that, the clocks changed on my Vega 56.

Sent from my SM-G950F using Tapatalk
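The coupling described here lines up with the "core voltage floor" behaviour mentioned elsewhere in the thread. A sketch of that model as I read it (my interpretation of the forum claims, not AMD documentation):

```python
def effective_core_mv(core_state_mv: int, mem_p3_mv: int) -> int:
    """Model of the Vega voltage coupling described in this thread:
    the HBM P3 voltage acts as a floor under the core voltage, so a
    core undervolt below the memory P3 setting has no effect."""
    return max(core_state_mv, mem_p3_mv)

# Core P2 set to 900 mV with HBM P3 at 950 mV still runs the core at 950 mV
print(effective_core_mv(900, 950))  # 950
```

This is why matching the two settings (as suggested above) changes the clocks: it removes the hidden floor.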


----------



## Ipak

My stock V64 with a water block easily runs ~1605MHz during Time Spy with stock clocks and voltages at +50% power target.
When I lower the P7 voltage below 1150mV, the card usually clocks much lower, and sometimes overboosts and crashes.

My card doesn't like undervolting.


----------



## LicSqualo

Ipak said:


> My stock V64 with a water block easily runs ~1605MHz during Time Spy with stock clocks and voltages at +50% power target.
> When I lower the P7 voltage below 1150mV, the card usually clocks much lower, and sometimes overboosts and crashes.
> 
> My card doesn't like undervolting.


This also happened to me with the new drivers, from October 2019 until today. I had to reconfigure all the voltages and clocks to get good undervolting with high GPU clocks.
Today in OverdriveNTool I set P7 to 1180mV with a 1760MHz clock; with 50% more power, my clocks are around 1700-1750MHz.
My general opinion is that you often need to review your settings to get the most out of this GPU.


----------



## LicSqualo

https://www.3dmark.com/3dm/43246161?


----------



## prom

Wuest3nFuchs said:


> Hello, this will be a fast reply, because I'm on my way to work.
> Why do you use such an old version of OverdriveNTool?
> The thing is, and I also learned this the hard way: the GPU P2 voltage and the memory P3 voltage need to be the same. After I did that, the clocks changed on my Vega 56.
> 
> Sent from my SM-G950F using Tapatalk


Good idea. I updated all my monitoring software and adjusted my P2/P3 voltages.
My scores haven't changed at all, but the software reporting, and seemingly my clock speeds, are now all over the place.


----------



## Wuest3nFuchs

prom said:


> Good idea. Updated all my monitoring software, and adjusted my P2/3 voltages.
> 
> Now my scores haven't changed at all, but the software reporting is all over the place.
> 
> Seems like my clock speeds are all over the place now.


I don't know what's up with your Strix card; maybe a driver thing, since I hear reports daily about the new 2020 drivers.
All I can say is that if I pump the PT up +25% I get higher clocks (around 1680-1725), which is great, especially for Chernobylite, a very demanding game.

What are your temperatures at idle and under load?

Sent from my SM-G950F using Tapatalk


----------



## prom

Temps seem pretty good to me, as you can see here.
That's how it looks playing Battlefront 2 (the game I use for stability testing).

I'm starting to think I've just got some poor silicon.
If I change my power target to +25% I become unstable and have to bump up my voltages.
I don't really WANT to bump up my voltages much more, simply because the card is in an NCASE M1 and I can only dissipate the heat so fast.

Maybe I'll bump it up to 1100 and 25%.

It's worth noting that this card has NOT been repasted, and I've not changed the thermal pad.
I've ordered the stuff to do so, but you can see the temps aren't terrible as is.


----------



## Worldwin

What are your average core clocks in game? I recommend lowering your voltages for P1-P5: when the card throttles for whatever reason into a state whose set voltage is higher, the voltage will go up.
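The point about throttling into a higher-voltage state can be sketched as keeping the P-state table monotonic when undervolting. A minimal sketch; the table values below are illustrative examples, not any specific card's BIOS:

```python
# Illustrative Vega 64 P-state table: state -> (MHz, mV). Example values only.
PSTATES = {1: (991, 900), 2: (1084, 950), 3: (1138, 1000), 4: (1200, 1050),
           5: (1401, 1100), 6: (1536, 1150), 7: (1630, 1200)}

def undervolt(table: dict, offset_mv: int, floor_mv: int = 800) -> dict:
    """Subtract offset_mv from every state, clamp to floor_mv, and keep
    voltages non-decreasing with state, so a throttle into a lower state
    never raises the voltage."""
    out, prev = {}, floor_mv
    for p in sorted(table):
        mhz, mv = table[p]
        mv = max(mv - offset_mv, floor_mv, prev)
        out[p], prev = (mhz, mv), mv
    return out

print(undervolt(PSTATES, 75)[7])  # (1630, 1125)
```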


----------



## Wuest3nFuchs

prom said:


> Temps seem pretty good to me as you can see here.
> 
> That's how it looks playing Battlefront 2 (the game I use for stability testing)
> 
> 
> 
> I'm starting to think I've just got some poor silicon.
> 
> If I change my power target to +25% I'll become unstable and have to bump up my voltages.
> 
> I don't really WANT to bump up my voltages much more simply because the card is in an NCASE M1 and I can only dissipate the heat so fast
> 
> 
> 
> Maybe I'll bump it up to 1100 and 25%
> 
> 
> 
> It's worth noting that this card has NOT been repasted, and I've not changed the thermal pad.
> 
> I've ordered the stuff to do so, but you can see the temps aren't terrible as is.


I would pump the fans up to a max of 80%.
I also haven't repasted or reseated the cooler yet.
Since the first iteration of the Adrenalin 2020 drivers, I've had more success with OC and UV.
I had a few issues installing them at first, but got them sorted, and now the card seems to run better than ever before.

Download: Radeon Software Adrenalin 2020 Edition 20.1.4

Meanwhile I'm at P7 ~1500MHz and a bit above, at 980mV, [email protected]%



Sent from my SM-G950F using Tapatalk


----------



## prom

Hmmm, I'll give undervolting P1-P5 a go.

The trouble is that I'm only just about stable, and I'm having a hard time comparing with others to see if I'm doing anything wrong.
It seems like few people actually have a Strix 64 and/or actively post their setups; everyone runs a Nitro+ or water.


----------



## prom

It seems I can't keep total stability at stock P7 (1630) with much less than 1100mV. 1090 SEEMS stable for the most part, but there are some outliers.
I'll keep pushing.

Clocks are stable though and pass both firestrike & timespy stress tests, regardless of game to game stability.


----------



## Worldwin

I would make sure its game stable since thats what matters when you game. You want it to be stable in all scenarios rather than a generic scenario.


----------



## R0CK3T

LicSqualo said:


> https://www.3dmark.com/3dm/43246161?


https://www.3dmark.com/fs/19621889


----------



## Wuest3nFuchs

R0CK3T said:


> https://www.3dmark.com/fs/19621889


I like your post! I have the same CPU as you.
Did you overclock it much?
Dunno what numbers I'll get with my setup; let's see...

Sent from my SM-G950F using Tapatalk


----------



## LicSqualo

R0CK3T said:


> https://www.3dmark.com/fs/19621889


Thanks for sharing! I use it for comparison with my system.
This is my best stable score today, raising the HBM to 1100MHz:
https://www.3dmark.com/3dm/43607251?


----------



## Wuest3nFuchs

https://www.3dmark.com/3dm/43608409

Stock clocks and low voltages on GPU and CPU.










Sent from my SM-G950F using Tapatalk


----------



## prom

Worldwin said:


> I would make sure its game stable since thats what matters when you game. You want it to be stable in all scenarios rather than a generic scenario.


Of course! Granted, the only two things that ever give me stability issues are PUBG and Frostbite-based games like SW:BF2.

Since I don't have a winning chip, I'm thinking of boosting my HBM some more, or downclocking to the *older* Strix BIOS clocks of 1590 to see how low I can go.


----------



## miklkit

Worldwin said:


> I would make sure it's game stable, since that's what matters when you game. You want it to be stable in all scenarios rather than in one generic scenario.



Yeah. When I first got this V64 I tuned its clocks against stress tests and it crashed in games. Then I adjusted it to be game stable, and now it works better and doesn't crash.


----------



## Roaches

I wonder if anyone here with 2 or more Vega 64s can run a Blender 2.8 Classroom GPU Cycles benchmark, with the preset tile size and with 512 x 512 tiles. I kinda wanna gauge how much of an uplift moving from my dual RX 480s would be. Thanks.

https://www.blender.org/download/demo-files/
https://download.blender.org/demo/test/classroom.zip


----------



## snipernote

Hello everyone.
I recently replaced the thermal pads and paste on my Vega 56 Red Dragon after I noticed the hotspot reaching 105-108c, which makes my GPU downclock; it boosts to around 1520-1540mhz. (The card is flashed with the Nitro+ Vega 64 bios; P4, P5, P6 and P7 undervolted -75mv; HBM at 1075mhz with a floor voltage of 1050mv as P4; +30% power limit (saw up to 290w usage); and a custom fan curve. The case is a Cooler Master H500 with lots of fans: 2x 200mm plus 2x 120mm AIO fans in push-pull intake config, a 200mm top intake and a 120mm exhaust.) With this config I reached a max of 72c core temp and 108c hotspot during a session of Superposition Extreme, which is mad; the average power was 240w, going up a bit and down afterwards.
I used Gelid Extreme pads (11w/mk) on the chokes and Arctic thermal pads (6w/mk) for the VRMs, with Cooler Master MasterGel Maker Nano (11w/mk) paste, a very generous, even layer over the whole GPU and HBM surface area, then used an additional washer (about 0.25mm thick) on each screw to get the best possible tightness ... The hotspot issue is still there and my card cannot break 300w usage so far, which makes my GPU clocks drop fast when the hotspot reaches 108c ... The card has Samsung HBM memory, of course.

my best scores
https://www.3dmark.com/3dm/42680475
https://www.3dmark.com/3dm/42680739


I was thinking of changing the thermal paste to a KU-ALF5 thermal interface that has 220w/mk, with the air cooler ... Do you think this is the right choice? Or should I remount again with double washers, for example? ... My target is to reach the max 345w on air, as I feel I can get more performance by overclocking the core to 1700mhz instead of the stock 1630mhz (which is effectively 1580-1600mhz mostly).


----------



## prom

So these are my tentatively final results for the time being. I feel I can pull more, but my chip needs a lot of voltage sadly and I'd need to repaste the card to do so effectively.
Again, my card is a Strix Vega 64.

P7 1606mhz @ 1050mV
HBM 1060mhz @ 975mV
Power Target @ 20%

*Benchmark Scores:*
TimeSpy graphics score: 7763
FireStrike graphics score: 24982
Superposition score: 4812

*Loop Averages*
TimeSpy GPU Test 2: 1560mhz @ 200w
Superposition: 1515mhz @ 238w

Clocks are also TimeSpy & FireStrike Stress Test stable 

In-game clocks are around 1580-1600mhz depending on the game.

------------

So I've found this to be my sweetspot. 
I can run the stock clock of 1630 without crashing if I bump the voltage up to 1100 but honestly it is not worth it at all.
Bumping it up to 1100mV consumes 14% MORE asic wattage and pumps a good bit more heat into my tiny NCASE M1.

The older Strix cards (like mine) have a defective cooling solution that can be fixed with some new pads and paste, but I'm not thermal throttling at all so I'm not rushing to take care of it.
Temps are 70 and below, and stock fans were replaced by 2x NF-F12 exhausting out the bottom of the case.

*UPDATE:*
Rebooted my PC and rebenched and saw a pretty decent uptick in performance. Rig had been up for roughly a week using sleep instead of shutting down. Neat!
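As a sanity check on that 14% figure: dynamic power scales roughly with voltage squared at a fixed clock, so the 1050 mV to 1100 mV bump alone predicts close to +10%, and leakage (which also rises with voltage) plausibly accounts for the rest. A minimal sketch of that estimate, assuming the simple CV²f model for dynamic power (illustrative only, not how the card actually meters ASIC wattage):

```python
# Rough CV^2*f estimate for the 1050 mV -> 1100 mV comparison above.
# Dynamic power only; leakage (also voltage-dependent) is ignored, which
# is why the measured ~14% exceeds this figure.
v_low, v_high = 1.050, 1.100  # core voltage in volts

dynamic_increase = (v_high / v_low) ** 2 - 1
print(f"predicted dynamic power increase: +{dynamic_increase:.1%}")  # ~ +9.8%
```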


----------



## nolive721

prom said:


> So these are my tentatively final results for the time being. I feel I can pull more, but my chip needs a lot of voltage sadly and I'd need to repaste the card to do so effectively.
> Again, my card is a Strix Vega 64.
> 
> P7 1606mhz @ 1050mV
> HBM 1030mhz @ 975mV
> Power Target @ 20%
> 
> *Benchmark Scores:*
> TimeSpy graphics score: 7651
> FireStrike graphics score: 24740
> Superposition score: 4761
> 
> *Loop Averages*
> TimeSpy GPU Test 2: 1560mhz @ 200w
> Superposition: 1515mhz @ 238w
> 
> Clocks are also TimeSpy & FireStrike Stress Test stable /forum/images/smilies/biggrin.gif
> 
> In-game clocks are around 1580-1600mhz depending on the game.
> 
> ------------
> 
> So I've found this to be my sweetspot.
> I can run the stock clock of 1630 without crashing if I bump the voltage up to 1100 but honestly it is not worth it at all.
> Bumping it up to 1100mV consumes 14% MORE asic wattage and pumps a good bit more heat into my tiny NCASE M1.
> 
> The older Strix cards (like mine) have a defective cooling solution that can be fixed with some new pads and paste, but I'm not thermal throttling at all so I'm not rushing to take care of it.
> Temps are 70 and below, and stock fans were replaced by 2x NF-F12 exhausting out the bottom of the case.


If I remember well, the Strix was the worst Vega to get from a thermal management point of view.
When I replaced my 1080 Hybrid I went for the LC Vega 64, mainly for quietness and thermal behavior.
But what you achieved with your card seems good, though.


----------



## snipernote

snipernote said:


> Hello everyone.
> I recently replaced the thermal pads and paste on my Vega 56 Red Dragon after I noticed the hotspot reaching 105-108c, which makes my GPU downclock; it boosts to around 1520-1540mhz. (The card is flashed with the Nitro+ Vega 64 bios; P4, P5, P6 and P7 undervolted -75mv; HBM at 1075mhz with a floor voltage of 1050mv as P4; +30% power limit (saw up to 290w usage); and a custom fan curve. The case is a Cooler Master H500 with lots of fans: 2x 200mm plus 2x 120mm AIO fans in push-pull intake config, a 200mm top intake and a 120mm exhaust.) With this config I reached a max of 72c core temp and 108c hotspot during a session of Superposition Extreme, which is mad; the average power was 240w, going up a bit and down afterwards.
> I used Gelid Extreme pads (11w/mk) on the chokes and Arctic thermal pads (6w/mk) for the VRMs, with Cooler Master MasterGel Maker Nano (11w/mk) paste, a very generous, even layer over the whole GPU and HBM surface area, then used an additional washer (about 0.25mm thick) on each screw to get the best possible tightness ... The hotspot issue is still there and my card cannot break 300w usage so far, which makes my GPU clocks drop fast when the hotspot reaches 108c ... The card has Samsung HBM memory, of course.
> 
> my best scores
> https://www.3dmark.com/3dm/42680475
> https://www.3dmark.com/3dm/42680739
> 
> 
> I was thinking of changing the thermal paste to a KU-ALF5 thermal interface that has 220w/mk, with the air cooler ... Do you think this is the right choice? Or should I remount again with double washers, for example? ... My target is to reach the max 345w on air, as I feel I can get more performance by overclocking the core to 1700mhz instead of the stock 1630mhz (which is effectively 1580-1600mhz mostly).


Update: yesterday I reseated the GPU heatsink and added 3x 0.3mm-thick washers on each screw of the clamp bracket ... Now the GPU breaks 300w of power while maintaining 240 to 260w of sustainable usage. 
Superposition 1080p Extreme reached 4298 on the stock V64 bios and 4627 with my OC applied.
The card is doing 1560mhz minimum now ... but the hotspot still reaches 108c ... Any idea how we can get better performance and lower hotspot temps with graphite pads? I don't mind the card temp going up to 75c; I just want the hotspot to be near that temp ... not a 30c difference!









Sent from my POCOPHONE F1 using Tapatalk


----------



## prom

What are your overdrive settings and is your die molded or unmolded?
A 30 degree difference is higher than normal.


----------



## snipernote

prom said:


> What are your overdrive settings and is your die molded or unmolded?
> 
> A 30 degree difference is higher than normal.


It's a PowerColor Vega 56, molded die, Samsung HBM, with the Sapphire Nitro+ V64 bios installed and the latest drivers ... I always had a hotspot problem; even before I opened up my card it reached 108c.
But I am thinking that I have reached the card heatsink's cooling capacity ... I ordered a graphite pad just in case, and I will lap my copper heatsink plate because I think it should look brighter; maybe some oxidization happened.
I am using Cooler Master MasterGel Maker Nano (new) thermal paste with 11w/mk and full coverage on my Vega GPU.
The thermal pads on the chokes are Gelid Extreme 11w/mk, with Arctic thermal pads 6w/mk for the VRM drivers ... that's as per the recommended requirements from the manufacturer.

Overdrive config: I will post a picture when I come home, but I remember that speeds are stock as per the V64 bios. I undervolt P4 1050mv, P5 1075mv, P6 1100mv, P7 1125mv; HBM OC 1075 and floor voltage 1050mv; power limit +50 after the last washer mod; zero-fan mode off; custom fan curve going up from 25% at 30c to 100% at 75c, usually sticking to 2700rpm at 65 to 70c. That's all I remember.
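For reference, a fan curve like the one described (25% at 30c ramping to 100% at 75c) behaves like a straight line between the configured points. A quick sketch; the two point values come from the post, but the linear interpolation between them is my assumption about how the driver fills in the curve:

```python
# Linear fan-curve interpolation between (temp, duty%) points, as an
# illustration of the custom curve described above.
def fan_speed_percent(temp_c, points=((30, 25), (75, 100))):
    """Interpolate fan duty (%) from sorted (temperature, duty) points,
    clamping below the first point and above the last."""
    pts = sorted(points)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_speed_percent(30))    # 25
print(fan_speed_percent(52.5))  # 62.5, halfway up the ramp
print(fan_speed_percent(75))    # 100
```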


----------



## prom

That seems like a LOT of voltage for not very much clock 🤔


----------



## snipernote

prom said:


> That seems like a LOT of voltage for not very much clock 🤔


Actually, the card is not stable enough unless I supply 1125mv ... especially in heavy games like Metro Redux and Exodus.

Sent from my POCOPHONE F1 using Tapatalk


----------



## snipernote

snipernote said:


> It's a PowerColor Vega 56, molded die, Samsung HBM, with the Sapphire Nitro+ V64 bios installed and the latest drivers ... I always had a hotspot problem; even before I opened up my card it reached 108c.
> But I am thinking that I have reached the card heatsink's cooling capacity ... I ordered a graphite pad just in case, and I will lap my copper heatsink plate because I think it should look brighter; maybe some oxidization happened.
> I am using Cooler Master MasterGel Maker Nano (new) thermal paste with 11w/mk and full coverage on my Vega GPU.
> The thermal pads on the chokes are Gelid Extreme 11w/mk, with Arctic thermal pads 6w/mk for the VRM drivers ... that's as per the recommended requirements from the manufacturer.
> 
> Overdrive config: I will post a picture when I come home, but I remember that speeds are stock as per the V64 bios. I undervolt P4 1050mv, P5 1075mv, P6 1100mv, P7 1125mv; HBM OC 1075 and floor voltage 1050mv; power limit +50 after the last washer mod; zero-fan mode off; custom fan curve going up from 25% at 30c to 100% at 75c, usually sticking to 2700rpm at 65 to 70c. That's all I remember.


Update: yesterday I received the IC Graphite pad and did the following: lapped the heatsink for 5 minutes to remove the older, darker surface, then put on nail polish and applied the pad. I had to reseat the heatsink 4 times already, breaking 4 clamp screws and replacing them again ... The graphite pad did not help in my situation, as the hotspot reached 110c and performance was degraded.

I am pretty sure I am up against the thermal limitations of this GPU heatsink now. 
As far as I know, the Vega 56 is a 186w card with a power limit of up to 279w on the stock OC bios (I suspect the PowerColor thermal solution can do 250-260w only, as I saw from my experience before the thermal pad).
As you can see in the pics, I applied nail polish to the area adjacent to the GPU, and I plan to remove the residue carefully with nail polish remover and ear buds ... I hope this won't affect my GPU or short something out. 

I will apply a generous amount of thermal paste again (CMMGMN new or GC Extreme, as I have both now)
and will change all the screws again ... 

The weird thing that happened yesterday is that the clamp screws are getting cut right under the washer, which might imply that I am applying too much pressure? It's a pain to remove those screws and reseat.

Any advice would be appreciated, as I am going today to finalize this adventure (replacing all screws and putting in new, non-worn-out ones) ... I might look into water-cooling in the future, but there are no compatible blocks for the Red Dragon cards AFAIK.









Sent from my POCOPHONE F1 using Tapatalk


----------



## snipernote

Update: so I finished lapping, removed the graphite pad, used acetone to remove the nail polish, and then applied a generous amount of GC Extreme thermal paste ... The card can now do about 220 to 240w of sustained usage with my UV/OC settings ... I think the GC Extreme needs to settle, if I am not mistaken, because it felt very fluid, similar to Kryonaut (the CM MasterGel was a lot thicker, and after 2 months I reseated it and got those amazing results above, around 4610+ points). 

There is some fluctuation in the core speeds now (1590-1480mhz), but it was okay while gaming ... I am now convinced that the stock cooling on this card cannot do much more than that. 
I added washers to the clamp screws as well, but it looks like they are not needed: when I compared the pictures of the washer mod on other cards, the screw entry point is under the PCB, while on this card the screw hole rises above it by about 2mm from the back, so it's not flush. Maybe adding those washers was a mistake, since adding them has broken 4 screws so far. 

I might remove the washers today and try without them, just to see if it helps.

What do you think? Honestly, I looked all over and found no one who has opened up and modded this card so far.









Sent from my POCOPHONE F1 using Tapatalk


Update 28/4/2020: received the nylon washers and used them on the GPU mount to increase pressure, with no good results, so I reflashed the card to the original PowerColor Vega 56 stock OC bios and redid my overclocking from there ... Reached a 950mhz HBM OC with 950mv floor voltage, 1675mhz P7 @ 1200mv, [email protected], and all other states are stock ... An aggressive custom fan curve as well, which gives me a 108c hotspot max (110c limit in the stock bios). The GPU keeps 1540+ in Superposition Extreme 1080p with 4600 points, and in other games the GPU core boosts until it reaches the power limit (max +50% slider at 279w) ... The GPU can stay there for about 5 minutes before downclocking due to the hotspot ... Performance is far better than with the Sapphire bios, and the card is a lot quieter most of the time ... I cannot see a difference in fps between 1075mhz HBM and 950mhz with higher GPU clocks ... If you are interested, quote me to share the results.


----------



## Wuest3nFuchs

Nice card, man!!
I like the photos you made; it looks the same as my Vega 56 Pulse, going by the length of the PCB. I like to call these short GPU PCBs "nano".

Looking forward to my waterblock installation next weekend (not this coming one), with a disassembly video and mounting the Bykski A-XF56-NANO-X block on it. A few parts arrive next week, but then everything should be here, and I can work on getting this thing cooler. 

Greetings fox


P.S.: try this mounting method from igorslab:

https://translate.googleusercontent...paste/&usg=ALkJrhi6eNGrmc7WSKLV8ZZaGUT8tnHS7A
Sent from my SM-G950F using Tapatalk


----------



## snipernote

Wuest3nFuchs said:


> Nice card, man!!
> I like the photos you made; it looks the same as my Vega 56 Pulse, going by the length of the PCB. I like to call these short GPU PCBs "nano".
> 
> Looking forward to my waterblock installation next weekend (not this coming one), with a disassembly video and mounting the Bykski A-XF56-NANO-X block on it. A few parts arrive next week, but then everything should be here, and I can work on getting this thing cooler.
> 
> Greetings fox
> 
> 
> p.s.: try this mounting method from igorslab
> 
> https://translate.googleusercontent...paste/&usg=ALkJrhi6eNGrmc7WSKLV8ZZaGUT8tnHS7A
> Sent from my SM-G950F using Tapatalk


Thanks, the screw mounting method is already in action ... top screws first, then the bottom ones ... The card settles well now, as I redid the benchmark yesterday with a higher score ... I didn't remove the washers or anything for now; I will keep it like this for the next couple of months, and I might look at getting the right nylon washers afterwards ... size M2.5 did not work, so I might get M3 ones.








PS: that was from a cold-started system, and the hotspot reached 104c within the test ... it was higher while gaming afterwards.

Sent from my POCOPHONE F1 using Tapatalk


----------



## Nighthog

Hotspot is kinda hard to tame.

I have a waterblock and reach above 105C when going to 300W and above. Basically can't use +50% powerlimit or some games will crash.

Vega 64 

It's only stable undervolted at ~1115-1120mv @ 1630 settings.
1080Mhz HBM.

Using only +40% power (~270W), it doesn't reach 90C. Those last watts really increase the temperature. I've realized the waterblock I got can't handle more than about 300W total as it is. I would probably need to modify the mounting pressure next, but that's too much hassle. The card overall can get toasty even like this in extended usage.


----------



## hesee

Nighthog said:


> Hotspot is kinda hard to tame.
> 
> I have a waterblock and reach above 105C when going to 300W and above. Basically can't use +50% powerlimit or some games will crash.


On my Nitro+ and waterblock combo I had to switch to a Carbonaut pad to tame the hotspot. The hotspot was always good in the beginning, but it degraded after a few months; probably the paste had too high a viscosity and got pumped out. The Carbonaut had about 2c higher temperatures for core/HBM, but the hotspot has stayed under control longer than with paste.


----------



## Wuest3nFuchs

hesee said:


> On my Nitro+ and waterblock combo I had to switch to a Carbonaut pad to tame the hotspot. The hotspot was always good in the beginning, but it degraded after a few months; probably the paste had too high a viscosity and got pumped out. The Carbonaut had about 2c higher temperatures for core/HBM, but the hotspot has stayed under control longer than with paste.


Hello, I'm planning to assemble my Vega 56 Pulse this weekend with a Bykski block. So you'd recommend a pad for the GPU as better than a paste in the long term? I have different pastes and a few 0.5, 1.0 and 1.5mm Arctic pads; hopefully I'm prepared.

Sent from my SM-G950F using Tapatalk


----------



## Loladinas

hesee said:


> On my Nitro+ and waterblock combo I had to switch to a Carbonaut pad to tame the hotspot. The hotspot was always good in the beginning, but it degraded after a few months; probably the paste had too high a viscosity and got pumped out. The Carbonaut had about 2c higher temperatures for core/HBM, but the hotspot has stayed under control longer than with paste.


What core/hotspot temps were you seeing on your card? I have been running one with a simple smear of MX-4 for close to a year now, and I'm getting somewhere around ~20C over water temp on the core and ~40C over water on the hotspot, as the worst case scenario (stress testing heavy overclocks). In regular use it's less than half of that. No signs of pumpout :thinking:

Although I do realize I screwed up my application after I had completed my system; as per some previous posts here it's possible to get a way lower core-hotspot delta, but I've been putting off fixing it. A bit too much work for me; maybe I'll force myself to do it whenever I decide to do some loop maintenance.

EDIT: my die is of the "unmolded" variety, but HBM chip temps stay perfectly chilly
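Deltas over water like those can be folded into an effective thermal resistance (degrees C per watt of GPU power), which makes different mounts comparable across loads. A quick sketch of that conversion; the ~300 W stress-test load is my assumption, not stated in the post:

```python
# Effective thermal resistance from a temperature rise over water and an
# assumed GPU load. Lower is a better mount/paste job.
def thermal_resistance_c_per_w(delta_t_c, power_w):
    """delta_t_c: rise over water temp (C); power_w: GPU load (W)."""
    return delta_t_c / power_w

# Worst-case figures quoted above, assuming ~300 W during stress testing:
print(round(thermal_resistance_c_per_w(20, 300), 3))  # core:    0.067 C/W
print(round(thermal_resistance_c_per_w(40, 300), 3))  # hotspot: 0.133 C/W
```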


----------



## Wuest3nFuchs

Loladinas said:


> What core/hotspot temps were you seeing on your card? I have been running one with a simple smear of MX-4 for close to a year now, and I'm getting somewhere around ~20C over water temp on the core and ~40C over water on the hotspot, as the worst case scenario (stress testing heavy overclocks). In regular use it's less than half of that. No signs of pumpout :thinking:
> 
> 
> 
> Although I do realize I screwed up my application after I had completed my system, as per some previous posts here it's possible to get way lower core-hotspot delta, I've been putting off fixing it. Bit too much work for me, maybe I'll force myself to do it whenever I decide to do some loop maintenance.


That's impressive! do you have the gpu only in your Loop or also the cpu?

Sent from my SM-G950F using Tapatalk


----------



## Loladinas

Wuest3nFuchs said:


> That's impressive! do you have the gpu only in your Loop or also the cpu?
> 
> Sent from my SM-G950F using Tapatalk


An overclocked 4790K under an EKWB Supremacy EVO, and a reference-PCB Vega 56 with Sammy HBM under a Barrow block. To be fair, my rad setup is two Phobya 1260 radiators in series, standing on the floor next to the desk. That could skew the results a bit.


----------



## Wuest3nFuchs

Loladinas said:


> Overclocked 4790k under EWBK Supremacy Evo and reference PCB Vega 56 with sammy HBM under Barrow block. To be fair my rad setup is two Phobya 1260 radiators in series, standing on the floor next to the desk. That could skew the results a bit.


Thx, I'm using a MoRa 420 Pro, but I guess it will work. At least I hope so.

Sent from my SM-G950F using Tapatalk


----------



## hesee

Loladinas said:


> What core/hotspot temps were you seeing on your card? I have been running one with a simple smear of MX-4 for close to a year now, and I'm getting somewhere around ~20C over water temp on the core and ~40C over water on the hotspot, as the worst case scenario (stress testing heavy overclocks). In regular use it's less than half of that. No signs of pumpout :thinking:
> 
> Although I do realize I screwed up my application after I had completed my system, as per some previous posts here it's possible to get way lower core-hotspot delta, I've been putting off fixing it. Bit too much work for me, maybe I'll force myself to do it whenever I decide to do some loop maintenance.
> 
> EDIT: my die is of the "unmolded" variety, but HBM chip temps stay perfectly chilly


I just ran Superposition 1080p Extreme. The score was 5168 (Vega 64 at 1702/1125mv and 1075mhz HBM).

Core 39c 
HBM 40c
Hotspot 67c
Power peak: 306W.

With MX-4 the hotspot started at 65c, but after a few months it hit over 100c.


----------



## Loladinas

hesee said:


> I just run superposition 1080 extreme. Score was 5168. (Vega 64 with 1702/1125mv and 1075mhz hbm)
> 
> Core 39c
> HBM 40c
> Hotspot 67c
> Power peak: 306W.
> 
> With MX4 Hotspot started with 65c, but after few months it hit over 100c.


Out of curiosity I ran the same bench. 30C - 34C - 65C peak temps; averages were a few degrees lower. Just a quick test at 1700Mhz core, 1000Mhz HBM; I run this card stock most of the time. The power limit was set to 330W, so I don't think it was capped; it peaked at 301W, but that's getting pretty close. 

So, like I said, I should fix my hotspot with proper mounting. And maybe VRM temps too; since I used two different thicknesses of thermal pads, I don't think I'm getting good contact on some of them. But other than that, I don't see any issues. I've been running it like this since last April or May, with no difference in temps at all.


----------



## hesee

Loladinas said:


> Out of curiosity I ran the same bench. 30C - 34C - 65C. Peak temps, averages were a few degrees lower. Just a quick test at 1700Mhz core, 1000Mhz HBM, I just run this card stock most of the time. Power limit set to 330W, so I don't think it peaked out at 301W but it's getting pretty close.


Pretty close. Room temperature is a bit higher here, and the pump & fans kick up in my system when the water heats up more than this bench achieves. For the benchmark I could shave 2-3c off the temperatures (dropping room temperature from 23-24 to 21), but in real gaming actual temperatures are around 5-7c higher, as the water heats up and the system then finds equilibrium.

Anyway, this is the third seating of the waterblock; Noctua's paste and MX-4 both degraded on my system, so I had to try the Carbonaut pad.


----------



## Loladinas

hesee said:


> Pretty close. Room temperature is a bit higher here, and the pump & fans kick up in my system when the water heats up more than this bench achieves. For the benchmark I could shave 2-3c off the temperatures (dropping room temperature from 23-24 to 21), but in real gaming actual temperatures are around 5-7c higher, as the water heats up and the system then finds equilibrium.
> 
> Anyway, this is the third seating of the waterblock; *Noctua's paste and MX-4 both degraded on my system*, so I had to try the Carbonaut pad.


Yeah, but I wonder why. What's different between our systems...


----------



## hesee

Loladinas said:


> Yeah, but I wonder why. What's different between our systems...


Well, it could be that I have a molded die and you have an unmolded one. Plus, I haven't checked my heatsink's shape; it should be flat, but if it's a little bit convex/concave it could cause the difference.


----------



## snipernote

Nighthog said:


> Hotspot is kinda hard to tame.
> 
> 
> 
> I have a waterblock and reach above 105C when going to 300W and above. Basically can't use +50% powerlimit or some games will crash.
> 
> 
> 
> Vega 64
> 
> 
> 
> Is only stable UV ~1115-1120mv @ 1630, settings.
> 
> 1080Mhz HBM.
> 
> 
> 
> using only +40% power ~270W it doesn't reach 90C. Those last watts really increase the temperature. I've realized the waterblock I got can't handle more than about 300W total as it is. Would probably need to do modifications for mounting pressure next but that's too much hassle. The card overall can get toasty even like this in extended usage.


If you have problems with the power limit not reaching +50%, that's usually a PSU problem; in my case I upgraded to a 1050w PSU to keep my card's usage in check.
Glad to see even Vega cards on a waterblock barely reach 300w ... I thought you would have sustained it to reach 100c hotspot, but apparently it takes a golden sample to keep it low. 
Anyway, I am happy with my Vega so far; I won't be pushing it further, as I don't want to invest in a waterblock and equipment that will cost the same as the GPU ... Upgrading to a higher-end GPU might be worth it in the long run (I am eyeing the VII with its 16gb of HBM vram xD, but I cannot justify it to myself if that card cannot sustain 400w+ on air with OC/UV lmao).

Sent from my POCOPHONE F1 using Tapatalk


----------



## snipernote

hesee said:


> On my Nitro+ and waterblock combo I had to switch to a Carbonaut pad to tame the hotspot. The hotspot was always good in the beginning, but it degraded after a few months; probably the paste had too high a viscosity and got pumped out. The Carbonaut had about 2c higher temperatures for core/HBM, but the hotspot has stayed under control longer than with paste.


Based on many replies to my post on Reddit ... Cooler Master MasterGel Maker Nano (new, 11w/mk) is the thickest paste I've tried so far, and it's very suitable for Vega, IMO. After 2 months I opened the card and the paste was like a lightly dried thick paste; I found most of it on the sides of the GPU core, so I placed it back on the core and closed the card as an experiment ... It was my best result so far before lapping, reaching 318w peak and 260w sustained in Superposition Extreme 1080p at 1580mhz minimum, getting 4655 points with no washers or other mods ... just that paste replacement.
I also changed the thermal paste on another GPU (RX 480 Red Devil) and the GPU temps dropped by 15c (80c to 65c) with better performance (over 110w sustained; the old paste was a GD900 that had lost its thermal conductivity due to its expiration date, and power usage was lower than 90w).

I would recommend this paste to anyone for GPU cores, and always use a very thick paste for Vega GPUs.
The GC Extreme paste felt very fluid compared to the CM TIM and needed about 2 days to settle correctly.

Also, if anyone is interested, I contacted PowerColor for the accurate thermal pad replacements for the Vega 56 Red Dragon and posted them on Reddit as well. 
Link: 
https://www.reddit.com/r/Amd/commen...a_56/?utm_medium=android_app&utm_source=share

Sent from my POCOPHONE F1 using Tapatalk


----------



## hesee

snipernote said:


> If you have problems with the power limit not reaching +50%, that's usually a PSU problem; in my case I upgraded to a 1050w PSU to keep my card's usage in check.
> Glad to see even Vega cards on a waterblock barely reach 300w ... I thought you would have sustained it to reach 100c hotspot, but apparently it takes a golden sample to keep it low.


Oh, getting way over 300W and staying there isn't an issue. I tried the AIO version's bios as well, to see how far the card goes, and a powertable mod to set the base TDP from 240W to 264W. My card just hits the limit with a target clock of 1700-1710mhz, so raising that is futile, and raising the voltage is pointless: it's stable now, and increasing it basically just adds heat and only a few mhz more to the actual clocks.


----------



## snipernote

hesee said:


> Oh, getting way over 300W and staying there isn't any issue. I tried AIO versions bios as well to see how far card goes and powertable mod to set base TDP from 240W to 264W. My card just hits the limit with target clock of 1700-1710mhz, so rising that is futile and rising voltage is pointless as now it's stable and increasing it basically just adds heat and only few mhz more to actual clocks.


But it's an issue for the Vega Nano PCB, because from what I saw in the AHOC PCB analysis video it seems like a VRM downgrade (7 phases) compared to the stock Vega 56 and 64 (12 phases), plus the PCB is half the size, with more concentrated heat being the main problem ... I am pretty sure now that my card cannot sustain 300w of power into the GPU without breaking something, so I will stop here and start enjoying its performance.

I am just glad my card works as it is ... getting 1540mhz mostly stable after heating up the system, with 260w sustained, is great in my case, as I am getting the best performance it can give from a 186w-rated card.

Update: I just finally realized that I am pushing it too hard, as the current power draw on this card is at its limit:
150w (8-pin) + 75w (6-pin) + 75w (PCIe max draw) = 300w, so I won't be pushing it any further.
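That budget arithmetic generalizes: in-spec board power is the PCIe slot's 75w plus 75w per 6-pin and 150w per 8-pin connector. A tiny sketch of the same sum (connector ratings as commonly cited from the PCIe spec; the function name is just illustrative):

```python
# In-spec power budget for a graphics card: PCIe slot + supplementary connectors.
PCIE_SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def max_board_power_w(*connectors):
    """Sum the slot limit and each named connector's rating."""
    return PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_board_power_w("8pin", "6pin"))  # 300, the card discussed above
print(max_board_power_w("8pin", "8pin"))  # 375, e.g. a dual-8-pin card
```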

Sent from my POCOPHONE F1 using Tapatalk


----------



## Wuest3nFuchs

Hey guys, does anyone here know why I would have the HBM tREF at 3120 from the factory?!


----------



## Ne01 OnnA

Wuest3nFuchs said:


> Hey guys, does anyone here know why I would have the HBM tREF at 3120 from the factory?!


tREF is some sort of refresh interval; I have it at 25000. 
Try it; for this value the best range is ~18000-25000.

For gaming 24k or 25k

Here is my Default OC for Gaming.
Games run on those settings very good, Vega XTX is Cool&Quiet.
Games: BFV, BF1, Forza H4, The Outer Worlds, The Division2 & GR Breakpoint

===


----------



## Wuest3nFuchs

Ne01 OnnA said:


> tREF is some sort of Refresh, i have it at 25000
> 
> Try it, for this value best range is ~18000-25000
> 
> 
> 
> For gaming 24k or 25k
> 
> 
> 
> Here my Default OC for Gaming.
> 
> Games run on those settings very good, Vega XTX is Cool&Quiet.
> 
> Games: BFV, BF1, Forza H4, The Outer Worlds, The Division2 & GR Breakpoint
> 
> 
> 
> ===


big thx Onna !! 

Sent from my SM-G950F using Tapatalk


----------



## snipernote

How much performance do you gain from tightening the memory timings? Should 1075mhz perform better with stock settings?

Sent from my POCOPHONE F1 using Tapatalk


----------



## Nighthog

snipernote said:


> If you have problems with the power limit not reaching +50%, that's usually a PSU problem; in my case I upgraded to a 1050w PSU to keep my card's usage in check.
> Glad to see even Vega cards on a waterblock barely reach 300w ... I thought you would have sustained it to reach 100c hotspot, but apparently it takes a golden sample to keep it low.
> Anyway, I am happy with my Vega so far; I won't be pushing it further, as I don't want to invest in a waterblock and equipment that will cost the same as the GPU ... Upgrading to a higher-end GPU might be worth it in the long run (I am eyeing the VII with its 16gb of HBM vram xD, but I cannot justify it to myself if that card cannot sustain 400w+ on air with OC/UV lmao).
> 
> Sent from my POCOPHONE F1 using Tapatalk


Not the power supply; it was crashing because of overheating at times. You get a 110C+ hotspot, and it doesn't like it. 

It's a mounting-pressure and paste issue. I redid the mounting four times, and as I increased screw tightness the temperatures improved each try. Though the hotspot is still 40-50C+ above the core temperature @ 300 W; it was even worse on the first tries.
I can improve it a bit by increasing fan speeds and water temps, but the block is struggling. I didn't expect it to get that hot after extended usage like that.

I'm using MX4, as some here were mentioning. I've not had problems with that paste before, but you never know; it's even a fresh new tube. I did have to use a godly amount of it, though.
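
The rule of thumb running through these mounting posts can be sketched as a quick check. The 40 C "suspicious" threshold below loosely mirrors the delta quoted here; it is my assumption for illustration, not an AMD or block-maker figure:

```python
# Hedged sketch: a large gap between core (edge) temperature and hotspot
# usually points at mounting pressure or paste coverage, not the block
# or coolant. Threshold is an assumption, not a spec.
def hotspot_delta_c(core_c: float, hotspot_c: float) -> float:
    """Return the hotspot-to-core temperature gap in degrees C."""
    return hotspot_c - core_c

def mount_suspect(core_c: float, hotspot_c: float, threshold_c: float = 40.0) -> bool:
    """Flag a mount as suspect when the gap exceeds the threshold."""
    return hotspot_delta_c(core_c, hotspot_c) > threshold_c

print(hotspot_delta_c(60, 108), mount_suspect(60, 108))  # 48 True  (first mount in the thread)
print(hotspot_delta_c(44, 77), mount_suspect(44, 77))    # 33 False (after the remount)
```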


----------



## Wuest3nFuchs

Well, this is my first time installing a water block on an AMD GPU.
My first one was a GTX 670 FTW back in the day... so maybe I did something wrong, but I also used Igor's instructions for the ominous hotspot mounting.



RX Vega56 Pulse Stockcooler teardown and Bykski A-XF56-NANO-X Block mounting
https://imgur.com/a/1z79Sku


First test with GTA V: very hot temps, hotspot up to 108 °C after 15 min.

Maximum temperatures in GTA V:
GPU 60 °C
HBM 53 °C
Hotspot 108 °C
Radiator: MoRa 420 Pro


Checked the reliability history today and it doesn't look that good. Hardware Error 141: I *get that on any driver installation*, but everything runs smoothly afterwards, just as it did before the block installation.
I received the HW Error 141 when I started Insurgency Sandstorm and was only in the menu.
*So that's not normal; it never happened before.*
*As I said, it happens every time I install the AMD driver. Strange, isn't it?*
AMD driver 20.2.1 also seems to crash in my browser [black screen]; I'll try the previous one.


----------



## Satanello

Hello, I downloaded the latest version of AMD Memory Tweak (1.1.23), but the application requires Windows 10 Test Mode. Are there possible problems with using Windows 10 in Test Mode? Is there no way to use it with Windows in normal operating mode?


----------



## LicSqualo

Satanello said:


> Hello, I downloaded the last version of AMD memory tweaker (1.1.23) but the application need Windows 10 Test Mode. Are there possible problems in using windows 10 in test mode? Is there no way to use Windows in normal operating mode?


I've always used it in Win 10 x64 "normal" mode. Check your config (antivirus or similar); something is not running as it should. 
Obviously, run it as admin.


----------



## Satanello

LicSqualo said:


> Always used in win 10 x64 "normal" mode. Check your config, (as antivirus or similar) something is not running as it should.
> Obviously run it as admin.


Thanks for your help, but:
Windows 10 x64 is freshly reinstalled (2 days ago; I've only installed drivers, Firefox and a couple of games).
I always start it as admin.
I have no antivirus installed (only the integrated Windows Security).
I only used W10Privacy to remove all the unwanted Win 10 options.

Attached you can find a screen capture of the exception.


----------



## LicSqualo

Satanello said:


> Tnx for your hel but:
> Windows 10 x64 is fresh re-installed (2 days ago, I've only installed drivers, Firefox and a couple of games)
> I always start as admin.
> I have no antivirus installed (only Windows integrated security)
> I only used W10Privacy to remove all the unwanted Win 10 options.
> 
> Here attachedyou can find the screen capture of the Exception.


Please download the correct one here:
https://github.com/Eliovp/amdmemorytweak 
and not the XL version, which has more problems (as you are finding).

After you download AmdMemoryTweak-master, unzip it wherever you want (mine is on the desktop, for example), go to the GUI/x64 folder and run the exe.
Please let me know if this solves your case.


----------



## Ne01 OnnA

Satanello said:


> Tnx for your hel but:
> Windows 10 x64 is fresh re-installed (2 days ago, I've only installed drivers, Firefox and a couple of games)
> I always start as admin.
> I have no antivirus installed (only Windows integrated security)
> I only used W10Privacy to remove all the unwanted Win 10 options.
> 
> Here attachedyou can find the screen capture of the Exception.


Yes, the new version requires you to have Test Mode on
(EV certificates for Windows seem to cost about $70/yr).
-> https://forums.guru3d.com/threads/amd-memory-tweak-read-modify-timings-on-the-fly.426435/page-10

Use the non-XL version.


----------



## wermad

Got them running. Waiting on the new case (719). Planning an RGB LED strip mod to make them look LE-lit.


----------



## Satanello

I got back late last night, but I downloaded and tried the non-XL version. It works perfectly; now I have to get to work optimizing the timings!
The latest version of the software is not always the "best one". 
Thank you. 

Sent from my LYA-L29 using Tapatalk


----------



## snipernote

Wuest3nFuchs said:


> We'll this is my first time for a AMD GPU with a waterblock installation.
> My first one was GTX 670 FTW in the past...so may i did something wrong but also used Igors instructions on the ominous hotspot mounting.
> 
> 
> 
> RX Vega56 Pulse Stockcooler teardown and Bykski A-XF56-NANO-X Block mounting
> https://imgur.com/a/1z79Sku
> 
> 
> First test with GTA V very hot temps up to the hotspot 108 ° Cel. After 15min.
> 
> maximum temperatures in GTA V:
> GPU 60 °
> HBM 53 °
> Hotspot 108 °
> Radiator: MoRa 420 Pro
> 
> 
> Checked in the reliability history today and does not look that good. Hardware error 141 I *get that on any driver installation*, but everything runs smoothly afterwards and was also like before the block installation.
> I received the HW Error 141 when I started Insurgency Sandstorm and was only in the menu.
> *So tha's not normal,never had that happen before.*
> *As i told it happens every time i install the AMD driver. Strange isn't it?*
> AMD driver 20.2.1 also seems to crash in my browser [BlackScreen], try the previous one.


Try the Enterprise/Pro drivers. I switched to the latest one, version 2020 Q1 (Vega 56 Red Dragon flashed to Nitro+ V64, also with a 108C hotspot, but I get no crashes because of a mild UV of only -75 mV; you might try that, plus a clean Windows install). They are a lot more stable. I'm still using ODNT to input my settings. The only problem I have so far: if I leave my game open and get distracted from the PC, it goes to sleep. After I turn it back on, the AMD performance overlay makes the game stutter after a minute or so, then it resumes normally. The solution in my case was to close the game and relaunch it, obviously.

Sent from my POCOPHONE F1 using Tapatalk


----------



## Wuest3nFuchs

snipernote said:


> Try the enterprise pro drivers ... For me i switched to the latest one version 2020q1 (vega 56 red dragon flashed to nitro+ v64 with 108c hotspot also but i have no crashes because of low UV only -75mv so you might need to try that also clean install windows) they are alot stable ... Still using ODNT to input my settings and only problem i have so far is when i leave my game open and i get distracted from pc so it goes to sleep ... I turned it back on but after a while amd performance overlay will make the game performance stuttery after a minute or so then resume normally ... Solution in my case was to close the game and relaunch it obviously
> 
> Sent from my POCOPHONE F1 using Tapatalk


Will keep that in mind, thanks for posting!

Sent from my SM-G950F using Tapatalk


----------



## Wuest3nFuchs

Wuest3nFuchs said:


> We'll this is my first time for a AMD GPU with a waterblock installation.
> My first one was GTX 670 FTW in the past...so may i did something wrong but also used Igors instructions on the ominous hotspot mounting.
> 
> 
> 
> RX Vega56 Pulse Stockcooler teardown and Bykski A-XF56-NANO-X Block mounting
> https://imgur.com/a/1z79Sku
> 
> 
> First test with GTA V very hot temps up to the hotspot 108 ° Cel. After 15min.
> 
> maximum temperatures in GTA V:
> GPU 60 °
> HBM 53 °
> Hotspot 108 °
> Radiator: MoRa 420 Pro
> 
> 
> Checked in the reliability history today and does not look that good. Hardware error 141 I *get that on any driver installation*, but everything runs smoothly afterwards and was also like before the block installation.
> I received the HW Error 141 when I started Insurgency Sandstorm and was only in the menu.
> *So tha's not normal,never had that happen before.*
> *As i told it happens every time i install the AMD driver. Strange isn't it?*
> AMD driver 20.2.1 also seems to crash in my browser [BlackScreen], try the previous one.


Finally I got the mounting fixed! 

44 °C on core and HBM 
VDDC 68 °C 
MVDD 68 °C
Hotspot 77 °C (all max temps) 

P7 1638 MHz @ 1056 mV, boosting to 1652 MHz


Sent from my SM-G950F using Tapatalk


----------



## snipernote

Wuest3nFuchs said:


> Finally i got the mounting fixed!
> 
> 44°Cel on core and hbm
> VDDC 68°
> MVDD 68°
> Hotspot 77° all max temps
> 
> P7 1638mhz 1056mVwhile boosting to 1652mhz
> 
> 
> Gesendet von meinem SM-G950F mit Tapatalk


What did you do? Remount? New TIM? Added washers? And how did you verify your settings? (Superposition is not enough; in my case I had to re-run the Metro Redux benchmarks and raise all the required voltages until it was stable.)

Sent from my POCOPHONE F1 using Tapatalk


----------



## Wuest3nFuchs

snipernote said:


> What did you do ? Remount ? New TIM ? Add washers ? How did you verify your settings (superposition is not enough in my case i had to re run metro redux benchmarks till and raised all the required voltage till it got stable) ?
> 
> Sent from my POCOPHONE F1 using Tapatalk



Oh well, it was an odyssey!


I don't have much time left today to translate what I've done, but I'll put it through Google Translate and hope the translation turns out well.



*Here's the full story :*

Hello!
Well, that's true about 8K; I did it via VSR myself. Only half a year ago I switched to a 1440p FreeSync monitor and I have to say, FreeSync is cool ... if it works. 

Thanks for your tips here too; I have to clean up my desk before I start the conversion today.
The Fury is installed in the meantime, so I can still read everything comfortably.
The smartphone, with its small screen, goes on the stand.

I'll take photos too, so see you later ... hopefully!

So finally I found something on dismantling the stock cooler

https://imgur.com/a/JlayijC
which makes my job a little easier.

Everything is prepared.
*MY own dismantling of Vega Pulse*
https://imgur.com/a/1z79Sku

*Here we go*: fit the back with pads, then it gets screwed together!
17:30: If it weren't for adjusting the washer every single time, I would have finished long ago ... boah, that part is wearing me out.
And my Pulse is molded.
21:45: Done, and the filling starts!
22:30: First test with GTA V: very hot temps, hotspot up to 108 °C after 15 min.
Remount again, I guess, but definitely not again today; I'm knocked out.

Maximum temperatures in GTA V:
GPU 60 °C
HBM 53 °C
Hotspot 108 °C
Radiator: MoRa 420 Pro

*Update 23.2, around 10:30*

Moin! So, I checked the reliability history today and it doesn't look that good. I only get Hardware Error 141 with a driver installation, but after that everything has always worked perfectly, and it was like this before the conversion too.
I received the HW Error 141 when I started Insurgency Sandstorm and was only in the menu.
AMD driver 20.2.1 also seems to crash in my browser [black screen]; I'll try the previous one.
*Update 23.2, about 3:15 p.m.*
In the meantime I have disassembled the card a second time and remounted the cooler; 104 °C was the absolute maximum.
The card is now stable and even survived a round of Insurgency.
The hotspot is now between 80-95 °C, but in the worst case, after 45 min of GTA V, up to 104 °C; same in Insurgency.
I will probably tighten the screws again and have another look; otherwise I'll have to use paste.

*Discussion in the thread*

*wuchzael said:*
Hello,

definitely put some thermal paste in between; the pads have much too low thermal conductivity!
I just saw your "edit".

Greetings


*ME answering:*

So I've disassembled it for the fourth time, and again nothing has changed. The pads are not as bad as I initially thought; looking at the VDDC and MVDDC, they stay very cool compared to the Pulse cooler.

*Babyface said:*
Yes, but I don't think it is more than 6 °C at the peak, is it? That would still be too much for this type of cooling. Or am I wrong here too? (Could be.)




*ME answering:*
I still have hope 

*wuchzael said:*
I think that with thermal paste instead of the pad, the GPU stays significantly below 50 °C, and the hotspot also comes down a bit.

At least that's how it is with my 64, although I have significantly less radiator area.

Greetings!




*ME answering:*
Yes, I'll put paste on it in the coming days, probably towards the weekend; I have enough of it here.
But I now think the retention bracket/backplate may be part of the problem, or that I assembled the block incorrectly following its instructions, which were unfortunately entirely in Chinese.
I already tried that, but the screws are too short.
I should add that I have since found instructions in English at the formulamodshop [AliExpress].

Otherwise the stock cooler goes back on and that's it; I've already replaced the pads.
Photos will be added ... but good things take time. After work I usually need rest, because something like this just isn't possible without a steady hand.

Does anyone else here have a Bykski block on a Vega?
It could also be that the washers don't belong; well, first paste, then we'll see, I hope.

Kruzitürken (as we say in Austria), once more into it with that HOTSPOT.

A tip from me: if you have a loop like mine with approx. 2.25 l of liquid in it, a hand pump [car accessories] will drain it in 15-20 minutes instead of an hour or longer.




Nice afternoon, everyone!

*Assumptions:*
1. The first mistake was using both the washer and the pad on the GPU!
At least with the washers something seems to have been making contact somewhere, since the GPU was stuck to the pad and showed pressure marks, even if only minimal ones!
2. Tilting, because the block top is POM [custom, from the Pro Liquid Cooling Store] and the recess for the GPU's fan-cable connector is not deep enough. At least it kept tilting; I can tell from the paste imprint whether there is contact or not, and sometimes there was only half contact.

On Tuesday and yesterday I tried another pad, 1.5 mm thick, then 0.5 mm, again and again until nothing was tilted; at least it seems so.

*Status Thursday:*
The hotspot was tamed once, but only with the stock cooler ... mounted using Igor's instructions [this time correctly, and not the Klaus Kevin method].

Before that I had the problem that only about half the block was contacting the GPU, still not 100%, so let's just leave it at that.
With the stock cooler I now have such fine temperatures that I may only consider installing the water block again when I have this card's successor.
After 8-10 disassemblies and drainings/refillings, it all just takes me too long.
Besides, I want to game again, and when I look at the temps with the original pads and paste: oh dear!
Partly over 90 °C hotspot; now it has been tamed, but only with the stock cooler ... pads renewed, Kryonaut applied, and mounted using Igor's instructions [this time correctly, and not the Klaus Kevin method].

But I learned a lot again; thanks to everyone for the tips and help!

*EDIT: 29.2*
On Friday evening the monkey bit me again; it just wouldn't leave me in peace that I hadn't managed to mount the block properly.
Pads off, paste off, wherever it was and wherever it wasn't.
Pads now directly on the VRMs etc., no longer on the block, and used Gelid Xtreme this time.

44 °C core and HBM
VDDC 68 °C
MVDD 68 °C
Hotspot 77 °C (all max temps)

P7 1638 MHz @ 1056 mV, highest boost 1652 MHz

Tested with GTA V

Today with Insurgency Sandstorm: hotspot after 45 min was 82 °C, but only briefly, so far so good!


Dismantling and draining the water 8-10 times gives you a sore back, but last Friday the ape bit me, as we say in Austria; I was so annoyed that I hadn't managed it that I tried one more time, and it worked!


Bykski :thumb:
Vega :thumb: molded, and Samsung HBM2


----------



## alceryes

Greetings programs!
Anyone else here have their Vega 64 set up with the Morpheus Vega VGA cooler? I'd like to compare notes. 

https://1drv.ms/u/s!AjLCgZ8P-HzihmpIPXukp9U7QiWN?e=ltq9bD


https://1drv.ms/u/s!AjLCgZ8P-HzihmvLer8IaNvmW6Oc?e=GCoy8f

Also see sig and UserBenchmark link.


----------



## generaleramon

Hi guys, I want to share some UV "results".
The card is a reference PowerColor Vega 64 using the latest 200 W low-power BIOS.
I managed to UV+OC P1+P2+P3 using SoftPowerTables.
P1= [email protected]
P2= [email protected]
P3= [email protected]

The rest of the P-states are [email protected], with 1025 MHz HBM2.
I'm using the Power Saving mode in Wattman; the card likes to stay around 1000-1300 MHz while playing maxed-out 1080p games, consuming 60-100 W, with no fan noise or heat. Amazing. I love it; this card is stupidly efficient when clocks and voltages are set right.

I'm starting to work on HBM2 timings; I've already gained around 2% performance.


----------



## Alastair

Double post, sorry.


----------



## Alastair

Well guys, I just installed my V64 LC to replace what I thought was a craptastic V64 that couldn't go beyond 1650 MHz even with the LC BIOS on it. Now I am sitting at around 1780 MHz applied and getting around 1750-1770 MHz sustained at a wonderful 36C core and 52C hotspot. But I want that 1800 MHz. I am looking more into PP table mods; I tried modding the PP table to get more than 1250 mV, but no luck so far. Is there a concrete way to get more voltage out of Vega?


----------



## wermad

Running stock


----------



## Alastair

I'm only hitting 320 W core power at ~1750 MHz, but my power slider is unlocked to 150%. I feel something is holding me back.


----------



## LicSqualo

I'm happier undervolting than chasing extreme voltage; Vega is very efficient when undervolted.


----------



## Alastair

LicSqualo said:


> I'm more happy with undervolt that searching the extreme voltage, Vega is very efficient when undervolted.


I am not interested in efficiency. I have water blocks and radiators for days; I want all the performance I can muster. It's the whole reason I have a water-cooled system: so that I can chase every last MHz. If I wanted efficiency I wouldn't be on OCN in the first place. This is OCN; overclocking is the name of the game.


----------



## LicSqualo

Alastair said:


> I am not interested in efficiency. I have water blocks and radiators for days. I want all the performance I can muster. Its the whole reason I have a water cooled system. So that I can search for every last MHZ. If I wanted efficiency I wouldn't be on OCN in the first place. This is OCN. Overclocking is the name of the game.


You're right! 
But if you look at my settings, they are not stock (never used stock). Overclocked and undervolted is the key to getting the best out of Vega, for me and my system, obviously.
My 1700 (as a CPU example) currently runs at 4065 MHz; my lowest speed is 4040 MHz during summer.
For Vega the overclocking philosophy is quite different: you can squeeze out more by undervolting than by overvolting.
But this is my opinion, and perhaps it only holds for my system.


----------



## Alastair

LicSqualo said:


> You right!
> But if you see my settings they are not stock (never used). Overclocked and undervolted for Vega is the key to have the best, for me and my system, obviously.
> My 1700 (as CPU example) run at 4065MHz actually. My lower speed is 4040MHz during summer.
> For Vega the overclock phylosophy is quite different. You can squeeze more undervolting than overvolting.
> But this is my opinion and perhaps only for my system.


Vega undervolting gets you more clocks only up to a point, since all it does is free up power-limit headroom. I have modded PowerPlay tables for 150% power, so undervolting in my case would just lose me clocks.


----------



## nolive721

Alastair said:


> Im only hitting 320w core power at ~1750mhz. But my power slider is unlocked to 150%. I feel there is something holding me back.


Hello,

I am also running a 64 LC. Can you share your PP table settings as well as which drivers you are running? I am really curious to push the card too; my UV has been very good, but now I want to see how much the beast can deliver with more volts and watts available.

I am impressed if you can sustain 1750 MHz in gaming, for example, so can you also share the conditions under which the card runs this frequency? A game name, resolution etc. I could use as a reference would be great, as well as maybe a Fire Strike score. Also, which monitoring tool are you using to observe the card's performance? AB and Wattman give me different readings, it seems.

I am in the 26500-ish range for graphics score on Fire Strike with mine, with a base OC on core and memory in Wattman plus pushing the memory with Eliovp's Memory Tweak, so not bad, but I am greedy.

I had seen promising core-frequency stability and levels with the first 2020 Adrenalin drivers I installed in January, but now any version I install makes the card and my PC hard-crash, BSOD and all that, so I am back on Adrenalin 2019, unfortunately. 

Thanks so much,

Olivier


----------



## ducegt

Alastair said:


> Vega undervolting gets you more clocks only to a point as you are freeing up the power limits. I have modded power play tables for 150% power. So undervolting in my case would just loose me clocks.


The PowerPlay table doesn't add anything past 50%, contrary to what many misinformed people believe. The only way to pull more power is a physical modification. 

The best way to get more performance is to redo the stock thermal paste.


----------



## Alastair

ducegt said:


> The power play table doesn't add anything past 50% contrary to what many misinformed people believe. Only way to pull more power is a physical modification.
> 
> Best way to get more performance is redo the stock thermal paste.


Oh no, the PowerPlay table mod definitely gets you more than 50%. I saw it first-hand with my previous V64, prior to my current V64 LC: 220 W x 1.5 is 330 W, and with my previous card I could hit north of 375 W. However, power isn't the problem, because even the LC limit at +50% is 420 W, and currently, even at 1780 MHz, I'm only seeing around 320 W in Heaven bench. I need more voltage.
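
The slider arithmetic in this back-and-forth is easy to sanity-check. A tiny sketch (the 220 W air and 264 W LC base core-power limits are the figures quoted in this thread; whether a given BIOS uses exactly those bases is my assumption):

```python
# Hedged sketch: how the Wattman power slider scales the BIOS base
# core-power limit. Base limits below are the values quoted in-thread.
def effective_limit_w(base_limit_w: float, slider_pct: float) -> float:
    """Effective core power limit for a given slider percentage."""
    return base_limit_w * (1 + slider_pct / 100)

print(effective_limit_w(220, 50))   # 330.0 W: the "220w x 1.5" above
print(effective_limit_w(264, 50))   # 396.0 W: the LC base at the same +50%
```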


----------



## 113802

Alastair said:


> Oh no the power play table mod definitely gets you more than 50%. I have seen it first hand with my previous v64 prior to my current v64LC. As 220w x 1.5 is 330 watt and with my previous card I could hit north of 375w. However power isn't the problem. Because even the LC voltage at 50% is 420w. And currently even at 1780MHz I am only seeing around 320w in Heaven Bench. I need more voltage.


The issue with Vega 10 is the horrible boost; AMD would need to release a new BIOS to fix it. That won't happen, and Vega users were screwed. This is what it looks like when you have proper boost:

https://www.3dmark.com/fs/18270986


----------



## Alastair

WannaBeOCer said:


> Issue with Vega 10 is the horrible boost, AMD needs to release a new bios to fix Vega 10. That won't happen and Vega users were screwed. This is what it looks like when you have proper boost.
> 
> https://www.3dmark.com/fs/18270986
> 
> https://www.youtube.com/watch?v=GXDced_nNPw


What do you mean by horrible boost?


----------



## snipernote

wermad said:


> Running stock


What kind of performance boost do you get from two Vega cards running in CrossFire? ... Your capacity to heat up your room is crazy, lol.

Sent from my POCOPHONE F1 using Tapatalk


----------



## Alastair

I just want more. I really just want 1800 MHz. Maybe I'm missing something in my search. I have done the PowerPlay tables mod for 150% power; not that it seems I need it yet, because even at 1750-1770 MHz sustained with the 1200 mV the LC BIOS applies, I don't seem to get anywhere above 320 W while running Heaven bench or other loads. And I have masses of thermal headroom available, because once my loop has stabilized I'm only hitting around 35C core, 45C HBM and 55C hotspot. So I don't know if I am missing something, but I feel this card has so much more to give if I can just give it what it needs.


----------



## PopReference

I think you can overvolt via the PP table; I haven't fully tested it yet, so I can't be sure.

The full break down was in this post: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios-26.html#post_26297003


----------



## Alastair

PopReference said:


> I think you can overvolt in the PP table, I haven't fully tested it yet so can't be sure.
> 
> The full break down was in this post: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios-26.html#post_26297003


I am busy reading that thread, but I am not entirely sure I am changing the correct values. I tried putting in the 1350 mV hex value for P7 and P6 (I know I need to reverse the hex bytes), but it doesn't seem to have done much; the 150% power limit did change. I also tried changing socket power, battery power, small power and the TDC limits, all to 500; not sure if they took or not, though. I just wish this was a bit easier. I am ASSUMING that the PP table voltage may apply but will immediately get overridden back to the stock value if I change, say, my core clock in Wattman/Afterburner.
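
The "reverse the hex values" detail comes down to byte order: a quick sketch, assuming the PP table stores these fields as 16-bit little-endian integers (which matches examples like F2,00 for 242%):

```python
# Hedged illustration of the little-endian encoding behind "reversing"
# the hex when editing PowerPlay table fields by hand.
import struct

def to_pp_hex(value: int) -> str:
    """Encode a decimal value as the little-endian hex bytes seen in the table."""
    return struct.pack("<H", value).hex().upper()

print(to_pp_hex(1350))  # 1350 mV -> 0x0546 -> bytes "46 05"
print(to_pp_hex(242))   # 242 %   -> 0x00F2 -> bytes "F2 00"
```

In other words, the low byte comes first in the file, which is why 242% appears as F2,00 rather than 00,F2.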


----------



## Alastair

I HAVE DONE IT. I think. I'm pretty sure I have done it. In combination with the Soft PowerPlay Table editor and WattTool, I think I have actually overvolted my Vega, and I have confirmed the power draw increases with my Kill-a-Watt!

I tested 1775 MHz applied (1740-1770-ish sustained) at the 1250 mV LC BIOS setting for 320 W core power. The Kill-a-Watt gives me 600 W at the wall during Heaven bench.

Now I set soft PP settings using the SPP editor and use WattTool to change settings like you would in Afterburner, and it seems to be working. 

I quickly punched 1300 mV into WattTool and it gave me ±1250 mV applied and reported ±375 W core power. The Kill-a-Watt reported 655-660 W. 

Question is: will it scale?
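
Those wall readings can be cross-checked against the reported core power. A rough sketch (the 90% PSU efficiency is my assumption for illustration; "core power" is only the GPU core rail, so the remainder covers CPU, HBM, VRM losses, fans and pump):

```python
# Hedged sanity check: does the DC-side increase implied at the wall
# roughly track the increase in reported GPU core power?
def wall_delta_w(core_before_w, wall_before_w, core_after_w, wall_after_w, psu_eff=0.90):
    """Return (core power increase, estimated DC-side increase from the wall)."""
    dc_increase = (wall_after_w - wall_before_w) * psu_eff
    core_increase = core_after_w - core_before_w
    return core_increase, round(dc_increase, 1)

print(wall_delta_w(320, 600, 375, 655))  # (55, 49.5): the two figures move together
```

The ~55 W core increase and the ~50 W DC-side increase agreeing this closely is a decent sign the overvolt actually applied.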


----------



## PopReference

Alastair said:


> I am busy reading that thread. but i am not entrely sure I am changing the correct values. I tried putting in 1350mv hex value for P7 and P6 (I know I need to reverse the hex values) but it has not seemed to do much. the 150% power limit did change. And I also tried changing socet power, battery power, small power and the TDC limits all to 500. Not sure if they took or not though. I just wish this was a bit easier. I am ASSUMING that applying the PP tables voltage may apply but will immediately get overridden back to stock value if I try changing say my core clock in Wattman/Afterburner.


Just did a quick test: it works, but you need at least a 240% power limit or you'll still be held back.

Put in F2,00 (242%) for your power limit.


----------



## Najenda

Dear forum users, I need help. I am going to buy a used Vega 64 LC, but I couldn't find any tests for this bad boy. Would someone here run a stock FurMark test with the latest drivers?


----------



## Ne01 OnnA

Najenda said:


> Dear forum users, i need help. I will buy a used vega 64 LC , but i couldnt find any test for this bad guy. Would someone here do a stock furmark test with latest drivers ?


Vega XTX LC at 1440p is almost on par with the 5700 XT.

From here -> https://gamegpu.com/test-video-cards/podvedenie-itogov-po-igrovym-resheniyam-2019-goda


----------



## LicSqualo

YES!


----------



## Alastair

PopReference said:


> Just did a quick test, it works but you need at least 240% power limit or you'll still be held back.
> 
> Put in F2,00, (242%) for you power limit.


I doubt I'll be held back. I've upped all the previous LC BIOS limits from 264 W and 300 A to 600 across the board, and I have the 150% power-limit mod applied as well. Even with 1.3 V set with 1800 MHz in WattTool, I'm only seeing 410 W core power, which is only 55% above the stock 264 W power limit. I highly doubt I'd need a 200+% power limit unless I started gunning for some sub-zero runs.


----------



## Alastair

Najenda said:


> Dear forum users, i need help. I will buy a used vega 64 LC , but i couldnt find any test for this bad guy. Would someone here do a stock furmark test with latest drivers ?


Why does anyone still want to use FurMark? It hasn't been relevant in a decade.


----------



## Alastair

nolive721 said:


> hello
> 
> I am also running a 64LC, can you share your PP table settings as well as which drivers you are running? because I am really curious to push the card as well, my UV has been very good but now I want to see how much the beast can deliver with more V and W available.
> 
> I am impressed if you can sustain 1750Mhz in Gaming for example so can you also share which conditions the card can run this frequency? A game name, resolution etc I could use as reference would be great as well as maybe a FS score.But also which monitoring tool you are using to observe your card performance because AB and Wattman give me different reading, it seems.
> 
> I am in the 26500ish range graphics score with mine on FS and thats with base OC core and memory in Wattman as well as pushing the memory with the Memory Tweak by Eliovp so not bad but I am greedy????
> 
> I had seen promising Core Frequency stability and levels results with the 1st 2020 Adrenalin drivers I installed in January but now any version I install gets the card and my PC to hard crash, BSOD and all that so I am back to Adrenalin 2019 unfortunately
> 
> thanks so much
> 
> Olivier


I can get 1740-1770 sustained easily, with no effort, by just entering +50% and 1775 for P7 in Wattman. No voodoo magic required. It sustains this frequency through my standard Heaven bench test loop; I haven't tested much else yet. 
AB and Wattman give me the same readings; Wattman is just slow, with something like a 4-5 second report rate, whereas in AB I use a 500 ms reporting rate.


----------



## LicSqualo

Alastair said:


> I can get 1740-1770 sustained easily with no effort required by just inputting +50% and 1775p7 in Wattman. No voodoo magic required.it sustains this frequency using my standard heaven bench test loop. Haven't tested much else as of yet.
> AB+Wattman give me the same readings. Just Wattman is slow. It has like a 4-5 second report rate where AB I'm using a 500ms reporting rate.


Yes, thank you! Setting 1775 at 1180 mV gives me a good result.

https://www.3dmark.com/3dm/44803037?


----------



## nolive721

Alastair said:


> I can get 1740-1770 sustained easily with no effort required by just inputting +50% and 1775p7 in Wattman. No voodoo magic required.it sustains this frequency using my standard heaven bench test loop. Haven't tested much else as of yet.
> AB+Wattman give me the same readings. Just Wattman is slow. It has like a 4-5 second report rate where AB I'm using a 500ms reporting rate.


Thanks.

I use Heaven as well, as my go-to benchmark for OCing, so that's a good reference indeed.

Which driver version are you on?


----------



## Alastair

nolive721 said:


> Alastair said:
> 
> 
> 
> I can get 1740-1770 sustained easily with no effort required by just inputting +50% and 1775p7 in Wattman. No voodoo magic required.it sustains this frequency using my standard heaven bench test loop. Haven't tested much else as of yet.
> AB+Wattman give me the same readings. Just Wattman is slow. It has like a 4-5 second report rate where AB I'm using a 500ms reporting rate.
> 
> 
> 
> thanks
> 
> I am using Heaven as well as my Go to benchmark test for OCing so thats a good reference indeed
> 
> Your drivers version?

Whatever the current WHQL driver is.

I just need to get back to my PC so I can get back to breaking 1800 MHz with unlocked core voltages!

MOAR POWA!


----------



## PopReference

Alastair said:


> I doubt I'll be held back. I've upped all the previous LC bios limits from 264watts and 300amps to 600 across the board. And I have 150% power limit mod applied as well. Even with 1.3V set with 1800MHz in watt tool I'm only seeing 410w core power which is only
> 55% above the stock 264 watt power limit. I highly doubt I would need a 200+% power limit unless I started gunning for some sub zero runs.


I meant that the boost software will not apply the correct voltage to the core properly; for me it wouldn't apply more than 1200mV until I pushed the PL% past 200. I suggest you check the GPU core voltage in HWMonitor and/or the VDDC in GPU-Z to make sure it's actually pushing more than 1200mV. Also, Wattman in the Radeon driver won't accept numbers above 1200mV, so if you want to modify the other P-state voltages you should use a different OC program.

MORE POWER! You can also try raising the TDC limit to 600, since that's what's in a 242% Vega LC PPTable that I found.


----------



## 113802

Alastair said:


> What do you mean by horrible boost?


We already had a discussion about this months ago.



LicSqualo said:


> Yes, thank you! The set 1775 at 1180 mV give me a good result.
> 
> https://www.3dmark.com/3dm/44803037?


Not trying to be rude, but that's not performing anywhere near a card that runs P7 at 1750MHz.

A Vega 64 that has P7 set to 1750MHz with a UV generally scores above 28000 for the GPU score.


----------



## LicSqualo

WannaBeOCer said:


> Not trying to be rude but that's not performing anywhere near a card that runs at P7 at 1750Mhz.
> 
> A Vega 64 that has P7 set to 1750Mhz with a UV generally scores above 28000 for the GPU score.


My max score was around 27000 (with HBM at 1150, which is only benchable); I've never seen/reached 28000.

I've seen very few here (one user) with that result, and he had done some BIOS changes with the Vega FE BIOS (if I remember correctly).

Can you point me in the right direction?

Thank you in advance.


----------



## 113802

LicSqualo said:


> My max score was around 27000 (with HBM at 1150 only benchable), never seen/reached 28000.
> 
> I see here really few time (one user) with that result. But he have done some bios change with Vega FE bios (if I remember correctly).
> 
> Can you point me in the right direction?
> 
> Thank you in advance.


You're referring to me reaching 29000+ with an FE BIOS bug, which ran the GPU at a sustained P7 frequency. https://www.3dmark.com/fs/18270986

Just looked at my scores, and I was getting 27500+ at 1750/1105MHz @ 1.2V. Maybe your memory at 1150MHz isn't stable and is causing a lower score?

The GPU ran between 1700-1730MHz during these runs:

https://www.3dmark.com/fs/18269836
https://www.3dmark.com/fs/17528640

At 1750/1140MHz I was getting a GPU score of 28000 consistently.

https://www.3dmark.com/fs/17445521
https://www.3dmark.com/fs/17370977
https://www.3dmark.com/fs/18269189
https://www.3dmark.com/fs/18251565
https://www.3dmark.com/fs/17528673


----------



## LicSqualo

WannaBeOCer said:


> You're referring to me reaching 29000+ with a FE bios bug. Which ran the GPU at a sustained P7 frequency. https://www.3dmark.com/fs/18270986
> 
> Just looked at my scores and I was receiving 27500+ with 1750/1105Mhz @ 1.2v. Maybe your memory at 1150Mhz isn't stable and causing a lower score?
> 
> GPU ran between 1700-1730Mhz during these runs
> 
> https://www.3dmark.com/fs/18269836
> https://www.3dmark.com/fs/17528640
> 
> At 1750/1140Mhz I was getting a GPU score of 28000 consistently.
> 
> https://www.3dmark.com/fs/17445521
> https://www.3dmark.com/fs/17370977
> https://www.3dmark.com/fs/18269189
> https://www.3dmark.com/fs/18251565
> https://www.3dmark.com/fs/17528673


Yes. I will try raising the voltage for both GPU and HBM, but your Intel CPU probably gives you a little push.
Were those runs with the FE BIOS or the stock Vega LC BIOS?
Did you also mod the PowerPlay table to get those results?

In any case I don't want to bother you, and I'm quite happy with my system.
Obviously, if I can squeeze more juice from my card, so much the better.

Thank you for your answer. Much appreciated.
Lic


----------



## Alastair

WannaBeOCer said:


> We already had a discussion about this months ago.
> 
> 
> 
> Not trying to be rude but that's not performing anywhere near a card that runs at P7 at 1750Mhz.
> 
> A Vega 64 that has P7 set to 1750Mhz with a UV generally scores above 28000 for the GPU score.


I wasn't here months ago. I was here for a bit, then stopped coming around when I decided my last Vega wasn't worth the effort I was putting into it.


----------



## 113802

LicSqualo said:


> Yes. I will try raising voltage for both GPU and HBM, but probably your Intel CPU have give you a little push.
> They are with FE bios or stock Vega LC bios?
> Have you modded also the PowerTable to take these results?
> 
> In any case I don't want to bother you, and I'm quite happy with my system.
> Obviously if I can squeeze more juice from my VGA is better.
> 
> Thank you for your answer. Much appreciated.
> Lic


That was using the stock LC bios without powerplay mods.


----------



## Ne01 OnnA

2020 19.12.3 vs 12.2 & 11.3

-> https://www.3dmark.com/compare/fs/21271492/fs/21130013/fs/20988662

Fastest driver to date! Best IPC to date goes to 19.12.3.
Second place goes to 20.1.4 & 20.2.2.
Almost 28k, though it's not a constant 1750MHz; it's a fair bit less (IMO 1700-1720MHz).

=== Always using same settings===


----------



## LicSqualo

THANK YOU Onna! I just tried your timings and bam! 27027 FS points.

But I used a 50% TDP limit; here is the result: https://www.3dmark.com/3dm/44835217?

This is with 1%, as per your settings: https://www.3dmark.com/3dm/44835111?

I can copy your settings exactly, but my HBM doesn't want to be stable over 1080MHz. I've tried TW3 a lot of times, and only at 1080 do I get no video memory artifacts.
But it's a good improvement for sure.
Much appreciated for your help/settings.
I will try to lower my voltages a bit.
THX!


----------



## LicSqualo

The new driver (20.3.1) gives me a better result.


----------



## Alastair

Guys, does adding volts to the HBM do anything for clocks? I'm at 1195 HBM, and damn, that extra 5MHz sounds tasty.


----------



## Alastair

Also, can anyone give me a bit of help? I'm using the soft PowerPlay tables editor, and when I set 119000 for 1190MHz HBM, it locks me either to P2 at 800MHz HBM or P1 at 500MHz HBM.


----------



## Worldwin

HBM voltage is BIOS locked. The V64 BIOS has it at 1.35V, the V56 at 1.2V.


----------



## Alastair

Worldwin said:


> HBM voltage is BIOS locked. V64 bios has it at 1.35v V56 @1.2V


The core voltage was also supposed to be BIOS locked at the 1200 or 1250mV upper limit, but one can get around that by editing the PowerPlay tables. I ASSUME it would be the same for the HBM voltage.


----------



## Worldwin

Alastair said:


> The core voltage was also supposed to be bios locked at the 1200 or 1250mv upper limit. But one can get around that by editing the power play tables. I ASSUME it would be the same for the HBM voltage.


Nope. HBM voltage is hard-coded. Core voltage was never BIOS locked: Wattman was introduced in 16.6.2, and Vega was released a year later in August 2017. It was possible to adjust the voltage in Wattman, albeit very unstable, based on comments from Vega's launch. The 1.2V and 1.25V was simply the voltage difference between the V64 Liquid BIOS and everything else.


----------



## Minotaurtoo

Just a PSA: if you are an active Folding@Home member, they are currently researching a cure for Covid-19.

You may have to set your GPU slot to "advanced" to do this, though; I had to on Navi, but Vega may work by default.


----------



## Alastair

Worldwin said:


> Nope. HBM voltage is hard coded. Core voltage was never bios locked. Wattman was introduced in 16.6.2. Vega was released a year later in August 2017. It was possible to adjust the voltage in wattman albeit it was very unstable based off comments from Vega's launch. The 1.2V and 1.25V was simply the voltage difference in V64 Liquid bios vs everything else.


I did not know this. The difficulty I have now is getting the volts to hold above the 1.2 or 1.25V upper limit with the soft PowerPlay editor. I also have an issue applying my HBM OC in the SPP editor now, as it's locking my HBM to 500MHz.


----------



## Worldwin

Alastair said:


> I did not know this. The difficulty I have now is just getting the volts to hold above the 1.2 or 1.25v upper limit with soft power play editor. I have an issue with applying my HBM OC in SPP editor now. As its locking my HBM to 500MHz.


Restart the driver?


----------



## Wuest3nFuchs

Alastair said:


> I did not know this. The difficulty I have now is just getting the volts to hold above the 1.2 or 1.25v upper limit with soft power play editor. I have an issue with applying my HBM OC in SPP editor now. As its locking my HBM to 500MHz.



Hi Alastair,

if I clock and set voltages on my Vega this way, I get the same issue.

I redid my settings and that fixed it.


----------



## Alastair

I now have plenty of time to mess around with my Vega. South Africa has been locked down thanks to COVID.


----------



## Wuest3nFuchs

Covid-19 is one of my worst nightmare scenarios!

And I can only imagine what it means for younger people like my daughter. Horrible times.

She hates it when I have to go to work... but I need to, since we produce disinfectant these days.
And they call us key workers... I hate such labels.

Good night and stay healthy!

Sent from my SM-G950F using Tapatalk


----------



## Ne01 OnnA

https://youtu.be/9g_syedkCJg

----------



## Ne01 OnnA

Old drivers  
1700MHz Vega is a beast.


----------



## Alastair

Ne01 OnnA said:


> https://youtu.be/9g_syedkCJg


Yeah, if only I could reach the mythical 1800MHz.


----------



## Ne01 OnnA

TBF, we don't need that kind of speed on a Vega 64.
1630-1700 is more than enough.

1700+MHz is ~27.8k FS (on my setup).
It depends on monitor refresh, though; mine is 71Hz, so I don't need more.


----------



## Alastair

Ne01 OnnA said:


> TBF, we don't need that kind of speed for Vega 64.
> 1630-1700 is more than enough.
> 
> 1700+MHz is ~27.8k FS (on my setup).
> Depends on monitor refresh tho, i have 71Hz  so i don't need more.


I don't need more. I want more. Sigh.

And although I am seeing a corresponding increase in power consumption and heat from applying more voltage via soft PP mods, I'm not seeing any clock scaling. I've pushed to 1300mV, but I can't seem to get it over the barrier at around 1775MHz.


----------



## Alastair

I'm currently at the following settings: 1275mV set in the PP table, which is 1225mV under load. I'm applying 1780 and getting around 1730-1755MHz. HBM is at 1190. I managed around 102fps in Heaven 1080p Extreme as a quick and dirty bench while doing stress tests (up from 93.3; the biggest gain was HBM going from 945 to 1190, which gave me 101fps).

Anything higher than the 1730-1755MHz core clock, regardless of voltage, gives me driver crashes. It's using 350-ish watts of core power. The unlocked power limit is of no benefit, as I can't even HIT the stock limits. What am I missing?


----------



## BlueKnight83

Good evening,

I have a problem with the Soft Power Table Mod and Adrenalin 2020.

I have an Asus ROG Strix Vega 56 Gaming that comes by default with a max power consumption of 260W. I tried to increase the max power target by 50%, but every time the GPU gets near 300W, the core frequency begins to downclock.
I read that the Vega 56 is "locked" at 300W in the BIOS, so I need to use the Soft Power Table Mod to raise that limit.

I modified the Windows Registry, but I found that since Adrenalin 2020 the value is no longer "PP_PhmSoftPowerPlayTable" but a new one called "PP_PhmSoftWTTable"; so when I modify the registry, my change doesn't take effect.

Thank you for your help.
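For anyone searching later: the classic soft PowerPlay mod writes a binary blob under the display adapter's class key in the registry. A minimal .reg sketch of the pre-Adrenalin-2020 layout; the class GUID is the standard display-adapter one, but the `0000` adapter index and the table bytes are placeholders that vary per system and per card, so export and modify your own table rather than copying anything here:

```
Windows Registry Editor Version 5.00

; Find the subkey (0000, 0001, ...) whose DriverDesc matches your Vega.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
; Placeholder: paste the hex bytes of your own modified PowerPlay table here.
"PP_PhmSoftPowerPlayTable"=hex:...
```

A driver restart (or reboot) is needed before the table is picked up, and as the post above reports, Adrenalin 2020 may read a differently named value, in which case this layout no longer applies.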


----------



## BlueKnight83

Alastair said:


> I'm currently at the following settings. I've got 1275mv set in PP table. Underload that is 1225mv. I'm applying 1780 and getting around 1755-1730mhz. HBM is 1190. I managed around 102fps in heaven 1080p extreme as a quick and dirty bench while doing stress tests. (up from 93.3. biggest gain was HBM going 945 to 1190 gave me 101fps)
> 
> Anything high than the 1730-1755MHz core clock regardless of voltage gives me driver crashes. Its using 350ish watts core power. The unlocked power limit is of no benifit as I can't HIT the stock limits. What am I missing.


You have found the hardware limit of your GPU. Raising the voltage doesn't mean you can achieve unlimited clock increases.
Gamers Nexus gave it a +242% power limit and stopped at 1710MHz... if you are stable at 1730MHz, you are already very lucky!


----------



## Minotaurtoo

BlueKnight83 said:


> You have found the hardware limit of your GPU. Rising the voltage does't mean that you can achieve infinite clock upgrade.
> Gamers Nexus give +242% Power Limit and sto at 1710 Mhz..if you are stable al 1730 Mhz, you are already very lucky!


I agree... mine topped out just over 1700 as well, but would hit that at only 1.1V... a strange thing, Vega is.
@Alastair you likely will not be able to get any more... when overclocking mine, it would occasionally overshoot my set clock and crash instantly... but up to that point it was fine... it didn't matter how much voltage I gave it, what my power limit was, or even whether it was fully loaded... much over 1700 and it would crash.


----------



## snipernote

BlueKnight83 said:


> Good evening,
> 
> 
> 
> I have a problem whit the Soft Power Table Mod and the Adrenalin 2020.
> 
> 
> 
> I have a Asus ROG Strix Vega 56 Gaming that come to default whit a max power comsuption of 260W. I try to increase the max power target to 50%, but everytime the GPU is near the 300W, the frequency of the core begin to downclock.
> 
> I read that Vega 56 are "locked" at 300W in BIOS, so i need to use the Soft Power Table Mod to increase that limit.
> 
> 
> 
> I modify the Windows Registry, but I found that after Adrenaline 2020, there is not the voice "PP_PhmSoftPowerPlayTable" but a new call "PP_PhmSoftWTTable"; so if I modify the registry, my change doesn't take effect.
> 
> 
> 
> Thank you for your help.


Probably your hotspot temp is over 100°C and not under control; this will decrease core speed until it finds an equilibrium. My GPU reaches 308W for less than a minute (PowerColor Red Dragon V56 flashed to the 64 BIOS), then goes back to 235-245W once the hotspot temp reaches 103-108°C. If you are still on the Vega 56 BIOS, you might get more benefit from the Vega 64 BIOS and a power limit increase, but it will surely make your card run hotter. Keep your card's temp near 60°C as much as you can.

Sent from my POCOPHONE F1 using Tapatalk


----------



## snipernote

Minotaurtoo said:


> Just a PSA, if you are a Folding @ Home active member they are currently researching a cure for Covid-19
> 
> 
> 
> you may have to set your gpu slot as advanced to do this though, I did on Navi, but Vega may work by default.


Yesterday I got this as well, but after a while the GPU work unit stopped working. I opened the advanced settings log and saw that the work stopped due to a math error. The GPU and driver did not crash, but I think Folding@Home is very sensitive to GPU OC. Vega does not perform well at stock, so I don't know how to fix this. My current settings are very conservative (-75mV undervolt with +50% power limit and a 1075MHz HBM OC); they haven't crashed on me in the last 6 months, so I am staying with them, especially for heavy games.

I am currently monitoring Folding@Home while working on the PC at medium power. I might have to lower the power limit to increase sustained usage and decrease core speed (the current settings fold at 180W at 1615-1627MHz core and 1075MHz HBM).

Sent from my POCOPHONE F1 using Tapatalk


----------



## Alastair

Minotaurtoo said:


> I agree... mine was just over 1700 as well, but would hit it at only 1.1v... strange thing vega is.
> 
> @*Alastair* you likely will not be able to get anymore... I had problems with overclocking mine that occasionally it would overshoot my set clock and crash instantly... but up to that point it was fine... didn't matter how much voltage or what my power limit or even if it was fully loaded... much over 1700 and crash.


I figured that with the LC editions being binned by default, 1800MHz would be a sure-fire thing.


----------



## Alastair

Also, which software is more accurate at reporting clock speed? MSI Afterburner and the driver report lower clocks than HWiNFO and GPU-Z.


----------



## Alastair

Did some testing. Each test was a single benchmark run of Heaven (26 scenes).
Sustained clocks during Heaven Bench 4.0, 1920x1080 windowed, Ultra quality, Extreme tessellation, 8x AA.
Voltages are the averages reported by HWinfo.
Core current in amps is the average reported by HWinfo.
HBM locked at 1190MHz.

Voltages applied using WattTool and soft PowerPlay tables.

1755MHz @ 1186mv = 1712MHz
1760MHz @ 1186mv = 1716MHz, 125 amps, 102fps, 320w
1765MHz @ 1186mv = crash
1765MHz @ 1200mv = 1715MHz
1770MHz @ 1200mv = crash
1775MHz @ 1225mv = 1719MHz
1780MHz @ 1225mv = 1725MHz, 135 amps, 102.5fps, 340w
1785MHz @ 1225mv = crash
1785MHz @ 1242mv = 1719MHz
1785MHz @ 1250mv = 1718MHz
1790MHz @ 1250mv = crash
1790MHz @ 1280mv = 1707MHz, 145 amps, 400w
1795MHz @ 1280mv = crash
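As a rough sanity check on the numbers above: dynamic power in CMOS logic scales approximately with V²·f, and the two fully logged rows fit that model fairly well. A quick sketch; the formula is the generic dynamic-power approximation, not anything Vega-specific, and it ignores leakage, which grows with voltage and temperature:

```python
# Generic CMOS dynamic power approximation: P is roughly proportional to V^2 * f.
# Static/leakage power is ignored, so this is only a ballpark estimate.
def predicted_power(p1_watts, v1_mv, f1_mhz, v2_mv, f2_mhz):
    """Scale a measured power figure to a new voltage/frequency point."""
    return p1_watts * (v2_mv / v1_mv) ** 2 * (f2_mhz / f1_mhz)

# From the table: 1186mV sustaining 1716MHz drew ~320W.
# Predict the 1225mV / 1725MHz point, which measured ~340W:
estimate = predicted_power(320, 1186, 1716, 1225, 1725)
print(round(estimate))  # 343, within a few watts of the measured 340W
```

The 1280mV row draws more than this model predicts (~371W predicted vs 400W measured), which is consistent with leakage rising at higher voltage; the extra millivolts mostly buy heat rather than clocks.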


----------



## BlueKnight83

snipernote said:


> Probably your hotspot temp is over 100c and not under control .. this will decrease core speed until it finds equilibrium ... I have my gpu reaching 308w for less than a minute ( power color red dragon V56 flashed to 64 bios ) then it goes back to 235-245w due to hotspot temp reaching 103-108c ... If you are still on vega 56 gpu bios you might get more benefit with vega 64 bios and power limit increase but it will surely make your card run hotter ... Control your card temp near 60c as much as you can
> 
> Sent from my POCOPHONE F1 using Tapatalk



Thank you for your reply.

No, all my temperatures are under 70°C. There is something I don't understand; it's as if, past 300W, there is a block on the GPU.

This is my configuration:

P4 1675 1150mV
P5 1675 1150mV
P6 1675 1150mV
P7 1675 1150mV

I use 3DMark and Unigine Superposition to check whether my setup is stable.
With 3DMark and Superposition 1080p Extreme, my GPU stays under 300W (275W) and the core runs at 1660/1665MHz.
If I start Superposition 4K or 8K, the GPU goes near 300W and the frequency stays near 1600MHz.

If I start OCCT, the GPU goes near 360W and the core frequency drops to 1500MHz.

I don't understand this behavior...


----------



## BlueKnight83

Alastair said:


> Did some testing. Each test was a single benchmark run of Heaven. 26scenes.
> Sustained clocks during Heaven Bench 4.0. 1920x1080 windowed. Ultra quality, Extreme Tessellation, AA 8x.
> Voltages are reported averages by HWinfo
> Core current in amps average reported BY HW info
> HBM locked at 1190MHz
> 
> Voltages applied using Wattool and Soft Power Play Tables.
> 
> 1755MHz @ 1186mv = 1712mhz
> 1760MHz @ 1186mv = 1716mhz 125amps 102fps 320w
> 1765MHz @ 1186mv = crash
> 1765MHz @ 1200mv = 1715MHz
> 1770MHz @ 1200mv = crash
> 1775MHz @ 1225mv = 1719MHz
> 1780MHz @ 1225mv = 1725MHz 135amp102.5fps 340w
> 1785MHz @ 1225mv = crash
> 1785MHz @ 1242mv = 1719MHz
> 1785MHz @ 1250mv = 1718MHz
> 1790MHz @ 1250mv = crash
> 1790MHz @ 1280mv = 1707MHz 145amps 400w
> 1795MHz @ 1280mv = crash



Your real frequencies are very far from the ones you set.
Try setting P4, P5, P6 and P7 with the same voltage/frequency: start with 1180mV and 1730MHz.


----------



## Alastair

1225mv 1780MHz 1190HBM ~1730-1740MHz sustained


----------



## Alastair

I managed 26.8K graphics in firestrike and 8637 graphics in Timespy.


----------



## LicSqualo

I reached 27216 in FS, and HWiNFO recorded a maximum clock of 1847MHz with 1180mV and 50% more power.
As I wrote a while ago, Vega performs better with lower volts than with stock voltage or an overvolt.

https://www.3dmark.com/3dm/45171908?


----------



## Alastair

LicSqualo said:


> I reached 27216 in FS and HWInfo has recorded a maximum clock of 1847 MHz with 1180mV and a 50% more power.
> As write time ago, VEGA perform better with lower volts than stock or overvolting.
> 
> https://www.3dmark.com/3dm/45171908?


I highly doubt I would be able to pull off 1775MHz @ 1180mV. Also be aware: although HWiNFO reports a maximum clock of 1847MHz, I highly doubt the chip is getting anywhere near that. I am finding that HWiNFO reports vastly different core clocks from the driver and MSI Afterburner/RTSS, so at this point I am more inclined to believe what the driver reports. I have found that resetting the driver (or the driver freezing and resetting itself) somehow makes HWiNFO report clocks accurately once it is restarted.


----------



## 113802

Alastair said:


> Ne01 OnnA said:
> 
> 
> 
> https://youtu.be/9g_syedkCJg
> 
> 
> 
> Yeah if only i can reach the mythical 1800MHz.
Click to expand...

What an awful video; his scores are very low for the clock speeds he's claiming.


----------



## Alastair

WannaBeOCer said:


> What an awful video, his scores are very low for the clock speeds he's claiming.


I'm guessing he went pure core and didn't fiddle with the HBM much, because most of my gains have been from HBM. Even though I am pursuing core clocks doggedly, I've only netted about 1.5fps from core clocks thus far.

Edit: at 1060 HBM I'm sure I could get close to 1800MHz like he did, but I lose core clock as I raise the HBM: around 20MHz of core at 1190 HBM for the same "requested" core clock vs 945 HBM.
But the gain in fps from HBM is far higher than the loss from core clock.


----------



## Alastair

I decided to try some UV testing.
Only P7 is modified at this point.
Still using the Heaven bench for the test.
I used my best OV setting as a control for temps, as today is a bit cooler than yesterday.
Again, clocks and voltages are the averages reported by HWinfo* for the duration of a single Heaven bench pass (26 scenes).

1750MHz @ 1140mv Avg = 1709MHz
1755MHz @ 1140mv Avg = crash
1750MHz @ 1145mv Avg = 1710MHz
1755MHz @ 1145mv Avg = crash
1750MHz @ 1154mv Avg = 1711MHz
1755MHz @ 1154mv Avg = crash
1750MHz @ 1157mv Avg = 1712 MHz
1755MHz @ 1158mv Avg = 1717MHz
1760MHz @ 1158mv Avg = crash
1755MHz @ 1166mv Avg = 1714MHz
1760MHz @ 1166mv Avg = crash
1755MHz @ 1168mv Avg = 1718mhz
1760MHz @ 1168mv Avg = crash
1755MHz @ 1176mv Avg = 1714MHz
1760MHz @ 1176mv Avg = 1722MHz
1765MHz @ 1176mv Avg = crash
1760MHz @ 1182mv Avg = 1718mhz
1765MHz @ 1182mv Avg = crash
1760MHz @ 1186mv Avg = 1720MHz
1765MHz @ 1186mv Avg = 1724MHz 

Best OV setting as control:
1780MHz @ 1225mv Avg = 1731MHz at 34°C core (yesterday's 1225mV result was at 38°C core), so a 4°C difference caused a swing of roughly ±6MHz.

* HWinfo reports more in line with the driver once the driver has been reset. Strange, I know.
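Reading that list as a back-of-envelope (numbers pulled from the entries above; a linear slope is only an illustration, not a real V/f model): comparing the best stable undervolt point against the overvolted control shows how little each extra millivolt buys near the wall.

```python
# Marginal sustained-clock gain per extra millivolt between two logged points.
def mhz_per_mv(f_low_mhz, v_low_mv, f_high_mhz, v_high_mv):
    return (f_high_mhz - f_low_mhz) / (v_high_mv - v_low_mv)

# 1186mV sustained 1724MHz; the 1225mV control sustained 1731MHz.
slope = mhz_per_mv(1724, 1186, 1731, 1225)
print(round(slope, 2))  # 0.18 MHz per extra mV: steep diminishing returns
```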


----------



## LicSqualo

Alastair said:


> I highly doubt I would be able to pull off 1775MHz @ 1180mv. Also be aware. Although HW Info reports a maximum clock of 1847MHz I highly doubt the chip is getting anywhere near that. I am finding that HW info reports vastly different core clocks to the driver and MSI Afterburner/RTSS. So I am more inclined to believe what the driver reports compared to HWinfo at this point. I have found that resetting the driver or the driver freezing and resetting itself somehow makes HWInfo report clocks accurately once the driver is restarted.


I've re-run TimeSpy with a sustained clock of around 1750-1760MHz according to the Adrenalin software. The score is good enough: 8551.

https://www.3dmark.com/3dm/45187521?

But 1140MHz HBM is (I'm quite sure) not playable; I will probably get artifacts. This morning I will play some games to check.


----------



## wolf9466

Alastair said:


> Worldwin said:
> 
> 
> 
> Nope. HBM voltage is hard coded. Core voltage was never bios locked. Wattman was introduced in 16.6.2. Vega was released a year later in August 2017. It was possible to adjust the voltage in wattman albeit it was very unstable based off comments from Vega's launch. The 1.2V and 1.25V was simply the voltage difference in V64 Liquid bios vs everything else.
> 
> 
> 
> I did not know this. The difficulty I have now is just getting the volts to hold above the 1.2 or 1.25v upper limit with soft power play editor. I have an issue with applying my HBM OC in SPP editor now. As its locking my HBM to 500MHz.
Click to expand...

It is not hardcoded.


----------



## Alastair

wolf9466 said:


> It is not hardcoded.


What HBM voltage?
@Ne01 OnnA
About the AMD Memory Tweak program:
tell me, do I ALWAYS have to be in test mode for it to work? Some of my games' anti-cheat won't run in test mode. And is the AMD Memory Tweak software "set it and forget it", or do I have to re-apply settings after each reboot?


----------



## LicSqualo

Alastair said:


> What HBM voltage?
> 
> @Ne01 OnnA
> The AMD memory tweak program.
> Tell me. Do I ALWAYS have to be in test mode for it to work? Because some of my games anti-cheat will not work in test mode. Using the AMD Memory tweak software is it "set it and forget it"? Or DO I have to apply settings after each reboot?


You are using a "wrong" version, please use this: https://github.com/Eliovp/amdmemorytweak
The exe is inside /GUI/x64.


----------



## seppbalboa

Hey guys,

can anyone help me?

I would like to reset the BIOS of my AMD FE to stock. I bought it on eBay and I guess it was used for mining;
it's totally underclocked. Please see the attachment. Or is that normal?

I'm a Mac user, not a hacker, so I have no clue what magic you are doing here, but it's awesome.

Maybe someone could help and guide me a bit.

Thank you.


----------



## Alastair

LicSqualo said:


> You are using a "wrong" version, please use this: https://github.com/Eliovp/amdmemorytweak
> The exe is inside /GUI/x64.


+ rep thanks


----------



## LicSqualo

seppbalboa said:


> Hey guys,
> 
> can anyone help me?
> 
> I would like to reset my BIOS of my AMD FE to Standard. I bought it from eBay and I guess it was used for mining.
> its totally under clocked. please see the attachment. or is that normal?
> 
> Im a Mac User and Im not a Hacker. so I have no clue what magic you are doing here but its awesome.
> 
> may be someone could help and guid me a bit.
> 
> thank you


If Vega is not stressed, those clocks are normal during a macOS (or Windows) session without 3D programs running.

Do you have a good friend with a Windows PC? I think it's possible to reset the BIOS to a standard one, but only from Windows, certainly not on a Mac. How to do it is covered in threads here on OCN (AMD BIOS flashing, etc.).


----------



## wolf9466

Alastair said:


> What HBM voltage?
> 
> @Ne01 OnnA
> The AMD memory tweak program.
> Tell me. Do I ALWAYS have to be in test mode for it to work? Because some of my games anti-cheat will not work in test mode. Using the AMD Memory tweak software is it "set it and forget it"? Or DO I have to apply settings after each reboot?


Yes, HBM2 voltage.


----------



## Worldwin

wolf9466 said:


> Yes, HBM2 voltage.


Care to explain how you are going to change it?


----------



## wolf9466

Worldwin said:


> Care to explain how you are going to change it?


Because (as far as I remember) it's provided by the VRM controller (IR35217), it *has* to be adjustable, because all of the controller's outputs are.


----------



## Worldwin

wolf9466 said:


> Because (as far as I remember) it's provided by the VRM controller (IR35217), it *has* to be adjustable, because all of the controller's outputs are.


That's a technicality. To the end user it is unchangeable outside of BIOS swapping.


----------



## Alastair

wolf9466 said:


> Yes, HBM2 voltage.


How do we apply more, then? The soft PP edit seems to be doing nothing.

In other news: my TS score was enough to secure 3rd place in the global V64 rankings on HWBot!

https://hwbot.org/submission/439221...radeon_rx_vega_64_8917_marks?recalculate=true
I can still get more out of TS as well, as my CPU is still at stock.

I am at 26.9K in FireStrike as well.

I have NOT tried the AMD Memory Tweak application yet.

5440 in Superposition 1080p Extreme.

Edit: Clocks? P7 is set to 1800MHz @ 1315mV for approx 1740-1780MHz sustained core clocks, with 1190MHz HBM.


----------



## Alastair

wolf9466 said:


> Because (as far as I remember) it's provided by the VRM controller (IR35217), it *has* to be adjustable, because all of the controller's outputs are.


Yes, it might be on the IR35217, but I can't see any way to change it. I've pushed to 1550mV with soft PP edits and I still artifact at the EXACT same 1192MHz clock, which is disappointing. A bit more on the HBM would have me claiming some titles, since HBM is king on Vega. I will have to try tweaking the timings to get any more gains.


----------



## Alastair

LicSqualo said:


> You are using a "wrong" version, please use this: https://github.com/Eliovp/amdmemorytweak
> The exe is inside /GUI/x64.


At 1190 HBM I don't seem to have enough room to adjust timings; even slight adjustments push me into artifacting.


----------



## wolf9466

Worldwin said:


> wolf9466 said:
> 
> 
> 
> Because (as far as I remember) it's provided by the VRM controller (IR35217), it *has* to be adjustable, because all of the controller's outputs are.
> 
> 
> 
> That's a technicality. To the end user it is unchangeable outside bios swapping.
Click to expand...

I said it could be changed. Not how.



Alastair said:


> wolf9466 said:
> 
> 
> 
> Because (as far as I remember) it's provided by the VRM controller (IR35217), it *has* to be adjustable, because all of the controller's outputs are.
> 
> 
> 
> Yes it might be on IR35217 but I cant see anyway to change it. Ive pushed to 1550mv with soft PP edits and I am still artifact at the EXACT same 1192MHz clock. Which is disappointing. A bit more on HBM would have me claiming some titles since HBM is king on Vega. I will have to try tweak the timings to get anymore gains.
Click to expand...

The timings are fun! Unfortunately, the tool to do it can't be shared, so until I finish reverse engineering it... 

However, an offset could no doubt be added in VoltageObjectInfo if we had any docs on the damn IR35217. >.>


----------



## Worldwin

wolf9466 said:


> I said it could be changed. Not how.


Then what is the point of nitpicking? =.= At the end of the day, the user still cannot change the HBM2 voltage outside of swapping from the V56 to the V64 BIOS, and any modified Vega BIOS will fail due to checks AMD added.


----------



## Alastair

wolf9466 said:


> I said it could be changed. Not how.
> 
> 
> 
> The timings are fun! Unfortunately, the tool to do it can't be shared, so until I finish reverse engineering it...
> 
> However, an offset could no doubt be added in VoltageObjectInfo if we had any docs on the damn IR35217. >.>


The tool to adjust timings is right here. https://github.com/Eliovp/amdmemorytweak


----------



## Alastair

Do shunt resistor mods work on Vega?


----------



## Minotaurtoo

... I moved on to Navi, and in some ways I miss my Vega 64... for one thing, it was more interesting to tune; Navi was basically turn up the power, voltage and clocks to max and call it a day... but right now, particularly last night, I missed my Vega for one unusual reason: I used it for heat on cold nights in my bedroom... yep, I'd run Folding@Home on it... that Vega would put out some serious heat, and the way I looked at it, rather than burn electricity running a space heater that only put out heat, I'd run that and possibly help someone someday with the research being done, while generating the same amount of heat... Tried that last night with my Navi card... woke up freezing... not kidding.


----------



## wolf9466

Alastair said:


> wolf9466 said:
> 
> 
> 
> I said it could be changed. Not how.
> 
> 
> 
> The timings are fun! Unfortunately, the tool to do it can't be shared, so until I finish reverse engineering it...
> 
> However, an offset could no doubt be added in VoltageObjectInfo if we had any docs on the damn IR35217. >.>
> 
> 
> 
> The tool to adjust timings is right here. https://github.com/Eliovp/amdmemorytweak

I wrote my own. I meant the HBM2 voltage.


----------



## wolf9466

Worldwin said:


> wolf9466 said:
> 
> 
> 
> I said it could be changed. Not how.
> 
> 
> 
> Then what is the point of nitpicking =.=. In the end the user still cannot change the HBM2 voltage outside of swapping from a V56 to a V64 BIOS. Any changes made to a Vega BIOS will fail to boot due to the checks AMD added.

Because it IS possible to do so at runtime.


----------



## S.M.

Minotaurtoo said:


> ... I moved on to Navi... and in some ways I miss my vega 64.... for one thing it was more interesting to tune... navi was basically turn up the power, voltage and clocks to max and call it a day.... but right now, particularly last night I missed my vega for one unusual reason... I used it for heat on cold nights in my bedroom... yep, I'd run folding @ home on it... that vega would put out some serious heat and the way I looked at it, rather than burn electricity running a space heater that only put out heat, I'd run that and possibly help someone someday with the research that was being done while generating the same amount of heat.... Tried that last night with my navi card... woke up freezing... not kidding.


My vega 64 folds at 130W chip power. Turning up the voltage nets no more clock speed or PPD, it's just wasting electricity for no reason.


----------



## Worldwin

wolf9466 said:


> Because it IS possible to do so at runtime.


The end user CAN'T. Which is the nitpick. Just because it is theoretically possible doesn't make it worth nitpicking over.


----------



## Alastair

wolf9466 said:


> I wrote my own. I meant the HBM2 voltage.


You sir have my attention


----------



## Minotaurtoo

S.M. said:


> My vega 64 folds at 130W chip power. Turning up the voltage nets no more clock speed or PPD, it's just wasting electricity for no reason.


Who said I turned up the voltage? It's the Navi card I used max voltage on (1.2v)... I ran the same undervolted profile I used for gaming on the vega... 1.1v I believe it was, with just under 1700mhz set as max clock, but it always went past it... usually running just over 1700 when folding and pulling around 250 watts of power.... gaming that sucker would pass 300 watts... This new Navi rarely crosses 200 watts gaming, and many times when folding I've seen it hovering around 100 watts....


----------



## wolf9466

Worldwin said:


> The end user CAN'T. Which is the nitpick. Just because it is theoretically possible doesn't make it worth nitpicking over.


So, in the event this changes because an "end user" writes an easy-to-use tool for it, I suppose it's still irrelevant to you. Okay.


----------



## Minotaurtoo

wolf9466 said:


> So, in the event this changes because an "end user" writes an easy-to-use tool for it, I suppose it's still irrelevant to you. Okay.


I'm sure someone out there has found a way... they may be just like I was with a few game hacks I managed: keeping it quiet so they don't patch it.... Before anyone goes full Karen about hacking games, I only did it to see what I could find, not to win.


----------



## Alastair

wolf9466 said:


> So, in the event this changes because an "end user" writes an easy-to-use tool for it, I suppose it's still irrelevant to you. Okay.


Did you write an easy to use tool? If so will you share it? Will it also be able to adjust other parameters easily such as core voltage and clocks etc?


----------



## Worldwin

wolf9466 said:


> So, in the event this changes because an "end user" writes an easy-to-use tool for it, I suppose it's still irrelevant to you. Okay.


Oh I would love to be able to change the voltage. However it will most likely never happen. If you check the Vega bios thread: https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html there is a slide that shows that only validated BIOSes may boot. This supports my original point. Gupsterg states "VEGA FE has a security feature to check VBIOS at post using a on die HW implementation, so modded VBIOS regardless of flash method is not working at present. RX VEGA also has this protection." Pretty sure all attempts at BIOS modding on Vega are dead by now, and by extension any hopes of changing the HBM2 voltage.


----------



## Alastair

Worldwin said:


> Oh I would love to be able to change the voltage. However it will most likely never happen. If you check the Vega bios thread:https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html there is a slide that shows that only validated bios may boot. This retains my original point. Gupsterg states " VEGA FE has a security feature to check VBIOS at post using a on die HW implementation, so modded VBIOS regardless of flash method is not working at present. RX VEGA also has this protection." Pretty sure all attempts for bios modding on Vega are dead by now and by extension any hopes of changing the HBM2 voltage.


You can change Vcore on Vega. Through a combination of the atikmdag patcher, soft power play table mods, and other software hacks using a tool like WattTool or AMD Memory Tweak XL, voltage CAN be changed. Not that it has netted me much, and I've been as high as 1.3V under load (1375mv applied). I have seen the heat and power consumption scale accordingly, but I have not been able to see the clocks scale. Although I did manage to hit 1800MHz for stressing and benching, which got me a third in TS and a fourth in Superposition on HWBot respectively.


----------



## Worldwin

Alastair said:


> You can change Vcore on Vega. Through a combination of the Atikmdag patcher soft power play table mods and other software hacks using a tool like Wattool or AMD Memory Tweak XL, voltage CAN be changed. Not that it has netted me much and I've been as high as 1.3V underload (1375mv applied). I have seen the heat and power consumption scale accordingly but I have not been able to see the clocks scale. Although I did manage to hit 1800MHz for stressing and benching which got me a third in TS on HW Bot and fourth in Superposition on HW Bot respectively.


You realise the whole discussion between me and him was entirely based around the HBM2 voltage. I knew I should have specified that.


----------



## Alastair

Worldwin said:


> You realise the whole discussion between me and him was entirely based around the HBM2 voltage. I knew I should have specified that.


OH yeah, HBM voltage would be great. Maybe 50 more mv for 1.4v might help with 1200MHz HBM, and maybe some timings as well. But eh. I'm still confused as to why AMD released a card that was so obviously memory bottlenecked at such low clocks if even the most basic HBM on Vega 64s (not 56s) could push 1100MHz. I mean, I have seen scaling all the way to my hard limit of 1190MHz HBM, and I am sure I would see more scaling if I could go past that.

Edit: he claims to have made a tool, or to be in the process of making one. If he would be willing to share it with us, we could then verify whether his claims of being able to change HBM voltage are true or not. I remain cautiously optimistic. Ultimately it would be great if he has some success. If not, well, I'm not unhappy with my 1175mhz HBM game clocks.


----------



## wolf9466

Worldwin said:


> Oh I would love to be able to change the voltage. However it will most likely never happen. If you check the Vega bios thread:https://www.overclock.net/forum/67-amd/1633446-preliminary-view-amd-vega-bios.html there is a slide that shows that only validated bios may boot. This retains my original point. Gupsterg states " VEGA FE has a security feature to check VBIOS at post using a on die HW implementation, so modded VBIOS regardless of flash method is not working at present. RX VEGA also has this protection." Pretty sure all attempts for bios modding on Vega are dead by now and by extension any hopes of changing the HBM2 voltage.


Dude... as I said, you can do it at runtime. >.>
You do not NEED a VBIOS flash.



Alastair said:


> Did you write an easy to use tool? If so will you share it? Will it also be able to adjust other parameters easily such as core voltage and clocks etc?


I fully intended to reply to this saying, "No, but I have a good reason - the tool I have to do such is one I cannot share, which is also closed source. I've reverse engineered parts of it - but am far from understanding how it manages all of its capabilities."

This is still true - but... holy **** I bet the SMUIO block SMUSVI registers are *not* "gated" on Vega10 like they are on Navi! BRB.

EDIT0:At least some are gated! >.>
Gonna continue to prod it, though...

EDIT1: ... no... they're not... my brain just stupidly used the register locations for Vega20, and dear god I hope I didn't kill that RX Vega 64!

EDIT2: It's alive.
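For anyone wondering what "doing it at runtime" looks like in general terms: on Linux, a GPU's MMIO registers can be reached by mmap-ing the card's PCI BAR through sysfs. Below is a minimal sketch; the device path and register offset are placeholders, not the actual SMUSVI locations, and poking real registers can damage hardware.

```python
import mmap

PAGE = mmap.PAGESIZE  # mmap offsets must be page-aligned

def reg_window(reg_offset, page=PAGE):
    """Split an MMIO register offset into a page-aligned mmap offset
    plus the remaining delta into that mapping."""
    base = (reg_offset // page) * page
    return base, reg_offset - base

def read_reg32(bar_path, reg_offset):
    """Read one 32-bit register from a PCI BAR exposed via sysfs.
    Requires root; bar_path and reg_offset are placeholders."""
    base, delta = reg_window(reg_offset)
    with open(bar_path, "r+b") as f:
        m = mmap.mmap(f.fileno(), PAGE, offset=base)
        val = int.from_bytes(m[delta:delta + 4], "little")
        m.close()
    return val

# Hypothetical usage (device path and offset are made up):
# read_reg32("/sys/bus/pci/devices/0000:0a:00.0/resource0", 0x16600)
```

The `reg_window` split exists because `mmap` refuses offsets that are not page-aligned, while register offsets rarely are.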


----------



## Worldwin

wolf9466 said:


> Dude... as I said, you can do it at runtime. >.>
> You do not NEED a VBIOS flash.
> 
> 
> 
> I fully intended to reply to this saying, "No, but I have a good reason - the tool I have to do such is one I cannot share, which is also closed source. I've reverse engineered parts of it - but am far from understanding how it manages all of its capabilities."
> 
> This is still true - but... holy **** I bet the SMUIO block SMUSVI registers are *not* "gated" on Vega10 like they are on Navi! BRB.
> 
> EDIT0:At least some are gated! >.>
> Gonna continue to prod it, though...
> 
> EDIT1: ... no... they're not... my brain just stupidly used the register locations for Vega20, and dear god I hope I didn't kill that RX Vega 64!
> 
> EDIT2: It's alive.


So you are saying there is a way to change it but are not willing to share it. There is a reason people are skeptical of claims over the internet. Let's say you prove it without showing the tool: instead, you show with a multimeter that the HBM2 voltage being fed is actually adjusted.


----------



## Alastair

Worldwin said:


> So you are saying there is a way to change it but are not willing to share it. There is a reason people are skeptical of claims over the internet. Lets say you prove it w/o showing the tool. Instead you show the HBM2 voltage being fed is adjusted with a multimeter.


This :teaching:


----------



## Wuest3nFuchs

hi ,
anyone else getting pixelated video when recording a game session? I tried OBS and ReLive, tried different settings, and still get this pixelated video... what am I doing wrong?

Never got that problem on my Fury.
Maybe the hardware encode/decode on Vega is not as good as I thought it would be.

stay healthy

Sent from my SM-G950F using Tapatalk


----------



## Alastair

If only.


----------



## nolive721

Haha

Funny, because last week the Radeon overlay also showed me a 2300mhz clock speed on my Vega 64 lol)


----------



## Falkentyne

wolf9466 said:


> Dude... as I said, you can do it at runtime. >.>
> You do not NEED a VBIOS flash.
> 
> 
> 
> I fully intended to reply to this saying, "No, but I have a good reason - the tool I have to do such is one I cannot share, which is also closed source. I've reverse engineered parts of it - but am far from understanding how it manages all of its capabilities."
> 
> This is still true - but... holy **** I bet the SMUIO block SMUSVI registers are *not* "gated" on Vega10 like they are on Navi! BRB.
> 
> EDIT0:At least some are gated! >.>
> Gonna continue to prod it, though...
> 
> EDIT1: ... no... they're not... my brain just stupidly used the register locations for Vega20, and dear god I hope I didn't kill that RX Vega 64!
> 
> EDIT2: It's alive.


Look. If you're under NDA, that's fine. But if you are under NDA and then talking about "doing things to the video card", posting a PARTIAL (non-mocked-up) screenshot of what you're doing, without giving away the name of the tool, isn't exactly "breaking" NDA any more than discussing what you're not supposed to be discussing. :/


----------



## bvo

*Vega 64 Reference Liquid Cooling Unit Transfer! (Need help!)*

Hey guys! I need to poll people's expert opinion and feedback. I have a nonworking vega 64 liquid cooling unit and transferred the Cooler Master AIO unit that came with it to my new MSI vega 64 air boost OC board, which looks 99% identical. The individual parts might be different internally, since I saw they didn't have the same numbers and such, but the layout and size are 99% the same. I also noticed MSI sells the same reference liquid cooling version as well, so that might be a good sign! The only difference is this JST connector and capacitor on the corner of the board that I assume powers the liquid cooling unit.  My question is: is it possible to just solder those two items to the new board? I don't see why not, but I am not an expert. This will cost me $50 to have someone professionally solder, and he told me to do some research before deciding to do it. A few questions: would there be a BIOS issue that would stop it from working? Since this is a little invasive, would it be possible to wire something externally to a power source instead of messing with soldering? Or could a few of those wires be something like a temperature sensor? (who knows, I'm a total noob at this!) Just weighing my options. Thank you community!

(Link to detailed pictures!)https://drive.google.com/open?id=1MmI6EIQ6XQKCTPDl6aaJ-mxfUPsOYK7f

(old non working unit)
https://www.sapphiretech.com/en/consumer/21275-00-radeon-rx-vega64-8g-hbm2-lc
(new unit w/o the connection and capacitor)
https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-Air-Boost-8G-OC
(MSI's LC version)
https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-WAVE-8G.html


----------



## PopReference

bvo said:


> Hey guys! I need to poll people's expert opinion and feedback. I have a nonworking vega 64 liquid cooling unit and transferred the Cooler Master AIO unit that came with it to my new MSI vega 64 air boost OC board that looks 99% identical. The individual parts might be different internally since I saw they didn't have the same numbers and stuff but the layout and size is 99% the same. I also notice MSI sells the same reference liquid cooling version as well so that might be a good sign! The only difference is this jst and capacitor on the corner of the board that I assume powers the liquid cooling unit.  My question is, is it possible to just solder those two items to the new board? I don't see why not, but I am not an expert. This will cost me $50 to have someone professionally solder it and he told me to do some research before deciding to do it. A few things is would there be a bios thing that wouldn't make it work? Since this is a little invasive, would it be possible to rewire something externally to a power source instead of messing with soldering or could it be possible a few of those wires might be like some temperature regulator or something? (who knows, i'm a total noob at this!) Just weighing my options. Thank you community!
> 
> (Link to detailed pictures!)https://drive.google.com/open?id=1MmI6EIQ6XQKCTPDl6aaJ-mxfUPsOYK7f
> 
> (old non working unit)
> https://www.sapphiretech.com/en/consumer/21275-00-radeon-rx-vega64-8g-hbm2-lc
> (new unit w/o the connection and capacitor)
> https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-Air-Boost-8G-OC
> (MSI's LC version)
> https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-WAVE-8G.html


It's possible, but there are two spots for some kind of SMD components just above the header that may be needed; I can't find a picture of a bare LC PCB, so I have no clue what they are though.

If nothing else, you can directly wire the pump motor to 12v from the PSU. Though I also don't know why just the pump needs 6 pins; it looks like the usual 4, plus 1 extra ground and 2 I'm unsure about.


----------



## Unhooked

Hey guys

I am thinking of buying this card. Have all the issues with the drivers been sorted out in 2020?

My main requirement is gaming at 2K and VR gaming. 

Appreciate your thoughts


----------



## PopReference

Unhooked said:


> Hey guys
> 
> I am thinking of buying this card. Have all the issues with the drivers been sorted out in 2020?
> 
> My main requirement is gaming at 2K and VR gaming.
> 
> Appreciate your thoughts


20.3.1 from March is okay for me. The latest drivers however introduced a new issue:
-Radeon RX Vega series graphics products may experience a system crash or TDR when performing multiple task switches using Alt+Tab.


----------



## Unhooked

So which card would you recommend, Vega Fe w/ wb for $350 or 5700xt liquid devil for $450?


----------



## Alastair

Unhooked said:


> So which card would you recommend, Vega Fe w/ wb for $350 or 5700xt liquid devil for $450?


 Depends. 



If you are after the value factor, then hands down the Vega with a block. It can perform on par with the 5700XT once you dial in a bit of an OC (which is obviously why you want a block.)



But if you are a power-conscious user, I reckon the normal 5700XT with a block. At stock clocks the 5700XT will be more efficient on power use than the 64.



But if you are all about raw performance, the 5700XT with a block and an OC will outperform the 64.



I do not think the Liquid Devil SHOULD be on your list if you choose a 5700XT, UNLESS you just want the unique look of the block. But if you don't care about the look of the block, then the reference 5700XT + a block will be the better bet of the 5700XTs you might choose from. Because ALTHOUGH Liquid Devils are likely binned, I don't think it will gain you a noticeable difference (what, maybe 100MHz?) over a standard 5700XT + block.


EDIT: This is assuming you are coming from a raw gaming perspective. If you want some compute performance as well, maybe as a low-key professional user, then the extra compute power in Vega might be of use to you.


----------



## Unhooked

Any one tried flashing a Vega 64 LC bios to this card?

Any backdoor way of enabling gaming mode with current drivers?


----------



## Alastair

Unhooked said:


> Any one tried flashing a Vega 64 LC bios to this card?
> 
> Any backdoor way of enabling gaming mode with current drivers?


to what card. :doh:


----------



## SpecChum

Alastair said:


> to what card. :doh:


This card...keep up


----------



## snipernote

Unhooked said:


> So which card would you recommend, Vega Fe w/ wb for $350 or 5700xt liquid devil for $450?


Get the newer card obviously ... Better for your power consumption as well 

Sent from my POCOPHONE F1 using Tapatalk


----------



## geriatricpollywog

Hey guys. I just installed Red Dead Redemption 2. When running DX12, my VRAM utilization starts at 5gb, then fills up to 8gb within a few minutes. Next, my system memory fills up from 8gb to 16gb. Then the game crashes. When running under Vulkan this doesn’t happen. Could this be a Vega issue or should I post in the gaming forum?


----------



## Unhooked

Sorry, the Vega 64 LC BIOS to the Vega FE card.

I did end up getting the Vega FE, deal was too good to pass up. $250 🙂. I do need to get a wB for it 🙂


----------



## helis4life

Need some help with an Asus Vega 64 Strix. Just put a water block on it and tried going balls-to-the-wall OC, but I seem to have hit a hard 300w power limit. Nothing I do in Wattman or powerplay tables will enable the card to draw more than 300w max. The card has a 260w BIOS, however the power limit slider does nothing beyond 20ish%.
Tried setting 350w, 400a, and the slider at 142% with PPT. Doesn't go beyond 300w.

Does anyone have any experience OCing this particular card, and has Asus implemented some sort of physical limit on total power draw?

Psu is 850w seasonic focus gold. Using a single cable with dual 8pin pcie to feed gpu

Cheers

Sent from my SM-N975F using Tapatalk


----------



## SpecChum

helis4life said:


> Need some help with a Asus Vega 64 Strix. Just put a water block on it and tried going balls to the wall oc. But I seem to have hit a hard 300w power limit. Nothing I do in wattman or powerplay tables will enable the card to draw more than 300w max. Card has a 260w bios, however power limit slider does nothing beyond 20ish %
> Tried setting 350w 400a 142% slider with PPT. Doesnt go beyond 300w
> 
> Does anyone have any experience OCing this particular card, and has Asus implemented some sort of physical limit on total power draw?
> 
> Psu is 850w seasonic focus gold. Using a single cable with dual 8pin pcie to feed gpu
> 
> Cheers
> 
> Sent from my SM-N975F using Tapatalk


I'm fairly sure a single cable can only do 300W, all it's doing is splitting it into 2 x 150W.

You'll need 2 separate ones for more.
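For reference, the nominal spec numbers behind that estimate (75 W from the slot, 150 W per 8-pin connector; real cards and quality cables routinely exceed these) can be sketched as:

```python
SLOT_W = 75        # PCIe slot spec limit
EIGHT_PIN_W = 150  # per 8-pin connector spec limit

def board_budget(n_eight_pin, cable_limit_w=None):
    """Nominal spec power budget for a card with n 8-pin inputs.
    cable_limit_w optionally caps combined connector power for the
    case where one daisy-chained cable feeds every 8-pin."""
    connector_w = n_eight_pin * EIGHT_PIN_W
    if cable_limit_w is not None:
        connector_w = min(connector_w, cable_limit_w)
    return SLOT_W + connector_w

# Two separate cables on a dual 8-pin card:
print(board_budget(2))  # 375
```

This is spec arithmetic only; as the follow-up posts show, a BIOS limit can cap draw well before the cables do.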


----------



## helis4life

SpecChum said:


> I'm fairly sure a single cable can only do 300W, all it's doing is splitting it into 2 x 150W.
> 
> 
> 
> You'll need 2 separate ones for more.


Yeah I thought that too. I just tried using a second cable, still won't pull more than 300w

Sent from my SM-N975F using Tapatalk


----------



## SpecChum

helis4life said:


> Yeah I thought that too. I just tried using a second cable, still won't pull more than 300w
> 
> Sent from my SM-N975F using Tapatalk


Oh, didn't expect that lol

I'm not sure then, probably BIOS limit, as you say.

I know my reference Vega can pull 400W+, but maybe ASUS changed the VRM design; the reference models are built like a tank and can manage well over 400W


----------



## PopReference

helis4life said:


> Need some help with a Asus Vega 64 Strix. Just put a water block on it and tried going balls to the wall oc. But I seem to have hit a hard 300w power limit. Nothing I do in wattman or powerplay tables will enable the card to draw more than 300w max. Card has a 260w bios, however power limit slider does nothing beyond 20ish %
> Tried setting 350w 400a 142% slider with PPT. Doesnt go beyond 300w
> 
> Does anyone have any experience OCing this particular card, and has Asus implemented some sort of physical limit on total power draw?
> 
> Psu is 850w seasonic focus gold. Using a single cable with dual 8pin pcie to feed gpu
> 
> Cheers


I have one and am able to get past 300w with only 50%. I actually burned out my PCIe power connector twice last week after I got rid of my PPT, using separate cables from the PSU.
FYI, I'm using driver 20.3.1 and a PPT that goes to 242%, this one:



Spoiler



Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"PP_PhmSoftPowerPlayTable"=hex:B6,02,08,01,00,5C,00,E1,06,00,00,08,2C,00,00,1B,\
00,48,00,00,00,80,A9,03,00,F0,49,02,00,32,00,08,00,00,00,00,00,00,00,00,00,\
00,00,00,00,00,02,01,5C,00,4F,02,46,02,94,00,9E,01,BE,00,28,01,7A,00,8C,00,\
BC,01,00,00,00,00,72,02,00,00,90,00,A8,02,6D,01,43,01,97,01,F0,49,02,00,71,\
02,02,02,00,00,00,00,00,00,08,00,00,00,00,00,00,00,05,00,07,00,03,00,05,00,\
00,00,00,00,00,00,01,08,20,03,84,03,B6,03,E8,03,1A,04,4C,04,7E,04,B0,04,01,\
01,46,05,01,01,84,03,00,08,60,EA,00,00,00,40,19,01,00,01,80,38,01,00,02,DC,\
4A,01,00,03,90,5F,01,00,04,00,77,01,00,05,90,91,01,00,06,6C,B0,01,00,07,01,\
08,D0,4C,01,00,00,00,80,00,00,00,00,00,00,1C,83,01,00,01,00,00,00,00,00,00,\
00,00,70,A7,01,00,02,00,00,00,00,00,00,00,00,88,BC,01,00,03,00,00,00,00,00,\
00,00,00,C0,D4,01,00,04,00,00,00,00,00,00,00,00,44,23,02,00,05,00,00,00,00,\
01,00,00,00,00,58,02,00,06,00,00,00,00,01,00,00,00,B8,7C,02,00,07,00,00,00,\
00,01,00,00,00,00,05,60,EA,00,00,00,40,19,01,00,00,80,38,01,00,00,DC,4A,01,\
00,00,90,5F,01,00,00,00,08,28,6E,00,00,00,2C,C9,00,00,01,F8,0B,01,00,02,80,\
38,01,00,03,90,5F,01,00,04,F4,91,01,00,05,D0,B0,01,00,06,C0,D4,01,00,07,00,\
08,6C,39,00,00,00,24,5E,00,00,01,FC,85,00,00,02,AC,BC,00,00,03,34,D0,00,00,\
04,68,6E,01,00,05,08,97,01,00,06,EC,A3,01,00,07,00,01,68,3C,01,00,00,01,04,\
3C,41,00,00,00,00,00,50,C3,00,00,00,00,00,80,38,01,00,02,00,00,24,71,01,00,\
05,00,00,01,08,00,98,85,00,00,40,B5,00,00,60,EA,00,00,50,C3,00,00,01,80,BB,\
00,00,60,EA,00,00,94,0B,01,00,50,C3,00,00,02,00,E1,00,00,94,0B,01,00,40,19,\
01,00,50,C3,00,00,03,78,FF,00,00,40,19,01,00,88,26,01,00,50,C3,00,00,04,40,\
19,01,00,80,38,01,00,80,38,01,00,50,C3,00,00,05,80,38,01,00,DC,4A,01,00,DC,\
4A,01,00,50,C3,00,00,06,00,77,01,00,00,77,01,00,90,5F,01,00,50,C3,00,00,07,\
90,91,01,00,90,91,01,00,00,77,01,00,50,C3,00,00,01,18,00,00,00,00,00,00,00,\
0B,E4,12,40,06,B8,0B,4E,00,2A,00,54,03,90,01,90,01,90,01,90,01,90,01,90,01,\
90,01,01,32,00,37,00,02,00,23,07,08,01,08,01,08,01,90,01,00,00,59,00,69,00,\
4A,00,4A,00,5F,00,73,00,73,00,64,00,40,00,90,92,97,60,96,00,90,55,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,00,00,00,02,02,D4,30,00,00,02,10,60,EA,00,\
00,02,10


----------



## miklkit

Yeah, I use a single cable and have gone past 300 watts too.


----------



## Ne01 OnnA

Gears Tactics GPU Test


----------



## Tarts5

Hi, just ordered a used Gigabyte Vega 64 Gaming OC card. Looking to replace the old thermal pads on it. I think there are some between the GPU board and the backplate, and some between the heatsink and the "what are they called" components. What size and thickness pads should I get, and is there anything else to look out for?
As for paste, will the Arctic MX4 do fine?
Thanks!


----------



## PopReference

Tarts5 said:


> Hi, just ordered a used Gigabyte Vega 64 Gaming OC card. Looking to replace the old thermal pads on it. I think there are some between the GPU board and the backplate, and some between the heatsink and the "what are they called" components. What size and thickness pads should I get, and is there anything else to look out for?
> As for paste, will the Arctic MX4 do fine?
> Thanks!


It's hard to know what size pads to get for every spot, since it depends on the GPU/company; best to search for a review or teardown that shows the pad sizes.

However, in general the common sizes are 0.5mm and 1.5mm. Pads contacting the backplate can be really thick, probably 3.0mm; if so, just stack pads to fill the gap, so 1.5+1.5. The same works for 1mm: two 0.5mm pads. Temps shouldn't be much worse if you're using better thermal pads than stock.
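The stacking arithmetic above can be written as a toy greedy fill (illustration only; measure your actual gaps, and pad compression is ignored here):

```python
def pad_stack(gap_mm, sizes=(1.5, 0.5)):
    """Greedily fill a gap using the common stock pad thicknesses
    (1.5 mm and 0.5 mm, largest first)."""
    stack = []
    remaining = round(gap_mm, 2)
    for size in sizes:
        while remaining >= size:
            stack.append(size)
            remaining = round(remaining - size, 2)
    return stack

print(pad_stack(3.0))  # [1.5, 1.5]
print(pad_stack(1.0))  # [0.5, 0.5]
```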


----------



## Unhooked

So safe to say no one has tried to flash a standard Vega 64 bios to a Vega FE card?


----------



## Unhooked

Ok so I tried it and it didn't work. Had to use my back up card to flash it back.


----------



## DiakonCz

Hello, I have a Vega 56 Sapphire Pulse and I flashed an XFX Vega 64 BIOS onto it. So my question is: is it safe to use the +50% power limit?
The 56 BIOS was at 270w with +50%
The 64 BIOS will be about 320w with +50% (if I count it right)

Thank you
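Treating the slider as a simple multiplier on the BIOS ASIC power limit (a simplification; total board power runs higher than ASIC power), the arithmetic works out roughly like this:

```python
def effective_power_w(base_w, slider_pct):
    """Power target if the slider simply scales the BIOS power limit."""
    return base_w * (1 + slider_pct / 100)

# Implied base limit of the 56 BIOS, working back from 270w at +50%:
base_56 = 270 / 1.5                # 180 W
# Reference Vega 64 ASIC limit of 220 W (the figure quoted later in
# the thread) at +50%:
print(effective_power_w(220, 50))  # 330.0
```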


----------



## wolf9466

Worldwin said:


> wolf9466 said:
> 
> 
> 
> Dude... as I said, you can do it at runtime. >.>
> You do not NEED a VBIOS flash.
> 
> 
> 
> I fully intended to reply to this saying, "No, but I have a good reason - the tool I have to do such is one I cannot share, which is also closed source. I've reverse engineered parts of it - but am far from understanding how it manages all of its capabilities."
> 
> This is still true - but... holy **** I bet the SMUIO block SMUSVI registers are *not* "gated" on Vega10 like they are on Navi! BRB.
> 
> EDIT0:At least some are gated! >.>
> Gonna continue to prod it, though...
> 
> EDIT1: ... no... they're not... my brain just stupidly used the register locations for Vega20, and dear god I hope I didn't kill that RX Vega 64!
> 
> EDIT2: It's alive.
> 
> 
> 
> So you are saying there is a way to change it but are not willing to share it. There is a reason people are skeptical of claims over the internet. Lets say you prove it w/o showing the tool. Instead you show the HBM2 voltage being fed is adjusted with a multimeter.

I plan to release it when I know it won't explode. It would be linux based though.

I had to make my own, is the point.


----------



## Alastair

DiakonCz said:


> Hello, I have a Vega 56 Sapphire Pulse and I flashed an XFX Vega 64 BIOS onto it. So my question is: is it safe to use the +50% power limit?
> 56 bios was on 270w with +50%
> 64 bios will be about 320w with +50% (if I count it right)
> 
> Thank you


Shouldn't be an issue. But are you sure the Pulse is a reference PCB? I assume the 64 BIOS you used is for the reference PCB.


----------



## DiakonCz

Alastair said:


> Shouldn't be an issue. But are you sure the Pulse is a reference PCB? I assume the 64 BIOS you used is for the reference PCB.


I used the XFX Vega 64 BIOS with the Nano PCB.

It works just fine; I'm just not sure about the power limit, as I don't want to burn it.


----------



## Tarts5

PopReference said:


> It's hard to know what size pads to get for every spot since it depends on the GPU/company, best to search for a review or breakdown that shows the pads sizes.
> 
> However in general the common sizes are .5mm and 1.5mm. Pads contacting the back plate can be really thick, probably 3.0mm, if so just stack the pads to fill the gap so 1.5+1.5. Then the same works for 1mm, 2 0.5mm pads. Temps shouldn't be much worse if you're using better thermal pads then stock.


Thanks, perfect!
What do you guys make of this claim: https://www.reddit.com/r/Amd/comments/b08jc7/vega_undervolting_guide_for_2019_with_extensive/ TL;DR: the "sweet spot" settings can now be achieved by just changing the power limit to -5%, and that's it!?


----------



## snipernote

DiakonCz said:


> Hello, I have a Vega 56 Sapphire Pulse and I flashed an XFX Vega 64 BIOS onto it. So my question is: is it safe to use the +50% power limit?
> 
> 56 bios was on 270w with +50%
> 
> 64 bios will be about 320w with +50% (if I count it right)
> 
> 
> 
> Thank you


PowerColor Red Dragon Vega 56 here.
I managed to max my card at 279w (+50% power limit) and [email protected] while staying stable (power usage eventually hovers between 250w and 279w, with core temp 69-70c and hotspot 108c).
Power is not the issue on Vega... the main issue is cooling capacity to sustain the load.

Sent from my POCOPHONE F1 using Tapatalk


----------



## Xinoxide

I just sold a friend a card [edit: Vega64] that I pushed 450w through pretty regularly, at varying speeds depending on the benches.

Card is still in fantastic working order after being just about beat to death.

The VRM on the reference board is damn well plenty for anything you'll do on air or water, so don't let that hinder your card choice.


----------



## Nighthog

Xinoxide said:


> I just sold a card [edit: Vega64] I pushed 450w through pretty regularly at varying speeds depending on benches to a friend.
> 
> Card is still in fantastic working order after being just about beat to death.
> 
> The VRM on the reference board is damn well plenty for anything you'll do on air or water so don't let that hinder card choice.


Can I ask what kind of cooling you used to be able to push that much wattage through your card?

Seems I've picked the worst waterblock imaginable for Vega 64, can't do 300Watts without hotspot reaching 110C and crashing games.


----------



## Nighthog

Found by chance someone who seemingly got an RX VEGA 64 PRO BIOS on their RX VEGA 64.

The BIOS is newer and previously unknown; it defaults to using the PRO drivers. But it is gimped with a 165Watt limit compared to the regular RX VEGA 64 @ 220Watts.

Here a link to discussion about it at videocardz:
http://disq.us/p/298jbil

Link to bios download:
RX VEGA 64 PRO BIOS

If someone with a more adventurous spirit wants to try, be my guest. I'll wait, but it seems we have more options now. 

Not recommended to anyone unwilling to risk their card.


----------



## Xinoxide

Nighthog said:


> Can I ask what kind of cooling you used to be able to push that much wattage through your card?
> 
> Seems I've picked the worst waterblock imaginable for Vega 64, can't do 300Watts without hotspot reaching 110C and crashing games.


I used an LC550 with Coollabs liquid metal. That's basically a 120mm AIO.

Hotspot temp requires a very precise and tight mount. Out of the ~50 or so temp sensors across the die, the hotspot is of course the highest one.

Outside of 3DMark my hotspot was pretty much guaranteed to exceed max, but since I was only running 3DMark, it would only get to about the mid 90s before the next test started loading.


----------



## Tarts5

Hi guys, need help. I'm using the Radeon software to OC/UV my Vega 64 (Gigabyte Gaming OC) but it's running too hot (hotspot temp 100C+) in some games. I would like to lower the core (P6/P7) voltage to around 900mV or a little more (it's currently at 960mV), but whenever I lower it to 950mV or below (while also lowering my VRAM voltage, as this is the "floor" as I understood it), my HBM memory clock drops to a fixed 799MHz. The memory is stable up to about 1120MHz, but I have it set to 1100MHz. 

So is there a way to lower the core to around 900mV but keep my memory clock as it is (preferably using software for it)?
Also, would undervolting P1-P5 help at all in games?


----------



## snipernote

Tarts5 said:


> Hi guys, need help. Im using the Radeon software to OC/UV my Vega64 (Gigabyte Gaming OC) but its running too hot (hotspot temp 100+) in some games. Now I would like to lower the core (P6/P7) voltage to around 900 or a little bit more (its currently at 960) but whenever I lower it to 950 or lower, (while also lowering my VRAM voltage as this is the "floor" as I understood) then my HBM memory clock drops to 799mhz fixed. Although it is stable up to about 1120mhz but I have set it to 1100mhz.
> 
> 
> 
> So is there a way to lower the core to around 900mv but keep my memory clock as it is? (with preferably using a software for it)
> 
> Also, would undervolting P1-P5 help at all or not (in games)?


Actually no, as you need to check the P2 voltage state for the HBM; if it's at 900mV then it's fixed... I would let your GPU P-states stay over 950mV just to be safe.
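The floor behaviour described here can be condensed into a tiny rule of thumb. This is a hedged sketch of the relationship reported in this thread (the memory voltage acting as a floor under the core P-state voltages), not AMD-documented behaviour, and the numbers are examples only:

```python
# Sketch of the reported Vega behaviour: the memory voltage acts as a
# floor, so a core P-state undervolt below it has no effect (or trips
# the HBM back to its low state). Illustrative only.

def effective_core_mv(requested_core_mv, hbm_floor_mv):
    """The driver will not run a core P-state below the HBM/SoC floor."""
    return max(requested_core_mv, hbm_floor_mv)

# Asking for 900 mV on P7 with the memory voltage at 950 mV effectively
# runs at 950 mV, which is why staying over 950 mV is the safe advice.
print(effective_core_mv(900, 950))   # -> 950
print(effective_core_mv(1000, 950))  # -> 1000
```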

Sent from my POCOPHONE F1 using Tapatalk


----------



## Alastair

Nighthog said:


> Can I ask what kind of cooling you used to be able to push that much wattage through your card?
> 
> Seems I've picked the worst waterblock imaginable for Vega 64, can't do 300Watts without hotspot reaching 110C and crashing games.


I am using a Chinese Barrow block on my 64. I've pushed into the 400s during Superposition runs at 1800MHz, but didn't see above 80C on the hotspot.


----------



## Dtrain

Hey guys, I have a Sapphire Vega 64 and I'm no longer able to set my power limit to anything above 1%. The only options I have are -1%, 0%, and 1%.

Wiping and reinstalling my drivers let me edit the power limit again, but after shutting down last night and setting my overclock again today it has reverted to only allowing me to select from -1% to 1%. 

I'm currently on 20.2.2, and have also tried the 20.4.2 drivers, but neither allows me to edit my power limit % after I shut down or restart following the install. Anyone have any ideas what I'm doing wrong? I haven't edited my power play registry, as I wasn't trying to exceed the 50% range; I'm not sure if I need to now to be able to get back to 30% for my OC.


----------



## PopReference

Dtrain said:


> Hey guys I have a Sapphire Vega 64 and I'm no longer able to set my power limit to anything above 1%. The only options I have are -1%, 0%, and 1%.
> 
> I've wiped and reinstalled my drivers which allowed me to edit the power limit again, but after shutting down last night and setting my overclock again today its reverted back to only allowing me to select from -1% to 1%.
> 
> I'm currently on the 20.2.2, and have also tried the 20.4.2 drivers, but neither are allowing me to edit my power limit % now after I shut down or restart after installing. Anyone have any ideas what I'm doing wrong, I haven't edited my power play registry, as I wasn't trying to exceed the 50% range, not sure if I need to now, to be able to edit back to 30% for my oc.


You should clean install the drivers if you haven't. It could be an issue with the overdrive tool for overclocking but making sure the PP table is removed or stock settings is important.


----------



## Dtrain

PopReference said:


> You should clean install the drivers if you haven't. It could be an issue with the overdrive tool for overclocking but making sure the PP table is removed or stock settings is important.


I wiped with DDU twice before installing the current drivers, and the beta drivers. I got fed up and ended up just reinstalling Windows, and whatever it was is now fixed. I can't think of what my particular issue was, but I can certainly say it's resolved now!


----------



## Ne01 OnnA

0.2.8 beta1 (with PP_Table support)
I'm using this one all the time 

Download:
-> 
https://www.dropbox.com/s/ibxcdlrgkm82ccg/OverdriveNTool 0.2.8beta1.7z?dl=1


----------



## Gustavo Al

Ne01 OnnA said:


> 0.2.8 beta1 (with PP_Table support)
> Im using this one all't time
> 
> Download:
> ->
> https://www.dropbox.com/s/ibxcdlrgkm82ccg/OverdriveNTool 0.2.8beta1.7z?dl=1


Thanks for sharing this one. I've been away from Vega 64 overclocking for a bit; is there any way to increase the GPU voltage right now (apart from the Liquid Edition BIOS)? I couldn't increase the power limit over 50% or voltages over 1200mV with this tool.


----------



## Blackops_2

Using a Bykski block on my 56 turned out well. I'm just reading up a bit, but I'm not at all sure how to stabilize these clocks. Undervolting and increasing the power limit according to the quick guides I've seen still doesn't do it for me. I'm sitting at the 1250MHz mark at 30-35C all day. Starting to wonder if the 3770K @ 4.5GHz just isn't enough.


----------



## Falkentyne

Hey guys

Can you PLEASE check and see if the P1 voltage affects how far your HBM can overclock on a vega 64?

I was crashing fast in Valley at -30mV on P7 at 1700/1050MHz, but -24mV worked.
Then someone on reddit just now mentioned that increasing P1 helped their HBM get from 1140 to 1180 on a Radeon VII, and now I'm here with 
-30mV on P7 and 1700/1075MHz and Valley hasn't crashed yet....


----------



## Nighthog

Falkentyne said:


> Hey guys
> 
> Can you PLEASE check and see if the P1 voltage affects how far your HBM can overclock on a vega 64?
> 
> I was crashing fast in Valley at -30mv P7, at 1700/1050 mhz, but -24mv worked,
> Then someone on reddit just now mentioned increasing p1 helped hbm get to 1180 from 1140 on a Radeon VII, now i'm here with
> -30mv on P7 and 1700/1075 mhz and Valley hasn't crashed yet ....


Testing right now.

Usually with stock P1 at 900mV I could only do 1080MHz HBM on my RX VEGA 64 without issues.
Did a few quick tests and it seems OK, but I need more time to do a proper longevity test.

Anyway, I could "get results" up to 1025mV. I increased HBM to 1150MHz; will see if it crashes or stays stable. 1000mV @ 1150MHz crashed often, and 1050mV @ 1160MHz crashed as well.

The drivers are a little wonky when they crash, meaning it's advised to restart if they do, or they won't work normally (no consistency of results).

Just a quick test, no proper gaming tests....


EDIT: Nope, still crashing at 1050mV @ 1150MHz. Might be trying too much at once. I'll test later; I've got no time at the moment to mess with it.


----------



## Blackops_2

Also need some help just stabilizing clocks. I can't get the thing to run at stock clocks. I'm presuming the 3770K just isn't enough. When I first installed Vega the PC wouldn't POST; I cleared CMOS and reapplied my OC at 1.2V vcore and it was fine, at least for the run of IBT I did and almost five hours of P95 before calling it quits to play COD all night. 

I even went so far as to lower the voltage 50mV on states 6 and 7 and increase the power slider to 50%, but it's to no avail: Heaven, Valley, Time Spy, Fire Strike, nothing pegs the card to its suggested clocks. This is all using the Adrenalin drivers; maybe I need to use Wattman and keep Afterburner off?

If it is the CPU, other than upgrading, all I can think of is to delid the thing and get a couple more MHz out of it. For whatever reason, a single D5 and 120X6mm of rad space isn't keeping my 3770K cool this time with anything around 1.25V.


----------



## PopReference

Blackops_2 said:


> Also need some help on just stabilizing clocks. I can't get the thing to run stock clocks. I'm presuming the 3770k just isn't enough. When i first installed vega the PC wouldn't post, cleared CMOS and reapplied my OC at 1.2vcore it was fine least the run of IBT i did and almost five hours of P95 before calling it to play COD all night.
> 
> I even went so far as to just lower voltage 50mv on state 6 and 7 and increase the power slider to 50% but it's to no avail, heaven, valley, time spy, firestrike, nothing pegs the card to it's suggested clocks. This is all using the adrenaline drivers maybe i need to use wattman and keep afterburner off?
> 
> If it is the CPU other than upgrading all i can think is to delid the thing and get a couple of more mhz out of it. For what ever reason a single D5 120X6mm of rad space isn't keeping my 3770k cool this time with anything around 1.25v.


Vega doesn't usually hit max clocks, at least on most setups; it's more like the highest optimistic clock suggestion. Some software doesn't record the clock speed correctly either; HWiNFO, GPU-Z, and some others report higher than Wattman for me.

The one type of software I can think of that can push sustained max clocks is crypto mining.


----------



## cplifj

Blackops_2

I don't believe the 3770K would ever be the problem. It may be that it is clocked too high, if anything.
Things have changed since Intel released their updated microcodes, and when running Windows 10 those microcodes get used now. In your case you would have CPU microcode 21.
But again, I doubt your CPU is too weak to drive the Vega.


----------



## Blackops_2

Gotcha. I made a new thread about it, because the moment I adjust the power limit mid-benchmark it crashes, to the point of having to clear CMOS. Am thinking at this point the card is faulty or the BIOS is corrupt. I'm considering flashing the stock BIOS, or just trying the LN2 one, or whatever the secondary is, to see if it helps at all. A shame, since I have Samsung HBM too.


----------



## ToetjeNL

*Love this card *

I love this card.



















Seems good to me 

Specs:

AMD Ryzen 2700X
Gigabyte x570 aorus pro
32GB Corsair C14 running CL14 @ 3466MHz with tight subtimings
Powercolor Red Devil Vega 64 (Liquid metal)
AORUS AD27QD Gaming Monitor
850W Corsair PSU


----------



## Nighthog

Anyone notice the availability of 10-bit-per-pixel colour support in the 20.5.1 drivers?

This 10-bit colour option is usually reserved for the PRO drivers.
I tried it out and it prompted me to reboot the system for the change to apply.

Colours look a little different, to say the least. It messes up some games like Destiny 2, where HDR will not kick in and such.
I have an HDR-capable 4K TV, so switching between 8-bit and 10/12-bit output has a drastic effect on colour gamut. Even regular 8-bit was different, more punchy.

Seems not too useful for the regular consumer, but I wonder which software actually supports and uses 10-bit properly? (games are a mess, to say the least)


----------



## jearly410

Nighthog said:


> Anyone notice the availability for 10bits per pixel colour support in the 20.5.1 drivers?
> 
> This is the usually only reserved for PRO drivers, the 10bit colour option.
> I tried it out and it prompted me to reboot the system for the change to apply.
> 
> Can say colours look a little different to say the least. Messes up some games like Destiny 2 where HDR will not kick in and such.
> I have a HDR capable 4K TV. So using 8bit, 10/12bit output has a drastic change on colour gamut. Even regular 8bit was different, more punchy.
> 
> Seems not too useful for regular consumer, but I wonder which software actually supports and uses the correct 10bit support properly? (games are a mess to be said)


AFAIK the 10bit option has always been there for me. Non-HDR 34" UW. Display ->Color Depth -> 10bpc correct?


----------



## Nighthog

jearly410 said:


> AFAIK the 10bit option has always been there for me. Non-HDR 34" UW. Display ->Color Depth -> 10bpc correct?


Different thing. 

Graphics -> Advanced -> [10-Bit Pixel Format] -> Enabled/Disabled (bits per pixel, not bits per channel)
This is the one Photoshop users usually might want.

If you enable this and then also use 10 bits per channel for display output, you get "true 10-bit" rather than the consumer 8-bit to 10-bit conversion stuff. 
But it seems to mess up too many things, as nothing really expects 10 bits; everything is designed with 8 bits in mind.

For example, HDR no longer worked properly as far as I saw. Destiny 2 was a mess and useless; it could not be used with this. HDR did not get switched on and stayed SDR, but everything was "crushed" for brightness/contrast.
In Total War: Warhammer 2, colours got messed up for some elements but not all, for an SDR game. It could actually be used if you like some "pop" to your colours.

This way I could have a 10-bit desktop, not limited to D3D 10-bit output. But as I saw, games get borked; they expect 8 bits. 
I could verify it was 10-bit by using my media setup, MPC -> madVR -> 10-bit output for video. It was still 10-bit, but I no longer had issues with not using [DX11 full-screen exclusive mode]; 10-bit output worked in windowed mode as well.


----------



## jearly410

Oh now I see what you are saying. You are correct, the toggle is new.


----------



## geriatricpollywog

My GPU won’t post and the GPU tach won’t light up. It boots fine from iGPU. Any ideas? I tried different 8 pin power connectors already.


----------



## ducegt

About 6 months ago I redid the thermal paste on my 64 LC, which was bought open box and previously wouldn't take an overclock beyond +2MHz on the core. The result: #1 7700K + Vega 64 LC (or non-LC 64) in Time Spy and Fire Strike, using the stock liquid cooler at ambient temperatures. I've modified my Z270 board to take a 9900K, and in the coming weeks will see if this Vega can climb the charts again.

https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=

https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=


----------



## Wuest3nFuchs

Hello,


Vega 56 Pulse, watercooled.


My VDDC and MVDD temps were about 20-30°C higher than those of the HBM and GPU...?
Is that normal for a watercooled card?
Should I care about that higher temp there?


My 2700X CPU was at 38.3°C while running a game, and it's in the same loop as the GPU.


Below are pics of my maxed-out temps for the Vega 56 while gaming, and a pic from OverdriveNTool showing clocks and voltages.


Regards


----------



## miklkit

Gave meself a scare recently. Started smelling a hot electronics smell, and then one day there was a buzz in the headphones and the monitor went black. Restarted the puter and all seemed well, but the smell persisted. So off came the cables and side panels and out into the yard it went, where it was given a thorough beating with a DataVac. Big cloud of toxic dust.


Since then the rig is, if anything, running slightly cooler than it was a year ago at the same specs. Still, it might be time to think about re-greasing the bearings, so to speak. Is there anything to pay attention to on this Sapphire Vega 64?


----------



## Ne01 OnnA

ducegt said:


> About 6 months ago I redid the thermal paste on my 64 LC that was bought open box and previously wouldn't take an overclock beyond +2 mhz on the core. The result, #1 7700K + Vega 64 LC (or non 64) in TimeSpy and FireStrike using the stock liquid cooler and ambient temperatures. I've modified my Z270 board to take a 9900k and in the coming weeks will see if this Vega can climb the charts again.
> 
> https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=
> 
> https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=



Am I first? 23640 overall 

"Your best score: 23640
Better than 100% of results"

Waiting for Win 2004 to drop via Update, then I will test some & get some 

-> https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=
Change Overall to GFX and back to have it display right.


----------



## Ne01 OnnA

Best scores online comparison (I don't know how to submit this, though).
I think I have one of the best scores for Vega XTX (LC solution).
My scores are at the end (2 of them: one high OC at 1795MHz, Tess x8, 24k, and the other a max-gamer 1750MHz with a valid score). Enjoy:

-> https://www.3dmark.com/compare/fs/19128644/fs/19131114/fs/19487116/fs/23234735/fs/23234687

More info about the settings:
-> https://forums.guru3d.com/threads/r...-bios-tweaks-cont.426001/page-65#post-5814820
-> https://forums.guru3d.com/threads/r...-bios-tweaks-cont.426001/page-65#post-5814821


----------



## Robotmind

Great job! It is a lonely list for sure... 

I may not have the top score, but I make up for that in quantity.

All runs with no chicken clocking.





ducegt said:


> About 6 months ago I redid the thermal paste on my 64 LC that was bought open box and previously wouldn't take an overclock beyond +2 mhz on the core. The result, #1 7700K + Vega 64 LC (or non 64) in TimeSpy and FireStrike using the stock liquid cooler and ambient temperatures. I've modified my Z270 board to take a 9900k and in the coming weeks will see if this Vega can climb the charts again.
> 
> https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=
> 
> https://www.3dmark.com/newsearch#ad...de=false&showInvalidResults=false&freeParams=


----------



## Ne01 OnnA

THX


----------



## fcchin

Hello, GPU-Z says my MEM_VRM temp is high, around 100C at idle. How do I reduce it?

I'm using a Vega 64 reference card converted to liquid cooling; with new 13.5 W/mK thermal paste I finally managed to get the hotspot to 65C or below. I even tried flashing the Vega 56 BIOS with 1.25V MVDDC, but VR_MVDD (HWiNFO's term) is still a high 95C.

Thanks in advance for advice.

I already added new 13 W/mK thermal pads to various other smaller ICs to cool them, but there was nearly no change.

I also added a converter cable from the original GPU fan header to a PWM fan splitter, driving the pump at 2700rpm and push-pull fans on a 420x86x140 radiator. All temps are low, boost is high, and throttling is minimal, but the mem VRM temp is a high 100C, which I'm not comfortable with.


----------



## Ne01 OnnA

You can reduce the temps by UV or lowered clocks (GPU as well as HBM2).
I've never seen temps of more than 75C on the hotspot on my XTX.... (in summer, of course)


----------



## fcchin

Ne01 OnnA said:


> You can reduce the temps by UV or Lowered Clocks (GPU as well as HBM2).
> I've never seen temps more than 75deg. Hot spot on my XTX.... (in Summer of course)


Below I highlight the high mem VRM temp with MVDDC reduced to 1.25V. It's a headache; I don't understand why it's so high......


----------



## Worldwin

fcchin said:


> Below I highlight the mem vrm temp high and MVDDC reduce to 1.25v - headache don't understand why so high......


I would put no faith in those readings. Only pay attention to the core (GPU), mem (HBM), and hotspot temps. 1.25V is standard for the V56 BIOS and is not a concern; voltages for memory do not change.


----------



## Ne01 OnnA

Yup, IMO it's misread by the sensor.
You have the hotspot to look at (this is, BTW, the highest temp read across the whole GPU PCB).


----------



## miklkit

Well, this might be the end of the road for this Vega 64. It has started black-screening and turning itself off. It only lasts a few minutes in games anymore, but is still good for YouTube.


I've been looking around, and most seem to think they shut down due to a lack of power. This Seasonic 850W 80+ Platinum PSU is less than 2 years old and is connected to the V64 by 2 separate cables.
Temps have always been good because it's a Nitro with an aggressive fan profile; they normally stay in the 50s and sometimes creep into the 60C range.


There was one person who said he cured it by flashing the BIOS. I've never done it and don't know how, but a few minutes ago I flipped the switch to the right, which I believe is the low-power BIOS. Will see how that does.


Meanwhile, here is the last reading I took of the system, just a week before it started black-screening during a heat wave.


----------



## Worldwin

miklkit said:


> Well, this might be the end of the road for this Vega 64. It has started black screening and turning itself off. It only lasts a few minutes in games anymore but is still good for youtube.
> 
> 
> I've been looking around and most seem to think they shut down due to a lack of power. This Seasonic 850 watt 80+platinum PSU is less than 2 years old and is connected to the V64 by 2 separate cables.
> Temps have always been good because it's a Nitro and it has an aggressive fan profile. Temps normally stay in the 50s and sometimes creep into the 60c range.
> 
> 
> There was one person who said he cured it by flashing the bios. I've never done it and don't know how, but a few minutes ago I flipped the switch to the right which I believe is the low power bios. Will see how that does.
> 
> 
> Meanwhile here is the last reading I took of the system just a week before it started black screening during a heat wave.


3000RPM on the fan, though. Might as well try undervolting, assuming you haven't. Not like you have much to lose at this point.


----------



## dagget3450

miklkit said:


> Well, this might be the end of the road for this Vega 64. It has started black screening and turning itself off. It only lasts a few minutes in games anymore but is still good for youtube.
> 
> 
> I've been looking around and most seem to think they shut down due to a lack of power. This Seasonic 850 watt 80+platinum PSU is less than 2 years old and is connected to the V64 by 2 separate cables.
> Temps have always been good because it's a Nitro and it has an aggressive fan profile. Temps normally stay in the 50s and sometimes creep into the 60c range.
> 
> 
> There was one person who said he cured it by flashing the bios. I've never done it and don't know how, but a few minutes ago I flipped the switch to the right which I believe is the low power bios. Will see how that does.
> 
> 
> Meanwhile here is the last reading I took of the system just a week before it started black screening during a heat wave.


I had a friend who got a V64 and had an issue with his PC shutting down/off about 3-5 minutes into a game. Turned out his PSU wasn't handling the power spikes. I loaned him a 1kW PSU and his problem was resolved. I think he had a 750/850W PSU, but I don't know what brand/model. So it is very possible your PSU is the culprit. Vega eats massive amounts of power; maybe you could try undervolting as suggested to see if it helps.


----------



## miklkit

It is undervolted by -81mV; any lower and it destabilizes. It is OCed to 1660MHz GPU, 1100MHz RAM, and -81mV. 



About the fans: I like to keep it as cool as possible, so I set an aggressive fan profile. Anyway, I flipped the BIOS switch to the right and then played for well over an hour with no issues. Maybe it is just a corrupted BIOS.


----------



## Worldwin

miklkit said:


> It is undervolted by -81 mv. Any lower and it destabilizes. It is OCed to 1660mhz GPU, 1100 mhz ram, and -81 mv.
> 
> 
> 
> About the fans, I like to keep it as cool as possible so set an aggressive fan profile. Anyway, I flipped the bios switch to the right and then played for well over an hour with no issues. Maybe it is just a corrupted bios.


Best not to undervolt via MSI AB; it offsets all P-states.
https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/
This will let you modify all states individually.
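The difference between the two tools can be illustrated with a small sketch. The stock voltage table below is a rough placeholder, not real Vega 64 values:

```python
# Why a flat Afterburner-style offset differs from per-state editing in
# OverdriveNTool: AB shifts every P-state, including the low idle states,
# while ODNT touches only the states you name. Placeholder voltages.
STOCK_MV = {"P0": 800, "P1": 900, "P2": 950, "P3": 1000,
            "P4": 1050, "P5": 1100, "P6": 1150, "P7": 1200}

def ab_offset(states, offset_mv):
    """MSI AB applies one offset to all P-states."""
    return {p: v + offset_mv for p, v in states.items()}

def odnt_edit(states, per_state_mv):
    """OverdriveNTool-style: override only the named states."""
    return {p: per_state_mv.get(p, v) for p, v in states.items()}

flat = ab_offset(STOCK_MV, -81)                       # idle P0 drops to 719 mV too
targeted = odnt_edit(STOCK_MV, {"P6": 1050, "P7": 1100})
print(flat["P0"], flat["P7"])          # -> 719 1119
print(targeted["P0"], targeted["P7"])  # -> 800 1100
```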


----------



## MgoZ

Hi all.

I have an RX Vega 56 Sapphire Pulse that is giving me problems. I recently installed a water block ( https://es.aliexpress.com/item/32912568591.html ), and now when starting the PC the screen shows an image until it enters Windows, where it turns off after a few seconds.
If I put the heatsink that came from the factory back on, everything works correctly.

I think the problem may be that with the water block on, the card does not detect a connected fan and stops displaying an image for safety.

Does anyone have a Vega 56 Sapphire Pulse with a water block that had this problem? Do I need a special BIOS?

Thanks for everything


----------



## miklkit

Worldwin said:


> Best not be undervolting by MSI AB. It offsets all P-states
> .https://forums.guru3d.com/threads/overdriventool-tool-for-amd-gpus.416116/
> This will let you modify all states.



I don't use MSI AB for anything except the RivaTuner part for the overlay. It's a Sapphire V64, and for whatever reason Wattman doesn't work, so I use Trixx ver. 6.6.0. 



I tried that OverdriveNTool over a year ago and it did not want to work: I would make changes and it would revert to stock. This was still better than Wattman, which would do whatever it felt like; at least OverdriveNTool would revert to stock.


Anyway, I played the most stressful game I have for 2 hours last night with no issues. Temps got into the 60C range. So it is looking like that BIOS did indeed get corrupted. I never knew a BIOS could get corrupted either. Now to learn how to flash that BIOS. Gulp.


I know nothing about waterblocks but that sounds like a bad mount.


----------



## Worldwin

miklkit said:


> I don't use MSI AB for anything except for the Rivatuner part for the overlay. It's a Sapphire V64 and for whatever reason wattman doesn't work, so I use Trixx ver. 6.6.0.
> 
> 
> 
> I tried that Overdriventool over a year ago and it did not want to work. I would make changes and it would revert to stock. This was still better than wattman that would do whatever it felt like, as it would at least revert to stock.
> 
> 
> Anyway, I played the most stressful game I have for 2 hours last night with no issues. Temps got into the 60C range. So it is looking like that bios did indeed get corrupted. I never knew a bios could get corrupted either. Now to learn how to flash that bios. Gulp.
> 
> 
> I know nothing about waterblocks but that sounds like a bad mount.


Oh, that occurs when the voltage set at a given state conflicts with the default ones applied via the BIOS. The fix is to set the voltage via the registry. This can be done by running OverdriveNTool in admin mode, then right-clicking the top to open the options tab and selecting "PPtable Editor." Setting your P-states here is superior, as it overrides the voltages in the BIOS.


----------



## Loladinas

MgoZ said:


> Hi all.
> 
> I have a RX Vega 56 Sapphire Pulse that is giving me problems. I recently installed a water block https://es.aliexpress.com/item/32912568591.html , now when starting the pc the screen shows image until it enters windows, where the screen turns off after a few seconds.
> If I put the heatsink that came from the factory, everything works correctly.
> 
> I think the problem may be that when I put the water block on it, it does not detect that there is a fan connected to the graphics card and it stops displaying the image for safety.
> 
> Does anyone have a Vega 56 Sapphire Pulse with water block that had this problem? Do i need a special bios?
> 
> Thanks for all


Why not connect a fan to the header and check it out? I had black-screening issues like that on my Vega too, though it's the reference model. Turns out I didn't have a proper mount.


----------



## MgoZ

Loladinas said:


> Why not connect a fan to the header and check it out? I had blackskreening issues like that on my Vega too, though it's the reference model. Turns out I didn't have a proper mount.


Thank you for the help. I am building everything again, i will try it.


----------



## miklkit

Worldwin said:


> Oh that occurs when the voltage set at a given state conflict with the default ones applied VIA bios. The fix for that is to set the voltage by registry. This can be done by running OverdriveNtool in Admin mode then right clicking the top to open the option tab and selecting "PPtable Editor." Setting your P-states here is superior as it overrides the voltages in the bios.



In almost 2 years this is the first I have heard of that. Anyway, it ran fine with Trixx until a few weeks ago, and at this stage I will not be changing. This V64 might be dying, as it black-screened again last night. It's odd that it can run for hours just fine, and then when paused, it black-screens.


----------



## Worldwin

miklkit said:


> In almost 2 years this is the first I have heard of that. Anyway, it has been running fine with Trixx until a few weeks ago and at this stage I will not be changing. This V64 might be dying as it black screened again last night. It's odd that it can run for hours just fine and when paused, it black screens.


Yeah, it mainly has to do with the P2 state for the memory being tied to, I believe, P2 of the core voltages. Memory P2 defaults to 950(?)mV, so if you did not save to the registry it would constantly go back to said 950(?)mV. Hence the issue, because you could end up with a higher voltage on a lower state, which is a no-no. Luckily, once everything is tuned you can save it as a registry file, so there is no need to manually add everything again.
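The "higher voltage on a lower state" problem can be checked mechanically. A minimal sketch, assuming the usual rule that P-state voltages must be non-decreasing from P0 upward (the example values are hypothetical):

```python
# Sanity-check a P-state voltage table: voltages should never decrease
# as the state number rises; otherwise the table is the "no-no" case.

def pstate_table_valid(mv_by_state):
    """True if voltages are non-decreasing from P0 upward."""
    volts = [mv_by_state[p] for p in sorted(mv_by_state)]
    return all(a <= b for a, b in zip(volts, volts[1:]))

good = {"P0": 800, "P1": 900, "P2": 950, "P3": 1000}
bad = {"P0": 800, "P1": 900, "P2": 850, "P3": 1000}  # P2 below P1
print(pstate_table_valid(good))  # -> True
print(pstate_table_valid(bad))   # -> False
```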


----------



## Nighthog

miklkit said:


> In almost 2 years this is the first I have heard of that. Anyway, it has been running fine with Trixx until a few weeks ago and at this stage I will not be changing. This V64 might be dying as it black screened again last night. It's odd that it can run for hours just fine and when paused, it black screens.


AMD drivers! The presets are unstable, period! I learned that on my Vega 64 from the start. Only the "standard" profile is stable; otherwise it will reset/black-screen at idle or while browsing the web. (It doesn't default to this profile either.)

I apply the standard profile and then do a manual OC, which is fine and gives no issues.

Every other driver release seems to mess with the clocks & voltages the Vega 64 cards run at. Some are just more filled with issues than others. You want one that sets the targets "just right" for your particular card. There is just too much variance in the silicon quality, and they seemingly haven't set them well in a universal way. Some drivers are just too aggressive.


----------



## asds asds

*HAHA*


----------



## Ark-07

Been a long time since I was here. I basically ran my MSI Vega 64 at a -20% power setting to keep temps below 65C. It's only recently that I stopped doing this: I noticed I wouldn't go above 1560MHz while gaming, and I wanted to squeeze every last bit of frame rate I can out of modern games. With my current settings I hit 1655MHz max under load and average 1610MHz while gaming, with temps hitting 70C max. 
As for the GPU fans, I never really hear them (I use a headset) and they only ever go high while gaming; the rest of the system cooling is all silent @ 4 intake fans (1400rpm) and 3 exhaust fans (1800rpm). 

My Vega is probably on its last legs. As I live by the sea, the back part where the cables go is rusting, and I didn't see any rust 6 months ago. I assume that's the reason for the rust; at least it's lasted 3 years. I will probably throw it away, as I doubt anyone is going to buy a GPU with rust on it.


----------



## Najenda

I bought a Vega 64 LC 7 months ago; I forgot to write about it here. Nice card, but I have a fear of a pump leak or memory death. Second-hand buying is risky.


----------



## Najenda

These are my daily-use clocks; I'm only playing Dying Light and PoE.


----------



## PopReference

I wanted to share some things I noticed from overclocking/undervolting Vega regarding VRAM speeds not being consistent, and what the VRAM tuning voltage setting actually does on Vega. (This isn't easy to replicate, since the memory can bug out while changing settings and not properly downclock, making changes inconsistent when testing, but I hope it's understandable.)
Memory voltage is tied to core voltage: the voltage delivered to the core, determined by its power state, sets the memory MHz P-state.
At stock, when the core is at state 5 or above, memory will be at state 3 (Core p5 1401mhz[1100mv] => Vram p3 945mhz[1100mv]); if the core is at state 2 or above but below p5, memory will be at p2 (Core p3 1138mhz[1000mv] => Vram p2 800mhz[950mv]).
The issue arrives when undervolting the core below 1100mv while leaving VRAM at 1100mv:
When state 7 voltage is 1050, memory will drop to state 2 (Core p7 1630mhz[1050mv] => Vram p2 800mhz[950mv]); if p6 is left at 1150, memory will stay at p3 (Core p6 1536mhz[1150mv] => Vram p3 945mhz[1100mv]).

Basic advice for undervolting: lower the voltage of the core power states evenly, and/or match the VRAM voltage to core state 5.

I don't know how well documented this is, since not all games will be affected the same.
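To make the pattern concrete, here is a rough sketch of the coupling in Python. The numbers are the illustrative ones from this post (one card, not a spec), and the "highest state whose voltage the core covers" rule is my interpretation of the observations, not confirmed AMD behavior:

```python
# Toy model of the observed Vega core-voltage -> VRAM P-state coupling.
# P-state values are examples from the post above, not a specification.
VRAM_PSTATES = [
    (2, 800, 950),   # (state, MHz, mV)
    (3, 945, 1100),
]

def vram_pstate_for_core(core_mv: int):
    """VRAM appears to settle at the highest P-state whose voltage
    the currently active core voltage still covers."""
    best = VRAM_PSTATES[0]
    for state in VRAM_PSTATES:
        if core_mv >= state[2]:
            best = state
    return best

print(vram_pstate_for_core(1100))  # stock p5 core (1100mv) -> VRAM p3 @ 945mhz
print(vram_pstate_for_core(1050))  # undervolted p7 (1050mv) -> VRAM falls to p2 @ 800mhz
```

Which is why the advice above works: either keep the top core states at or above the VRAM p3 voltage, or lower the VRAM voltage to match.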


----------



## Ark-07

PopReference said:


> I wanted to share some things I noticed from overclocking/undervolting Vega regarding VRAM speeds not being consistent, and what the VRAM tuning voltage setting actually does on Vega. (This isn't easy to replicate, since the memory can bug out while changing settings and not properly downclock, making changes inconsistent when testing, but I hope it's understandable.)
> Memory voltage is tied to core voltage: the voltage delivered to the core, determined by its power state, sets the memory MHz P-state.
> At stock, when the core is at state 5 or above, memory will be at state 3 (Core p5 1401mhz[1100mv] => Vram p3 945mhz[1100mv]); if the core is at state 2 or above but below p5, memory will be at p2 (Core p3 1138mhz[1000mv] => Vram p2 800mhz[950mv]).
> The issue arrives when undervolting the core below 1100mv while leaving VRAM at 1100mv:
> When state 7 voltage is 1050, memory will drop to state 2 (Core p7 1630mhz[1050mv] => Vram p2 800mhz[950mv]); if p6 is left at 1150, memory will stay at p3 (Core p6 1536mhz[1150mv] => Vram p3 945mhz[1100mv]).
> 
> Basic advice for undervolting: lower the voltage of the core power states evenly, and/or match the VRAM voltage to core state 5.
> 
> I don't know how well documented this is, since not all games will be affected the same.


I'm a little confused by what you're saying. What can I do to get my frequency overclock higher? I noticed when undervolting that my card wouldn't go above 1534mhz, so I compromised until I had it running 1590mhz avg. Sometimes when it's cool it will hit 1615mhz. But I've noticed hitting 1630mhz I crash, though it might have been HBCC related. This card is very old, so I'm just waiting for watercooled 6900 XTs to be in stock. I'm more than happy to squeeze it as much as possible for Cyberpunk while trying to stay under 75c.


----------



## PopReference

Ark-07 said:


> I'm a little confused by what you're saying. What can I do to get my frequency overclock higher? I noticed when undervolting that my card wouldn't go above 1534mhz, so I compromised until I had it running 1590mhz avg. Sometimes when it's cool it will hit 1615mhz. But I've noticed hitting 1630mhz I crash, though it might have been HBCC related. This card is very old, so I'm just waiting for watercooled 6900 XTs to be in stock. I'm more than happy to squeeze it as much as possible for Cyberpunk while trying to stay under 75c.


I wasn't talking about the GPU core frequency but the HBM memory speed.
I've also noticed pushing past 1630 can crash the GPU, but lowering the voltage below 1100 can keep the MHz as high as possible in very demanding games.


----------



## LXP-F

My reference sapphire vega 64 on an EK waterblock only crashes over 1730mhz core, and memory over 1125mhz produces artifacts in some games so I keep it at 1100mhz. I have not figured out what the memory voltage does. This is not an undervolt by the way.


----------



## Ark-07

LXP-F said:


> My reference sapphire vega 64 on an EK waterblock only crashes over 1730mhz core, and memory over 1125mhz produces artifacts in some games so I keep it at 1100mhz. I have not figured out what the memory voltage does. This is not an undervolt by the way.


This is why ill never buy another air cooled card just watercooled in the future


----------



## jearly410

LXP-F said:


> My reference sapphire vega 64 on an EK waterblock only crashes over 1730mhz core, and memory over 1125mhz produces artifacts in some games so I keep it at 1100mhz. I have not figured out what the memory voltage does. This is not an undervolt by the way.


What are your temps? I've redone my mounting a few times but can't get the hotspot under 105c. When I stop being lazy I'll try again


----------



## LXP-F

jearly410 said:


> What are your temps? I've redone my mounting a few times but can't get the hotspot under 105c. When I stop being lazy I'll try again


Personally I think the hot spot temp is broken (or some of the points it gets its data from are), because I'll run the Time Spy Extreme stress test, get 99%, look at the temps in HWiNFO and it'll say the hot spot maxed at like 164c. I just ran another Time Spy Extreme stress test (got 99.3%, nice) and took a screenshot for you. I see my hot spot was at 1c at some point; I know this is wrong as the house is always between 20c and 22c. You can also see the liquid and case temps at the bottom of the HWiNFO window there. I guess if you're consistently getting a high hot spot average then yeah, I'd try a remount.


----------



## fcchin

jearly410 said:


> What are your temps? I've redone my mounting a few times but can't get the hotspot under 105c. When I stop being lazy I'll try again


Add thermal pads next time to cover the entire GPU edges; it helps a lot.









Also try adding another washer between the screw and the block, to prevent the screw from bottoming out and not actually tightening fully down.


----------



## Nighthog

fcchin said:


> Add thermal pads next time to cover the entire GPU edges; it helps a lot.
> View attachment 2467314
> 
> 
> Also try adding another washer between the screw and the block, to prevent the screw from bottoming out and not actually tightening fully down.


0.5mm thickness pads around the GPU?
I've considered trying this when/if I ever redo my mount; my hotspot temperature has been increasing and increasing over the last year since I did my last remount.
Seems I have a non-optimal thermal compound for this GPU (MX-4).
So I will need to redo it, but I need to buy new pads to replace the old ones before I try. Need to consider which sized pads to buy.


----------



## jearly410

Thoughts on this? Anyone done it this way?









AMD Radeon RX Vega, the ominous hotspot and the correct application of thermal paste | igor'sLAB
www.igorslab.de


----------



## fcchin

Nighthog said:


> 0.5mm thickness pads around the GPU?
> I've considered trying this when I if ever retry my mount, my hotspot temperature has been increasing and increasing over the last year since I did my last remount.
> Seems I have non-optimal thermal compound for this GPU. (MX-4)
> So will need to redo it but need to buy new pads to replace the old ones before I try. Need to consider which sized pads to buy.


Sorry for the late reply... I was thinking how best to explain. Here it goes, roughly the steps below, not exact, because there were many attempts and retries.

Buy a few different thicknesses: 0.1mm, 0.2mm, 0.5mm, 1mm; you can stack them if needed.
Use toothpaste to test. Apply it in all places.
Trial-install the GPU on the liquid block with toothpaste and pads to find where it needs the thinnest pad.
In my case all the big grey inductors were touching the block at the same moment the GPU touched the block; headache.
Apply 0.1mm to all the big grey inductors and continue to the next spot.
Apply XXX pads to the GPU edge. Then measure with a ruler, see picture below.
Put a strip of paper between the GPU and the liquid block.
Trial-install the GPU on the block without paste and without screws, just hand pressure, and push and pull the paper:
if the paper moves = pads too thick, need to reduce
if the paper is tight and can't move = the pads may still be too thin or not enough, try adding some.

Actually the above is just practice.
Next apply thermal paste, one rice grain at a time.
Trial-install the GPU on the block again with hand pressure, take it apart and see how much paste is still needed, and repeat the process. I needed 7 rice grains, and did it 7 times to ensure coverage and positioning, around 1 gram in total.
Due to the inclusion of thermal paste the total thickness increases, so now redo the paper push/pull test on the GPU edge with the thermal pads; that's why the above was just practice.

----------



## fcchin

While you're at it, buy a mini-fan-header to standard-size adapter cable to let the GPU control the liquid pump, and a fan expansion board to simultaneously control the fans too.


----------



## fcchin

jearly410 said:


> Thoughts on this? Anyone done it this way?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD Radeon RX Vega, the ominous hotspot and the correct application of thermal paste | igor'sLAB
> www.igorslab.de


Yes I did this, absolutely believe it.

After applying the paste, trial-install the GPU and block with hand pressure, then remove to visually inspect that it covers all the places, and goes to all the correct places and not the wrong places.

How do you know? Do it a few times in a row: install, press, remove, inspect; install again, press, remove, inspect again; and push or guide the paste to areas lacking it. Repeat until it doesn't go to the wrong places anymore and always covers the entire surface. Then screw it on.


----------



## Wuest3nFuchs

jearly410 said:


> Thoughts on this? Anyone done it this way?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD Radeon RX Vega, the ominous hotspot and the correct application of thermal paste | igor'sLAB
> www.igorslab.de


Yes, I did it with my Sapphire RX Vega 56 Pulse, and I tried 8 times till I got it right[emoji6]

Gesendet von meinem SM-G950F mit Tapatalk


----------



## Ivor.p

Hello people. I have a problem with my Vega. Recently I installed a Morpheus II cooler and everything was OK, but VRM temps were 95+, so I took it apart and changed all the thermal pads. But when reassembling, the original thread in the Morpheus got worn out and I couldn't put it back, so I used some M3 screws, nuts and washers to put it back (as seen in a couple of pics online and in a Buildzoid vid once).

But after that I am getting a lot of coil whine and buzzing from the card. I went to disassemble it again and noticed a broken MLCC (in between NS515 and NS517 on the pics) right next to the cooler mounting hole. So can it be fixed?
The card has not lost any performance or stability, only a lot of coil whine and buzzing. And I am concerned.

Thanks for the replies.

Sorry for the multiple pictures. I was posting from mobile and accidentally included extra pics.


----------



## PopReference

Ivor.p said:


> Hello people. I have a problem with my Vega. Recently I installed a Morpheus II cooler and everything was OK, but VRM temps were 95+, so I took it apart and changed all the thermal pads. But when reassembling, the original thread in the Morpheus got worn out and I couldn't put it back, so I used some M3 screws, nuts and washers to put it back (as seen in a couple of pics online and in a Buildzoid vid once).
> 
> But after that I am getting a lot of coil whine and buzzing from the card. I went to disassemble it again and noticed a broken MLCC (in between NS515 and NS517 on the pics) right next to the cooler mounting hole. So can it be fixed?
> The card has not lost any performance or stability, only a lot of coil whine and buzzing. And I am concerned.
> 
> Thanks for the replies.
> 
> Sorry for the multiple pictures. I was posting from mobile and accidentally included extra pics.


Looking at other boards' pics, it's probably a resistor, not a cap, possibly the same kind as the others in a similar position. Repairing it shouldn't be too hard so long as you have what you need: the right resistor and a steady soldering hand. But it's possible it may not fix the coil whine.


----------



## Ivor.p

Can you help me somehow identify the specific resistor in question?

But the coil whine wasn't there before I disassembled it the last time, not with the stock blower nor when I had the Morpheus prior to changing the thermal pads. Only after that did I hear the whine, and I went to check what I did wrong and noticed this broken SMD.


----------



## PopReference

Ivor.p said:


> Can you help me to somehow idetify the specific resistor in question?
> 
> But the coil whine wasnt there before i dissasembled it the last time, not with stock blower or when i had the morpheus prior to changing thermal pads, only after that i heard the whine and i went to check what i did wrong and noticed this broken SMD.


I don't know where to find the exact parts list myself. You can use a multimeter to test the resistance of the others, but they often won't read correctly on a PCB like this.


----------



## fcchin

I think it's a capacitor, as seen between NS514 and NS516. The middle one is not black in color.

But it's strange there's no number at the component position, unlike the others which have Cxxx or Rxxx.

Regarding whine noise, I used to have it between October 2017 and April 2020. Then I replaced the coolant and took the opportunity to add more thermal pads to other areas of suspicious heat as well as noise, and now there is no more whine. This leads me to think the thermal pad that came with the block was too thick in some places, causing bad pad contact in others, which allowed more coil vibration and finally the whine. Try checking all pads for looseness with a paper strip as seen in the picture above; if the paper can be pulled out, it means not tight enough, not thick enough, or too thick in some other places.

Last tip: it's not necessary to replace coolant every year. Only replace it if the temps rise or there's a blockage. I was surprised my liquid block was 100% clean, no blockage, no wear and tear. Damn, I wasted US$30 of old coolant (1.5 liters). If your coolant is this price or higher, don't worry at all, just use it till infinity.


----------



## Naeem

Does anyone know how to run 1.25v on a Vega 64 LC? My GPU always stays under 1.20v even if I put +50mv in MSI Afterburner, and the AMD settings say 1250mv for the 1750mhz clock, but the GPU never goes above 1.20v for some reason. And if I leave it at 1750mhz in MSI AB I get random crashes; I had to set it to 1740 or below, with mem at 1100mhz and +50 power.


----------



## Ivor.p

Naeem said:


> Does anyone know how to run 1.25v on a Vega 64 LC? My GPU always stays under 1.20v even if I put +50mv in MSI Afterburner, and the AMD settings say 1250mv for the 1750mhz clock, but the GPU never goes above 1.20v for some reason. And if I leave it at 1750mhz in MSI AB I get random crashes; I had to set it to 1740 or below, with mem at 1100mhz and +50 power.





You need to do a PowerPlay table mod so you can raise your power limit to 150%+, and then you'll be able to go over 1.2v.


I have a Vega 56 with a Vega 64 bios and a PowerPlay mod, but when I try to flash an LC bios, it keeps freezing and lagging all the time. Has anyone done this on a Vega 56?

I'll put some links for you:


https://www.reddit.com/r/Amd/comments/8ra8vr









AMD Vega 56 Hybrid Results: Fixing AMD’s Artificial Limit at 1742MHz


Everyone talks game about how they don’t care about power consumption. We took that comment to the extreme, using a registry hack to give Vega 56 enough extra power to kill the card, if we wanted, and a Floe 360mm CLC to keep temperatures low enough that GPU diode reporting inaccuracies emerge...




www.gamersnexus.net






https://www.reddit.com/r/overclocking/comments/dmm09u









1.25v+ on Vega64?


Hey there, I got a power table and I want to ask, has anybody managed to get anything higher than 1.25v? I have flashed my Vega64 with an LC bios, because I've put an EK waterblock on it. I manage to get 1690-1720MHz on the core while gaming. I just wanna see if going higher than 1.25v would...




www.overclock.net


----------



## Ivor.p

fcchin said:


> I think it's a capacitor, as seen between NS514 and NS516. The middle one is not black in color.
> 
> But it's strange there's no number at the component position, unlike the others which have Cxxx or Rxxx.
> 
> Regarding whine noise, I used to have it between October 2017 and April 2020. Then I replaced the coolant and took the opportunity to add more thermal pads to other areas of suspicious heat as well as noise, and now there is no more whine. This leads me to think the thermal pad that came with the block was too thick in some places, causing bad pad contact in others, which allowed more coil vibration and finally the whine. Try checking all pads for looseness with a paper strip as seen in the picture above; if the paper can be pulled out, it means not tight enough, not thick enough, or too thick in some other places.
> 
> Last tip: it's not necessary to replace coolant every year. Only replace it if the temps rise or there's a blockage. I was surprised my liquid block was 100% clean, no blockage, no wear and tear. Damn, I wasted US$30 of old coolant (1.5 liters). If your coolant is this price or higher, don't worry at all, just use it till infinity.



Maybe I need to check with a microscope whether there is a part number on the SMD itself. I can't find a lot of information online.


----------



## PopReference

Ivor.p said:


> maybe i need to check with a microscope if there is a part number on the smd itself. i cant find alot of information online


I was probably wrong before, sorry. If it's a capacitor, it's somewhat easier to guess the replacement part you need.
The size should be standard; my guess would be 0603 (1.60mm x 0.80mm). You should measure it to be sure, but they should all be standard sizes.
The voltage rating can be either 16v or 2.5v depending on which side of the VRM it's on. A rated voltage higher than the expected voltage is safer.
For capacitance, normally the higher the better, unless it has a specific need to be as low as possible. You can try testing the other caps if you have a multimeter that can; the broken one could be bridging the connection, so you should check it too.


----------



## nolive721

I can see whine noise being mentioned here. I have had a Vega 64 LC for a good 2 years and noticed whine under low load that somehow is not noticeable when the card is under heavy load and the fans are blowing faster.

I have my rad above the card itself, obviously, but set the tubing at the top of the rad, not the bottom, so could that be the culprit?

I could change the orientation if people here believe it would make a difference.

thanks


----------



## PopReference

nolive721 said:


> I can see whine noise being mentioned here. I have had a VEG64LC for a good 2 years and noticed whine noise under low load that somehow is not noticeable when the card is under heavy load and the fans blowing faster.
> 
> I have my Rad above the card itself obviously but set the tubbing at the top of the Rad, not the bottom so could it be the culprit?
> 
> I could change the orientation if people here believes it would make a difference.
> 
> thanks


Coil whine like that is common; it's caused by the inductors in the VRM. High quality inductors are more noisy.


----------



## nolive721

OK, so no effect from tubing position? Up or down would not influence this coil whine?


----------



## PopReference

nolive721 said:


> Ok so no effect of tubbing position , up or down would not influence this coil whine?


Yeah. Tube position can cause noise, but that will sound like water trickling from the rad, whereas coil whine is a high-pitched hum from the card itself.


----------



## fcchin

nolive721 said:


> Ok so no effect of tubbing position , up or down would not influence this coil whine?


One or more of the inductors is not being pressed down by the thermal pads with enough pressure; a loose inductor whines louder.


----------



## nolive721

OK, thanks all. I guess I shouldn't worry too much then and leave my rad position as it is, assuming it's not pump but VRM noise related. Thanks again.


----------



## Dilet

Just thought I'd post my OC.
Card is a Sapphire Pulse V56.
I flashed it to an XFX DD V64 BIOS.
P6/P7 are set to 1575/1650; volts are 1065/1115.
Power limit is +50%.
HBM (Samsung) is currently set to 1050mhz at 965mv.
Actual clocks in Cyberpunk 2077 are 1630-40mhz.
Temps are 60C, hot spot is roughly 75.


----------



## Wuest3nFuchs

Having the same card but under water. @Dilet which vbios did you flash exactly?
Maybe I'll try this out myself on the weekend. To all the others that flashed their Vega 56: does anyone know if there are any restrictions from the driver when flashing a Vega 64 bios, or is that only when you mod a vbios yourself?
And also, should I flash the normal bios or the silent one?

Gesendet von meinem SM-G950F mit Tapatalk


----------



## Naeem

Does anyone have a Vega 64 Liquid Edition PowerPlay table mod with 100% power? I had an old reg, but it does not work anymore with the new Windows 10 version.


----------



## wolf9466

I said once before that RX Vega 64 HBM2 voltage can be modified in software, and now I can prove it.








Just added IR35217 temperature and voltage offset support. :3


----------



## Worldwin

wolf9466 said:


> I said once before that RX Vega 64 HBM2 voltage can be modified in software, and now I can prove it.
> Just added IR35217 temperature and voltage offset support. :3


Well that is certainly interesting. Any reason it's an offset?


----------



## wolf9466

Worldwin said:


> Well that is certainly interesting. Any reason its an offset?


I didn't code the manual VID part in yet. However... I'm wary of using it because of a bug I discovered in the drivers regarding the NCP81022:

If one removes control from SVI2 such that you can set the voltage over I2C/SMBus, then you subsequently soft reboot, the driver will die on initialization because the NCP81022 is not in power-on reset state.

They make foolish assumptions - namely, that upon driver init, the regulator is in power-on reset state. This is not guaranteed. If you're assuming some option is set, it damn well better be because you set it earlier.

Because of that, while I am going to add manual VID support, I'm going to probably have options to explicitly take control of this, and release control to SVI2.


----------



## Worldwin

wolf9466 said:


> I didn't code the manual VID part in yet. However... I'm wary of using it because of a bug I discovered in the drivers regarding the NCP81022:
> 
> If one removes control from SVI2 such that you can set the voltage over I2C/SMBus, then you subsequently soft reboot, the driver will die on initialization because the NCP81022 is not in power-on reset state.
> 
> They make foolish assumptions - namely, that upon driver init, the regulator is in power-on reset state. This is not guaranteed. If you're assuming some option is set, it damn well better be because you set it earlier.
> 
> Because of that, while I am going to add manual VID support, I'm going to probably have options to explicitly take control of this, and release control to SVI2.


This is pretty awesome. Assuming this translates to other AMD GPUs like the RX 6000 series, it will be game-changing, like amdmemorytweak was for benching. Too bad my V64 can't do past 1050mhz.


----------



## wolf9466

Worldwin said:


> This is pretty awesome. Assuming this translates to other AMD gpus like the RX 6000 series, it will be game-changing like the amdmemorytweak for benching. Too bad my V64 cant do past 1050mhz.


AMD has a new I2C HW engine starting with Vega20 - I have code for it, but it's read-only for the moment - works on VII and 5700 XT.


----------



## damric

Ok so is there a working mod for exceeding the +50% power limit on Vega 56? Mine is the Gigabyte Gaming OC version. I can only get so far with undervolting and the driver p-states. If I could even get +75% it would be much better for maintaining clocks. Cooling is no problem at all, even at 330W.

I must have read through this thread and others dozens of times, but I still feel like a dumbass.


----------



## PopReference

damric said:


> Ok so is there a working mod for exceeding +50% power limit on Vega 56? Mine is the Gigabyte Gaming OC version. I can only get so far with undervolting and the driver p states. If I could even get +75% it would be much better for maintaining clocks. Cooling is no problem at all even at 330W.
> 
> I must have read thru this thread and others dozens of times but I still feel like a dumbass


Using the PowerPlay table registry mod you can; the post with the full details is buried in the vbios thread. The OP has a few reg files you can use and modify: Preliminary view of AMD VEGA Bios


----------



## damric

PopReference said:


> Using PowerPlay table registry mod you can, the post with the full details is buried in the Vbios thread the OP has a few reg files you can use and modify: Preliminary view of AMD VEGA Bios


Yeah still pretty lost.


----------



## PopReference

damric said:


> Yeah still pretty lost.


If you copy this into a txt file, then save it as a .reg, you can run it to change the registry and override the PP table when you restart:


Spoiler



Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]
"PP_PhmSoftPowerPlayTable"=hex:B6,02,08,01,00,5C,00,E1,06,00,00,08,2C,00,00,1B,\
00,48,00,00,00,80,A9,03,00,F0,49,02,00,48,00,08,00,00,00,00,00,00,00,00,00,\
00,00,00,00,00,02,01,5C,00,4F,02,46,02,94,00,9E,01,BE,00,28,01,7A,00,8C,00,\
BC,01,00,00,00,00,72,02,00,00,90,00,A8,02,6D,01,43,01,97,01,F0,49,02,00,71,\
02,02,02,00,00,00,00,00,00,08,00,00,00,00,00,00,00,05,00,07,00,03,00,05,00,\
00,00,00,00,00,00,01,08,20,03,84,03,B6,03,E8,03,1A,04,4C,04,7E,04,B0,04,01,\
01,46,05,01,01,84,03,00,08,60,EA,00,00,00,40,19,01,00,01,80,38,01,00,02,DC,\
4A,01,00,03,90,5F,01,00,04,00,77,01,00,05,90,91,01,00,06,6C,B0,01,00,07,01,\
08,D0,4C,01,00,00,00,80,00,00,00,00,00,00,1C,83,01,00,01,00,00,00,00,00,00,\
00,00,70,A7,01,00,02,00,00,00,00,00,00,00,00,88,BC,01,00,03,00,00,00,00,00,\
00,00,00,C0,D4,01,00,04,00,00,00,00,00,00,00,00,44,23,02,00,05,00,00,00,00,\
01,00,00,00,00,58,02,00,06,00,00,00,00,01,00,00,00,B8,7C,02,00,07,00,00,00,\
00,01,00,00,00,00,05,60,EA,00,00,00,40,19,01,00,00,80,38,01,00,00,DC,4A,01,\
00,00,90,5F,01,00,00,00,08,28,6E,00,00,00,2C,C9,00,00,01,F8,0B,01,00,02,80,\
38,01,00,03,90,5F,01,00,04,F4,91,01,00,05,D0,B0,01,00,06,C0,D4,01,00,07,00,\
08,6C,39,00,00,00,24,5E,00,00,01,FC,85,00,00,02,AC,BC,00,00,03,34,D0,00,00,\
04,68,6E,01,00,05,08,97,01,00,06,EC,A3,01,00,07,00,01,68,3C,01,00,00,01,04,\
3C,41,00,00,00,00,00,50,C3,00,00,00,00,00,80,38,01,00,02,00,00,24,71,01,00,\
05,00,00,01,08,00,98,85,00,00,40,B5,00,00,60,EA,00,00,50,C3,00,00,01,80,BB,\
00,00,60,EA,00,00,94,0B,01,00,50,C3,00,00,02,00,E1,00,00,94,0B,01,00,40,19,\
01,00,50,C3,00,00,03,78,FF,00,00,40,19,01,00,88,26,01,00,50,C3,00,00,04,40,\
19,01,00,80,38,01,00,80,38,01,00,50,C3,00,00,05,80,38,01,00,DC,4A,01,00,DC,\
4A,01,00,50,C3,00,00,06,00,77,01,00,00,77,01,00,90,5F,01,00,50,C3,00,00,07,\
90,91,01,00,90,91,01,00,00,77,01,00,50,C3,00,00,01,18,00,00,00,00,00,00,00,\
0B,E4,12,60,09,60,09,4B,00,23,00,54,03,90,01,90,01,90,01,90,01,90,01,90,01,\
90,01,01,32,00,37,00,02,00,23,07,F7,00,F7,00,F7,00,51,01,00,00,59,00,69,00,\
4A,00,4A,00,5F,00,73,00,73,00,64,00,40,00,00,00,97,60,96,00,90,55,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,00,00,00,02,02,D4,30,00,00,02,10,60,EA,00,\
00,02,10



The full break down is in this post if you want to go over all the details: Preliminary view of AMD VEGA Bios
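If you want to sanity-check the blob before importing it, the hex payload is easy to parse back into bytes. A small sketch (the helper name is mine, and the size-field check assumes the table starts with the usual ATOM-style little-endian structure-size header, which you should verify against the vbios thread):

```python
def parse_reg_hex(payload: str) -> bytes:
    """Turn the AA,BB,CC,... payload of a .reg REG_BINARY value into bytes.
    Backslash line continuations, commas and whitespace are ignored."""
    tokens = payload.replace("\\", " ").replace(",", " ").split()
    return bytes(int(t, 16) for t in tokens)

# First few bytes of the PP_PhmSoftPowerPlayTable payload above:
head = parse_reg_hex("B6,02,08,01,00,5C,00,E1")
print(int.from_bytes(head[:2], "little"))  # 0x02B6 = 694, the size field
```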


----------



## Dilet

Pushed my Pulse a bit more to 1650/1700 @ 1060/1110, as well as a free +50mhz on the HBM to 1100. Temps are definitely starting to get up there: 65 under load for edge, 80 and sometimes even as high as 95 for hotspot. VRMs and HBM are around 70. Any kind of paste you guys recommend, and how should I apply it? I've heard some conflicting reports.


----------



## Blackops_2

I’ve probably asked this before, but I'm kinda pulling my hair out again because I can’t even remotely reach stock clocks on the reference Vega 56 I picked up, which yes, is probably a mining card. It was used, off eBay. Tried guides for undervolting, power limit, etc. It hard crashes in Warzone but seems to run Time Spy just fine. The issue is nothing fixes it in either program. I sit anywhere from 900mhz to 1250mhz, and never anywhere near 1400-1500. It’s on water; it never breaks 31C, but if I adjust the power limit or undervolt p6 and p7 it crashes.
Is it simply that the system is bottlenecking the hell out of it? I’m forcing everything to 1440p as well.

3770k @ 4.5ghz
G1. Sniper 3
GSkill ddr3 2133
Reference Vega 56
Seasonic 1250XM gold


----------



## fcchin

Blackops_2 said:


> i can’t even remotely reach stock clocks on the reference Vega 56 i picked up,


Try messing with Radeon Anti-Lag, turning it on or off; toggle it to trigger the GPU to wake up.

Radeon Chill for me is set to min 30 fps and max 60 fps to save on the electricity bill when in the lobby or modding weapons etc., not in a real fight. Sometimes it works: frames drop and power drops.

You could try turning on Chill but pushing min 120 fps, max 144 fps, something high, in the hope of forcing it to work.

Also try setting those in the per-game section of the Radeon game list.

There are too many software UI layers; sometimes it works, sometimes it doesn't. E.g. sometimes changing the performance settings in the individual game list works, but in the next game it doesn't, so I have to go to the global performance tab and it works there; then in the next game the global tab gets no response, but turning on Anti-Lag works and pushes the GPU clock really high, above max boost clock. I.e. the Vega 64 max boost clock is 1630mhz, but with Anti-Lag on it jumps to 1700mhz as recorded by HWiNFO peak.

Try it!!!! Good luck.

Default everything and reboot to start trying.

Edit - additional:
After you set everything right and start a game, alt-tab out to go check again; mine reverts. I.e. it goes from overclock to default, then on launching the game it jumps to overclock again.

Or from default to overclock, then on launching the game it jumps back to default.

A pain in the ass!!! Stupid software, hahahaha.


----------



## wolf9466

Worldwin said:


> This is pretty awesome. Assuming this translates to other AMD gpus like the RX 6000 series, it will be game-changing like the amdmemorytweak for benching. Too bad my V64 cant do past 1050mhz.


Okay, so I've been working on this for Navi1x, and it's kind of a pain in my ass, so here's a status update:

So, I have this issue. I'm trying to use the I2C bus via the CKSVII2C_* registers, BUT... the SMU is using the bus too. Unlike me, it DOESN'T ****ING CHECK IF IT'S IN USE. Worse, if I'm using the bus when the SMU tries to, it gets pissy and decides to lock up the bus until a hard poweroff. REEEE.
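Not the real register interface, just a toy model (class name and timings invented) of the arbitration being described: a polite client tests and claims a busy flag before transacting, and gives up after a deadline instead of wedging the bus, which is exactly the check the SMU apparently skips.

```python
import threading
import time

class SharedBus:
    """Toy stand-in for a shared I2C bus with a single busy flag."""

    def __init__(self):
        self._busy = threading.Lock()

    def claim(self, timeout_s: float = 0.5) -> bool:
        """Poll the busy flag until it's free or the deadline passes."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if self._busy.acquire(blocking=False):  # bus free: claim it
                return True
            time.sleep(0.001)                       # bus busy: back off, retry
        return False                                # give up; don't wedge the bus

    def release(self) -> None:
        self._busy.release()

bus = SharedBus()
assert bus.claim()                    # a free bus is claimed immediately
assert not bus.claim(timeout_s=0.05)  # a second claimant times out politely
bus.release()
```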


----------



## Blackops_2

fcchin said:


> Try messing with Radeon Anti-Lag, turning it on or off; toggle it to trigger the GPU to wake up.
> 
> Radeon Chill for me is set to min 30 fps and max 60 fps to save on the electricity bill when in the lobby or modding weapons etc., not in a real fight. Sometimes it works: frames drop and power drops.
> 
> You could try turning on Chill but pushing min 120 fps, max 144 fps, something high, in the hope of forcing it to work.
> 
> Also try setting those in the per-game section of the Radeon game list.
> 
> There are too many software UI layers; sometimes it works, sometimes it doesn't. E.g. sometimes changing the performance settings in the individual game list works, but in the next game it doesn't, so I have to go to the global performance tab and it works there; then in the next game the global tab gets no response, but turning on Anti-Lag works and pushes the GPU clock really high, above max boost clock. I.e. the Vega 64 max boost clock is 1630mhz, but with Anti-Lag on it jumps to 1700mhz as recorded by HWiNFO peak.
> 
> Try it!!!! Good luck.
> 
> Default everything and reboot to start trying.
> 
> Edit - additional:
> After you set everything right and start a game, alt-tab out to go check again; mine reverts. I.e. it goes from overclock to default, then on launching the game it jumps to overclock again.
> 
> Or from default to overclock, then on launching the game it jumps back to default.
> 
> A pain in the ass!!! Stupid software, hahahaha.


It helped a little, but I don't know; I'm convinced I just have a defective card at this point. I just turned on performance metrics and it crashed. Even running it stock, it's the most unstable thing I've encountered.


----------



## fcchin

Blackops_2 said:


> It helped a little but Idk I’m convinced i just have a defective card at this point. I just turned on performance metrics and it crashes. Just running it stock it’s the most unstable thing I’ve encountered.


If you're sure, sure, and so inclined to "TRY" to fix it yourself, then I'd recommend removing the heatsink and all thermal pads, cleaning it thoroughly, and "TRYING" the ultimate "OVEN BAKE", in the hope that some of those loose intermittent solder joints reflow back.

Normal wire solder melts around 350 Celsius.

I don't know about GPU and VRAM chip solder; I heard it was around 200 Celsius???? So oven bake at 200C for some duration? I don't know, best you check the web instead of just taking my word for it.


----------



## fcchin

@Blackops_2 another thing I'd like to ask about is the BIOS switch. Please use GPU-Z to check the power rating, i.e. switch position 1 = left is the high-power BIOS, 220 watts for Vega 64; I don't know about the 56.

Switch position 2 = moderate power, I forget the number.... I heard this is the LOCKED BIOS that cannot be flashed; saw it once on a Buildzoid Vega 64 video.

But........ don't take any information as-is..... you've got to find out for yourself on your own card, because my switch position 2 is not locked and has been flashed to the liquid BIOS and other BIOSes, and I've finally settled on the Vega 56 lowest-power BIOS, 150 watts only..... hahahaha quick and dirty power saving.

But...... another but..... after flashing the Vega 56 BIOS, the shader count (4096 cores) and TMU count (256) did not drop in the registers, versus one of Buildzoid's YouTube videos I think; maybe I watched wrongly??? The HBM voltage (MVDD) did drop from 1.35V to 1.25V and it completely destabilized the HBM, crashing a lot. I.e. MVDD at 1.25V only supports up to 800MHz and crashes at 945MHz, while MVDD at 1.35V can run at 945MHz, even though the reported HBM voltage is pushed up to 1.1V in both cases.

Try, good luck.

3rd topic: what temperature are you hitting? The hot spot, I mean.


----------



## Blackops_2

fcchin said:


> @Blackops_2 another thing I would like to ask is the bios switch, please use GPU-Z to show yourself the power rating, i.e. switch 1 = left is high power, 220 watts for Vega64, I don't know about 56.
> 
> switch 2 = moderate power, I forgot.... heard this is the LOCKED bios, cannot be flashed, saw once on buildzoid vega64.
> 
> But........ don't take any information as is..... you've got to find out yourself your card, because my switch 2 is not locked and has been flashed to liquid bios and other vios finally now settled on Vega56 lowest power bios 150 watts only..... hahahaha quick and dirty power saving.
> 
> But...... another but..... after flashing bios vega56 my shader or something the core 4096 and TMU 256 did not drop register, versus one of buildzoid youtube I think, may be I watched wrongly??? The HBM voltage of MDDV from 1.35v did drop to 1.25v and it completely shater the HBM and crash a lot. i.e. MDDV 1.25v only support upto 800mhz and crash at 945mhz, while MDDV 1.35v can run at 945mhz, even though actual HBM voltage push up 1.1v for both cases.
> 
> Try, good luck.
> 
> 3rd topic, what temperature are you hitting? the hot spot I mean.


I flashed the Vega 64 BIOS to no avail, though. Unigine Heaven sits at similar clocks to before. I'm about to reinstall the driver from scratch. Temps have never broken 33C because it's on water at the moment.


----------



## Blackops_2

Update: looking at YouTube at others with a 3770K, I just think it's the age of the system bottlenecking it. Even with the V64 BIOS and Ivy Bridge at 4.5GHz, the CPU is pegged at 70%. GPU usage is good, but clocks are like 1200-1300MHz, bumping to 1500 for a short second. Everything points to Ivy Bridge being the issue, especially since the higher I scale the resolution, the more consistently the clock climbs.


----------



## fcchin

Blackops_2 said:


> Update looking at youtube with others with a 3770k i just think it's the age of the system bottlenecking it.


You're probably right, I overlooked your specs.


AMD Ryzen 7 1700X vs AMD Ryzen 7 3800X vs Intel Core i7-3770K @ 3.50GHz [cpubenchmark.net] by PassMark Software


Ivy Bridge OC'd to 4.5GHz probably scores around 8260, just half of a Ryzen 1700X, plus DDR3-2133......... good time to get a Ryzen now; the 3000 series should be easy to buy, whereas I hear 5000-series availability isn't enough.....

BTW the hot-spot temp is shown in HWiNFO; it is not the regular GPU temp. When I didn't mount the waterblock properly, the GPU temp would show, say, 40C but the hot spot 105C, and the GPU speed dropped due to thermal throttling.

Had to take it off and remount many times; finally GPU temp is 40C with the hot spot at 75C in summer without air conditioning, and the hot spot is 50C now in winter when room temp is only around 15C.

The biggest benefit of flashing the V64 BIOS onto a V56 is the memory MVDD going from 1.25V up to 1.35V, which lets you overclock the HBM from 800MHz to much more, plus the 220-watt power profile also pushes the V56 harder.


----------



## damric

Ok, finally got the soft-mod registry power limit to work. My Vega 56 pulled over 400W in benchmarks this morning.
But temps are fantastic with this abnormally cold weather ❄❄❄


----------



## theheadache

I see a lot of knowledgeable guys here; please take a look at my thread about my semi-broken Vega 56.








Vega 56 HBM problems

Hi everyone, I think this is my first post on here so excuse me if I did not stick to the rules. My system: i5 6600K @ 4.5GHz, ASRock Z170 Extreme6+, XFX Vega 56 reference model. I have a 3-year-old Vega 56 (so no warranty); it was running fine, but the last 3 months it sometimes threw the driver...

www.overclock.net


----------



## Blackops_2

fcchin said:


> You're probably right, I overlooked your specs,
> 
> 
> AMD Ryzen 7 1700X vs AMD Ryzen 7 3800X vs Intel Core i7-3770K @ 3.50GHz [cpubenchmark.net] by PassMark Software
> 
> 
> Ivy-bridge OC to 4.5Ghz probably score 8260, just half of a Ryzen 1700X, plus DDR3-2133 ......... good time to get a Ryzen now  3000 series should be easy to buy, instead of 5000 series heard availability not enough.....
> 
> btw the hot-spot temp is shown in HWinfo, it is not the regular GPU temp. When I didn't mount the waterblock properly, the GPU temp would show say example 40C but the hot-spot 105C and GPU speed drop due to thermal throttle.
> 
> Had to take it out and remount many times, finally GPU temp 40C and hot spot 75C in summer without air-cond and hot-spot 50C now in winter when room temp like 15C only.
> 
> The biggest benefit to flash V64 into V56 is because of the ram MDVV from 1.25V up to 1.35V which can overclock the HBM from 800Mhz to more more, plus the 220watts power profile also push the V56.


You were right, the hot spot temp is 105C. Never had that issue before. How do I find out which spot that is?


----------



## PopReference

Blackops_2 said:


> You were right hot spot temp is 105C. Never had that issue before. How do i find out what spot that is?


From what I understand it's the highest temperature reported from multiple sensor points on the die; on Navi it became the junction temp. Best to take off the cooler and check the spread of the paste; some people have said they had to remount multiple times to get it right.
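In other words (a trivial sketch with made-up numbers): the hot spot is just the maximum over the die's sensor array, which is why a big gap between the edge temp and the hot-spot temp points at a localized contact problem rather than a cooling-capacity problem.

```python
# Sketch of how the "hot spot" reading relates to the regular GPU (edge) temp:
# the card reports the max of many on-die sensors, so one badly-contacted
# region can read 105C while the single edge sensor still shows 40C.

def hotspot(sensor_temps):
    """Hot spot / junction temp: the hottest of all on-die sensors."""
    return max(sensor_temps)

def edge(sensor_temps):
    """Rough stand-in for the classic single 'GPU temperature' sensor."""
    return sensor_temps[0]

die = [40, 42, 45, 105, 44]  # one region under a paste void or uneven mount
print(edge(die), hotspot(die))
```

A large edge/hot-spot delta like this is the signature of a bad mount; remounting evens out the sensor array and collapses the gap.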


----------



## Blackops_2

PopReference said:


> From what I understand it's the highest temperature reported from multiple points on the die, on Navi it became Junction temp. Best to take off the cooler and check the spread of the paste some people have said they had to remount multiple times to get it right.


It's a little infuriating, TBH. I've never really had a complaint about AMD before, but I wish there were a disclaimer to mount with adequate thermal paste. When I mounted it, I just went about it like I have with all my blocks beforehand, and the temps read wonderfully. The problem is really how I designed the loop: draining it is kind of a pain. I've got to go back to dental school tomorrow and won't have time, so I'm not looking forward to stripping the entire thing, and I don't know when I'll get to it. I kind of wonder if it would fix the bottlenecking or core fluctuations; I imagine it would to some extent. That said, my cousin is getting rid of his 8700K build, so it might be time to snag it, the board, and the RAM and upgrade completely.


----------



## fcchin

Blackops_2 said:


> Draining it is kind of a pain.


take your time to buy US$10 13.5 W/mK paste and 13 W/mK pads in various thicknesses — 0.1mm, 0.2mm, 0.5mm, 1mm (US$1-4 each) — and stack them up if you need 0.3mm or 0.7mm etc,

and read a special link shared many times before, "how to screw on the waterblock, NOT the X method",

and finally wait for a nice 2-3 day holiday and enjoy cleaning and rebuilding the whole thing.


----------



## rx78

I'm looking for a project to do, wondering if it's worth buying the alphacool eiswolf for my MSI AIRBOOST VEGA 56? Is it difficult to do the mod?

Getting a bit irritated with the blower.


----------



## Alastair

rx78 said:


> I'm looking for a project to do, wondering if it's worth buying the alphacool eiswolf for my MSI AIRBOOST VEGA 56? Is it difficult to do the mod?
> 
> Getting a bit irritated with the blower.


It's worth it if you can do it. Water on a Vega is amazing.


----------



## rx78

Alastair said:


> It's worth it if you can do it. Water on a Vega is amazing.


I'm a bit torn as the alphacool is nearly the same price as what I paid for the 56 lol.


----------



## Alastair

rx78 said:


> I'm a bit torn as the alphacool is nearly the same price as what I paid for the 56 lol.


Well, I went down the Barrow waterblock route. Paid R4000 ZAR for my 64 and R1000 for my Barrow full-cover block. I am very happy with it.


----------



## fcchin

rx78 said:


> I'm looking for a project to do, wondering if it's worth buying the alphacool eiswolf for my MSI AIRBOOST VEGA 56? Is it difficult to do the mod?
> 
> Getting a bit irritated with the blower.


Difficulty answer = see some pictures on page 413.

Below is some data for you to make the decision.

I'm using a BARROW liquid block. It's not flat-polished — it has tiny lines/ribs from an imperfect CNC process — yet the results are very impressive; imagine if I weren't lazy and had lapped it nicely flat....

Coupled with 13.5 W/mK thermal paste and

multiple pads of different thicknesses at 13 W/mK, plus extra pads on the GPU frame for more contact to transfer heat etc.... I bought 0.1mm, 0.2mm, 0.5mm and finally 1mm; find the thinnest spots first, while some other places get stacked up, i.e. 0.7mm or 0.3mm, 1.5mm, etc.

Radiator is 420 x 140 x 86mm thick, with 6 x 14cm fans, push at 1400rpm, pull at 800rpm; the pump is controlled by the original fan socket (converter cable), and the matching pump is 2700rpm like the original fan.

I'm in Hong Kong. Last night was drizzling, cool to cold, outside temperature 20C, indoor temperature 25C;

final hot spot only 66C at default GPU settings — without overclock, without undervolt, without extra power. HWiNFO recorded 220W peak power used, peak core boost 1688MHz, more than the advertised 1630MHz. Sustained clock above 1600MHz, more than the usual 1530MHz. Core voltage peaked at 1.188V I think.

When set to +35% power, peak core boost is 1700MHz and a bit, and the sustained clock is closer to 1630MHz or above, etc etc etc.... an extra 10fps on top of a 100fps average — not worth the electricity bill. Default is best, surprisingly.

When you liquid cool, don't bother undervolting. I.e. I tried 1.1V with +50% power, and it will not boost above 1600MHz; it's even hard to reach 1580MHz.

1.15V is OK, but it bottlenecks the core boost. Only air-cooled cards benefit from an undervolt.

Both the POWER and voltage limits cap the boost; don't let either one be the bottleneck.

Loosely speaking:

Unlike older GPUs, we don't directly set the MHz on Vega; it boosts itself as high as it can within the "POWER RATING" you allow, until it saturates — from heat first, then the cap itself.
With old GPUs, we pushed the MHz and the card would then draw whatever power it needed, without limits.
It's probably not the right term/words, but something like that.

Don't flash the LIQUID BIOS. It no longer works; after 2018/2019 AMD nerfed the performance in the Radeon drivers.
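That power-limited boost behavior can be sketched as a toy model — made-up constants and a made-up P-state table, not AMD's real DVFS data: dynamic power scales roughly with C·V²·f, and the card sustains the highest state whose power fits under the cap, which is why raising the power limit raises the sustained clock until the voltage ceiling becomes the bottleneck instead.

```python
# Hedged toy model (fake numbers, not AMD's actual DVFS tables) of why
# Vega's sustained clock is set by the power cap: dynamic power ~ C * V^2 * f,
# and the card runs the highest DPM state whose power fits under the cap.

# (frequency MHz, voltage V) pairs, loosely shaped like Vega 64 P-states
DPM_STATES = [(852, 0.90), (1138, 0.95), (1269, 1.00),
              (1440, 1.05), (1536, 1.10), (1630, 1.15)]
C = 1.05e-4  # fake "effective capacitance", chosen so numbers land near 220W

def state_power(freq_mhz, volts):
    """Dynamic power estimate in watts (toy scaling)."""
    return C * volts**2 * freq_mhz * 1e3

def sustained_clock(power_cap_w):
    """Highest DPM state that fits under the power cap."""
    fits = [f for f, v in DPM_STATES if state_power(f, v) <= power_cap_w]
    return max(fits) if fits else DPM_STATES[0][0]

for cap in (150, 220, 300):
    print(cap, "W cap ->", sustained_clock(cap), "MHz")
```

With these fake numbers the 150W cap lands on a mid state, 220W sustains the next one up, and only a raised cap lets the top state fit — the same shape as the +35% power observation above.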


----------



## MrPerforations

I would test your MSI Radeon Vega 56 Air Boost out without water first.
Mine doesn't have Samsung HBM, and 900MHz was as far as I could push the Hynix; check GPU-Z for your memory type.
The Hynix also doesn't allow you to flash different BIOSes to the card as far as I know; all the BIOSes I could find above a 56's use Samsung HBM.
The GPU went to +3-4% and then started artifacting out. If you can reach that without water cooling then you have the settings; the question is just whether noise and temps are a problem.
I've got two under water blocks, but it did not improve performance — just noise and temps — if I remember correctly.


----------



## LtAldoRaine

Hello guys and girls.
My 56 with the 64 BIOS flashed, set to 1715MHz core / 1030MHz HBM, +25% PT (~270W), works great — temps 50-60C max with an LC hard loop.
Can someone tell me a few things: HWiNFO reports a GPU memory voltage of 1.356V (on the 56 BIOS, 1.256V), but AMD WattMan in the Radeon driver tells me it's 1.1V, and GPU-Z shows 1.1V too?? What's going on?
Help me please.


----------



## Pleskac

LtAldoRaine said:


> Hello guys and girls.
> [email protected] flash bios , set to : 1715mhz /1030hgbm +25%PT(~270W) and work great temp 50-60max with LC hard loop.
> Who tell me few things : In HWinfo have report GPU memory voltage 1,356v (In 56bios 1.256v) but AMD wattman in radeon driver tell me is 1,1 v ONtool and GPUz too 1,1v ?? Whats going on?
> Help me please.


1.1V is most likely the SoC voltage; memory voltage is locked, and it's either 1.25V or 1.35V depending on which BIOS you use (Vega 56 or Vega 64). Never trust or use AMD software at all. BTW, I am not pushing the card too hard because of insane coil whine (I have the LC edition, not a custom liquid loop). With good memory you can get to 930MHz with 1.25V (Vega 56 BIOS) or 1070MHz with 1.35V (Vega 64 BIOS). Overclocking the core on these cards makes no sense; undervolting and overclocking the memory is all you can do with it. It's pushed to the limit by the manufacturer to improve ****ty yields.


----------



## LtAldoRaine

Pleskac said:


> 1.1V is most likely the soc voltage, memory voltage is locked and its either 1.25V or 1.35V depending on what bios you use (vega56 or vega64). Never trust or use AMD software at all, btw I am not pushing the card too hard because of insane coil while (I have the LC edition not custom liquid loop) . With good memory you can get to 930mhz with 1.25V(vega56 bios) or 1070mhz with 1.35v(vega64 bios). Overclocking these cards makes no sense, undervolting and overclocking memory is all you can do with it, its pushed to the limit by manufacturer to improve ****ty yields.


Welcome, and thanks for responding to my question.
OK, but what's this about "insane coil whine"? You're writing about sound coming from the coils? Or something else? I don't have any problem with coil sound. I used the BIOS from the 64 air version on my 56, and the max VR MVDD (voltage regulator) temperature is 75C, GPU 65C max (GPU, mem, or hot spot). I only OC the MHz and don't touch any voltage: 1715MHz / 1030MHz HBM and PPT +25% = 270W. That 1715MHz/1030MHz figure came from Auto OC in WattMan; I just applied it.
My GPU is the MSI Vega 56 reference version — exactly the same PCB design and parts as the 64 version.


----------



## rx78

rx78 said:


> I'm looking for a project to do, wondering if it's worth buying the alphacool eiswolf for my MSI AIRBOOST VEGA 56? Is it difficult to do the mod?
> 
> Getting a bit irritated with the blower.


It's been a few months and in the intervening period I've installed an AIO onto my v56 using the NZXT G12 bracket and stock backplate. I couldn't find the alphacool so decided on another way.

It's been great with the AIO, much more stable and temps are fine even with the probably anemic 120mm radiator. I've not added any heatsinks to the VRM yet, will be doing so next time i take it apart. I do have fans blowing across the card though and fortunately I've not had any problems.

I went and did the V64 Air Boost OC BIOS flash this week and am pleased with the results, even with the HBM on just 1000MHz (still looking at doing an undervolt, as it's set to auto in MSI Afterburner at the moment).


----------



## Alastair

So guys just a quick question. I am having some stability issues with my Vega 64LC. And I am having some trouble getting to the bottom of it. 

So I have a 64LC, and it is under a full-cover block. It's been a good card and a fairly decent overclocker for the most part: 1770 core for gaming and 1150 HBM. Temps are well under control; I'll see 35-45C on the core depending on my ambient (I don't have AC, so I am dependent on the weather, and 10C swings in ambient are normal). Usually around 55-65C, sometimes 70, on the hotspot, with HBM usually sitting in the mid 40s to 50s. This is the worst-case scenario with OC settings in a Time Spy bench. VRMs are good too.

I had never had any major stability issues that got under my skin; most of the issues I've had were from me shooting too close to the sun. But when I got back from holiday, something changed. My PC had been off for 3 weeks, and when I started playing games I got random black-screen crashes. It doesn't matter if I am OC'ed, stock, or UC'ed.

The best way I can describe the crash is a black-screen BSOD. It crashes to a black screen with a rapid sound loop in the background like a BSOD, lasts for about a second, and then resets the PC. Sometimes I will get a "display driver crashed and reset" message when logging back in. Now this crash makes me scratch my head in confusion, because I can leave it looping Heaven or Time Spy for HOURS on end with OC settings and have 0 issues. But I play games and it falls on its face — from fairly demanding titles like ARK to simple-to-run titles like CS or Century: Age of Ashes. It can run the games for hours on end, or crash instantly as I am opening the menu. It crashed today while I was playing ARK; I lost my dinos, rage quit, and now I am rage typing, ready to throw my rig in the bin.

Now, everything I have read points to a power issue, so I investigated further. I saw in HWiNFO that my 3.3V and 5V rails were under spec. After a bit of fiddling I saw my ATX 24-pin wasn't seated properly. Well, that sorted out the low-reading 3.3V and 5V numbers; they are now back in spec, and 12V is in spec as well. My PSU should have more than enough juice for my Vega, as it's a Cooler Master V1300 Platinum (Delta Electronics is the OEM of the unit). So that shouldn't be an issue.

So here are the troubleshooting steps I have been through.

*DDU'ed and reinstalled drivers
*rolled back to older drivers. updated to newer drivers.
*Went back to stock clocks
*Underclocked
*re-plugged in all of the power cables.
*checked voltages on DMM for my psu rails
*reinstalled windows
*increased voltages across all power states.
*increased minimum memory clocks. so it doesn't downclock HBM
*disabled ULPS in afterburner
* Turned off my XMP and fCLK OC (Ryzen 3800x running 3600MHZ DRAM 1800fCLK)
*Increased voltage to cpu as had been running a slight undervolt. 

And it still does it, and now I am at a loss. I have no idea what to do. I don't think my PSU is going bad, because it will deliver a good 700-800 watts during a Heaven OC + Prime run for hours on end, but a game that uses less than 400W (Kill A Watt) will crash. So I am not convinced it is the PSU. So I am thinking my card is dying? Any ideas here?


----------



## geriatricpollywog

Alastair said:


> So guys just a quick question. I am having some stability issues with my Vega 64LC. And I am having some trouble getting to the bottom of it.
> 
> So I have a 64LC. And it is under a full cover block. Its been a good card and a fairly decent overclocker for the most part. 1770 core for gaming and 1150HBM. Temps are well under control. Ill see depending on my ambient 35-45C. (I don't have AC so I am dependent on the weather so 10c swings in ambient are normal.) So 35-45C core. Usually around 55C-65C sometimes 70 on the hotspot with HBM usually sitting in the mid 40s to 50s. This is worst case scenario on OC setting in Timespy bench. VRMs are good too.
> 
> I had never had any major stability issues, that got under my skin. Most of the issues I have had was me shooting too close to the sun. But when I got back from holiday something changed. My pc had been off for 3 weeks. And when I started playing games I got random black screen crashes. Doesn't matter if I am OC'ed Stock or UC'ed.
> 
> The best way I can describe the crash is a black screen BSOD. It crashes to a black screen with rapid sound loop in the background like a BSOD, lasts for about a second and then resets the PC. Sometimes I will get Display Diver crashed and reset message when logging back in. Now this crashes makes me scratch my head in confusion because I can leave it looping heaven or Timespy for HOURS on end with OC settings and have 0 issues. But I play games and it falls on its face. From fairly demanding titles like ARK to simple to run titles like CS or Century: Age of Ashes. It can run the games for hours on end, or crash instantly as I am opening the menu. It crashed today while I was playing ark, lost my dinos rage quit and now I am rage typing, im ready to throw my rig in the bin.
> 
> Now everything I have read points to a power issue, so I investigated further. I saw in HW info my 3.3 and 5v rails were under spec. After a bit of fiddling I saw my ATX24 wasn't seated properly. Well that sorted out the low reading 3.3 and 5v numbers, they are now back in spec. 12V is in spec as well. My PSU should have more than enough juice to output to my Vega as its a CoolerMaster v1300 platinum Delta Electronics is the oem of the unit. So that shouldn't be an issue.
> 
> So here are the troubleshooting steps I have been through.
> 
> *DDU'ed and reinstalled drivers
> *rolled back to older drivers. updated to newer drivers.
> *Went back to stock clocks
> *Underclocked
> *re-plugged in all of the power cables.
> *checked voltages on DMM for my psu rails
> *reinstalled windows
> *increased voltages across all power states.
> *increased minimum memory clocks. so it doesn't downclock HBM
> *disabled ULPS in afterburner
> * Turned off my XMP and fCLK OC (Ryzen 3800x running 3600MHZ DRAM 1800fCLK)
> *Increased voltage to cpu as had been running a slight undervolt.
> 
> And it still does it and now I am at a loss. I have no idea what to do. I dont think my PSU is going bad because it will deliver a good 700-800 watts during heaven OC + prime run for hours on end but a game that uses less than 400w (kill-a-watt) will crash. So I am not convinced it is PSU. So I am thinking my card is dying? Any ideas here?


When my Vega 64 would crash, it would crash to desktop and the WattMan overclock would reset.
I've never seen a BSOD from an unstable GPU — only from CPU/RAM.

Was there a lightning storm at home when you were on holiday?


----------



## Formula383

Alastair said:


> So guys just a quick question. I am having some stability issues with my Vega 64LC. And I am having some trouble getting to the bottom of it.
> 
> So I have a 64LC. And it is under a full cover block. Its been a good card and a fairly decent overclocker for the most part. 1770 core for gaming and 1150HBM. Temps are well under control. Ill see depending on my ambient 35-45C. (I don't have AC so I am dependent on the weather so 10c swings in ambient are normal.) So 35-45C core. Usually around 55C-65C sometimes 70 on the hotspot with HBM usually sitting in the mid 40s to 50s. This is worst case scenario on OC setting in Timespy bench. VRMs are good too.
> 
> I had never had any major stability issues, that got under my skin. Most of the issues I have had was me shooting too close to the sun. But when I got back from holiday something changed. My pc had been off for 3 weeks. And when I started playing games I got random black screen crashes. Doesn't matter if I am OC'ed Stock or UC'ed.
> 
> The best way I can describe the crash is a black screen BSOD. It crashes to a black screen with rapid sound loop in the background like a BSOD, lasts for about a second and then resets the PC. Sometimes I will get Display Diver crashed and reset message when logging back in. Now this crashes makes me scratch my head in confusion because I can leave it looping heaven or Timespy for HOURS on end with OC settings and have 0 issues. But I play games and it falls on its face. From fairly demanding titles like ARK to simple to run titles like CS or Century: Age of Ashes. It can run the games for hours on end, or crash instantly as I am opening the menu. It crashed today while I was playing ark, lost my dinos rage quit and now I am rage typing, im ready to throw my rig in the bin.
> 
> Now everything I have read points to a power issue, so I investigated further. I saw in HW info my 3.3 and 5v rails were under spec. After a bit of fiddling I saw my ATX24 wasn't seated properly. Well that sorted out the low reading 3.3 and 5v numbers, they are now back in spec. 12V is in spec as well. My PSU should have more than enough juice to output to my Vega as its a CoolerMaster v1300 platinum Delta Electronics is the oem of the unit. So that shouldn't be an issue.
> 
> So here are the troubleshooting steps I have been through.
> 
> *DDU'ed and reinstalled drivers
> *rolled back to older drivers. updated to newer drivers.
> *Went back to stock clocks
> *Underclocked
> *re-plugged in all of the power cables.
> *checked voltages on DMM for my psu rails
> *reinstalled windows
> *increased voltages across all power states.
> *increased minimum memory clocks. so it doesn't downclock HBM
> *disabled ULPS in afterburner
> * Turned off my XMP and fCLK OC (Ryzen 3800x running 3600MHZ DRAM 1800fCLK)
> *Increased voltage to cpu as had been running a slight undervolt.
> 
> And it still does it and now I am at a loss. I have no idea what to do. I dont think my PSU is going bad because it will deliver a good 700-800 watts during heaven OC + prime run for hours on end but a game that uses less than 400w (kill-a-watt) will crash. So I am not convinced it is PSU. So I am thinking my card is dying? Any ideas here?


Test the card in a different system?


----------



## damric

My Vega 64 has been acting weird for the last couple of weeks too, but it's hard to diagnose. I guess it's an old card lol.


----------



## Alastair

Formula383 said:


> Test the card in a different system?


I wish I could. Firstly, I don't have a second rig to test in. And secondly, even if I did, it would be a mighty pain in the butt to pull this out of the loop and all that stuff.





damric said:


> My Vega64 has been acting weird for the last couple weeks too, but hard to diagnose. I guess old card lol.


 How is yours acting strange?


----------



## Alastair

geriatricpollywog said:


> When my Vega64 would crash, it would crash to desktop and the wattman overclock would reset.
> I’ve never seen a BSOD from an unstable GPU. Only CPU/RAM.
> 
> Was there a lightning storm at home when you were on holiday?


It's not a BSOD; it's a crash-to-black reset. But I describe it like a black-screen BSOD because it's got that looping sound like you would get with a BSOD, though it only lasts a second. And it's a black screen.

But what makes me think it's the GPU is that when relogging into Windows I get the "WattMan has been reset" message.

I would regularly get the WattMan crash, normally when I am pushing an OC too hard. So, yeah, I know what I am looking for with a "regular" GPU crash. That's why this has me scratching my head.


----------



## damric

I have problems at idle.

I don't set my system to sleep, and all of my power settings are max performance, but I set the monitor to turn off after 30 minutes.

Lately it's been either locking up entirely or not waking the monitor, but this only seems to happen when the system is left idle for a while.

I'm not sure if it's the card, or the driver, or some windows garbage update or what. It's only happened the last few weeks.


----------



## Formula383

Alastair said:


> It's not a BSOD. it's a crash to black reset. But I describe it like a black screen BSOD because it's got that looping sound like you would get with a BSOD but it would only last a second. And it's black screen.
> 
> But what makes me think its gpu is when relogging into Windows I get the wattman has been reset message.
> 
> I would regularly get the wattman crash. Normally when I am pushing an OC too hard. So. Yeah. I know what I am looking for for a "regular" GPU crash. That's why this has me scratching my head.


If the GPU were misbehaving, it would either crash the game to desktop, or the game would freeze and then the driver would reset.
Anything else sounds like a BSOD without the blue-screen part. This sounds like a CPU core error, since it gives Windows no time to produce the fault screen. However, it is very possible this is caused by a faulty PSU, VRM, or CPU. I really don't think it would be a faulty RAM stick, but you can certainly test that too.




Alastair said:


> I had never had any major stability issues, that got under my skin. Most of the issues I have had was me shooting too close to the sun. But when I got back from holiday something changed. My pc had been off for 3 weeks. And when I started playing games I got random black screen crashes. Doesn't matter if I am OC'ed Stock or UC'ed.


If you don't have any other parts to swap in and try, then I would do a full teardown of the loop, clean everything up, and reseat the CPU, memory, etc. Inspect very carefully for water and corrosion, both on the mobo around the caps and even under the PCIe slot (they can hide water under the slot pretty well). If all looks well and you get it all cleaned and put back together, then I guess it's most likely a PSU issue. If you're very certain no water damage or corrosion has happened, I would probably just try a new PSU, or have yours sent in for repair.

Best of luck — having random issues is never fun. I guess, to me at least, it does not look like a defective GPU; maybe that's something. I think a faulty PSU is much better than having to find a GPU at this moment.


----------



## 1devomer

Alastair said:


> I wish I could. Firstly I don't have a second rig to test in. And secondly even if I did it would be a mighty pain in the butt in order to pull this out the loop and all that stuff.
> 
> 
> 
> How is yours acting strange?


Did you test your system RAM?
Did you update your motherboard BIOS lately?
Did you test your system overall, using the bare minimum of hardware?

I mean, it sounds like the GPU is giving up.
Still, you are able to run benchmarks for some time.
Reflashing the GPU BIOS also helps, especially if you have a dual-BIOS card.


----------



## Alastair

Welp, I sorted out the crashing-to-black-screen-and-reset issue: it turns out my PCIe power cables were corroded. Well, they are quite old at this point. Now that the cables are replaced, things are better, but still not all the way there. Even at stock clocks, or underclocked, I am having an issue with the drivers timing out. It happens fairly randomly; I have verified this in ARK and in a Time Spy loop. ARK will just error, saying essentially that the D3D device hung, along with a "WattMan has been reset" error message. This is at stock clocks and also at dropped clocks with a 220W power limit. Time Spy will just stop with a message saying an error occurred, but no driver crash from what I can tell. I have increased the TdrDelay limit in the registry to no avail. I have tried various drivers but am currently on 22.2.2. I have reflashed my GPU BIOS; I am running the 264W LC BIOS. I have tried a minimal install of the drivers and I am still getting the crashes. I am going to try the Radeon Pro drivers next. Can I still use Radeon Image Sharpening on Radeon Pro drivers? Is it possible these corroded cables caused some sort of long-term damage to my card?
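For anyone else chasing the same timeouts: TdrDelay is a documented Windows timeout-detection key under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers; it sets how many seconds Windows waits before declaring the GPU hung and resetting the driver. The usual way to set it looks something like this (run from an elevated prompt, then reboot; 10 seconds is just an example value, and it won't help if the hardware itself is faulting):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 10 /f
```

Delete the value (or set it back to the default of 2) once you're done debugging, since a long delay just makes genuine hangs take longer to recover.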


----------



## damric

Corroded cables? How old is that PSU?


----------



## Formula383

Alastair said:


> Welp I sorted out the crashing to blackscreen and reset issue. Turns out my PCI-E power cables were corroded. Well they are quite old at this point. Now that the cables are replaced things are better but still not there yet. I am having an issue still at stock clocks or even underclocked I am having the issue of the drivers timing out. It happens fairly randomly. I have verified this in ARK and in Timespy looping. ARK will just error saying essentially the D3D device hung along with a wattman has been reset error message. This is at stock clocks and also dropped clocks with a 220w power limit. Timespy will just stop with a message saying an error occurred but no driver crash from what I can tell. I have increased the TdrDelay limit in the registry to no avail. I have tried various drivers but am currently on 22.2.2. I have reflashed my GPU bios. I am running the 264W LC bios. I have tried a minimal install of the drivers and I am still getting the crashes. I am going to try Radeon Pro drivers next. Can I still use Radeon Image Sharpening on Radeon Pro drivers? Is it possible these corroded cables have caused some sort of long term damage to my card?


If you have corrosion, you have a water leak. If you mean melted/charred cables (black/white dust), then you have a bad connection; that dust would be in very small amounts.

It's far more likely you have a water leak someplace. I would urge you to stop using it right away and do a full teardown, inspection, and cleaning, with pressure testing outside the case to check for leaks. Water on electronics is never good, but water on live electronics will kill them very quickly.


----------



## dagget3450

Alastair said:


> Welp I sorted out the crashing to blackscreen and reset issue. Turns out my PCI-E power cables were corroded. Well they are quite old at this point. Now that the cables are replaced things are better but still not there yet. I am having an issue still at stock clocks or even underclocked I am having the issue of the drivers timing out. It happens fairly randomly. I have verified this in ARK and in Timespy looping. ARK will just error saying essentially the D3D device hung along with a wattman has been reset error message. This is at stock clocks and also dropped clocks with a 220w power limit. Timespy will just stop with a message saying an error occurred but no driver crash from what I can tell. I have increased the TdrDelay limit in the registry to no avail. I have tried various drivers but am currently on 22.2.2. I have reflashed my GPU bios. I am running the 264W LC bios. I have tried a minimal install of the drivers and I am still getting the crashes. I am going to try Radeon Pro drivers next. Can I still use Radeon Image Sharpening on Radeon Pro drivers? Is it possible these corroded cables have caused some sort of long term damage to my card?


I had an issue where one of my Vega FEs burnt/charred a PCIE cable connection at the GPU 8-pin. If I recall correctly, replacing the cable helped. You may have to check the PSU side too if it's a modular cable. I had a similar issue where some things seemed fine/stable, then a game would crash fairly quickly. I think it was Black Mesa that I was crashing in, of all things.

After reading your post on the issue and the work you've done, one thing that sticks in my mind is that a Windows update is a possibility. I seem to recall an update not too long ago causing issues with AMD GPUs, or something of that sort. Maybe check your Windows updates. And what version of Windows are you on now (referring to Windows 10)? If you're on Win 11, then I'm not sure.


----------



## Alastair

damric said:


> Corroded cables? How old is that PSU?


The PSU itself is only a year old. It's a Cooler Master V1300 Platinum.

HOWEVER, the cables are about 5 or 6 years old. I custom-sleeved these cables myself, and remembering what a herculean task that was, I opted to reuse them. I just reordered the wires to match the pinout of the new PSU, which was done correctly, mind you. No, the cables are just old. I washed them in soapy water every 6 months or so when I cleaned and drained my PC, and then would oven-dry them. Turns out oven drying them wasn't enough or something.


Formula383 said:


> if you have corrosion you have a water leak. If you mean melted / chard cables (black/white dust) then you have a bad connection, this dust would be very small amounts.
> 
> Its far more likely you have a water leak someplace. I would urge you to stop using it right away and do a full teardown inspection and cleaning with pressure testing outside the case to inspect for leaks. water on electronics is never good, but water on live electronics will kill them very quickly.


 I definitely don't have a water leak. 


dagget3450 said:


> i had an issue where on one of my vega FE's it burnt/charred a PCIE cable connection at the GPU 8 pin. if i recall correctly it help my issue after replacing cable. You may have to check the PSU side if its a modular cable also. if i recall i had a similar issue where somethings seemed fine/stable then a game would crash failry quickly. i think it was Black MEsa that i was crashing in of all things.
> 
> After reading your post of issue/work you've done one thing that sticks in my mind is perhaps a windows update is a possibility. i seem to recall not too long ago an update causing issues with i think AMD gpus. or something of that sort. Maybe check your windows updates and what version of windows are you on now?(referring to windows 10) if your on Win 11 then im not sure.


 Yes. Upon replacing the cables I managed to solve the majority of my issues, and a fresh install of Windows appears to have solved the remainder. I'm now back to a perfectly functional card: 24 hours of Timespy looping at 1770MHz/1150 HBM.

But man, am I pissed that I learnt today that we Vega owners aren't getting RSR. It's RDNA1 and up only. What the heck? It's not like us Vega and VII owners aren't also suffering from the supply chain issues and trying our best to eke out whatever life our cards have left.


----------



## damric

Alastair said:


> The PSU itself is only a year old. It's a CoolerMaster V1300 Platinum.
> 
> HOWEVER the cables are about 5 or 6 years old. I custom sleeved these cables myself. And remembering what a herculean task that was for me I opted to reuse my cables. I just reordered the wires to match the pinout of the new PSU. Which was done correctly mind you. No the cables are just old. I washed the cables in soapy water every 6 months or so when I clean and drain my pc. and then would oven dry them. Turns out oven drying them wasn't enough or something.
> I definitely don't have a water leak.
> yes. Apon replacing the cables I managed to solve the majority of my issues. And a fresh install of Windows appears to have solved the remainder of my issues. I'm now back to a perfectly functional card. 24 hours of timespy looping at 1770MHz/1150HBM.
> 
> But man am I pissed that I learnt today that we Vega owners arent getting RSR. Its only RDNA1 and up. What the heck. Its not like is Vega and VII owners also aren't suffering from the supply chain issues and aren't trying our best to eek out whatever life that our cards have left.


Try the Lossless Scaling app on Steam. It's like $5 and works on pretty much any game.


----------



## Alastair

damric said:


> Try the Lossless Scaling app on Steam. It's like $5 and works on pretty much any game.


I've seen a few recommendations for this. Any good? Vega is still a solidly good card, especially with an overclock, but there are a few poorly optimised titles like Ark that can make it chug. So something like RSR or whatever might help alleviate that.


----------



## damric

Alastair said:


> I've seen a few recommendations for this. Any good? Vega is still a solidly good card still especially with an overclock, but there are a few poorly optimised titles like Ark that can make it chug. So something like RSR or whatever might help alleviate it.


Yes. Saved me some painful upgrade money trying to get 4k to run and look good.


----------



## Alastair

Anyone noticed that overclocks aren't as stable on the latest WHQL drivers? My usual 1770/1150 isn't stable and I've had to back the core down by 20MHz vs 22.2.1.


----------



## LicSqualo

Yes, some driver versions are really more sensitive than others to clocks or voltages.
In any case, I'd like to share that yesterday I was able to get SAM (and Resizable BAR) activated on my Vega 64 and Ryzen 1700.
This is the proof:








and this is the forum where you can find all the info regarding this feature.
It seems more stable with an overclock, too.


----------



## Alastair

LicSqualo said:


> Yes, some drivers versions are really more sensitive than others to clocks or voltages.
> In any case, I would to share to you that yesterday I was able to obtain SAM (and Resizable Bar) actived on my Vega 64 and Ryzen 1700.
> This is the proof:
> View attachment 2553979
> 
> and this is the forum where you can found all the info regarding this feature.
> Seems more stable with overclock
> View attachment 2553982


Any significant performance gains with SAM?


----------



## LicSqualo

Alastair said:


> Any significant performance gains with Sam?


Nothing impressive, but given the choice between without and with, I prefer with.


----------



## Alastair

LicSqualo said:


> Nothing impressive, but between choosing whether without or with, I prefer with.


Well, what are you waiting for!? Post some results, man! Come on, man! 😜


----------



## dagget3450

I am so backlogged on hardware, but I so want to test my Vegas with the nimez drivers and see what gains they get. I am fairly certain they have SAM and other new features. You can run a multitude of versions.

I do want to see how they have aged, but I'm also curious about driver threading and CPU overhead, which plagued AMD from around Hawaii through Vega.


----------



## Alastair

dagget3450 said:


> I am so backlogged on hardware but I so want to test my Vegas with the nimez drivers and see what gains they get. I am fairly certain they have Sam and new features. You can run a multitude of versions.
> 
> I do want to see how they have aged but also am curious about driver threading and cpu overhead which plagued AMD since like Hawaii to Vega.


I'm going to switch to the nimez drivers. I am hoping that, as with SAM, giving us Vega owners some RSR support is just a matter of a slight driver mod.


----------



## LicSqualo

Alastair said:


> Well what you waiting for!? Post some results man! Come on man! 😜
> 
> View attachment 2554022


----------



## Alastair

LicSqualo said:


> View attachment 2554080


3%? Heck, I'll take it! Could you do a bit more? Timespy, Superposition, a few games. I've heard SAM doesn't play well with everything. 

+rep


----------



## LicSqualo

That's right. From my observations, SAM doesn't play a great role in every game. As I wrote, it's free in any case, so it's better with SAM.
On the left is today's run without SAM; on the right, yesterday's run with SAM activated.


----------



## Alastair

LicSqualo said:


> Is right. For my observations SAM don't play a great rule in every games. As write, (for free, in anycase) is better with SAM
> At left my today run without SAM at right (yesterday) run with SAM activated.
> View attachment 2554113


An extra 2% would take me to approx 9300-ish in Timespy, which would be enough to push me into 5th for Vega 64 on HWBOT. 🚀


----------



## Alastair

LicSqualo said:


> Yes, some drivers versions are really more sensitive than others to clocks or voltages.
> In any case, I would to share to you that yesterday I was able to obtain SAM (and Resizable Bar) actived on my Vega 64 and Ryzen 1700.
> This is the proof:
> View attachment 2553979
> 
> and this is the forum where you can found all the info regarding this feature.
> Seems more stable with overclock
> View attachment 2553982


I see the post you linked mentions something about Easy Anti-Cheat. Do these modded drivers cause issues with anti-cheat engines?


----------



## Alastair

6th place on HW BOT.


----------



## LicSqualo

Alastair said:


> 6th place on HW BOT.
> View attachment 2555976


With the modded driver?


----------



## Alastair

LicSqualo said:


> With modded driver?


No, that's on the official 22.4.1.


----------



## LicSqualo

Alastair said:


> no thats on official 22.4.1


Nice


----------



## Blackops_2

Saw GPU prices were coming down and just ordered a 5600 to replace my 3600X. Finally getting around to fixing my Vega 56. It just needs a little thermal paste on the hot spot; it's flashed to the 64 BIOS. I was wondering if I should sell it, though? I paid 250 for it and then bought a Bykski block to put on it. Just thinking, though, it's not likely I'd sell the block easily, and truthfully, for what little gaming I do, my 1080 Mini is still going strong at 2K; Vega pushed a little should yield slightly better results. Alternatively, I could build a new system in a year or so and retire this one for the GF.


----------



## Alastair

Anyone else finding their 64s running out of VRAM before raw processing power? In simulator titles like MSFS and IL-2 Great Battles I see VRAM usage hit 8GB according to monitoring, and it regularly starts stuttering.


----------



## Blackops_2

fcchin said:


> take your time to buy US$10 13.5 W/mK paste and 13 W/mK pads in various thicknesses (0.1mm, 0.2mm, 0.5mm, 1mm; US$1~4 each) and stack them up if you need 0.3mm or 0.7mm etc,
> 
> and read a special link shared many times before, "how to screw on the waterblock NOT X method"
> 
> finally, wait for a nice 2~3 day holiday and enjoy cleaning and rebuilding the whole thing.


So I'm just now rebuilding my AMD rig, as I tore down my Vega rig and ran the little guy for a while. I just took the block off, and part of the die wasn't even covered with thermal paste. I can't seem to find "how to screw on the waterblock NOT X method" though. Any links?


----------



## semale88

Hi all, has anyone unlocked the Frontier for OC?


----------



## chris89

Hello. I'm new to the whole Vega owners thread. I just bought a PowerColor Red Devil AMD Radeon RX Vega 56 on eBay for $150 USD. So my question is: are you guys flashing your Vega 56s to 64s, and how about running the Liquid Cooled BIOS on the PowerColor Red Devil cooler?

How about the VegaBiosEditor as well? Anyone using it?

Thanks


----------



## dagget3450

chris89 said:


> Hello. I'm new to the whole Vega owner thread. I just bought a Powercolor Red Devil AMD Radeon RX Vega 56 on eBay for $150 usd. So my question is are you guys flashing your Vega 56s to 64s & how about running the Liquid Cooled BIOS on the Powercolor Red Devil cooler?
> 
> How about the VegaBiosEditor as well? Anyone using it?
> 
> Thanks


This thread has been almost dead. Feel free to update your experience with the vega56 though. I still have my Vega FE hanging around. I just rarely use it.


----------



## chris89

@dagget3450 Thanks, will do. So do you know if I can flash the Vega 64 Liquid BIOS to a Vega 56, as long as it's all the same PowerColor Red Devil?

Thanks


----------



## dagget3450

chris89 said:


> @dagget3450 Thanks okay will do. So do you know if I can flash liquid bios vega 64 to vega 56, as long as its all the same Powercolor Red Devil?
> 
> Thanks


I honestly do not know. I think there is a bios thread here. I never had a v56 myself.

Maybe this might help
Preliminary view of AMD VEGA Bios


----------



## Alastair

semale88 said:


> hi all, someone unlocked frontier for OC?


The Frontier is blocked from OCing? Have you tried Afterburner?


----------



## damric

chris89 said:


> Hello. I'm new to the whole Vega owner thread. I just bought a Powercolor Red Devil AMD Radeon RX Vega 56 on eBay for $150 usd. So my question is are you guys flashing your Vega 56s to 64s & how about running the Liquid Cooled BIOS on the Powercolor Red Devil cooler?
> 
> How about the VegaBiosEditor as well? Anyone using it?
> 
> Thanks


Don't bother flashing anything. Do the soft power table mod. Be sure you change the hex values for both power and current, and make sure your PSU is strong, because the card will draw over 400W.
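
As an aside on how the soft power table mod works under the hood: on Vega the driver will read a binary PP_PhmSoftPowerPlayTable registry value in place of the table baked into the card's BIOS, so the mod amounts to patching a few little-endian fields in a byte blob. Here is a minimal Python sketch of just that byte-patching step; the offsets below are placeholders, not real Vega offsets (they vary per card and table revision, and the usual soft-PPT editor tools locate them from the table header for you):

```python
# Sketch: patch power/current limits in a saved PowerPlay table dump.
# The offsets are PLACEHOLDERS; they differ per card and table version.
TDP_OFFSET = 0x60  # hypothetical offset of the TDP field (watts, u16 LE)
TDC_OFFSET = 0x62  # hypothetical offset of the TDC field (amps,  u16 LE)

def patch_power_table(table: bytes, tdp_w: int, tdc_a: int) -> bytes:
    """Return a copy of the table with new TDP/TDC values written in."""
    buf = bytearray(table)
    buf[TDP_OFFSET:TDP_OFFSET + 2] = tdp_w.to_bytes(2, "little")
    buf[TDC_OFFSET:TDC_OFFSET + 2] = tdc_a.to_bytes(2, "little")
    return bytes(buf)

# Example: write 300 W / 250 A into a dummy 0x100-byte table.
table = bytes(0x100)
patched = patch_power_table(table, 300, 250)
print(patched[0x60:0x62].hex())  # 2c01 -> 300 as u16 little-endian
```

You would then write the patched blob back as PP_PhmSoftPowerPlayTable under the GPU's class key with a registry editor and reboot; don't try this without knowing your card's actual table layout.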


----------



## Zero989

chris89 said:


> @dagget3450 Thanks okay will do. So do you know if I can flash liquid bios vega 64 to vega 56, as long as its all the same Powercolor Red Devil?
> 
> Thanks


Your biggest issue with the Vega 64 Liquid BIOS is losing video-out signals, but it will "flash".


----------

