# [TS] DDR4 4000MHz RAM increases FPS in games by 10-19% over DDR4 2133/DDR3 1600



## caenlen

http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page3.html
Quote:


> On a high-end gaming system equipped with two GeForce GTX 980 Ti cards, ARMA 3 sees a 10% increase in minimum frame rates when going from DDR4-3000 to 4000, which is surprising. DDR4-4000 even provides a noteworthy increase over 3600MHz memory, it would be interesting to find what'd be the limit here.




Amazing... I had no idea fast RAM did this now, LOL. I am going to go broke on the 2017 build I am planning


----------



## ChevChelios

+*18* avg FPS over DDR4-2133MHz? JUST from memory MHz alone and nothing else? That's the kind of boost going to the next tier of GPU gives...

I don't believe it...

or is it just Fallout?

edit:
Quote:


> Fallout 4 is a unique animal and you won't see many, if any games, that scale quite how it does. Although DDR4-4000 wasn't a huge step forward over the 3600MHz memory, it was a whopping great step from 3000MHz delivering a 19% greater minimum frame rate.


yeah it seems F4 is an anomaly here, although some other games still show smaller but visible improvements

interesting, as far as I know DDR4-2133 vs DDR3-1600 makes practically no difference in games, but going faster after that apparently gives something

my mobo + CPU + RAM upgrade might be closer than I thought if even RAM increases FPS these days and not just the CPU


----------



## ToTheSun!

I just perused the article. Where are the timings for 2133 mentioned?


----------



## Andr3az

Looks like it's time to upgrade then.

DDR4, GTX 1080 / Polaris and empty wallet, here I come.


----------



## Nehabje

This seems pretty huge. Any other games that seem to get these benefits from higher clocked ram?


----------



## lilchronic

This is what happens when you overclock. lol


----------



## caenlen

Quote:


> Originally Posted by *Nehabje*
> 
> This seems pretty huge. Any other games that seem to get these benefits from higher clocked ram?


I mean, 4 random games and 4 benefits... Fallout 4 is unique, but that doesn't mean this isn't still a huge leap... even Black Ops 3 looks like it gets an extra 10-12 frames at 1440p going from 2133 to 4000... amazing lol

I'm sure with 4 random games from 4 different companies, this def applies to most if not all newer games


----------



## ChevChelios

ah there it is

http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page4.html
Quote:


> As you can see a single GTX 980 Ti is fully tapped even with DDR4-2133 memory. The ultra-quality settings paired with HairWorks are too much for this GPU configuration to take advantage of the CPU processor power provided by all that additional bandwidth.


their first tests were on SLI 980Ti and even then it's *only* Fallout 4 that showed such a big increase

on one 980Ti there was no real difference between DDR4-2133 and DDR4-4000

so - no use on single-GPU systems at these prices; may be OK on top-end SLI since you're already paying through the roof anyway and it will increase the frames somewhat


----------



## caenlen

Quote:


> Originally Posted by *ChevChelios*
> 
> ah there it is
> 
> http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page4.html
> their first tests were on SLI 980Ti and even then it's *only* Fallout 4 that showed such a big increase
> 
> on one 980Ti there was no real difference between DDR4-2133 and DDR4-4000
> 
> so - no use on single-GPU systems at these prices; may be OK on top-end SLI since you're already paying through the roof anyway and it will increase the frames somewhat


Need to see tests with a GTX 1080; with the new chip design plus GDDR5X and future HBM2, single-GPU setups may still benefit the same...


----------



## AcEsSalvation

Everyone always talks about the FPS increase with RAM... I always thought it would make more sense for the impact to be on loading times and on decreasing FPS drops when loading w/o a loading screen. I would love to see a test based on that.


----------



## Nammi

http://www.overclock.net/t/1487162/an-independent-study-does-the-speed-of-ram-directly-affect-fps-during-high-cpu-overhead-scenarios

Even DDR3 sees plenty of minimum-FPS increase in some games with higher frequency.


----------



## caenlen

Will be interesting to see the GTX 1080 Ti in a single-GPU configuration with Samsung 10nm DDR4 RAM next year.


----------



## ChevChelios

well, after reading some of this I will probably be aiming for at least 3000-3600MHz on my future CPU-RAM replacement, assuming it doesn't go far beyond that by then


----------



## caenlen

Quote:


> Originally Posted by *ChevChelios*
> 
> well, after reading some of this I will probably be aiming for at least 3000-3600MHz on my future CPU-RAM replacement, assuming it doesn't go far beyond that by then


Likewise my friend.

FYI, Samsung is doing 10nm DDR4 later this winter; that is what I am waiting for, plus a single 1080 Ti and maybe an i5 Kaby Lake, I haven't decided yet


----------



## Silent Scone

Been using 8GB at 4000MHz on the Impact VIII for a few months now. It's speedy in all respects, so these kinds of gains are welcome.


----------



## zetoor85

Old news this, really; Digital Foundry covered this already. But if you want the fastest RAM for gaming:
3600 G.Skill Trident Z CL15 - these smack any 4000MHz kit on the market due to the CL15.

so if you went out to buy 4000MHz RAM, you jumped in with both legs; CL15 @ 3600MHz is the fastest we can get right now. 4000MHz is still hyped to sell and doesn't perform best
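For what it's worth, that claim can be sanity-checked with the usual first-word latency formula (CAS latency divided by the I/O clock, i.e. CL × 2000 / transfer rate in MT/s). A rough sketch only; the CL19 figure for the 4000MHz kit is an assumed typical retail bin, not from the article:

```python
def first_word_latency_ns(speed_mts: int, cl: int) -> float:
    """First-word latency in ns: CL cycles at the I/O clock (half the transfer rate)."""
    return cl * 2000 / speed_mts

# 3600 CL15 vs a typical 4000 CL19 kit (CL19 is an assumption)
print(first_word_latency_ns(3600, 15))  # ~8.33 ns
print(first_word_latency_ns(4000, 19))  # ~9.50 ns
```

By this measure the 3600 CL15 kit does answer faster, though the 4000MHz kit still wins on raw bandwidth, so "fastest" depends on the workload.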


----------



## Pentdragon

Planetside 2 would be interesting to test and I don't mean 2GHz vs 4GHz, rather 1.3GHz vs 4GHz RAM.


----------



## tpi2007

Interesting. What I take from this article:

1. Higher speed grades and memory overclocking seem to provide very decent future-proofing. Single-card gaming isn't taking much advantage of higher speeds yet, but future high-end GPUs later this year may change that;

2. DDR4 2133MHz on Skylake is wholly inadequate; the real starting point should be 2400MHz, but in practice, as of now, for the price asked, it doesn't make sense to get anything below 3000MHz;

3. A 16GB DDR4 3000MHz kit is by far the best price/performance, with 3600MHz coming second. At this point in time there's little sense in making a new build with only 8GB.

Now, is there any article that shows results for dual- and quad-channel gaming? How much of a difference is there? Can a lower-clocked quad-channel setup beat a higher-clocked dual-channel setup?


----------



## Silent Scone

Quote:


> Originally Posted by *zetoor85*
> 
> Old news this, really; Digital Foundry covered this already. But if you want the fastest RAM for gaming:
> 3600 G.Skill Trident Z CL15 - these smack any 4000MHz kit on the market due to the CL15.
> 
> so if you went out to buy 4000MHz RAM, you jumped in with both legs; CL15 @ 3600MHz is the fastest we can get right now. 4000MHz is still hyped to sell and doesn't perform best


Jumping in with both legs is exactly what you've done by listening to Digital Foundry and automatically taking this as gospel, especially given that they show gains beyond this in the article in the OP. So which is correct?

lol, people are thick.


----------



## pengs

It was going to happen sometime. If there are no limiting factors (GPU, API, CPU) then memory does come into play. You can somewhat see what's happening with The Witcher 3: the GPU is most likely maxed out and tessellating, and large view distances may be hitting draw call limits, etc.

Reminds me of how people recommended 1600MHz DDR3 over 2133 and claimed the memory controller on the platform was at fault, which was kind of a half-truth.
Quote:


> Originally Posted by *Pentdragon*
> 
> Planetside 2 would be interesting to test and I don't mean 2GHz vs 4GHz, rather 1.3GHz vs 4GHz RAM.


It may help now that they've done some threading work on the game, but it's still running in DX9. I'm pretty sure that game is all API and draw calls. They would probably see the same improvement DayZ SA saw with a move to DX11; DX11 removes that limit twice over, but nothing compared to Vulkan and DX12.
IMO this is why low-level APIs are important: they reduce overhead, thread better, remove draw call limits and allow the hardware to be fully utilized.


----------



## happyrichie

Quote:


> Originally Posted by *AcEsSalvation*
> 
> Everyone always talks about the FPS increase with RAM... I always thought it would make more sense for the impact to be on loading times and on decreasing FPS drops when loading w/o a loading screen. I would love to see a test based on that.


when loading a game, the transfer rate of the SSD is slower than the RAM, so that's the bottleneck there.


----------



## zetoor85

Quote:


> Originally Posted by *Silent Scone*
> 
> Jumping in with both legs is exactly what you've done by listening to Digital Foundry and automatically taking this as gospel. Especially given the fact they show gains beyond this in the article in the Op. So which is correct?
> 
> lol, people are thick.


Digital Foundry only benched various types of RAM speeds, not the fastest RAM you can get; they only show the FPS difference from stock to 3600MHz and how much you gain. They don't show the fastest DDR4 RAM on the market.

First of all, Trident Z 3600MHz CL15 is the fastest RAM you can get right now, but if you don't want all the nice FPS, go ahead, CL19 2133MHz for you my friend.

so why the hate on Digital Foundry?


----------



## AcEsSalvation

Quote:


> Originally Posted by *happyrichie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *AcEsSalvation*
> 
> Everyone always talks about the FPS increase with RAM... I always thought it would make more sense for the impact to be on loading times and on decreasing FPS drops when loading w/o a loading screen. I would love to see a test based on that.
> 
> 
> 
> when loading a game, the transfer rate of the SSD is slower than the RAM, so that's the bottleneck there.

RAM holds quite a bit of the data during the initial load. The load does get split between storage and RAM, but the speed of the RAM could still influence the delays/FPS spikes. Even if it's minuscule, it could have a larger impact than the typical 2-3 FPS that a lot of tests show.


----------



## Silent Scone

Quote:


> Originally Posted by *zetoor85*
> 
> Digital Foundry only benched various types of RAM speeds, not the fastest RAM you can get; they only show the FPS difference from stock to 3600MHz and how much you gain. They don't show the fastest DDR4 RAM on the market.
> 
> *First of all, Trident Z 3600MHz CL15* is the fastest RAM you can get right now, but if you *don't want all the nice FPS, go ahead, CL19 2133MHz for you my friend.*
> 
> so why you hate on digital foundry?


What?

LOL. Also, I'm assuming you mean as binned off the shelf. Bit of an offhand comment, doesn't really make sense. With voltage, a lot of these timings can scale depending on the capability of the IC.


----------



## headd

Nothing new; Skylake scales with fast DDR4 memory like crazy...


----------



## toncij

Fishy. I've tested 2400 vs 4200 and there was ZERO gain in games and in most benchmarks. The only benchmark where I've seen an improvement was compression...


----------



## Woundingchaney

These tests were done with 8 gigs of system RAM, I believe. Given that no one with these systems is typically running 8 gigs of system RAM, I'm wondering if RAM amount also plays an important role.


----------



## toncij

RAM amount is a separate matter: more RAM sticks means more strain on the controller. You won't be seeing this frequency with all slots filled, at least not with these sticks.

But the RAM amount itself does not make a difference as long as you have enough.


----------



## Tivan

Quote:


> Originally Posted by *Woundingchaney*
> 
> These tests were done with 8 gigs of system RAM, I believe. Given that no one with these systems is typically running 8 gigs of system RAM, I'm wondering if RAM amount also plays an important role.


Given that more RAM doesn't increase the speed at which it can be accessed, I doubt it. Previous testing also showed that. More RAM only helps when you're out of RAM, and boy does FPS tank then.

P.S. RAM speed has always increased performance in some CPU-bound scenarios.


----------



## Asus11

It's usually easier to get higher frequency / better timings with 4GB RAM sticks than 8GB... but 8GB in a gaming machine seems tiny, even though it's probably just enough.

I have been tempted to go down the 8GB route and get some mega-high-frequency RAM, but it just seems budget-like to get only 8GB, idk


----------



## JackCY

Seems wishful. With a single GPU (i.e. the majority of people), and even when encoding videos, CPU IPC and clock are king over any RAM speed or timings. Sure, it helps to get DDR3 over 2GHz with decent timings, and the same with DDR4, but unless one runs a specific scenario like they did to create a memory-limited test, the extra RAM speed shows as close to nothing. Speed on DDR4 is better than DDR3, but latency (speed + timings combined) can be worse. Skylake is nice with DDR4, but there is no advantage in spending anything more than what the best price/performance options offer, which are around 2.4-3GHz.

On the other hand, if someone is running dual 980Tis or dual 1080s, they really can afford the fastest, most expensive RAM too, to get the max out of their expensive config and support the Asian companies and workers who earn close to nothing when making this RAM.
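To illustrate the latency point with first-word latency (CL cycles at the I/O clock, i.e. CL × 2000 / transfer rate in MT/s). The CL values below are assumed typical bins for each generation, not measured figures:

```python
# First-word latency (ns) = CL * 2000 / transfer rate (MT/s)
# DDR3-2133 CL10 vs DDR4-2133 CL15 -- CL values are assumed typical bins
ddr3_ns = 10 * 2000 / 2133
ddr4_ns = 15 * 2000 / 2133
print(f"DDR3-2133 CL10: {ddr3_ns:.1f} ns")  # ~9.4 ns
print(f"DDR4-2133 CL15: {ddr4_ns:.1f} ns")  # ~14.1 ns
```

Same transfer rate, but the DDR4 kit answers noticeably later; only at higher speed grades does DDR4 claw that back.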


----------



## Assirra

Quote:


> Originally Posted by *toncij*
> 
> Fishy. I've tested 2400 vs 4200 and there was ZERO gain in games and in most benchmarks. The only benchmark where I've seen an improvement was compression...


Well, some games get more out of it than others.
Back when Digital Foundry did their 2500K test, they found a couple where it had quite a bit of impact.
About a minute into this video.


----------



## Newbie2009

Makes my 1600MHz feel inadequate


----------



## EniGma1987

Wow. DDR4 came WAY down in cost lately:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820232193


----------



## Chickensoup23

interesting, but would such high speeds compromise an overclock significantly? I feel like the gains of a 15-20% OC faaaaaar outweigh any sort of high RAM frequencies.


----------



## toncij

Nah, don't believe it. I generally don't believe any reviewer but that's my old problem.


----------



## czin125

Quote:


> Originally Posted by *EniGma1987*
> 
> Wow. DDR4 came WAY down in cost lately:
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820232193


The 2x8GB kit at 4133 with the same timings and voltage is 219 though


----------



## Asus11

Quote:


> Originally Posted by *Silent Scone*
> 
> What?
> 
> LOL. Also, I'm assuming you mean as binned off the shelf. Bit of an offhand comment, doesn't really make sense. With voltage, a lot of these timings can scale depending on the capability of the IC.


what kit do you have?


----------



## corky dorkelson

Please note that there were only teeny tiny itsy bitsy gains when they used a single 980ti. If you sprung for the fastest sticks with a single GPU setup, you likely threw away money from a gaming standpoint. Now if you do have a hot SLI setup, it seems there are some gains to be had with better ram.


----------



## KingG14

Would be nice if they compared high-speed, low-latency DDR3 RAM as well, since there are still a lot of people running DDR3 in their systems, and to also know how the low latency factors into the RAM's overall gaming performance.


----------



## EniGma1987

Quote:


> Originally Posted by *czin125*
> 
> the 2x8gb kit at 4133 at the same timings and voltage is 219 though


True, true, but these prices really aren't that bad compared to the same top-speed modules from the DDR3 era. Those were a few hundred bucks for 8GB kits pushing past 2666MHz. These kits at and above 4133 are the same situation; the cost is a lot, sure, for the 16GB ones, but less than the cost increases we had during DDR3.


----------



## Tivan

Quote:


> Originally Posted by *corky dorkelson*
> 
> Please note that there were only teeny tiny itsy bitsy gains when they used a single 980ti. If you sprung for the fastest sticks with a single GPU setup, you likely threw away money from a gaming standpoint. Now if you do have a hot SLI setup, it seems there are some gains to be had with better ram.


Depends on the games you play. I rarely play anything that puts more than 50% load on my GPU (though I sometimes do).


----------



## Imouto

This should be the definition of "diminishing returns".

The more you invest the less you gain.

EDIT: And only if your system qualifies for such gains lol.


----------



## outofmyheadyo

This makes no sense. What about all those reviews that showed marginal increases with increased RAM speed?


----------



## Tivan

Quote:


> Originally Posted by *outofmyheadyo*
> 
> This makes no sense. What about all those reviews that showed marginal increases with increased RAM speed?


They were benchmarking those fancy-looking games that severely GPU-bottleneck (because they're so pretty), but those games, while definitely popular, are not representative of the gaming world as a whole.

The RAM we're talking about here is, first and foremost, system RAM, not video RAM. Hence it's more closely tied to situations where some kind of single- or multi-core CPU performance is the limiting factor. That isn't an area news sites bench frequently, but there are a couple of tests that do look at it (in relation to RAM), and they show results similar to this.

But yeah, faster system RAM won't make your GPU faster. All it can do is make it run closer to full load, if it wasn't doing so before.


----------



## Ascii Aficionado

Quote:


> Originally Posted by *outofmyheadyo*
> 
> This makes no sense. What about all those reviews that showed marginal increases with increased RAM speed?


Quote:


> Originally Posted by *Tivan*
> 
> They were benchmarking those fancy looking games that severely GPU bottleneck (because they're so pretty), but those games, while they are definitely popular, are not representative of the gaming world as a whole.
> 
> The RAM we talk about here, is first and foremost, system RAM. Not video RAM. Hence it's more closely tied to situations where some kind of single or multi core CPU performance is a limiting factor. Which isn't an area of frequent news site benching, but there's a couple tests that do look at that (in relation to RAM), and they are showing similar results like this.
> 
> But yeah, faster system RAM won't make your GPU faster. All it can do is make it run closer to full load, if it wasn't doing so before.


1333 vs 1600 was often a topic back in the day, and showed a 1% increase on average; I think one rare occurrence showed a 4% increase. In the end, 1600 wasn't worth it back then. I know games were used where the CPU was a bottleneck as well.

Feel free to correct me if I'm wrong; I never really paid attention until higher-end DDR4 started showing worthwhile performance increases.


----------



## prjindigo

Basically this only applies to games that store textures in main memory; in properly built and developed games it will make little difference.


----------



## ZealotKi11er

Quote:


> Originally Posted by *prjindigo*
> 
> basically this only applies when dealing with games that store textures in main memory; in properly built and developed games it will make little difference


This seems to take effect in CPU limited games.


----------



## Slink3Slyde

Quote:


> Originally Posted by *Ascii Aficionado*
> 
> 1333 vs 1600 was often a topic back in the day and showed a 1% increase on average, I think one rare occurrence showed a 4% increase, in the end 1600 wasn't worth it way back then. I know games were used where the CPU was a bottleneck as well.
> 
> Feel free to correct me if I'm wrong, I never really paid attention until higher-end DDR4 started showing worthwhile performance increases.


Someone posted this already I think, but here it is:

http://www.overclock.net/t/1487162/an-independent-study-does-the-speed-of-ram-directly-affect-fps-during-high-cpu-overhead-scenarios

There were other threads a couple of years ago with faster DDR3 showing improvements in games, but that was the main one. As well as the more recent Digital Foundry videos and TechSpot's Fallout 4 review, where they discovered how much it loves faster RAM by accident, IIRC.

Linus's old video is the one people usually cite as proof RAM doesn't matter for gaming; he benchmarked Far Cry 3 and some other GPU-limited games on a 660 Ti on ultra settings, so the bottleneck was hard on the GPU.

I don't think RAM makes a huge difference all of the time, but it's not much more expensive to go for the faster stuff anyway. I'm sure there are diminishing returns, but I don't find it hard to believe that faster is better.


----------



## corky dorkelson

It would be more interesting to see how systems with locked i3 or i5 CPUs respond to faster ram. Those systems are likely to see some benefit from faster ram when paired with an enthusiast GPU. Granted, many owners with the lower spec CPUs wouldn't be likely to fork over big dough for ram, but it would be interesting to see how big the gains are.


----------



## ZealotKi11er

Quote:


> Originally Posted by *corky dorkelson*
> 
> It would be more interesting to see how systems with locked i3 or i5 CPUs respond to faster ram. Those systems are likely to see some benefit from faster ram when paired with an enthusiast GPU. Granted, many owners with the lower spec CPUs wouldn't be likely to fork over big dough for ram, but it would be interesting to see how big the gains are.


I always get the fastest RAM I could find as long as the price premium is not too much.


----------



## mothergoose729

I don't believe it... really. Not to call out TechSpot or anything, but those results make no sense.


----------



## corky dorkelson

Quote:


> Originally Posted by *mothergoose729*
> 
> I don't believe it... really. Not to call out tech spot or anything, but those results make no sense.


So you don't believe that in HEAVILY CPU-intensive situations faster RAM will produce better performance? It's pretty clear-cut if you think about it. RAM is a bottleneck of sorts when the CPU is doing everything it can to keep up.


----------



## EniGma1987

Quote:


> Originally Posted by *Ascii Aficionado*
> 
> 1333 vs 1600 was often a topic back in the day and showed a 1% increase on average, I think one rare occurrence showed a 4% increase, in the end 1600 wasn't worth it way back then. I know games were used where the CPU was a bottleneck as well.
> 
> Feel free to correct me if I'm wrong, I never really paid attention until higher-end DDR4 started showing worthwhile performance increases.


You can't compare tests done on one processor architecture and a different memory architecture to modern stuff. The newer Intel CPUs have much more advanced prefetch designs to boost performance and help negate cache misses. Having faster RAM helps keep that prefetcher fed nicely.


----------



## czin125

Looks like a 4-core Skylake can beat a 6-core Haswell in Handbrake if RAM speeds differ by at least 1600MHz


----------



## toncij

I did bother and actually checked. Single scene, after load.

Doom - 5K, maxed out, TitanX

DDR4 - 2130MHz - 39-40 FPS
DDR4 - 3200MHz - 39-40 FPS

Case closed AFAIC.


----------



## outofmyheadyo

http://www.legitreviews.com/ddr4-memory-scaling-intel-z170-finding-the-best-ddr4-memory-kit-speed_170340

Says the speed of your memory is irrelevant


----------



## TheReciever

Quote:


> Originally Posted by *toncij*
> 
> I did bother and actually checked. Single scene, after load.
> 
> Doom - 5K, maxed out, TitanX
> 
> DDR4 - 2130MHz - 39-40 FPS
> DDR4 - 3200MHz - 39-40 FPS
> 
> Case closed AFAIC.


Find you a CPU bound scenario and rerun


----------



## Liranan

Quote:


> Originally Posted by *czin125*
> 
> Looks like a 4-core Skylake can beat a 6-core Haswell in Handbrake if RAM speeds differ by at least 1600MHz


Dual channel yes, but Skylake stands no chance against triple channel Haswell, let alone quad.


----------



## Liranan

Quote:


> Originally Posted by *toncij*
> 
> I did bother and actually checked. Single scene, after load.
> 
> Doom - 5K, maxed out, TitanX
> 
> DDR4 - 2130MHz - 39-40 FPS
> DDR4 - 3200MHz - 39-40 FPS
> 
> Case closed AFAIC.


Now run the same benchmark in a CPU-limited scenario (e.g. 720p) and you will see the difference.

Of course, anyone who buys expensive RAM just to alleviate bad coding is by definition accepting the developers' laziness and bad coding, and thus deserves what they get (a fool and their money, as the saying goes).


----------



## Imouto

Quote:


> Originally Posted by *Liranan*
> 
> Now run the same benchmark in a CPU limited scenario (e.g. 720p) and you will see the difference.


Man, you should add a "/sarcasm" at the end of these kinds of sentences, or someone may think you're serious.


----------



## Liranan

Quote:


> Originally Posted by *Imouto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Now run the same benchmark in a CPU limited scenario (e.g. 720p) and you will see the difference.
> 
> 
> 
> Man, you should add a "/sarcasm" at the end of these kind of sentences or someone may think you're serious.

I am serious. Maybe you don't remember but there was a time when people ran benchmarks at 800x600 or even lower just to induce CPU limitations, despite nobody playing games at those resolutions.


----------



## Imouto

Quote:


> Originally Posted by *Liranan*
> 
> I am serious. Maybe you don't remember but there was a time when people ran benchmarks at 800x600 or even lower just to induce CPU limitations, despite nobody playing games at those resolutions.


So you're adding an asinine factor to your testing to prove an asinine point. That renders the test utterly useless.


----------



## mothergoose729

Quote:


> Originally Posted by *corky dorkelson*
> 
> So you don't believe that in HEAVILY cpu intensive situations that faster ram will produce better performance? It's pretty clear-cut if you think about it. RAM is a bottleneck of sorts when the CPU is doing everything it can to keep up.


Very few applications see almost perfect scaling with memory bandwidth. The difference between DDR4 3000 and DDR4 4000, assuming adjusted timings, in both bandwidth and latency, is roughly 15%. It just doesn't make sense that a game, which is almost always GPU-constrained, would see such a performance delta... especially at sub-100 FPS. I just don't see it. Something stinks.
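As a back-of-envelope reference for that delta (peak numbers only; sustained bandwidth is lower, and the dual-channel configuration is an assumption based on a standard Skylake board):

```python
def peak_bandwidth_gbs(speed_mts: int, channels: int = 2) -> float:
    """Peak theoretical bandwidth: transfers/s * 8 bytes per 64-bit channel * channels."""
    return speed_mts * 8 * channels / 1000

bw_3000 = peak_bandwidth_gbs(3000)  # 48.0 GB/s
bw_4000 = peak_bandwidth_gbs(4000)  # 64.0 GB/s
print(f"raw gain: {bw_4000 / bw_3000 - 1:.0%}")  # raw gain: 33%
```

Raw peak bandwidth scales by ~33% between those grades; an effective figure in the ~15% range, blending bandwidth and latency, only underlines how surprising a 19% in-game delta is.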


----------



## zealord

Definitely gonna read it thoroughly later when I have a bit more time.

I am always very interested in RAM and CPU bottleneck tests. Minimum FPS is the single most interesting thing to benchmark in PC gaming.

Did they downclock the RAM and use the same settings? That would be a tragedy though. I would love to see DDR4 2133 with the best timings compared to DDR4 4000 with the best timings.


----------



## mothergoose729

Quote:


> Originally Posted by *Liranan*
> 
> Dual channel yes, but Skylake stands no chance against triple channel Haswell, let alone quad.


The handful of applications that do benefit from the additional bandwidth see at most a 30% increase. Keep in mind that quad channel only helps with large, sequential data transfers; random read/write performance sees no benefit.

Unless games are caching massive data packets sequentially in main memory, I just don't understand how increasing bandwidth is going to provide more than a marginal performance increase in games, with either quad-channel or dual-channel memory.


----------



## jprovido

I wonder how my 3200MHz CL14 quad-channel kit would perform in this benchmark


----------



## TranquilTempest

Quote:


> Originally Posted by *zealord*
> 
> Minimum fps is the single most interesting thing to benchmark in the PC gaming industry


I'd put that second to full-chain latency (mouse click to photons).


----------



## Liranan

Quote:


> Originally Posted by *Imouto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I am serious. Maybe you don't remember but there was a time when people ran benchmarks at 800x600 or even lower just to induce CPU limitations, despite nobody playing games at those resolutions.
> 
> 
> 
> So you're adding an asinine factor to your testing to prove an asinine point. That renders the test utterly useless.

Testing a badly coded game and saying that you need to get the fastest RAM possible is obviously useful.


----------



## Imouto

Quote:


> Originally Posted by *Liranan*
> 
> Testing a badly coded game and saying that you need to get the fastest RAM possible is obviously useful.


I think my English gave up, as I'm totally lost here.

You can't ask a Titan X owner to test anything at 720p, because the results are absolutely irrelevant for him. So RAM speed at that resolution matters? So what? He's not going to play at that resolution. Same with over-the-top graphics options that add little to no IQ, but that's the way reviewers do things nowadays.

If you do, I'm no one to question your life choices.


----------



## luckyduck

Quote:


> Originally Posted by *Newbie2009*
> 
> Makes my 1600mhz feel inadequate


Right. lol


----------



## bigjdubb

Quote:


> Originally Posted by *Liranan*
> 
> Testing a badly coded game and saying that you need to get the fastest RAM possible is obviously useful.


It's completely useful if what we get are badly coded games.


----------



## Liranan

Quote:


> Originally Posted by *Imouto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Testing a badly coded game and saying that you need to get the fastest RAM possible is obviously useful.
> 
> 
> 
> I think my English gave up as I'm totally lost here.
> 
> You can't ask a Titan X owner to test anything at 720p because the results are absolutely irrelevant for him. So RAM speed at that resolution matters? So what? He's not going to play at resolution. Same with over the top graphic options that add little to no IQ but that's the way reviewers do things nowadays.
> 
> If you do I'm no one to question your life choices.

This is the way CPUs were tested for years: drop everything to 800x600 in order to create CPU-limited scenarios to test which CPU was the best. When raised to the average resolution people played at (1080p), that bottleneck disappeared, but that didn't stop people from using those resolutions.


----------



## Liranan

Quote:


> Originally Posted by *bigjdubb*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Testing a badly coded game and saying that you need to get the fastest RAM possible is obviously useful.
> 
> 
> 
> It's completely useful if what we get are badly coded games.

The alternative is not to buy those games. Which is what I do. I refuse to buy games as badly coded as Fallout 4, in which textures are missing and the game is riddled with bugs.


----------



## outofmyheadyo

Not to mention it's boring as well


----------



## Blameless

Not much detail given about the system configuration.


----------



## iRUSH

I've mentioned this quite a few times: high-frequency RAM increases your minimum FPS, and minimum FPS is a very important thing to keep in check for the best experience. Average FPS means very little to me if I'm bottoming out into unplayable territory at times. Having strong IPC and fast RAM works for me, and articles like this make me feel less crazy as a minority around here lol..


----------



## caenlen

::wake up:: I wonder how that ram topic I made is doing... ::click, click...:: ::loading:: 65 unread messages.... LOL


----------



## stargate125645

Well, this makes me want to mess with my BCLK to get to 3400MHz.


----------



## hokk

I don't know many people playing at 720p.


----------



## bucdan

This is very interesting. Back then, people were doing tests for DDR3, and lots of tests concluded that RAM speed didn't matter much in video games; mind you, this was for speeds between 1333, 1600, 1866, 2133, and 2400. I guess the margins and gains don't truly show until you go really high? I'm curious to see if they can test DDR3-1600, just to see the scale from there with DDR4.

Or is this increase in speed more than RAM itself but the CPUs tasks with it?


----------



## TheReciever

No one seems to recall Bradley's tests concerning RAM speeds I guess

Or at least very few here have seen it.


----------



## Blameless

Quote:


> Originally Posted by *bucdan*
> 
> I guess the margins and gains don't truly show until you go really high?


Any test that shows a greater than linear gain, or an accelerating gain, is evidence of a problem with the test.

Best case scenario in a completely bottlenecked test would be a linear gain with increasing performance of a given subsystem. More common would be diminishing returns as one bottleneck is relieved and others start to surface.


----------



## corky dorkelson

Quote:


> Originally Posted by *Blameless*
> 
> Any test that shows a greater than linear gain, or an accelerating gain, is evidence of a problem with the test.
> 
> Best case scenario in a completely bottlenecked test would be a linear gain with increasing performance of a given subsystem. More common would be diminishing returns as one bottleneck is relieved and others start to surface.


The non-linearity could come down to timings.


----------



## Slink3Slyde

Quote:


> Originally Posted by *TheReciever*
> 
> No one seems to recall Bradley's tests concerning RAM speeds I guess
> 
> Or at least very few here have seen it.


Been posted twice already.

There were also these.

http://www.overclock.net/t/1366657/ddr3-1600-vs-2133-is-there-a-difference-in-game

http://www.overclock.net/t/1438222/battlefield-4-ram-memory-benchmark


----------



## Blameless

Quote:


> Originally Posted by *corky dorkelson*
> 
> The non-linearity could come down to timings.


For a positive non-linear increase...only if the timings are getting _tighter_ as frequency is increasing.

Even getting linear memory-only performance increase would require no core/uncore bottleneck and fixed timings as frequency scaled up.

Getting a linear or better than linear overall performance increase from a less than linear memory performance increase, by itself, is impossible.


----------



## Dargonplay

Think I could get the same performance with a G.Skill DDR3 2400MHz Cas Latency 10? I'm stuck with my 5820K and I'm not upgrading anytime soon (4.6GHz is awesome) and I just bought the fastest DDR3 RAM available today, the aforementioned G.Skill 2400MHz CAS 10.

Are these tests affected by timings more than frequency? (Latency is relative to frequency, I know, but which is more important for gaming?) Because I'd like to think my G.Skill 2400MHz at CAS 10 could perform the same against that DDR4 4000MHz CAS 20 for gaming, right?


----------



## iRUSH

Quote:


> Originally Posted by *TheReciever*
> 
> No one seems to recall Bradley's tests concerning RAM speeds I guess
> 
> Or at least very few here have seen it.


It didn't get the popularity it deserved in my opinion. Great thread!


----------



## EniGma1987

For all the people bickering here about the usefulness of this, how about I buy a kit and test DDR4-3000 vs DDR4-4000 in a couple of very CPU-intensive games/situations that I come across when I play games with friends?


----------



## JoHnYBLaZe

For all those calling bull... wait...

This is 980 Ti SLI at 1440p... 1440p isn't high-res anymore where 980 Ti SLI is concerned, IMO.

So if anyone is gonna independently test this, use 980 Ti SLI or better at 1440p, please...

Not like the one guy who was testing a single Titan X at 5K; although it's nice he was able to shed more light on the subject... it's just not the same.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Dargonplay*
> 
> Think I could get the same performance with a G.Skill DDR3 2400MHz Cas Latency 10? I'm stuck with my 5820K and I'm not upgrading anytime soon (4.6GHz is awesome) and I just bought the fastest DDR3 RAM available today, the aforementioned G.Skill 2400MHz CAS 10.
> 
> Are these tests affected by timings more than frequency (It's relative latency to frequency, I know, but which is more important for gaming?) because I'd like to think my G.Skill 2400MHz at CAS 10 could perform the same against those DDR4 4000MHz CAS 20 for gaming, right?


Wait 5820K uses DDR4.


----------



## Ascii Aficionado

Quote:


> Originally Posted by *Newbie2009*
> 
> Makes my 1600mhz feel inadequate


/hides in a corner with his 1333


----------



## Dargonplay

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Wait 5820K uses DDR4.


Sometimes I really mess up.

Just checked and I've got a pair of 2133MHz CAS 14 DDR4 RAM on my setup, always thought it was DDR3

I ordered the G.Skill 2400MHz CAS 10 sticks yesterday, I will of course return them to get the fastest DDR4 available, but out of curiosity, would a 2400MHz CAS 10 be faster in games than a 4000MHz CAS 20?


----------



## Yuhfhrh

Quote:


> Originally Posted by *Dargonplay*
> 
> Sometimes I really mess up.
> 
> Just checked and I've got a pair of 2133MHz CAS 14 DDR4 RAM on my setup, always thought it was DDR3
> 
> I ordered the G.Skill 2400MHz CAS 10 sticks yesterday, I will of course return them to get the fastest DDR4 available, but out of curiosity, would a 2400MHz CAS 10 be faster in games than a 4000MHz CAS 20?


The G.Skill 3600 C15 sticks are the best DDR4 available right now, but you'll have trouble going past 3200 on X99. A 3200 C14 kit would probably suit you best.

2133C14 = 13.10ns
2400C10 = 8.33ns
3200C14 = 8.75ns
3600C15 = 8.33ns
4000C20 = 10.00ns

Keep in mind though, 4000MHz DDR4 has about 66% more memory bandwidth compared to 2400.
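Those figures follow from first-word latency = CAS cycles × 2000 / data rate (MT/s), since the memory clock runs at half the DDR data rate. A quick sketch to reproduce them (the kit list is just the one quoted above; the 13.10ns for 2133C14 looks like a slight rounding of 13.13ns):

```python
# First-word CAS latency in ns: cycle count divided by the memory clock,
# where the clock is half the DDR data rate (hence the factor of 2000).
def cas_latency_ns(data_rate_mts, cas_cycles):
    return cas_cycles * 2000 / data_rate_mts

for rate, cl in [(2133, 14), (2400, 10), (3200, 14), (3600, 15), (4000, 20)]:
    print(f"{rate}C{cl} = {cas_latency_ns(rate, cl):.2f}ns")

# Peak bandwidth scales linearly with data rate (4000/2400 - 1 = 66.7%,
# the "about 66%" figure quoted above):
print(f"4000 vs 2400 bandwidth: +{(4000 / 2400 - 1) * 100:.0f}%")
```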


----------



## michaelius

Quote:


> Originally Posted by *bucdan*
> 
> This is very interesting. Back then, people were doing tests for DDR3, and lots of tests concluded that RAM speed didn't matter much in video games, mind you this was for speeds between 1333, 1600, 1866, 2133, and 2400. I guess the margins and gains don't truly show until you go really high? I'm curious to see if they can test DDR3-1600, just to see the scale from there with DDR4.
> 
> Or is this increase in speed more than RAM itself but the CPUs tasks with it?


That says more about the quality of those tests and the hardware sites that did them. If you saw testing from people who knew what they were doing, the benefits were obvious even on DDR3.

AnandTech to this day tests RAM in GPU-limited test scenes and gets zero scaling.


----------



## Dargonplay

Quote:


> Originally Posted by *Yuhfhrh*
> 
> The G.skill 3600 C15 sticks are the best DDR4 available right now, but you'll have trouble going past 3200 on X99. A 3200C14 kit would probably suit you best.
> 
> 2133C14 = 13.10ns
> 2400C10 = 8.33ns
> 3200C14 = 8.75ns
> 3600C15 = 8.33ns
> 4000C20 = 10.00ns
> 
> Keep in mind though, 4000MHz DDR4 has about 66% more memory bandwidth compared to 2400.


Very interesting numbers, thank you.

Does bandwidth even matter for gaming, though? I've come to think that bandwidth after a certain point (1866MHz) really makes no difference.


----------



## Yuhfhrh

Quote:


> Originally Posted by *Dargonplay*
> 
> Very interesting numbers, thank you.
> 
> Does bandwidth even matter for gaming, though? I've come to think that bandwidth after a certain point (1866MHz) really makes no difference.


It's sensible to suspect latency is more important for gaming, but someone would need to test that specifically to verify. Compression/decompression/X264 loves memory bandwidth.


----------



## ToTheSun!

Quote:


> Originally Posted by *Dargonplay*
> 
> Does bandwidth even matter for gaming, though? I've come to think that bandwidth after a certain point (1866MHz) really makes no difference.


For sequential data, latency matters less and bandwidth matters more. It all depends on the specific workload. Can a game request data from memory in such a way that bandwidth would trump latency? Probably not. That's assuming memory tested at lower frequencies has commensurate latency by way of reduced cycles for requests. That doesn't seem to be the case for this particular review, which is, to put it tactfully, annoying.


----------



## BigMack70

Is this true also for X99 or just for the Z170 platform, though?


----------



## caenlen

Quote:


> Originally Posted by *BigMack70*
> 
> Is this true also for X99 or just for the Z170 platform, though?


Well, Techspot only tested four games... so there definitely needs to be more testing at this point... honestly not sure what to believe myself just yet, because I was always under the impression lower CAS latency was important, so CAS 15 at 3600 versus CAS 20 at 4000... just need more benches. lulz, I only care about gaming, and a lot of third-party tests are just for CPU stressing.


----------



## Imouto

Quote:


> Originally Posted by *Liranan*
> 
> This is the way CPUs were tested for years: drop everything to 800x600 to create CPU-limited scenarios and test which CPU was best. When raised to the average resolution people played at (1080p), that bottleneck disappeared, but that didn't stop people from using those resolutions.


Doesn't mean they did that testing right. If you think that 720p results on a GTX Titan X are relevant you're beyond help.

Same with min framerate figures. They're idiotic as they don't tell you how often they happen or how long they last. TR did a pretty good job in the past regarding this.


----------



## michaelius

Quote:


> Originally Posted by *Liranan*
> 
> This is the way CPUs were tested for years: drop everything to 800x600 to create CPU-limited scenarios and test which CPU was best. When raised to the average resolution people played at (1080p), that bottleneck disappeared, but that didn't stop people from using those resolutions.


That's not how cpus should be tested in modern games.

You can easily find CPU-limited places even at 1080p very high settings if the tester knows what they're doing.

http://www.purepc.pl/pamieci_ram/test_ddr3_vs_ddr4_jakie_pamieci_ram_wybrac_do_intel_skylake?page=0,18


----------



## Blameless

Quote:


> Originally Posted by *Dargonplay*
> 
> I will of course return them to get the fastest DDR4 available?


Going to have trouble putting anything past DDR4-3200 to use in a 24/7 stable setup on Haswell-E.
Quote:


> Originally Posted by *Dargonplay*
> 
> but out of curiosity, would a 2400MHz CAS 10 be faster in games than a 4000MHz CAS 20?


No, but it probably wouldn't be slower either.


----------



## sKorcheDeArtH

Stopped reading at ARMA 3!


----------



## caenlen

Quote:


> Originally Posted by *Imouto*
> 
> Doesn't mean they did that testing right. If you think that 720p results on a GTX Titan X are relevant you're beyond help.
> 
> Same with min framerate figures. They're idiotic as they don't tell you how often they happen or how long they last. TR did a pretty good job in the past regarding this.


rofl, thank you for checking this thread... I read that 720p thing too and just ignored him


----------



## Xuvial

Quote:


> Originally Posted by *caenlen*
> 
> fyi Samsung is doing 10nm DDR4 later this winter, that is what I am waiting for, plus a single 1080 ti and maybe an *i5 Kaby Lake*, I haven't decided yet


i7 or bust! It ain't 2011 anymore, when i5s were the bee's knees for gaming.


----------



## hokk

Quote:


> Originally Posted by *Xuvial*
> 
> i7 or bust! It ain't 2011 anymore, when i5s were the bee's knees for gaming.


Do you even know what you're saying?


----------



## pengs

Quote:


> Originally Posted by *Imouto*
> 
> Doesn't mean they did that testing right. If you think that 720p results on a GTX Titan X are relevant you're beyond help.


We could test GPU bound situations all day long but that's not getting anyone anywhere when it comes to proving that memory speed does anything for CPU sided situations. Honestly the OP made a bit of a thread crap there, who cares what only matters to him. Some people like to go beyond what is visible, I don't see a problem with that.


----------



## bonami2

ARMA 3 shows the same thing.

And the Battlefield 4 beta did it when the game was bugged and every CPU was at 100%.


----------



## Chaython

It's important to note this is only an 8GB kit, and most of these games can utilize over 8GB. If you compared a larger kit at a lower speed, the results would be better than the 8GB kit at that speed, because with the smaller capacity the memory has to recycle used space more often.


----------



## ILoveHighDPI

Quote:


> Originally Posted by *toncij*
> 
> I did bother and actually checked. Single scene, after load.
> 
> Doom - 5K, maxed out, TitanX
> 
> DDR4 - 2130MHz - 39-40 FPS
> DDR4 - 3200MHz - 39-40 FPS
> 
> Case closed AFAIC.


Do you have Just Cause 3?

I've been playing Doom at 2560x1440 on a 980Ti and my framerate rarely drops below 100fps.
Point being I find that game to be very consistent, probably not very taxing on the CPU in general (4.5ghz i5 4690K).

Just Cause 3 however is extremely CPU intensive in a few scenarios. When you blow up a bridge it's doing a bunch of Havok physics and usually drops me from 100fps to 60, and the big city keeps me at 60-70 fairly consistently.


----------



## EniGma1987

Quote:


> Originally Posted by *Chaython*
> 
> It's important to note this is only an 8gb kit, most of these games can utilize over 8gb,if you compared a larger kit of memory at a lower speed, the results would be better than the 8gb at that speed; because with this smaller capacity the memory will have to recycle used space more often.


This is system RAM. Please show me a game that uses more than 8GB of system RAM.


----------



## MaxFTW

Since Haswell, RAM has given performance increases of a few frames, so I now try to find genuinely better memory rather than the cool-looking kits that go well with the PC.

I'd like everyone to aim for at least 2800MHz for gaming. It depends on budget, as prices rise a fair bit at higher speeds these days, but you get approximately 6+ frames in every game. Might not sound like a lot, but it's totally worth it for a non-CPU/GPU upgrade; it varies with games though.

Still, I don't think anyone should buy the £999 RAM kits. You get to a point where you pay £250 for a good 32GB kit, and then it's £350 for the extra frame the slightly faster kit gives you.


----------



## TheReciever

Quote:


> Originally Posted by *EniGma1987*
> 
> This is system RAM. Please show me a game that uses more than 8GB of system RAM.


I use my PC for more than just a gaming terminal. I am constantly pegged at 8GB for system RAM when I play Siege or BDO or combo of the two.


----------



## EniGma1987

Quote:


> Originally Posted by *TheReciever*
> 
> I use my PC for more than just a gaming terminal. I am constantly pegged at 8GB for system RAM when I play Siege or BDO or combo of the two.


That is nice, and you would need more than 8GB of RAM then. But Chaython was saying lots of games use more than 8GB of system RAM by themselves, which is completely untrue. When these tests are done, there isn't a bunch of other stuff running and taking up a lot of memory, so his supposed situation, where the gains only show because the game was dumping things from memory all the time, isn't true at all.


----------



## TheReciever

This is with Rainbow Six Siege, and Office 2013 running.
Quote:


> Originally Posted by *EniGma1987*
> 
> That is nice, and you would need more than 8GB RAM then. But Chaython was saying lots of games use up more than 8GB of system RAM by themselves which is completely untrue. When these tests are being done, there is not a bunch of stuff during testing going on taking up a lot of memory, so his supposed situation that this only shows gains because the game was dumping things from memory all the time is not true at all.


----------



## EniGma1987

Quote:


> Originally Posted by *TheReciever*
> 
> 
> 
> This is with Rainbow Six Siege, and Office 2013 running.


Next tab over, show us how much RAM the game actually uses


----------



## TheReciever

Quote:


> Originally Posted by *EniGma1987*
> 
> Next tab over, show us how much RAM the game actually uses


3.9GB (Sorry, in game atm)


----------



## Eorzean

Gotta love all the armchair PC experts still spouting that higher frequency ram is pointless.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Eorzean*
> 
> Gotta love all the armchair PC experts still spouting that higher frequency ram is pointless.


Yeah, it's a pretty stupid thing many people have been saying for years. Same thing with many CPUs too.


----------



## Blameless

Quote:


> Originally Posted by *Eorzean*
> 
> Gotta love all the armchair PC experts still spouting that higher frequency ram is pointless.


Armchair PC experts? As opposed to...foxhole PC experts, front-line PC experts, or what exactly?


----------



## spinFX

Quote:


> Originally Posted by *caenlen*
> 
> Need to see tests with gtx 1080, new chip design plus GDDR5x and future hBM2 single gpu setups may still benefit the same...


Need to see these tests on more than one game. A test sample of 1 doesn't prove anything.


----------



## spinFX

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Yeah, it's a pretty stupid thing many people have been saying for years. Same thing with many CPUs too.


I think people say these things because for most people that is the case. (If I'm building a PC for a friend, I'm not gonna choose an i7 and overclocked RAM because the effects will not be noticed -- I upgraded a mate's computer from a 5400RPM HDD to a 250GB SSD and he didn't notice the difference!!)

Excluding my friends who are hardcore gamers, all the rest of the people I know that play games don't even know about or play with graphics settings. They just run the game with whatever settings it defaults to, or whatever Nvidia GeForce Experience pre-sets based on hardware. This is usually on the low end of the graphics scale, and they generally think the game looks awesome!


----------



## hokk

Quote:


> Originally Posted by *spinFX*
> 
> I think people say these things because for most people that is the case. (If I'm building a PC for a friend, I'm not gonna choose an i7 and overclocked RAM because the effects will not be noticed -- *I upgraded a mates computer from 5400RPM HDD to 250GB SSD and didnt notice the difference*!!)
> 
> Excluding my friends who are hard core gamers, all the rest of the people I know that play games don't even know about or play with graphics settings. They just run the game with whatever settings it defaults to, or whatever NvidiaGeforceExperience pre-sets based on hardware. This is usually on the low end of the scale of graphics, and they generally think the game looks awesome!


I don't believe you friendo


----------



## spinFX

Quote:


> Originally Posted by *kylzer*
> 
> I don't believe you friendo


What is there not to believe? I switched out HDD for SSD, owner of computer couldn't tell the difference. lol
I ran a benchmark to show him the difference in the numbers, but to him it's just numbers.
It was a birthday gift also, so I was kinda bummed when he said he "honestly can't tell the difference"


----------



## dagget3450

Would have been nice to see an AMD gpu tested with this as well.


----------



## Cyclops

A typical low end 2133 kit runs at 15-15-15-35 timings.

This kit has the same timings but is running at 3600 MHz. You'd be foolish to think that it would not help increase your performance.


----------



## XLifted

Quote:


> Originally Posted by *Nehabje*
> 
> This seems pretty huge. Any other games that seem to get these benefits from higher clocked ram?


I had 1600MHz RAM and overclocked it to 2400 with slower timings, because it would NEVER run that high otherwise.

Got an FPS improvement of about 10-15 frames in Battlefield 4/Hardline and Rainbow Six Siege.

So I can actually see how that's VERY possible with 4000MHz RAM, ESPECIALLY on new games.


----------



## duganator

I thought this was fairly well known. There have been several threads created on this website discussing RAM speed and how it affects frame rates. The biggest change is minimum frame rate.


----------



## spinFX

Quote:


> Originally Posted by *Cyclops*
> 
> A typical low end 2133 kit runs at 15-15-15-35 timings.
> 
> This kit has the same timings but is running at 3600 MHz. You'd be foolish to think that it would not help increase your performance.


Was anyone ever arguing about the increase in performance? Or were they saying that the increase in performance is not worth the $$$ for the faster RAM, to gain a few FPS here and there?
Would you not see the same sort of performance increase if you took the money you were going to spend on the faster RAM and bought a faster GPU instead?


----------



## Cyclops

Quote:


> Originally Posted by *spinFX*
> 
> Was anyone ever arguing about the increase in performance? Or were they saying that the increase in performance is not worth the $$$ for the faster RAM, to gain a few FPS here and there?
> Would you not see the same sort of performance increase if you took the money you were going to spend on the faster RAM and bought a faster GPU instead?


What if you have the fastest GPU? We're not talking about mid range PCs when 4000 MHz ram is involved.


----------



## duganator

Typically the cost to go from the cheapest RAM possible to something pretty fast isn't enough to step up a GPU level.
Quote:


> Originally Posted by *spinFX*
> 
> Was anyone ever arguing about the increase in performance? Or were they saying that the increase in performance is not worth the $$$ for the faster RAM, to gain a few FPS here and there?
> Would you not see the same sort of performance increase if you took the money you were going to spend on the faster RAM and bought a faster GPU instead?


----------



## Liranan

Quote:


> Originally Posted by *duganator*
> 
> Typically the cost to go from the cheapest ram possible, to something pretty fast isn't enough to step up a gpu level.


Depending on the RAM it can very well be.


----------



## Unkzilla

Just before I built my Skylake system I saw something on Digital Foundry about the benefits of DDR4-3000. I think The Witcher 3 especially got a pretty decent boost.

Ended up with DDR4-3200 as thats what was in stock and on the QVL for my motherboard. Kind of wishing I went even faster now


----------



## toncij

Quote:


> Originally Posted by *ILoveHighDPI*
> 
> Do you have Just Cause 3?
> 
> I've been playing Doom at 2560x1440 on a 980Ti and my framerate rarely drops below 100fps.
> Point being I find that game to be very consistent, probably not very taxing on the CPU in general (4.5ghz i5 4690K).
> 
> Just Cause 3 however is extremely CPU intensive in a few scenarios. When you blow up a bridge it's doing a bunch of Havok physics and usually drops me from 100fps to 60, and the big city keeps me at 60-70 fairly consistently.


I did:

Same settings - the difference was 1-2 FPS above 70. That 1-2 FPS I can attribute to measurement variability.

(4.75GHz 5960X, Titan X @ 1.55GHz, 128GB DDR4)


----------



## ToTheSun!

Quote:


> Originally Posted by *Cyclops*
> 
> A typical low end 2133 kit runs at 15-15-15-35 timings.
> 
> This kit has the same timings but is running at 3600 MHz. You'd be foolish to think that it would not help increase your performance.


That 3600 MHz memory has better latency, too, not just bandwidth. The increased performance in gaming would, most likely, result from the former.


----------



## toncij

Quote:


> Originally Posted by *ToTheSun!*
> 
> That 3600 MHz memory has better latency, too, not just bandwidth. The increased performance in gaming would, most likely, result from the former.


Not really. CL15 is usually the best you can get with DDR4.


----------



## ToTheSun!

Quote:


> Originally Posted by *toncij*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ToTheSun!*
> 
> That 3600 MHz memory has better latency, too, not just bandwidth. The increased performance in gaming would, most likely, result from the former.
> 
> 
> 
> Not really. CL15 is usually best you can get at DDR4.
Click to expand...

3600 MHz CL15 has better latency than 2133 MHz CL15.

Which was my point.


----------



## Stewart=B

Quote:


> Originally Posted by *ToTheSun!*
> 
> 3600 MHz CL15 has better latency than 2133 MHz CL15.
> 
> Which was my point.


Do the X99 boards actually take that frequency of RAM generally? Surely around 3200 is maxing out most motherboards, no? I have not seen a motherboard yet that can take 4266 RAM.


----------



## Lays

Quote:


> Originally Posted by *toncij*
> 
> Not really. CL15 is usually best you can get at DDR4.


As mentioned above, C15 at 3600MHz has a much lower latency than C15 at 2133.

As MHz goes up, if the CAS latency stays the same, the overall latency in nanoseconds goes down.
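To put numbers on that: holding CL fixed while raising the data rate shortens each cycle, so the absolute latency drops. A minimal sketch (the CL9 figure at the end is just the cycle count a 2133 kit would hypothetically need to match 3600C15, not a real kit):

```python
# Each CAS cycle lasts 2000 / data_rate ns, so the same CL at a higher
# data rate means less wall-clock latency.
def cas_ns(data_rate_mts, cl):
    return cl * 2000 / data_rate_mts

print(cas_ns(2133, 15))  # ~14.07ns
print(cas_ns(3600, 15))  # ~8.33ns -- same CL, much lower latency

# Cycles a 2133 kit would need to match 3600C15 in absolute latency:
print(cas_ns(3600, 15) * 2133 / 2000)  # ~8.9, i.e. roughly CL9
```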


----------



## Lays

Quote:


> Originally Posted by *Stewart=B*
> 
> Do the X99 boards actually take that frequency ram generally, surely around 3200 is maxing out most motherboards no? I have not seen a motherboard yet which can take 4266 ram..


You won't see much higher than 3200-3400 on x99 for 24/7 usage.

SOME z170 boards can handle stuff that high, the Z170M OC Formula is one of the stand-out ones. It can handle 4000+ no problem if the IMC is up to it.

Obviously I can't run this 24/7 but this is a little taste of what it's capable of:


----------



## Kana Chan

Is that on air?


----------



## Lays

Quote:


> Originally Posted by *Kana Chan*
> 
> Is that on air?


Yes with a 120mm fan blowing on the RAM


----------



## Klocek001

That's cool, but 16GB of DDR4-4000 costs as much as a new 6600K.

The best way would be to get an 8GB (2x4GB) 4200MHz kit, then add another 8GB when prices drop.


----------



## ToTheSun!

Quote:


> Originally Posted by *Klocek001*
> 
> That's cool, but 16GB DDR4 4000 costs as much as a new 6600K
> 
> best way would be to get a 8GB 2x4GB 4200MHz kit, then add another 8GB when prices drop


Or 16 GB worth of 3466 MHz with the best latency of the bunch for a fraction of the price.


----------



## Silent Scone

Quote:


> Originally Posted by *Klocek001*
> 
> That's cool, but 16GB DDR4 4000 costs as much as a new 6600K
> 
> 
> best way would be to get a 8GB 2x4GB 4200MHz kit, then add another 8GB when prices drop


Mixing kits at that frequency is a really dumb idea for most users, especially those who are inherently in it for gaming and aren't versed with memory.

Quote:


> Originally Posted by *Lays*
> 
> You won't see much higher than 3200-3400 on x99 for 24/7 usage.
> 
> SOME z170 boards can handle stuff that high, the Z170M OC Formula is one of the stand-out ones. It can handle 4000+ no problem if the IMC is up to it.
> 
> Obviously I can't run this 24/7 but this is a little taste of what it's capable of:


I can run 4133 24/7 on the ASUS Impact VIII. The latency there is a restriction of the IC, though. Quite an old 8GB kit on older Samsung die. I think beyond 4133 you may struggle to get some kits unconditionally stable. Skylake has a really strong IMC so it's hard to tell.


----------



## ZealotKi11er

Quote:


> Originally Posted by *spinFX*
> 
> I think people say these things because for most people that is the case. (If I'm building a PC for a friend, I'm not gonna choose an i7 and overclocked RAM because the effects will not be noticed -- I upgraded a mates computer from 5400RPM HDD to 250GB SSD and didnt notice the difference!!)
> 
> Excluding my friends who are hard core gamers, all the rest of the people I know that play games don't even know about or play with graphics settings. They just run the game with whatever settings it defaults to, or whatever NvidiaGeforceExperience pre-sets based on hardware. This is usually on the low end of the scale of graphics, and they generally think the game looks awesome!


I built an i5 6500 system recently for a friend. I used DDR4-2400. Yes, RAM speed only matters for people who want the very maximum fps and are CPU-limited. His older system was an X6 1055T + 8GB DDR3-1600 + 500GB HDD; now he has an i5 6500 + 16GB DDR4-2400 + 240GB SSD. He was amazed at the speed difference. Your friend probably has no clue what he's doing if he can't notice the difference.


----------



## GamerusMaximus

I want to know why skylake needed 3000MHz ddr4 to match haswell with ddr3 2133. Anybody else notice that?


----------



## EniGma1987

Quote:


> Originally Posted by *spinFX*
> 
> Was anyone ever arguing about the increase in performance? Or were they saying that the increase in performance is not worth the $$$ for the faster RAM, to gain a few FPS here and there?
> Would you not see the same sort of performance increase if you took the money you were going to spend on the faster RAM and bought a faster GPU instead?


There were definitely people saying it made absolutely no difference in performance at all:
@Silent Scone @toncij @mothergoose729 @outofmyheadyo

Quote:


> Originally Posted by *Stewart=B*
> 
> Do the X99 boards actually take that frequency ram generally, surely around 3200 is maxing out most motherboards no? I have not seen a motherboard yet which can take 4266 ram..


You are right, they do struggle to go much higher than 3200 from what I hear. Skylake Z170 boards on the other hand can get pretty high a lot of the time.

Quote:


> Originally Posted by *GamerusMaximus*
> 
> I want to know why skylake needed 3000MHz ddr4 to match haswell with ddr3 2133. Anybody else notice that?


The DDR3-2133 memory had far lower latency than DDR4 memory until right around the 3000MHz area


----------



## ToTheSun!

Quote:


> Originally Posted by *GamerusMaximus*
> 
> I want to know why skylake needed 3000MHz ddr4 to match haswell with ddr3 2133. Anybody else notice that?


Probably because DDR3 2133 MHz had CL values of 8 or 9 while DDR4 memory usually takes more cycles for requests. DDR4 needs more frequency to go through the number of cycles required to match the overall latency of DDR3, but that's ok because 2133 MHz was already approaching the limits of DDR3 while DDR4 seems to have more breathing room, which ends up netting similar latency with better bandwidth and better power profiles.
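Plugging typical retail timings into the same first-word-latency formula illustrates the crossover (the CL values here are common retail examples, not figures from the article):

```python
# First-word latency in ns = CL * 2000 / data rate (MT/s).
def first_word_ns(data_rate_mts, cl):
    return cl * 2000 / data_rate_mts

print(first_word_ns(2133, 9))   # DDR3-2133 CL9:  ~8.44ns
print(first_word_ns(2133, 15))  # DDR4-2133 CL15: ~14.07ns -- why entry DDR4 lagged
print(first_word_ns(3200, 14))  # DDR4-3200 CL14: ~8.75ns -- back in DDR3 territory
```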


----------



## Silent Scone

Quote:


> Originally Posted by *EniGma1987*
> 
> There were definitely people saying it made absolutely no difference in performance at all:
> @Silent Scone @toncij @mothergoose729 @outofmyheadyo
> You are right, they do struggle to go much higher than 3200 from what I hear. Skylake Z170 boards on the other hand can get pretty high a lot of the time.
> The DDR3-2133 memory had far lower latency than DDR4 memory until right around the 3000MHz area


Not surprising given it's a different architecture and also running an additional two channels. Check my sig, that's with Samsung B die. Anything above 3200 is tricky to get unconditionally stable on Haswell-E. Broadwell-E is not likely to be much different.


----------



## GamerusMaximus

Quote:


> Originally Posted by *ToTheSun!*
> 
> Probably because DDR3 2133 MHz had CL values of 8 or 9 while DDR4 memory usually takes more cycles for requests. DDR4 needs more frequency to go through the number of cycles required to match the overall latency of DDR3, but that's ok because 2133 MHz was already approaching the limits of DDR3 while DDR4 seems to have more breathing room, which ends up netting similar latency with better bandwidth and better power profiles.


I see, so it's the same as ddr2 400MHz-533 and ddr3 800-1066MHz.


----------



## Silent Scone

Quote:


> Originally Posted by *GamerusMaximus*
> 
> I see, so it's the same as ddr2 400MHz-533 and ddr3 800-1066MHz.


The two technologies cannot be compared in any sensible manner due to the advancements present in DDR4. I've said it multiple times in the past: most people who constantly make these comparisons on forums like this have no idea what bank groups are, for example. The blind leading the blind.


----------



## Blameless

Every new generation of DDR has enhancements that go beyond pure external frequency increases, and DDR4 has a few especially noteworthy advantages over its predecessor (more banks for deeper interleaving, and bank groups for faster burst accesses), which makes direct comparisons difficult. However, since we have a platform (Skylake) that supports DDR3(L) as well as DDR4, we can compare the two technologies on even footing and get a rough idea of how they compare at different clocks and timings.

Example:










http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/7

The new DDR4 changes seem to do a pretty good job mitigating any performance loss from looser timings relative to DDR3.


----------



## GamerusMaximus

Quote:


> Originally Posted by *Silent Scone*
> 
> The two technologies can not be compared in any sensible manner due to advancements present in DDR4. I've said it multiple times in the past, most people who constantly make these comparisons on forums like this have no idea what bank groups are for example. The blind following the blind.


Wow, I was simply comparing low-speed DDR4 to low-speed DDR and DDR2, in that they offered worse bandwidth than their predecessors did; the jump from DDR3 to DDR4 just seemed a bit larger in terms of the gulf at the same speeds. But way to jump down my throat and insinuate I know nothing about computer memory.


----------



## corky dorkelson

Quote:


> Originally Posted by *Ascii Aficionado*
> 
> /hides in a corner with his 1333


I'd be more embarrassed by the stock 2500K on aftermarket cooling with a P67 mobo!


----------



## Ascii Aficionado

Quote:


> Originally Posted by *corky dorkelson*
> 
> I'd be more embarrassed by the stock 2500K on aftermarket cooling with a P67 mobo!


It got to the point where, every month, running at 4.5 required bumping the vcore up slightly, and eventually I couldn't get it stable anymore. I decided it wasn't worth the effort, as nothing except Warcraft (with lots of people in a city and the ultra shadow setting) was benefiting from it.


----------



## Silent Scone

Quote:


> Originally Posted by *GamerusMaximus*
> 
> Wow, I was simply comparing low speed ddr4 to low speed ddr and ddr2 in that they offered worse bandwidth than their predecessors did, except the jump from ddr3 to ddr4 seemed to be a bit larger in terms of the gulf between the same speeds, but way to jump down my throat and insinuate I know nothing about computer memory.


No need to get on the defensive; it was a generalisation aimed more at the post you were replying to.


----------



## ToTheSun!

Quote:


> Originally Posted by *Silent Scone*
> 
> No need to get up on the defensive, it was a generalisation more so aimed at the post you were replying to.


A generalisation, indeed, as you seemed to assume gaming performance relied heavily on bank groups.


----------



## Kana Chan

Quote:


> Originally Posted by *Klocek001*
> 
> That's cool, but 16GB DDR4 4000 costs as much as a new 6600K
> 
> 
> 
> 
> 
> 
> 
> 
> best way would be to get a 8GB 2x4GB 4200MHz kit, then add another 8GB when prices drop


There's a 2x8GB 4266MHz kit coming in June at 19-23-23-43 (better than the 2x4GB at 19-26-26-46).

Quote:


> Originally Posted by *Lays*
> 
> Yes with a 120mm fan blowing on the RAM


And the CPU is on air too?

Nice 4000 cas 12
Is it just the ram not being able to run 24/7 or the cpu too?


----------



## Lays

Quote:


> Originally Posted by *Kana Chan*
> 
> There's a 2x8GB 4266mhz kit in June at 19-23-23-43 ( better than the 2x4GB at 19-26-26-46 )
> And the CPU is on air too?
> 
> Nice 4000 cas 12
> Is it just the ram not being able to run 24/7 or the cpu too?


The CPU is on water. Usually I run 4.8 24/7, but I think if I gave it more volts it'd do 5GHz 24/7; I don't really want to run that high a voltage 24/7 though.

The RAM will only run that high if maxmem in Windows is set to like 1-2GB, as B-die memory chips don't like being clocked high when lots of the RAM is in use. Plus those timings and speeds need near 2v, so benching is basically all those speeds are good for.


----------



## Imouto

Quote:


> Originally Posted by *Eorzean*
> 
> Gotta love all the armchair PC experts still spouting that higher frequency ram is pointless.


As opposed to armchair PC experts saying that it does?

- You do need a CPU that benefits from higher RAM speeds.
- You need enough GPU power to benefit from higher RAM speeds.
- You need a game that benefits from higher RAM speeds.
- You need that increment in performance to be noticeable (VRR monitor or that performance maintaining the FPS higher than a V-Sync breakpoint).
- You may need to combine two or more of the above requirements to see any performance improvement.

Now let's move on to the testing:

- Min FPS figures are meaningless without context.
- Min FPS figures may have been reached just once in the whole benchmark.
- Min FPS figures may have been reached for a very short time in the whole benchmark.
- The frequency of FPS dips isn't known.
- Any combination of the above.
- Average FPS figures may be skewed by really high max FPS in areas that render them pointless.
- Extremely poor game selection.

*So higher RAM speeds improve performance in very limited scenarios. That only, and only, if we pass this crappy testing as valid.*

Now let's move to the economic side of things if we take this testing as any good:

- The sweet spot seems to be at DDR4 3000.
- Going from DDR4-3000 to DDR4-3600: 44% more money on the RAM yields 15% better performance in the best-case scenario, 0% in the worst. (Scenarios explained above.)
- Going from DDR4-3000 to DDR4-4000: 133% more money on the RAM yields 19% better performance in the best-case scenario, 0% in the worst. (Scenarios explained above.)
- Going from DDR4-3600 to DDR4-4000: 61% more money on the RAM yields 5% better performance in the best-case scenario, 0% in the worst. (Scenarios explained above.)

Now, if you dropped $1300 on GPUs, be my guest and drop some more on the RAM, but don't generalize and say that RAM speed improves performance, because it depends on a lot of factors and we are yet to see any proper testing.
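The marginal-cost point above can be sketched as a quick calculation, using only the percentages quoted in the post (prices at the time; purely illustrative):

```python
# Cost-per-performance for each RAM upgrade step, using the quoted
# best-case gains; the 0% worst case makes the ratio unbounded there.
steps = [
    ("DDR4-3000 -> 3600", 44, 15),   # % extra cost, % best-case gain
    ("DDR4-3000 -> 4000", 133, 19),
    ("DDR4-3600 -> 4000", 61, 5),
]
for label, extra_cost, gain in steps:
    print(f"{label}: {extra_cost / gain:.1f}% extra cost per 1% gain")
```

The last step is by far the worst value: roughly 12% extra spend per best-case percent of performance, versus about 3% for the 3000-to-3600 step.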


----------



## Hl86

Would this increase fps in cpu limited games like company of heroes 2 and world of warcraft?


----------



## djriful

"RAM doesn't improve FPS!!!, it's fake! misconception!"


----------



## Silent Scone

Quote:


> Originally Posted by *ToTheSun!*
> 
> A generalisation, indeed, as you seemed to assume gaming performance relied heavily on bank groups.


Try not to confuse yourself here. I was talking about people comparing latency across older memory architectures without even looking up the changes within.

If you're going to try and be clever, I suggest maybe thinking before replying lol.


----------



## Bloodcore

Quote:


> Originally Posted by *Hl86*
> 
> Would this increase fps in cpu limited games like company of heroes 2 and world of warcraft?


Haven't tried Company of Heroes, but I've seen higher RAM speeds have a great effect on minimum FPS in ARMA 3, which is heavily CPU limited.

Edit: Look at this. (Not minimum FPS though.)


http://www.hardware.fr/articles/940-5/cpu-ddr4-vs-ddr3-pratique.html


----------



## ILoveHighDPI

Quote:


> Originally Posted by *toncij*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ILoveHighDPI*
> 
> Do you have Just Cause 3?
> 
> I've been playing Doom at 2560x1440 on a 980Ti and my framerate rarely drops below 100fps.
> Point being I find that game to be very consistent, probably not very taxing on the CPU in general (4.5ghz i5 4690K).
> 
> Just Cause 3 however is extremely CPU intensive in a few scenarios. When you blow up a bridge it's doing a bunch of Havok physics and usually drops me from 100fps to 60, and the big city keeps me at 60-70 fairly consistently.
> 
> 
> 
> I did:
> 
> Same settings - difference of 1-2 FPS above 70, which I can attribute to measurement variability.
> 
> (4.75GHz 5960X, Titan X @ 1.55GHz, 128GB DDR4)
Click to expand...

Awesome, thanks for that.
Interesting that the extra four cores in the 5960X don't translate into extra framerate. At launch the JC3 benchmarks seemed to indicate it would scale pretty well with more cores; I'm guessing the CPU-heavy areas are still limited by a few threads.
I'm still on 1600MHz DDR3 (CL7), and the benchmarks I read when I put that system together indicated basically the same thing: you get one or two FPS more from faster RAM. If Just Cause 3 doesn't care about RAM speed, then basically nothing I care about will be affected by RAM speed for a long time to come.

The only case I have seen where RAM speed actually matters is in my AMD APU, when the GPU is sharing system RAM then higher frequencies make a big difference.


----------



## ToTheSun!

Quote:


> Originally Posted by *Silent Scone*
> 
> If you're going to try and be clever


You're projecting.


----------



## TheReciever

Quote:


> Originally Posted by *Silent Scone*
> 
> Try not to confuse yourself here. I was talking about people comparing latency across older memory architectures without even looking up the changes within.
> 
> If you're going to try and be clever, I suggest maybe thinking before replying lol.


Making distinctions would have alleviated the whole issue tbh and is a growing problem in the forum it seems


----------



## Luxer

So i guess this settles the speed vs CAS debate.


----------



## Silent Scone

Quote:


> Originally Posted by *Luxer*
> 
> So i guess this settles the speed vs CAS debate.


It was never unsettled, just noise from those who had not tried DDR4 at these speeds.


----------



## Cakewalk_S

Gonna try 2133MHz on my Samsung miracle sticks tonight... 1866MHz @ 1.4v seems to be an easy feat...


----------



## toncij

Quote:


> Originally Posted by *ILoveHighDPI*
> 
> Awesome, thanks for that.
> Interesting that the extra four cores in the 5960x don't translate into more extra framerate, at launch the JC3 benchmarks seemed to indicate it would scale pretty well with more cores, I'm guessing the CPU heavy areas are still limited by a few threads.
> I'm still on 1600mhz DDR3 (CL7) and the benchmarks I read at the time I put together that system indicated basically the same thing, you get one or two FPS more for faster RAM. If Just Cause 3 doesn't care about RAM speed then basically nothing I'm going to care about will be affected by RAM speed for a long time to come.
> 
> The only case I have seen where RAM speed actually matters is in my AMD APU, when the GPU is sharing system RAM then higher frequencies make a big difference.


Until DirectX 12/Mantle/Vulkan, more cores won't make a difference: the CPU can submit work to the GPU from only a single core at a time. The new APIs change that. The Vulkan patch should give more performance there, but since a 5960X at 4.4 is only about 25% utilized, it has plenty of headroom and won't actually feel any boost. You'd have to be CPU limited to benefit.


----------



## Blameless

Quote:


> Originally Posted by *Luxer*
> 
> So i guess this settles the speed vs CAS debate.


Quote:


> Originally Posted by *Silent Scone*
> 
> It was never unsettled, just noise from those who had not tried DDR4 at these speeds


Depends on what you mean by settled.

I don't think it was ever seriously in question that if you could choose between a significant increase in frequency or a moderate tightening of timings that the higher frequency would be the better option, but things are much less distinct when frequency adjustments are more subtle.

Often I've had memory where pushing for the absolute maximum frequency resulted in less performance than the optimum combination of frequency and timing adjustments.

I've got DDR3 systems with Samsung ICs that I could push to 2400MT/s with difficulty, but they'd be slower than they are with the memory at 2133 or sometimes even 1866, because of the degree I'd have to loosen timings. Same trend is even more apparent with my current DDR4 in my signature system, I run 2667 because that's where the memory performs best. I can run 2800, with difficulty, but 2800 CL15 or 16 with slack sub-timings is measurably slower, in literally everything that shows any difference at all, than 2667 CL12 with ultra tight sub-timings.

Something like 2133 CL11 vs. 3200 CL15 is obvious, but when the frequencies are closer, things become fuzzy.
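The 2133 CL11 vs. 3200 CL15 example can be put in numbers. A rough sketch comparing first-word latency and peak per-channel bandwidth (primary CAS only, ignoring the sub-timings the post rightly flags as important):

```python
def first_word_ns(cl: int, mt_s: int) -> float:
    # CL cycles at the memory clock (half the MT/s transfer rate)
    return cl / (mt_s / 2) * 1000

def peak_bandwidth_gbs(mt_s: int, bus_bytes: int = 8) -> float:
    # one 64-bit channel moves 8 bytes per transfer
    return mt_s * bus_bytes / 1000

for cl, mt in [(11, 2133), (15, 3200)]:
    print(f"{mt} CL{cl}: {first_word_ns(cl, mt):.2f} ns, "
          f"{peak_bandwidth_gbs(mt):.1f} GB/s per channel")
```

Here 3200 CL15 wins on both latency (~9.4 ns vs ~10.3 ns) and bandwidth, which is why that comparison is obvious; closer frequencies with offsetting timings are where it gets fuzzy.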

Tests are often complicated by the fact that almost no one seems to adjust secondary or tertiary timings when conducting them, and all together these can be a pretty big deal.


----------



## mothergoose729

Quote:


> Originally Posted by *Blameless*
> 
> Depends on what you mean by settled.
> 
> I don't think it was ever seriously in question that if you could choose between a significant increase in frequency or a moderate tightening of timings that the higher frequency would be the better option, but things are much less distinct when frequency adjustments are more subtle.
> 
> Often I've had memory were pushing for absolute maximum frequency resulted in less performance than the optimum combination of frequency and timing adjustments possible.
> 
> I've got DDR3 systems with Samsung ICs that I could push to 2400MT/s with difficulty, but they'd be slower than they are with the memory at 2133 or sometimes even 1866, because of the degree I'd have to loosen timings. Same trend is even more apparent with my current DDR4 in my signature system, I run 2667 because that's where the memory performs best. I can run 2800, with difficulty, but 2800 CL15 or 16 with slack sub-timings is measurably slower, in literally everything that shows any difference at all, than 2667 CL12 with ultra tight sub-timings.
> 
> Something like 2133 CL11 vs. 3200 CL15 is obvious, but when the frequencies are closer, things become fuzzy.
> 
> Tests are often complicated by the fact that almost no one seems to adjust secondary or tertiary timings when conducting them, and all together these can be a pretty big deal.


Sub-timings do matter. They can net you an additional 5% of performance, sometimes more, with the same CAS timings and bandwidth. I tested it years ago with DDR2 on an ASUS board and was able to measure the results. With that platform I increased performance in some benchmarks by as much as 8%.


----------



## boredgunner

Quote:


> Originally Posted by *Kana Chan*
> 
> There's a 2x8GB 4266mhz kit in June at 19-23-23-43


I'll probably buy this when the price comes down a bit... or something even better maybe. Tests like these make me curious.


----------



## TranquilTempest

There are a whole lot of variables that determine whether you see a benefit. Most of it comes down to how accurately the CPU prefetches the information it needs, and whether the memory interface is saturated. If you're completely bandwidth limited, latency isn't going to matter much; if you have perfect prefetching, latency isn't going to matter much either. If neither of those is true, latency matters just as much as bandwidth.

If you're GPU limited, you should be setting an in game framerate cap to reduce buffering.


----------



## mutantmagnet

Quote:


> Originally Posted by *Luxer*
> 
> So i guess this settles the speed vs CAS debate.


Not yet. Back when DDR and DDR2 significantly affected FPS, reviewers could hit CL 1.5 and CL 2 respectively for each type.

I've never seen a reviewer go below CL 9 for DDR3. I've always wanted to see what would happen when hitting CL 6 on DDR3 or CL 12 on DDR4.


----------



## Dargonplay

Quote:


> Originally Posted by *mothergoose729*
> 
> Sub timings do matter. They can net you an additional 5% more performance, sometimes more, with the same CAS timings and bandwidth. A tested it years ago with DDR2 on an ASUS board and I was able to measure the results. With that platform I increased performance in some benchmarks by as much as 8%.


Doesn't choosing an XMP profile already get you the best sub-timings on your sticks?


----------



## Chaython

Quote:


> Originally Posted by *EniGma1987*
> 
> That is nice, and you would need more than 8GB RAM then. But Chaython was saying lots of games use up more than 8GB of system RAM by themselves which is completely untrue. When these tests are being done, there is not a bunch of stuff during testing going on taking up a lot of memory, so his supposed situation that this only shows gains because the game was dumping things from memory all the time is not true at all.


Skyrim with mods, Fallout 4, ARMA; games at launch constantly have memory leaks, and extra RAM just delays the crash.


----------



## Lays

Quote:


> Originally Posted by *Dargonplay*
> 
> Choosing a XMP profile doesn't already get you the best sub timings on your sticks?


Relevant meme:


----------



## artemis2307

Guys, note that this test is with 2x 980 Ti, probably overclocked.
In these cases, of course faster RAM will ease the bottleneck between CPU and GPU.
Quote:


> Originally Posted by *mutantmagnet*
> 
> Not yet. When DDR and DDR2 significantly affected FPS reviewers could hit 1.5 and 2 CL respectively for each type.
> 
> I've never seen a reviewer hit below 9 for DDR3. I've always wanted to see what would happen when achieving at least 6 CL on DDR3 or 12 CL on DDR4.


My Mushkin Redline Extreme kit is running 2400 C9 / 1600 C7; it would probably boot at 1400 C6 though.


----------



## Kana Chan

Quote:


> Originally Posted by *artemis2307*
> 
> Guys note that this test is with 2x 980ti and probably OCed
> in these cases of course faster RAM will eliminate the bottleneck btw cpu and gpu
> my Mushkin redline Extreme kit is running 2400 c9/1600 c7, probably boot with 1400 c6 tho


Are you running them at 2400C9 or 1400C6? The former should be much faster.

In latency terms (ns): 2400C9 is 7.500 / 8.750 / 10.417 vs 8.571 / 10.714 / 13.571 for 1400C6.
1600C6 is only 7.500 / 9.375 / 11.875.
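Those triples look like the primary timings converted to nanoseconds: cycles divided by the memory clock, i.e. half the MT/s rate. A sketch under that assumption:

```python
def timing_ns(cycles: float, mt_s: int) -> float:
    # a timing of N cycles at the memory clock = N * 2000 / MT/s ns
    return cycles * 2000 / mt_s

print(round(timing_ns(9, 2400), 3))  # CAS at 2400 C9 -> 7.5 ns
print(round(timing_ns(6, 1400), 3))  # CAS at 1400 C6 -> 8.571 ns
print(round(timing_ns(6, 1600), 3))  # CAS at 1600 C6 -> 7.5 ns
```

These reproduce the first number of each quoted triple, which is why 2400C9 ends up faster in absolute time despite the much higher CL.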


----------



## Cakewalk_S

No difference in gaming on my Sandy Bridge 2500K between 2133MHz and 1600MHz memory. MaxxMEM benchmarks were definitely higher once I got them stable.


----------



## Blameless

Quote:


> Originally Posted by *Dargonplay*
> 
> Choosing a XMP profile doesn't already get you the best sub timings on your sticks?


Most XMP profiles have exceedingly loose timings to make sure the sticks run at rated speed.


----------



## EniGma1987

Quote:


> Originally Posted by *artemis2307*
> 
> Guys note that this test is with 2x 980ti and probably OCed
> in these cases of course faster RAM will eliminate the bottleneck btw cpu and gpu
> my Mushkin redline Extreme kit is running 2400 c9/1600 c7, probably boot with 1400 c6 tho


I wonder if it is actually running at a CAS of 7 or 6, or if the BIOS simply labels it as that regardless of what it really is. I remember back in the fun days of overclocking when we had things like DFI boards with timings of 1, 1.5, and sometimes even 0 for things like tRCD. Didn't mean they actually ran at 0 clock cycles for tRCD. The BIOS can label things however it wants for the end user to see.


----------



## Blameless

Quote:


> Originally Posted by *EniGma1987*
> 
> I wonder if it is actually running at CAS of 7 or 6? Or if the bios simply labels it as that regardless of what it really is.


CL 6 and 7 isn't hard to achieve on some DDR3 at lower frequencies. 1600 CL7 was everywhere, and a fair number of ICs would do CL6 at 1600 or below. There were even kits that ran CL5 above 1333MT/s.

Generally, any primary timing of 5 or higher is almost certainly going to be actually set to that number on almost any board, because all the way down to 5-5-5 is within JEDEC specs for lower end DDR3.

Of course, you can always bench it to make sure.
Quote:


> Originally Posted by *EniGma1987*
> 
> I remember back in the fun days of overclocking when we had things like DFI boards with timings of 1, 1.5, and sometimes even 0 for things like tRCD. Didnt mean they actually ran at 0 clock cycled for tRCD. Bios can label things however they want for the end user to see.


There were actual registers for some of these low timings, and some of them did work as expected, with benchable differences; many did not, though. A lot of the zeros were actually 1s, and most later DDR ignored CAS 1.5.


----------



## mothergoose729

Quote:


> Originally Posted by *Dargonplay*
> 
> Choosing a XMP profile doesn't already get you the best sub timings on your sticks?


It's pretty close, but every stick is capable of something different. You can squeeze out a bit more, especially in the timings that _don't_ show up in the tech specs (there are a lot of them).

Requires many hours of testing though. It took me days.

EDIT:

http://www.overclock.net/t/473314/benchmarks-performance-advantages-of-extreme-timing-and-motherboard-setting-tweaks

I saw an increase of as much as 11% in some benchmarks. This was DDR2 and AMD, mind you, but the principle still applies: tweak it till it breaks.


----------



## Blameless

Going to depend on the particular sticks/kit, but most of my memory, which is usually more budget stuff, typically ends up something like this:

Fastest XMP profile on this kit:








16-16-16-39 (CL-RCD-RP-RAS) / 55-313-193-133-7-5-28 (RC-RFC1-RFC2-RFC4-RRDL-RRDS-FAW)

vs.

Best stable settings:









Only the tighter binned 'high-end' stuff tends to have so little headroom that XMP is anywhere near optimal, and I won't pay the premium for it when I can usually get within spitting distance of the same real-world performance with memory that costs a third of the price.


----------



## Cyclops

3770K - 1600 MHz 9-9-9-24-1T (XMP)



3770K - 2200 MHz 9-10-11-24-1T



4790K - 2400 MHz 11-13-13-31-2T (XMP)



4790K - 2666 MHz 11-14-14-14-1T (XMP 2nd and 3rd timings)


----------



## Catscratch

Quote:


> Originally Posted by *Cakewalk_S*
> 
> No difference with Sandy Bridge 2500k @ 2133MHz memory vs 1600MHz in gaming. MaxMemm benchmarks definitely higher once I got them stable.




That's the highest I ever got with them, because they are factory 1866 8-9-8 at 1.65v. Now that I've mixed them with a 2x4GB kit (2x2 + 2x4), I settled on CL10 at 1.5v. Maybe I can get them to 9-10-9 at 1.6-1.65v. 2133 wasn't an upgrade for me back then. When I get a DDR4 rig, I'll shoot for 3733.


----------



## Dargonplay

Now that so many RAM Knowledgeable people are in this thread I would like to ask something that I have yet to find a definite answer.

What does Command Rate do? 1T vs 2T, which one is better and how does it change performance if it even does?


----------



## mothergoose729

Quote:


> Originally Posted by *Dargonplay*
> 
> Now that so many RAM Knowledgeable people are in this thread I would like to ask something that I have yet to find a definite answer.
> 
> What does Command Rate do? 1T vs 2T, which one is better and how does it change performance if it even does?


http://www.hardwaresecrets.com/understanding-ram-timings/

The short answer is that 1T is faster, but you probably won't get it stable; I haven't seen 1T timings since the DDR2 days.

The timings on your memory measure delays in memory clock cycles. It takes more than one cycle for memory to fetch or write data and send it along the bus to your CPU, GPU, or whatever. Higher bandwidth lets you transfer more bits, while lower timings let you do things at lower latency. It is almost always true that lower is better, provided you can get everything stable.


----------



## Catscratch

Quote:


> Originally Posted by *Dargonplay*
> 
> Now that so many RAM Knowledgeable people are in this thread I would like to ask something that I have yet to find a definite answer.
> 
> What does Command Rate do? 1T vs 2T, which one is better and how does it change performance if it even does?


1T is of course better, but most mobos can't handle it and you get data corruption. You gotta try and see.


----------



## Lays

Quote:


> Originally Posted by *mothergoose729*
> 
> http://www.hardwaresecrets.com/understanding-ram-timings/
> 
> The short answer is that 1T is faster, but you probably won't get it stable as I haven't see 1T timings sense DDR2 days.
> 
> The timings on your memory measure refresh cycles. It takes more than one pass for memory to fetch and write data and send its way along the bus to your CPU, GPU, or whatever. A higher bandwidth lets you transfer more bits, but a lower timings, or refresh cycles let you do things with lower latency. Is is almost always true that lower is better, provided you can get everything stable.


Quote:


> Originally Posted by *Catscratch*
> 
> 1T is ofcourse better but most mobos can't handle it, thus data corruption. You gotta try and see.


Quote:


> Originally Posted by *Dargonplay*
> 
> Now that so many RAM Knowledgeable people are in this thread I would like to ask something that I have yet to find a definite answer.
> 
> What does Command Rate do? 1T vs 2T, which one is better and how does it change performance if it even does?


Depending on the platform/board, 1T can be difficult or slightly easier, but nowadays at higher speeds and lower timings 1T is very hard to do 24/7. Usually, once you get high up in clock speed for 24/7 use, it's better to do 2T and focus more on the primary timings, as they'll net you a bit more.

1T at high speeds nowadays is mostly only useful for benchmarking.


----------



## artemis2307

Quote:


> Originally Posted by *mutantmagnet*
> 
> Not yet. When DDR and DDR2 significantly affected FPS reviewers could hit 1.5 and 2 CL respectively for each type.
> 
> I've never seen a reviewer hit below 9 for DDR3. I've always wanted to see what would happen when achieving at least 6 CL on DDR3 or 12 CL on DDR4.


The reason faster RAM increases performance now is that GPUs have been moving forward too fast; with a single 980 Ti or Fury X, RAM speed wouldn't matter in-game.
And since I moved from Z87 to H97 I can only run my kit at 1600. And yeah, 2400C9 is much faster than 1600C7, in benchmarks at least.


----------



## Lays

Quote:


> Originally Posted by *mutantmagnet*
> 
> Not yet. When DDR and DDR2 significantly affected FPS reviewers could hit 1.5 and 2 CL respectively for each type.
> 
> I've never seen a reviewer hit below 9 for DDR3. I've always wanted to see what would happen when achieving at least 6 CL on DDR3 or 12 CL on DDR4.


My kit can do CL12, what benchmarks would you like to see? I posted an AIDA64 screenshot on the last page if you were interested.


----------



## Cyro999

Quote:


> Single card gaming isn't taking advantage of higher speeds much yet, but future high-end GPUs later this year may change that


It's not about the GPU performance - you'll see that RAM and CPU performance work hand in hand. If you're 100% GPU bound at your settings, neither will help.

More CPU performance (and RAM) helps improve your FPS and matters more at higher FPS, so you'll want to test it at around 100+ FPS if possible. 1080p @ 120 may show massive gains while 4K @ 30 doesn't, because 30fps is much, much easier for a CPU than 120fps.
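A frame-time budget calculation makes this concrete: a fixed CPU-side saving is a much bigger slice of the frame at high FPS. (The 0.5 ms saving below is a hypothetical figure for illustration, not a measurement.)

```python
SAVING_MS = 0.5  # hypothetical per-frame CPU time saved by faster RAM

for fps in (30, 60, 120):
    budget_ms = 1000 / fps               # total frame-time budget
    share = SAVING_MS / budget_ms * 100  # saving as % of the frame
    print(f"{fps} fps: {budget_ms:.2f} ms budget, saving = {share:.1f}%")
```

The same half-millisecond is 1.5% of a 30fps frame but 6% of a 120fps frame, which is why RAM scaling shows up at high framerates and vanishes when GPU-bound at 4K/30.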
Quote:


> The reason faster RAM increase performance now is GPUs have been moving forward too fast, if you use 1 980Ti or furyX RAM speed wouldn't matter in game


Starcraft 2 and more than a couple of other games will show these kinds of performance gains with a single gtx950.


----------



## armartins

I'll not read through 20 pages... but I'm rather curious what the result would be if they did the same test with 1080s in SLI with the new 650MHz bridge. My guess is that it would make the difference negligible. I also don't know at what resolution those tests were conducted; this seems like a case where a framebuffer/bandwidth bottleneck scenario can be constructed. Bethesda games are notorious for moddability, which could possibly lead to more forgiving memory management that benefits more from the sheer brute-force bandwidth increase of higher clocks.


----------



## Cakewalk_S

Quote:


> Originally Posted by *Catscratch*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Cakewalk_S*
> 
> No difference with Sandy Bridge 2500k @ 2133MHz memory vs 1600MHz in gaming. MaxMemm benchmarks definitely higher once I got them stable.
> 
> 
> 
> 
> 
> 
> Highest I ever got with them because they are factory 1866 8-9-8 1.65v. Now I mixed them with 2x4gb (2x2 + 2x4), i settled with cl10 1.5v. Maybe I can get them to 9-10-9 with them at 1.6-1.65v. 2133 wasn't an upgrade for me back then. When I get a ddr4 rig, I'll shoot for 3733
Click to expand...

I ended up dropping back to 1866MHz. The 2133MHz did nothing for me in gaming, and pumping an extra 0.12V through the RAM felt like a waste, especially since I've got an SFF case with no RAM cooling... I'll stick to 1.40v 1866 9-9-9-28 1T.


----------



## Neo_Morpheus

I'm sure that with tightened timings, 1866MHz RAM would perform similarly to 2133MHz.

http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k

Current DDR4 4000MHz prices are steep, and you'd want 16GB or you'd negate some of the FPS increase, but it's looking like a viable option for me in a 2017 build.


----------



## Cyro999

Quote:


> Im sure with tightened timings 1866Mhz ram would perform similar to the 2133Mhz.


Depends on the game. I've seen benches in sc2 where 1600c6 and 1600c9 had practically identical performance but higher clock speeds gave large performance gains.


----------



## reb00tas

These are my results from World of Warcraft, just looking out over the landscape.

5820K
16GB ripjaws
Evga 1080 FE

2400MHz
17-17-17-39-2T
3000 MHz
15-15-15-35-2T

1440p

*2400 MHz:*
Minimum: 41.43
Maximum: 82.16
Average: 64.06

*3000 MHz:*
Minimum: 45.19
Maximum: 96.00
Average: 67.18

I changed spot (someone flew around me in-game), but then overclocked the 1080 to [email protected]

*2400 MHz:*
Minimum 37.02
Maximum 68.86
Average 59.79

*3000 MHz:*
Minimum 41.34
Maximum 88.01
Average 66.46


----------



## MuscleBound

DAMN, that's some serious speed.
What's a good cheap mobo that supports 4000 RAM???


----------



## DMac84

Quote:


> Originally Posted by *mutantmagnet*
> 
> Not yet. When DDR and DDR2 significantly affected FPS reviewers could hit 1.5 and 2 CL respectively for each type.
> 
> I've never seen a reviewer hit below 9 for DDR3. I've always wanted to see what would happen when achieving at least 6 CL on DDR3 or 12 CL on DDR4.


I'm running 32GB (4x8GB) Crucial Ballistix Elite DDR4-2666 CL16 at 2666 12-12-12-28-1T and it's rock solid (1.360v). I was having stability issues running higher MHz, regardless of voltage. I'm willing to run some tests though, if someone can come up with things they'd like me to try. I'll provide results.

CPU is 5960X @ 4.5 Core/4.5 Uncore


----------



## DMac84

Quote:


> Originally Posted by *reb00tas*
> 
> This is my results from world of warcraft just watching over the landscape.
> 
> 5820K
> 16GB ripjaws
> Evga 1080 FE
> 
> 1440p
> 
> *2400 MHz:*
> Minimum: 41.4336026518
> Maximum: 82.1557673349
> Average: 64.0588235294
> 
> *3000 MHz:*
> Minimum: 45.1895702472
> Maximum: 95.9969280983
> Average: 67.1848739496
> 
> Changed spot someone fly around me ingame.. But
> Overclocked the 1080 to [email protected]
> 
> *2400 MHz:*
> Minimum 37.0192129715
> Maximum 68.8562969084
> Average 59.7899159664
> 
> *3000 MHz:*
> Minimum 41.3376875698
> Maximum 88.012673825
> Average 66.4621848739


Hey Mate, just for reference, what are your ram timings on 2400 and 3000? Thanks!


----------



## greytoad

Going from DDR3 2200 10-11-11-24-2T to 10-11-11-24-1T gave me a small bump in Valley. Going from 1600 to 2200 gave maybe a 2 percent increase in 3DMark on an RX 480. I didn't test Fallout's memory impact with the RX 480.

I swear my RAM wouldn't run 2200 10-11-11-24-1T at 1.65v when it was new. It does now. Does this prove the Buddhist concept of wearing into perfection?


----------



## reb00tas

Quote:


> Originally Posted by *DMac84*
> 
> Hey Mate, just for reference, what are your ram timings on 2400 and 3000? Thanks!


Added to my original post, but

2400MHz
17-17-17-39-2T
3000 MHz
15-15-15-35-2T


----------



## bfromcolo

Are these results going to be comparable with quad channel versus the dual channel setup used in the testing? I have DDR4-2400, but it's quad channel. I guess it doesn't matter; my motherboard seems to support DDR4-3000 max anyway.

Sorry if this has been asked before, 21 pages to read through...


----------



## Carniflex

That is a bit of an odd result. A few years back when I did some testing (a couple of DX9 titles) the difference was at most ~5% between DDR3 at 800MHz and 1600MHz.

That said, more bandwidth is always nice; you just have to find the sweet spot for bang-for-buck.


----------



## Asus11

Can't find a 4000MHz 2x8GB kit in the UK though...


----------



## prjindigo

Quote:


> Originally Posted by *ZealotKi11er*
> 
> This seems to take effect in CPU limited games.


CPU limiting isn't necessarily ram based. CPU limiting can be driver, windows configuration, cheap hard drive, incorrectly configured bus, Facebook apps, skype running... all sorts of garbage. I've been to some websites that cause CPU limiting in other tasks.

The problem is that it has been shown before, and often, that RAM speed changes alone don't make a damned bit of difference in a game's performance if you have sufficient RAM for the game to operate to begin with. So I doubt upgrading your 8GB machine from 3200 to 4000 is really gonna make as much of a difference as upgrading from 8GB to 32GB, where Windows can keep itself out of the way and you can disable the whole pagefile, which would otherwise be flailing around like an early-deployed drag race parachute after the upgrade.

The RAM itself over about 3200 right now is CPU limited to begin with. A statement like "DDR4 4000 increases games by 10% over DDR3 1600" doesn't take into account that you MUST have a CPU with technologies one or two generations newer just to run the DDR4 RAM... The FPS increase isn't coming from faster RAM; it's coming literally from the whole generation of processor technologies that comes with it.


----------



## Blameless

Quote:


> Originally Posted by *bfromcolo*
> 
> Are these results going to be comparable with quad channel versus the dual channel setup used in the testing?


Probably not, as you are already less memory bandwidth limited.

The reduction in latency might help in some titles, but there is less potential for improvement here as the Haswell-E platform can't scale frequency as high.
Quote:


> Originally Posted by *bfromcolo*
> 
> I guess it doesn't matter, my motherboard seems to support DDR4-3000 max anyway.


CPU, rather than motherboard, is more likely to be the limiting factor.
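Rough theoretical peaks illustrate why a quad-channel setup is less bandwidth-starved to begin with. A back-of-the-envelope sketch (my own illustration, not from the article; peak figures only, real-world efficiency is lower):

```python
def peak_bandwidth_gbs(mts, channels, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: each 64-bit (8-byte) DDR
    channel moves one transfer per MT/s tick."""
    return mts * bus_bytes * channels / 1000

# Quad-channel DDR4-2400 (Haswell-E) vs dual-channel DDR4-3000 (Skylake)
print(peak_bandwidth_gbs(2400, 4))  # 76.8 GB/s
print(peak_bandwidth_gbs(3000, 2))  # 48.0 GB/s
```

Even at a lower frequency, the quad-channel platform has far more headroom, so frequency scaling should matter less there.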


----------



## czin125

Quote:


> Originally Posted by *Asus11*
> 
> cant find 4000mhz 2 x 8gb kit in the UK though..


A French site, or amazon.com?


----------



## Malinkadink

I wonder what's better, 3200MHz @ CL14 or 4000MHz @ CL19. Would like to see a test for comparison. I'd lean more towards 3200MHz @ CL14.


----------



## DADDYDC650

Quote:


> Originally Posted by *Malinkadink*
> 
> I wonder whats better, 3200mhz @ 14 CAS, or 4000mhz @ 19 CAS. Would like to see a test for comparison. I'd lean more towards 3200mhz @ 14 CAS


I'd lean towards faster RAM. Otherwise 2400Mhz CAS 10 would be near the top of the chart.
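The first-word latency behind both leanings is easy to compute: CAS cycles times the cycle time (2000/MT/s nanoseconds, since DDR transfers twice per clock). A quick sketch, purely illustrative:

```python
def first_word_latency_ns(mts, cas):
    """CAS (first-word) latency in nanoseconds for a DDR kit:
    cycle time is 2000 / (MT/s) ns, times the CAS latency in cycles."""
    return cas * 2000 / mts

print(first_word_latency_ns(3200, 14))  # 8.75 ns
print(first_word_latency_ns(4000, 19))  # 9.5 ns
print(first_word_latency_ns(2400, 10))  # ~8.33 ns
```

By this math 2400 CL10 really would have the lowest first-word latency of the three, so the fact that it wouldn't top the charts suggests bandwidth, not just latency, drives the gains.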


----------



## ToTheSun!

Quote:


> Originally Posted by *DADDYDC650*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Malinkadink*
> 
> I wonder whats better, 3200mhz @ 14 CAS, or 4000mhz @ 19 CAS. Would like to see a test for comparison. I'd lean more towards 3200mhz @ 14 CAS
> 
> 
> 
> I'd lean towards faster RAM. Otherwise 2400Mhz CAS 10 would be near the top of the chart.
Click to expand...

Do you know the timings to make that assertion?

Comparing 2133 CL16 to 4000 CL19 is not a great way to see what really matters for performance. I got CL16 and CL19 from a vague sentence at the conclusion of their article. As far as anyone knows, the dude benchmarked 4000 CL19, then decreased clocks to 2133 without tweaking timings.

Basically, the article is useless.


----------



## czin125

His newegg links only have 2133 CL15, 2400 CL15, 3000 CL15, 3600 CL17, 4000 CL19 on the 8GB price chart which is what he tested.

http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page4.html


----------



## Armand Hammer

Quote:


> Originally Posted by *ToTheSun!*
> 
> Do you know the timings to make that assertion?
> 
> Comparing 2133 CL16 to 4000 CL19 is not a great way to see what really matters for performance. I got CL16 and CL19 from a vague sentence at the conclusion of their article. As far as anyone knows, the dude benchmarked 4000 CL19, then decreased clocks to 2133 without tweaking timings.
> 
> *Basically, the article is useless*.


Bait much?

What would you recommend instead?


----------



## Silent Scone

Quote:


> Originally Posted by *Armand Hammer*
> 
> Bait much?
> 
> What would you recommend instead?


Not bait at all. Memory frequency and memory timings are intrinsically related. One without the other makes the comparison Le Mickey Mouse.


----------



## shapin

Quote:


> Originally Posted by *Asus11*
> 
> cant find 4000mhz 2 x 8gb kit in the UK though..


https://www.scan.co.uk/products/16gb-(2x8gb)-corsair-ddr4-vengeance-lpx-red-pc4-32000-(4000)-non-ecc-unbuffered-cas-19-23-23-45-xmp-


----------



## Lays

Quote:


> Originally Posted by *shapin*
> 
> https://www.scan.co.uk/products/16gb-(2x8gb)-corsair-ddr4-vengeance-lpx-red-pc4-32000-(4000)-non-ecc-unbuffered-cas-19-23-23-45-xmp-


Are TridentZ cheaper there? G.Skill's high end DDR4 is fantastic in my experience.


----------



## Xuvial

Thread almost 3 months old now, move pls


----------



## ToTheSun!

Quote:


> Originally Posted by *czin125*
> 
> His newegg links only have 2133 CL15, 2400 CL15, 3000 CL15, 3600 CL17, 4000 CL19 on the 8GB price chart which is what he tested.
> 
> http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page4.html


Thanks! I missed that part.

Looks like the last 2 kits are the only ones worth considering.

Having said that, I've seen conflicting results from other tests, in which the difference between frequencies wasn't THAT pronounced.


----------



## Asus11

Quote:


> Originally Posted by *shapin*
> 
> https://www.scan.co.uk/products/16gb-(2x8gb)-corsair-ddr4-vengeance-lpx-red-pc4-32000-(4000)-non-ecc-unbuffered-cas-19-23-23-45-xmp-


I did see these after saying I couldn't find any.

Shame there's no G.Skill :-( The Corsair looks OK but not as nice as the Trident, but I'm all for performance, so what the hell, right?


----------



## Blameless

Quote:


> Originally Posted by *Armand Hammer*
> 
> What would you recommend instead?


Testing the memory at the tightest completely stable timings that were possible at each frequency.

Failing that, using JEDEC speed bins (which are generally quite loose, but at least scale with clock speed), would be acceptable.

Fixed timings aren't really representative of anything; no one is going to run 2133 at CL19.
Quote:


> Originally Posted by *Silent Scone*
> 
> Not bait at all. Memory frequency and memory timings are intrinsically related. One without the other makes the comparison Le Mickey Mouse.


Mickey Mouse would never use such loose timings at the lower end of DDR4 speed grades!


----------



## xlink

If they held timings constant, this is not at all surprising; you're literally cutting performance in half.
If timings were tightened at lower frequencies, then this is surprising; I'd have thought that the delta would be much smaller - it usually is.


----------



## Kana Chan

Someone else has run the 4000MHz 16GB kit at 17-19-19-41-2T-350 at 1.45v and 17-18-18-28-2T-300 at 1.45v 24/7, and the latter went from 6m53s -> 6m34s.


----------



## Cyclops

DDR4: 2133 C15 to 3333 C15

 

DDR3: 1600 C9 to 2666 C11


----------



## ToTheSun!

Quote:


> Originally Posted by *xlink*
> 
> If timings were tightened at lower frequencies, then this is surprising, I'd have thought that the *delta would be much smaller - it usually is*.


In every other review i've seen, it is.


----------



## Deout

Seems like 3600MHz is the sweet spot if there's a big price difference. I'm looking to get a 6700K CPU soon, so this is good to know.


----------



## Silent Scone

Quote:


> Originally Posted by *Deout*
> 
> Seems like 3600hz is the sweet spot if there's a big price difference. I'm looking to get a 6700k CPU soon so this is good to know.


It is. Either 3400 or 3600 with tight seconds. Going beyond this there is little need.


----------



## XtremeCheater

Funny thing: reading this post, I bought Team DDR4 3333, then later switched to Patriot DDR4 3400, both in 16GB. Overclocked to DDR4 4000 CL17-18-18-36-2 at 1.35v, no problem. Seems very stable and fast coupled with an i7 6700K and GTX 1070. In my experience it helps with a stable framerate.


----------



## caenlen

Quote:


> Originally Posted by *XtremeCheater*
> 
> Funny thing, reading this post and I bought Team DDR4 3333, later on switch to Patriot DDR4 3400 all in 16GB. Overclocked to DDR4 4000 CL17-18-18-36-2 at 1.35v no problem. Seems very stable and fast coupled with i7 6700K and GTX 1070. In my experience helping for stable framerate.


Thanks for sharing, nice to know Patriot is a good overclocker, might try that next year myself when I do my next build.


----------



## SIDWULF

Quote:


> Originally Posted by *AcEsSalvation*
> 
> Everyone always talks about FPS increases with RAM... I always thought it would make more sense that the impact would be on loading times and decreasing FPS drops when loading w/o a loading screen. I would love to see a test based on that.


Loading screens are almost an anomaly now with SSDs.


----------



## darealist

Would be nice to see someone compare Haswell with fastest DDR3 vs Sky Lake or Kaby Lake with the fastest DDR4 RAM.


----------



## Seyumi

Nice thread necro lol. I need to confirm 1 important piece of info though:

The higher your memory clock, the less you can overclock your CPU. Obviously a CPU overclock is worth more than memory speed.

My 5.0GHz 6600K cannot handle 4133MHz memory. The system is unstable and crashes. It works fine with 3600MHz though. I even got 2 different sets of the 4133MHz in case my 1st was defective. And yes, I have one of the few motherboards that can actually handle the 4133MHz memory officially and is rated specifically for this memory. If I bumped down my overclock I may have been able to handle the 4133, but I'd rather have the core clock versus the memory clock for obvious reasons. I didn't have the patience to try something in-between 3600 and 4133.


----------



## duganator

I'm starting to wonder if my 5960X is the same way. Got it to 4.3, but it took 1.3 volts; I lowered the memory OC from 3200 to 2666 and it runs on less voltage.
Quote:


> Originally Posted by *Seyumi*
> 
> Nice thread necro lol. I need to confirm 1 important piece of info though:
> 
> The higher your memory clock is less you can overclock your CPU. Obviously CPU overclock is better than memory speeds.
> 
> My 5.0Ghz 6600k cannot handle 4133Mhz memory. System is unstable and crashes. Works fine though with 3600Mhz. I even got 2 different sets of the 4133Mhz incase my 1st was defective. And yes, I have one of the few motherboards that can actually handle the 4133Mhz memory officially and is rated specifically for this memory. If I bumped own my overclock I may have been able to handle the 4133 but I'd rather have the core clock versus the memory clock for obvious reasons. I didn't have the patience to try something in-between 3600 and 4133.


----------



## Avant Garde

I bought G.Skill TridentZ 2x8GB 3200MHz CL14 for my new system, should I sell this and get 3600MHz/4000MHz ?


----------



## XtremeCheater

Quote:


> Originally Posted by *Avant Garde*
> 
> I bought G.Skill TridentZ 2x8GB 3200MHz CL14 for my new system, should I sell this and get 3600MHz/4000MHz ?


What for? Just increase it by changing the frequency and timings to 17-18-18-36. On my board both Team and Patriot memory can OC to 4133, with a max of up to 4167. I think G.Skill is better than those cheaper RAM kits anyway.


----------



## Lays

Quote:


> Originally Posted by *darealist*
> 
> Would be nice to see someone compare Haswell with fastest DDR3 vs Sky Lake or Kaby Lake with the fastest DDR4 RAM.


Here's a test I ran a long time ago on my 4790K with 2800 CL9-12-12 vs a 6700K with ~4100MHz 12-11-11.


----------



## prjindigo

This was actually debunked many times already. Linear increases in RAM speed have very little effect on FPS in the playable range if the graphics drivers are working correctly.


----------



## t1337dude

Quote:


> Originally Posted by *prjindigo*
> 
> This was actually debunked many times already. Linear increases in Ram speed have very little effect on FPS in the playable range if the graphics drivers are working correctly.


What evidence?


----------



## L36

Quote:


> Originally Posted by *prjindigo*
> 
> This was actually debunked many times already. Linear increases in Ram speed have very little effect on FPS in the playable range if the graphics drivers are working correctly.


----------



## KarathKasun

Increased memory speeds at relatively tight timings will net a performance benefit when any single core is pushed up to maximum load. It reduces the latency between requesting data and receiving data which can impact maximum per-core throughput.


----------



## Gumbi

Quote:


> Originally Posted by *KarathKasun*
> 
> Increased memory speeds at relatively tight timings will net a performance benefit when any single core is pushed up to maximum load. It reduces the latency between requesting data and receiving data which can impact maximum per-core throughput.


This.

I've personally benched increases of 10-15% in minimum frames per second in Starcraft 2 when overclocking RAM. (SC2 is a very CPU-bound game in mid/late-game scenarios and usually uses ~1.5 cores max.) I run pretty fast RAM: 2200MHz Samsung "wonder" RAM at low timings (I think something like 9-10-12-20 with a command rate of 1).


----------



## Woundingchaney

Quote:


> Originally Posted by *t1337dude*
> 
> What evidence?


Quote:


> Originally Posted by *L36*


This has actually been an ongoing topic of contention for literally over a decade. For the most part, and in most configurations, RAM speed has not had a dramatic impact on FPS, certainly nothing along the lines of a staggering 18% (which is borderline unthinkable). The problem is that in some very limited cases faster RAM has increased FPS, but this is also dependent on other components within a system (RAM amount, GPU, GPU memory, processor, etc). Every couple of years an article shows up with a title that benefits from faster RAM (FO4, for example) in some scenario or another, but usually it's very difficult to pin differences down to RAM speed alone.

In this scenario, they are suggesting an 18% increase in FPS in FO4 due to faster RAM. Well, they are running the minimum system RAM requirement and we have no idea what the RAM allotments in their PC are. If the amount of RAM is a bottleneck in these scenarios then, of course, faster RAM will net gains. Additionally, they are running SLI and only see these differences when using SLI; when they revert to a single-card solution there is virtually no difference. There are more than a few problems with the methodology in this article. Generally speaking, of all the configurations available, SLI and Crossfire setups have been shown to benefit the most from faster RAM, and much of that shows up in minimum framerates (which are themselves a hard statistic to pin down).

Generally speaking, faster RAM does lead to more system performance, but numbers of this magnitude across the board are very unlikely.

One of the better articles ever released on the issue came from Anandtech in 2013:
http://www.anandtech.com/show/7364/memory-scaling-on-haswell


----------



## KarathKasun

Quote:


> Originally Posted by *Woundingchaney*
> 
> This has actually been a ongoing topic of contention, literally over a decade. For the most part and in most configurations ram speed has not had a dramatic impact on fps, particularly nothing along the lines of a staggering 18% (which is borderline unthinkable). The problem is that in some very limited cases the scenario of faster ram speed has increased FPS, but this is also dependent on other components within a system (ram amount, gpu, gpu memory, processor, etc). Every couple of years there is an article that shows up that has a title that benefits from faster ram (FO4 for example) in some scenario or another, but usually its very difficult to pin point differences just to ram speeds.
> 
> In this scenario, they are suggesting a 18% increase in fps in FO4 due to faster ram. Well they are running the minimum system ram requirement and we have no idea just what ram allotments are in their PC. If the amount of ram is a bottleneck in these scenarios then, of course, faster ram will net gains. Additionally they are running SLI and they are only seeing these differences when using SLI, when they revert back to a single card solution there is virtually no difference. There are more than a few problems with the methodology in this article. Generally speaking of all the configurations available SLI and Xfire configurations have shown to benefit more from faster ram and so much of this is directly associated with minimum framerates (which in itself is a hard statistic to pinpoint).
> 
> Generally speaking faster ram is a metric and does lead to more system performance, but numbers of this magnitude across the board are very unlikely.
> 
> One of the better articles every released over the issue came from Anandtech in 2013
> http://www.anandtech.com/show/7364/memory-scaling-on-haswell


RAM speed can't compensate for RAM amount.

And yes, FO4 gains nearly 20% from memory speed/timing optimization. Even on AMD systems, where memory speeds don't help a tremendous amount, the gains are there.

SLI/CF setups show improvements from memory speeds because they are no longer GPU limited. It's as simple as that. You see the same thing when you take a Titan X Pascal and put it into a Sandy Bridge or Ivy Bridge system. Some games are even constrained on the CPU side with current single-GPU configurations and current CPUs.


----------



## reqq

This test was useless since he didn't lower the timings when he downclocked the memory. Total latency is speed and timings in conjunction.


----------



## Slink3Slyde

This old chestnut, again.

Bradley's updated his thread in July with a bit more info. You really have to look for parts of games that are heavily CPU limited for faster RAM to matter, and even then it's not a huge difference, but I'm sure it's there. I'd suggest that MMOs and RTS games benefit the most, as they're the ones with the most CPU work. Someone else had a thread on BF4(?) a while ago showing similar results.

http://www.overclock.net/t/1487162/an-independent-study-does-the-speed-of-ram-directly-affect-fps-during-high-cpu-overhead-scenarios

Also Digital foundry. http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k

Although annoyingly they don't talk about timings at all.

Is it worth spending money on RAM that could have gone towards a GPU? Nope, but I believe there is a benefit to faster RAM in gaming even on a single-GPU setup.


----------



## Blameless

Quote:


> Originally Posted by *duganator*
> 
> I'm starting to wonder if my 5960x is the same way. Got it to 4.3, but it took 1.3 volts, lowered the memory oc from 3200 to 2666 and it runs on less voltage


Trend applies to pretty much everything.

However, as the parts in question have decoupled uncores (meaning the L3 cache, IMC, and system agent run on their own clock), you usually don't have to give up much of a core OC, provided you can power and cool your CPU and are willing to tweak things a bit more heavily.

That said, it's always wise to find maximum stable core overclock before tweaking uncore, then tweak memory itself last of all.
Quote:


> Originally Posted by *Slink3Slyde*
> 
> Is it worth spending money towards RAM that could have gone towards a GPU? Nope, but I believe there is a benefit to faster RAM in gaming even on a single GPU setup.


I always favor cheap RAM, but I do tune the snot out of it.

The Ballistix Sport I purchased with my first 5820K setup over two years ago...

Performance with XMP settings:










Tuned settings (stable), with the same core and uncore clocks (yes, some of those timings are below their functional threshold and aren't doing anything, that's been corrected, but the point about performance remains):










~25% increase in bandwidth and reduction in latency with only a ~10% increase in frequency.


----------



## prjindigo

Basically attempting to use Fallout 4 as some kind of benchmark for proving that a clock increase in ram increases frame rates is a bit like using dog vomit as proof of God.

_The issue with the entire article is that Fallout 4's frame-rate is *LITERALLY* limited by the processor through intentional coding - the physics engine having to complete a motion resolve between each frame draw._

Playing Fallout 4 in SLi is a bit like playing two copies of the game at the same time - SLi doesn't work to provide you individual frames, it provides two frames in the place of one. In the case of Fallout 4 that means that every other frame _ISN'T_ a real game play frame and is just redundant frame count. Additionally the game is "pegged" at 60fps to begin with.

Failing to understand these facts about Fallout kinda puts the entirety of their idiocy into question in quick order.

So, that said lets delve into the hardware they're using...

Asrock Z170M Extreme is an x8/x8/x4 board with the M.2's x4 on chipset... So an increase in ram bus speed would increase the throughput possible from the chipset from the m.2 drive and also make up for the mainboard only operating in x8/x8 with a pair of 980ti in SLi as well as increase the speed at which the _SINGLE THREAD_ nVidia drivers were able to communicate with 2 cards instead of one?

Their first test, the SiSoft memory bandwidth test, was the only truth in their entire article: A 7.5% increase in single-loop memory bandwidth from 3600 to 4000 which is honestly anemic considering the clock increase is 11%. Seriously bad juju on that page. Looks like maybe they should have paid attention to the mainboard spec of 3466? I know, I know - timings and OC tweakery... There's a lot of twerky hanky panky going on in all their benchmarks.

It's fun to see them wig out about Handbrake getting a 10% increase in performance from an 11% increase in clock speed... when SiSoft's benchmark only showed a 7.5% increase in speed... Gotta respect Handbrake, results speak clearly.

Basically the take-away from this entire article shouldn't be "hoemahgawd: fastar ram mak gam fastar" it should be "learn what hardware does and how it works before you try to make claims about individual component changes when the professionally made games that run well don't support your claim"

And Techspot: A game designed to self limit and operate at 60FPS is going to show _EXACTLY_ the result that Witcher 3 showed when you ran it with a single card while _stupidly_ leaving the PCIe1 at x8 while doing it.

So I'm gonna go a little out on a very thick limb here and say that the limiting factor on this mainboard is the M.2 PCIe x4 port being connected to CHIPSET instead of being a main device. I know it's not going to be a popular conclusion but I'd bet you that setting that board up with a single 980ti, 4x4gb DDR4-3200, the intel SSD on the M.2 port as the boot drive and a ZTSSD-PG3-480G-LED on PCIe#4 for game installations will produce better results on all the games.

...and cost less than the second 980ti configuration.

Have a good one


----------



## Slink3Slyde

Quote:


> Originally Posted by *Blameless*
> 
> I always favor cheap RAM, but I do tune the snot out of it.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> The Ballistix Sport I purchased with my first 5820K setup over two years ago...
> 
> Performance with XMP settings:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Tuned settings (stable), with the same core and uncore clocks (yes, some of those timings are below their functional threshold and aren't doing anything, that's been corrected, but the point about performance remains):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ~25% increase in bandwidth and reduction in latency with only a ~10% increase in frequency.


Very nice. If you have the time and the will, you can get extra performance for free, and MHz isn't the only factor at all.

Out of interest, roughly how long did it take you to tune it like that? I'm just curious because my kit wouldn't run at XMP settings with my board/processor, so I spent a couple of evenings playing with the timings and testing, and I barely got into the lower timings. Makes me a bit itchy to play around again.

I assume most gamers would probably just buy the faster-clocked stuff and leave it at that as long as it works out of the box; that's why I normally suggest getting DDR4 3000 kits and up these days, as they're not much more than the 2133/2400 kits with similar primary timings.


----------



## Blameless

Quote:


> Originally Posted by *Slink3Slyde*
> 
> Out of interest roughly how long did it take you to tune it like that? I'm just curious because my kit wouldn't run at XMP settings with my board/processor, so I spent a couple of evenings playing with the timings and testing and I barely got into the lower timings. makes me a bit itchy to play around again.


Took a while, as it was my first DDR4 kit and my first 5820K, so I was learning new memory ICs and a new platform at the same time.

Right now, since I'm familiar with the basic trends of the various DDR4 ICs (e.g. Micron ICs favor timing reductions over clock increases, need a loose tRFC, can run low tRCD, and crap out much past 1.35-1.4v; while SK Hynix ICs favor clock increases more, and like tRCD and tRP to be 1-2 clocks looser than CAS, etc) in circulation and have at least passing familiarity with most DDR4 platforms, I can get a moderately tweaked memory setup pretty quickly, with stability testing taking up the bulk of the time.

To fully tweak a kit still takes quite a while...weeks even, if I'm binning DIMMs in the process and tweaking a new CPU's IMC from scratch. But, I enjoy this sort of thing, and when I'm done, I almost always have impressive gains and stock or better levels of stability.
Quote:


> Originally Posted by *Slink3Slyde*
> 
> I assume that most gamers would probably just buy the faster clocked stuff and leave it at that as long as it works out of the box, that's why I normally suggest getting DDR4 3000 kits and up these days as theyre not much more then the 2133/2400 kits with similar primary timings.


This is probably the case, and good advice for those who don't enjoy the hobbyist aspect of tweaking their systems as much as I do.


----------



## EniGma1987

Quote:


> Originally Posted by *prjindigo*
> 
> Asrock Z170M Extreme is an x8/x8/x4 board with the M.2's x4 on chipset... So an increase in ram bus speed would increase the throughput possible from the chipset from the m.2 drive and also make up for the mainboard only operating in x8/x8 with a pair of 980ti in SLi as well as increase the speed at which the _SINGLE THREAD_ nVidia drivers were able to communicate with 2 cards instead of one?


How does a RAM clock increase somehow make the M.2-drive-to-chipset link faster? The chipset is connected over PCI-E 3.0 with 4 lanes, and the chipset then connects to the M.2 drive over 4 lanes of PCI-E. The PCI-E bus speed has nothing to do with the RAM speed; they are two completely separate buses and sets of transistors on the CPU die. Overclocking RAM will have no effect on bandwidth from the chipset.


----------



## DNMock

Fresh off the presses, we've received word that yes, wherever you have a bottleneck in a game or any other type of software, making said bottleneck faster will, in fact, improve its performance.


----------



## prjindigo

Quote:


> Originally Posted by *EniGma1987*
> 
> How does a RAM clock increase somehow make the m.2 drive to chipset faster? the chipset is connected over PCI-E 3.0 with 4 lanes, the chipset then connects to the M.2 drive over 4 lanes of PCI-E. The PCI-E bus speed has nothing to do with the RAM speed, they are two completely separate buses and transistors in the CPU die. Overclocking RAM will have no effect on an increase in bandwidth from the chipset.


The system has only 8GB of ram - the OS is paging to the SSD. If the chipset is doing ANYTHING other than transmitting data, one or more of those PCIe channels is unavailable. An increase in the speed in which the ram can receive data is a reduction in the amount of time the chipset has to spend interrupting literally everything else.

Increases in ram speed are also increases in CPU ram controller throughput (identity theorem) and thus reductions in latency across the board for the operation of a computer _en grosse_ if not in any single particular thread.

The installation of 8GB of system memory in this case is _precisely_ the limiting factor in the operation of the computer so an increase in bandwidth to the memory is going to result in gains due to reduced wait and latency across the mainboard - thus will directly result in an increase in actual "work done" by the system.

In a system with a limited number of threads/queues this will result in an effective increase in bandwidth across the system as a whole in a complex task that utilizes the entire system because it reduces the bottleneck of the insufficient ram volume.

_In this article's specific case_ they made no attempt to compare 4x4GB 3200 and 2x4GB 4000 because the whole point was to sell higher speed ram. While the actual bandwidth of the memory architecture at 4x4GB 3200 would be _slower_ than 2x4GB 4000, the resulting systemic bandwidth of the system would be higher because operational memory wouldn't be forced off onto a _relatively_ much slower SSD that is operating on pass-through _at up to PCIe x4_ on the Chipset.

The same hard drive mounted on the 4th PCIe slot port would not perform noticeably better in benchmarks but would perform FAR better on a system under "full" load in the way the game tests were run because it would be operating with clear paths to the CPU instead of being hitched through a switchboard chip that is in constant communication with several other devices.

Remember that DDR4 isn't any wider than DDR3 or DDR2 - they're all 64bit. So the total bandwidth of the system ram in this case - being dual channel - is 128bits and the hard drive can send 256bits (4x 64bit serial). What this literally means is that an increase in ram speed increases the data speed possible through the chipset. When a system is paging to drive it still must load the paged data to system ram to use it, but before it can do that it must write some data _the computer assumes isn't going to be used any time soon_ to the drive first. This means that at any given time up to 4gb of the process memory isn't where it is supposed to be (MS standard page file) and in the setting of a game such as Fallout4's single thread the entire operation is limited by total memory bandwidth.

What Techspot has done here is use a non-gaming configuration on a motherboard to produce fake results to try to get people to buy the fastest ram possible at a huge premium. It's been well known for more than a decade that a computer operating in a paged data state is exceptionally sensitive to ram speed.

I can guarantee you that switching down to 16GB of DDR4 3200 will reduce the bandwidth load running through the chipset and increase the system performance in this case. I would estimate a 30% reduction in total bandwidth through the chipset combined with a 5% or greater reduction in system-wide latency and a 15% performance increase by dropping the ram speed by 20% while doubling the memory size. The minimum frame rate will come up too.

Or... you can trust a website that's trying to sell you top bin ram at a major mark-up.

Learn hardware, come back.


----------



## EniGma1987

Quote:


> Originally Posted by *prjindigo*
> 
> The system has only 8GB of ram - the OS is paging to the SSD. If the chipset is doing ANYTHING other than transmitting data, one or more of those PCIe channels is unavailable. An increase in the speed in which the ram can receive data is a reduction in the amount of time the chipset has to spend interrupting literally everything else.
> 
> Learn hardware, come back.


Normally I don't like to indulge persons such as yourself, but perhaps you should choose your words more carefully. What you explained still doesn't result in what you actually said originally, and your further explanation doesn't add up logically if you really think about it.


----------



## czin125

Aren't the timings listed in his newegg links or are they unrelated?


----------



## DNMock

Quote:


> Originally Posted by *prjindigo*
> 
> The system has only 8GB of ram - the OS is paging to the SSD. If the chipset is doing ANYTHING other than transmitting data, one or more of those PCIe channels is unavailable. An increase in the speed in which the ram can receive data is a reduction in the amount of time the chipset has to spend interrupting literally everything else.
> 
> Learn hardware, come back.


Increasing the amount of RAM you have will provide the exact same effect, rendering your original statement moot.

It's kind of like saying a faster HDD improves performance when you have resources split between an SSD and an HDD. Yeah, a higher-RPM HDD will help, or you could just move all the resources to the SSD and squash the issue entirely.


----------



## Blameless

Quote:


> Originally Posted by *DNMock*
> 
> Fresh off the presses, we've received word that yes, where ever you have a bottleneck in a game or any other type of software, making said bottleneck faster will, in fact, improve it's performance.


Well, I'm glad that's resolved.

And to think, I've been overclocking my CD-ROM drives for 25 years trying to get transcoding performance to improve!


----------



## lilchronic

LOL


----------



## DNMock

Quote:


> Originally Posted by *Blameless*
> 
> Well, I'm glad that's resolved.
> 
> And to think, I've been overclocking my CD-ROM drives for 25 years trying to get transcoding performance to improve!


lol, saddest part is that's basically what we are discussing, for all intents and purposes.


----------



## DVLux

Quote:


> Originally Posted by *Blameless*
> 
> Well, I'm glad that's resolved.
> 
> And to think, I've been overclocking my CD-ROM drives for 25 years trying to get transcoding performance to improve!


Real Men overclock their USB Hubs!

Get with the program!


----------



## Excession

Going from DDR4 2133 CL15 to 3200 CL16 nearly doubled my average FPS in a very, _very_ heavily modded copy of Skyrim. That's a larger improvement than I saw going from stock to 4.9 GHz with my 6700K. Curiously, the RAM overclock didn't provide anywhere near as much of an improvement with the CPU at 4.9 as it did with the CPU at stock.

That's obviously a corner case of a corner case, though. It wouldn't even apply to unmodded skyrim, let alone modern games running on modern engines.


----------



## ipv89

In general, I don't think this is worth the upgrade for gaming. If you were building a new system and wanted to be as future-proof as possible, I would look at it, but more spent on the video card or CPU might be a better investment.

Real men overclock their LEDs


----------



## inedenimadam

I dont know the ins and outs of why it is, but as an SLI user, I can say beyond a shadow of a doubt that my results in FO4 back this up.


----------



## Baasha

Quote:


> Originally Posted by *prjindigo*
> 
> _The issue with the entire article is that Fallout 4's frame-rate is *LITERALLY* limited by the processor through intentional coding - the physics engine having to complete a motion resolve between each frame draw._
> 
> Playing Fallout 4 in SLi is a bit like playing two copies of the game at the same time - SLi doesn't work to provide you individual frames, it provides two frames in the place of one. In the case of Fallout 4 that means that every other frame _ISN'T_ a real game play frame and is just redundant frame count. Additionally the game is "pegged" at 60fps to begin with.


Here's Fallout 4 played at 8K w/ a ton of mods (including ENB etc.) - dat scaling tho:


http://instagr.am/p/BNSR69Rg3-T%2F/

I understand the FPS is low but that's in 8K. When I play in 5K, I get around 70 - 80FPS.

Your post makes me curious as to what it means for me to run 4 way SLI in a game like Fallout 4. Can you expand a bit more?


----------



## Owhora

I thought CAS stuff is a big deal.

I use a simple formula for calculating the CL time in nanoseconds: (CAS / Frequency (MHz)) × 1000 = X ns. This helps me spot the best RAM.

I wonder if that formula is useless because DDR4 is different?

I am thinking about returning my G.SKILL 64GB (4 x 16GB) Ripjaws V Series DDR4 PC4-25600 3200MHz Desktop Memory, Model F4-3200C14Q-64GVK. Is it worse RAM?

The example of using the formula:

My current RAM: (14/3200) x 1000 = 4.375 - lower is better?

The RAM in the article in the OP's post: (19/4000) x 1000 = 4.75


----------



## MrKoala

It's still useful. The nominal frequency of DDR (double data rate) memory is double the physical frequency, though.

But latency may not be as important as you imagine when the CPU is already getting close to saturated with enough parallelism.
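Folding that double-data-rate correction into the formula gives a quick sketch (the kit numbers below are just the ones mentioned in this thread, nothing measured):

```python
# First-word latency in nanoseconds. "DDR4-3200" means 3200 MT/s on a
# 1600 MHz clock, so the real cycle time is 1000 / (rate / 2) ns and the
# naive (CAS / rate) * 1000 formula understates the time by half.
def cas_latency_ns(cas_cycles, data_rate_mts):
    cycle_ns = 1000 / (data_rate_mts / 2)  # actual clock period in ns
    return cas_cycles * cycle_ns

print(round(cas_latency_ns(14, 3200), 3))  # 8.75   (the G.Skill CL14 kit)
print(round(cas_latency_ns(19, 4000), 3))  # 9.5    (the article's DDR4-4000)
print(round(cas_latency_ns(15, 2133), 3))  # 14.065 (baseline DDR4-2133 CL15)
```

Lower is still better, as the formula says; the doubling just shifts every number by the same factor, so the ranking between kits doesn't change.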


----------



## EniGma1987

Quote:


> Originally Posted by *Owhora*
> 
> I thought CAS stuff is a big deal.
> 
> I use the simple formula for calculating the CL time in nanoseconds: (CAS / Frequency (MHz)) × 1000 = X ns. This helps me to spot a best RAM.
> 
> I wonder if that formula is useless because DDR4 is different?
> 
> I am thinking about to return G.SKILL 64GB (4 x 16GB) Ripjaws V Series DDR4 PC4-25600 3200MHz Desktop Memory Model F4-3200C14Q-64GVK. Is it worst RAM?
> 
> The example of using the formula:
> 
> My current ram: (14/3200) x 1000 = 4.375 - lower is better?
> 
> Ram in that article in OP's post: (19/4000) x 1000 = 4.75


Latency doesn't matter as much now; modern CPUs hide it through very advanced caching and pre-fetching techniques. I think it was around the Sandy Bridge era that timings became "mostly irrelevant". It still has its place, of course: lower latency will perform better than higher latency in any cache-miss scenario.


----------



## jprovido

is there a big difference between dual channel and quad channel? let's say 4000mhz dual channel vs. 3200mhz quad channel


----------



## dbLIVEdb

I am also very interested in overclocking 16GB of Viper CAS 16 memory (XMP 3400) on an X99 platform. I'd like to see what noticeable FPS improvement there could be, if any. Forza Horizon 3 at true 4K is barely staying above 60fps on Ultra. Here's a video of the performance I maintain with my current gaming machine in the race-seat setup I am building; 3 more ButtKickers and the full chair to connect them are the remaining items to complete the build.


----------



## sumitlian

Quote:


> Originally Posted by *EniGma1987*
> 
> Latency doesnt matter as much now with modern CPUs that hide the latency through very advanced caching techniques with their pre-fetching. I think it was around Sandy Bridge era when timings became "mostly irrelevant". It still has its place of course. Lowest latency will perform better than higher latency in any cache miss scenario.


I think the matter of latency is still a matter of the number of IO operations processed per unit of time. There can be a significant performance impact when accessing memory if, say, data B, which depends on (already cached) data A, doesn't fit in a _single cache line_, which I think is 64 bytes (correct me if I'm wrong) on current-gen CPUs; this still results in a wait state while the CPU fetches data B. In that scenario, even with a 99% cache hit rate, _millions_ of such accesses per second will still bottleneck on memory access times being higher than the cache's. I mean, you can literally feel the increase in speed, and it is not a placebo effect at all, whether you reduce memory timings at a fixed clock (reducing latency) or increase frequency at fixed timings (also reducing latency). It is not rare at all to gain higher minimum fps in many games when you reduce access times.
No doubt cache design keeps advancing in terms of efficiency, but the number of FLOPS a core can do has been increasing at a much higher rate than the read/write efficiency of memory, and the problem especially exists for high volumes of vector operations.


----------



## MuscleBound

What are some Z170 mobos that support DDR4-4000 RAM?


----------



## czin125

Quote:


> Originally Posted by *MuscleBound*
> 
> What some Z170 mobos that support 4000 RAM?


Asrock supports up to 4500+ on the Z170 mobo


----------



## MuscleBound

Asrock is a third rate brand.


----------



## Particle

ASRock is somewhere in the middle behind leaders like ASUS and Gigabyte. They're ahead of brands like Epox and Biostar.


----------



## EniGma1987

Quote:


> Originally Posted by *MuscleBound*
> 
> Asrock is a third rate brand.


They are pretty close to Asus and Gigabyte IMO. Their top-end boards have the same capabilities and OC potential as the top Asus boards do, and their low-end boards are better than the junk Asus puts out. It is just their midrange stuff that falls behind a little bit: it has the features, but lacks the OC potential.

Quote:


> Originally Posted by *czin125*
> 
> Asrock supports up to 4500+ on the Z170 mobo


Only the Z170 OC Formula does.


----------



## Blameless

Quote:


> Originally Posted by *MuscleBound*
> 
> Asrock is a third rate brand.


I've been gradually transitioning to more and more ASRock boards over the last decade and have generally been pretty impressed with them, with a handful of exceptions.


----------



## ZealotKi11er

Quote:


> Originally Posted by *jprovido*
> 
> is there a big difference between dual channel and quad channel? let's say 4000mhz dual channel vs. 3200mhz quad channel


I think games do not use Quad Channel. I have not tested personally but it could be interesting.


----------



## Bojamijams

Quote:


> Originally Posted by *Lays*
> 
> Here's a test I ran a long time ago on my 4790k with 2800 CL9-12-12 vs 6700k with~4100 mhz 12-11-11


How do you have CL12 at DDR4-4100? and CR1? I didn't think anything like that was possible. Like not even CLOSE to possible


----------



## czin125

Dry ice or LN2


----------



## Blameless

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I think games do not use Quad Channel. I have not tested personally but it could be interesting.


Channel count is only relevant with regard to the performance it provides.

There is a point of diminishing returns for the performance of any subsystem, and it's probably true that quad-channel platforms often reach that point, as far as memory bandwidth is concerned, at much lower memory clocks than dual-channel platforms do.
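As a back-of-the-envelope sketch of the dual-vs-quad trade-off raised above (peak theoretical numbers only; real games rarely saturate either configuration):

```python
# Peak theoretical DRAM bandwidth: channels * 8 bytes/transfer * MT/s.
# Each DDR channel is 64 bits (8 bytes) wide regardless of generation.
def peak_bandwidth_gbs(channels, data_rate_mts):
    return channels * 8 * data_rate_mts / 1000  # GB/s

print(peak_bandwidth_gbs(2, 4000))  # dual-channel DDR4-4000: 64.0 GB/s
print(peak_bandwidth_gbs(4, 3200))  # quad-channel DDR4-3200: 102.4 GB/s
```

So on paper quad-channel 3200 has the bandwidth edge over dual-channel 4000; whether a game ever notices is exactly the diminishing-returns question.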


----------



## DNMock

Quote:


> Originally Posted by *ipv89*
> 
> In general this is not worth the upgrade for gaming I don't think. If you were building a new system and wanted to be as future proof as possible then I would look at it but more spent on video card or cpu might be a better investment.
> 
> Real men overclock their LED's


Quote:


> Originally Posted by *ZealotKi11er*
> 
> I think games do not use Quad Channel. I have not tested personally but it could be interesting.


Pretty sure everything does; I think it's similar to a 4-way RAID vs a 2-way RAID hard drive setup. The program doesn't have any real say in it; the system just utilizes it. I know that aside from benchmark scores, I haven't seen or noticed a single difference going from 2133 to 2666 quad-channel DDR4. I suppose I could do a benchmark using dual-channel 3333 vs quad-channel 2133 this evening and see if there are any improved results.


----------



## lilchronic

Quote:


> Originally Posted by *Bojamijams*
> 
> How do you have CL12 at DDR4-4100? and CR1? I didn't think anything like that was possible. Like not even CLOSE to possible


It's comparing the fastest DDR3 vs the fastest DDR4.


----------



## Asus11

Quote:


> Originally Posted by *MuscleBound*
> 
> What some Z170 mobos that support 4000 RAM?


The Asus Impact VIII; 4133MHz is the max I've gotten, but I prefer 4000 with better timings.

Also, ASRock is meant to be the brand whose boards take the highest MHz, but I've heard MSI can also go far; they just don't advertise it.


----------



## Particle

Quote:


> Originally Posted by *sumitlian*


I do have to wonder how this graph was prepared given the vast improvement in memory bandwidth and decent improvement in memory access times between 1980 and 2010. Bandwidth improved by about an order of magnitude and access times fell by about two thirds each decade during that span. You end up with numbers of about 1000x higher bandwidth and 20x lower access times. By either metric or any combination that graph would seem to be overly pessimistic versus reality.

Don't get the wrong idea though. I've heard of and support the basic premise that computational resources have improved at a much greater rate than memory.


----------



## zipper17

The problem is I don't have FO4 or ARMA 3.

For the other games the difference could still be small.

I have DDR3 2400MHz, but my motherboard can only run it at 2133MHz.

Edit: for some reason I managed to fix the problem; it now runs at 2400MHz.


----------



## Lays

Quote:


> Originally Posted by *Bojamijams*
> 
> How do you have CL12 at DDR4-4100? and CR1? I didn't think anything like that was possible. Like not even CLOSE to possible


The Z170M OC Formula is a beast at clocking memory, and I got lucky with my memory lol. It's on air; it needs crazy volts for those clocks (2+ volts), but no degradation yet from tons of benching sessions.


----------



## Blameless

Quote:


> Originally Posted by *Bojamijams*
> 
> How do you have CL12 at DDR4-4100? and CR1? I didn't think anything like that was possible. Like not even CLOSE to possible


Definitely possible, but not something anyone should expect to achieve for 24/7 use.
Quote:


> Originally Posted by *Particle*
> 
> I do have to wonder how this graph was prepared given the vast improvement in memory bandwidth and decent improvement in memory access times between 1980 and 2010. Bandwidth improved by about an order of magnitude and access times fell by about two thirds each decade during that span. You end up with numbers of about 1000x higher bandwidth and 20x lower access times. By either metric or any combination that graph would seem to be overly pessimistic versus reality.
> 
> Don't get the wrong idea though. I've heard of and support the basic premise that computational resources have improved at a much greater rate than memory.


Was probably done on a per-device basis from 1980 to 2007 then extrapolated from there. This wouldn't take increased parallelism or the memory controller into account. Still, it does look like it's about an order of magnitude short.

Anyway, latency has been pretty stagnant for the last decade (I have early Athlon 64 systems that aren't far behind modern DDR3/DDR4 platforms) while bandwidth has improved by about a factor of 10.


----------



## Lays

Quote:


> Originally Posted by *czin125*
> 
> Dry ice or LN2


Samsung B-die doesn't scale well when cold; it was just on air cooling. A proper Z170 board, a good IMC + good RAM, and the maxmem setting in Windows is all ya need.


----------



## sumitlian

Quote:


> Originally Posted by *Particle*
> 
> I do have to wonder how this graph was prepared given the vast improvement in memory bandwidth and decent improvement in memory access times between 1980 and 2010. Bandwidth improved by about an order of magnitude and access times fell by about two thirds each decade during that span. You end up with numbers of about 1000x higher bandwidth and 20x lower access times. By either metric or any combination that graph would seem to be overly pessimistic versus reality.
> 
> Don't get the wrong idea though. I've heard of and support the basic premise that computational resources have improved at a much greater rate than memory.


My bad, that was from 2007; the difference might not be as big today as it shows, but the current picture is still very similar despite the advances in cache memory.
That chart is CPU (register) access time vs memory access time. We know the clock period is the reciprocal of the clock frequency.

A DDR4-4000 kit with a CAS latency of 19 cycles has a clock period of ( (10^9) / (2000 x 10^6) ) = 0.5 ns, since the bus actually runs at 2000 MHz. This memory takes 19 cycles to be ready, so the delay between two accesses is 19 x 0.5 = 9.5 ns; that is the memory latency. At the most basic level, a 4000 MHz CPU core has a cycle time of ( (10^9) / (4000 x 10^6) ) = 0.25 ns. Memory latency is therefore 9.5 / 0.25 = 38 times the CPU's cycle time.
The CPU has to wait a very significant amount of time relative to its own clock, since it is ready to read/write data every 0.25 ns.
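The arithmetic above, spelled out as a small script (same numbers as in the text, nothing measured):

```python
# Memory latency vs. CPU cycle time, following the calculation above.
dram_clock_mhz = 2000                        # DDR4-4000 runs its bus at 2000 MHz
cas_cycles = 19
dram_cycle_ns = 1000 / dram_clock_mhz        # 0.5 ns per DRAM clock
cas_latency_ns = cas_cycles * dram_cycle_ns  # 9.5 ns to first word

cpu_clock_mhz = 4000
cpu_cycle_ns = 1000 / cpu_clock_mhz          # 0.25 ns per CPU clock

print(cas_latency_ns / cpu_cycle_ns)         # 38.0 CPU cycles spent waiting
```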

But this doesn't sound like what that graph is showing, right?

That was calculated for a single core, assuming the most basic scalar instructions. If you take the vectorization done per core into consideration, the gap becomes even bigger: a single core can already do 16 DP FLOPs per cycle. Multiply by the number of cores (depending on the CPU model) and you get even more FLOPs, and then there are multi-socket systems. Yes, with the advent of higher DRAM frequencies, memory should be closer to the CPU than it was 10 years ago.

_On the contrary_, I think I might have gotten too paranoid about this memory latency thing. I just found a good article that largely supports EniGma1987's statements in our previous comments; it shows the instruction cache hit rate at 100% for these workloads. http://www.extremetech.com/extreme/188776-how-l1-and-l2-cpu-caches-work-and-why-theyre-an-essential-part-of-modern-chips

But the problem remains when cache misses occur and the IO numbers are very high; at that exact moment, a reduction in memory latency will definitely improve performance.


----------



## ChronoBodi

Um, I'm curious whether this is exclusive to Z170 alone. How does X99 quad-channel 2133MHz fare against dual-channel 4000MHz?

We need to see if quad vs dual channel does anything.


----------



## Silent Scone

The graph wars taking place in here aren't likely to tell you anything worth knowing.


----------



## MrKoala

Quote:


> Originally Posted by *sumitlian*
> 
> _On the contrary_ I think I might have gone too paranoid with this memory latency thing. I just found a good article that very much supports Enigma1987's statements in our previous comments, that shows instruction cache hit rate is at 100% for these workloads. http://www.extremetech.com/extreme/188776-how-l1-and-l2-cpu-caches-work-and-why-theyre-an-essential-part-of-modern-chips


What kind of magic is 100% cache hit rate?


----------



## sumitlian

Quote:


> Originally Posted by *Silent Scone*
> 
> The graph wars taking place in here isn't likely to tell you anything worth knowing.


Whaaaaaat? It is not a *war* we are having in here. Where did you get that from?

The title of this thread is about a 10-19% increase in fps with DDR4 4000MHz over DDR4 2133/DDR3 1600. I am not an expert or anything, but some of us want to know how and why that happens.
Quote:


> Originally Posted by *MrKoala*
> 
> What kind of magic is 100% cache hit rate?


100% cache hit means that whatever data/instruction the CPU needs to read will always be found in the CPU's cache rather than fetched from main memory (DRAM). Since cache latency and bandwidth are much better than main memory's (you can see this in AIDA64's cache and memory benchmark or similar tools), the CPU can run at optimum speed. But how is the data/instruction already in the cache before the CPU asks for it? The technique is called prefetching: it pulls data/instructions from memory into the cache in advance of the CPU requesting them. Note that the chart accounts for a 100% *instruction* hit rate, not data. I don't think any current CPU can get all the data it needs from cache rather than memory; it's a complex topic, but you can find plenty of material on the internet to understand it.

Think of it like this: if your goal is to eat whatever you want as fast as possible, a 100% hit rate means the food is always in your spouse's hands, right in front of you. If they don't have the food you want, they have to go to the market to fetch it, and you have to wait for them to return. You know the difference in access time between grabbing food from their hands and fetching it from the market. The food is the data, your request is the instruction, the market is DRAM, and their hands are the cache. It might be a bad analogy, LOL, but it should give you a rough idea of all of this.
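The textbook way to put that analogy in numbers is the average memory access time (AMAT) formula: hit time + miss rate × miss penalty. The latencies below are made up for illustration, not measurements:

```python
# Average Memory Access Time: hit_time + miss_rate * miss_penalty.
# The nanosecond figures here are illustrative assumptions only.
def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

cache_hit_ns = 1.0   # assumed cache hit time
dram_extra_ns = 70.0 # assumed extra cost of going out to DRAM

print(amat_ns(cache_hit_ns, 0.01, dram_extra_ns))  # 99% hits -> ~1.7 ns average
print(amat_ns(cache_hit_ns, 0.05, dram_extra_ns))  # 95% hits -> ~4.5 ns average
```

Even at a 99% hit rate the average access is dominated by the rare DRAM trips, which is why shaving DRAM latency still shows up when the access count is in the millions per second.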


----------



## The Robot

Quote:


> Originally Posted by *ipv89*
> 
> In general this is not worth the upgrade for gaming I don't think. If you were building a new system and wanted to be as future proof as possible then I would look at it but more spent on video card or cpu might be a better investment.
> 
> Real men overclock their LED's


Real men overclock their case fans to over 9000 RPM so the case hovers above the table like a UFO.


----------



## KarathKasun

Quote:


> Originally Posted by *The Robot*
> 
> Real men overclock their case fans to over 9000 RPM so it hovers above your table like an UFO.


Good to know I'm more than a 'real man' then. I have server fans that spin at 13k RPM.


----------



## akromatic

all this talk about faster ram only matters if the CPU is overclocked right?

if stock clock, none of this ram biz matters?


----------



## Lays

Quote:


> Originally Posted by *akromatic*
> 
> all this talk about faster ram only matters if the CPU is overclocked right?
> 
> if stock clock, none of this ram biz matters?


In theory it'd matter in every CPU intensive scenario, since "most" of the time, CPU intensive tasks benefit from faster RAM.


----------



## Telstar

Quote:


> Originally Posted by *caenlen*
> 
> Need to see tests with gtx 1080, new chip design plus GDDR5x and future hBM2 single gpu setups may still benefit the same...


This (or with a 1080 Ti). But I suspect the result will be similar to 980 SLI.
I wanted 32GB of RAM, but I may be getting only 16GB of 4000MHz+.


----------



## sumitlian

Quote:


> Originally Posted by *akromatic*
> 
> all this talk about faster ram only matters if the CPU is overclocked right?
> 
> if stock clock, none of this ram biz matters?


Quote:


> Originally Posted by *Excession*
> 
> Going from DDR4 2133 CL15 to 3200 CL16 nearly doubled my average FPS in a very, _very_ heavily modded copy of Skyrim. That's a larger improvement than I saw going from stock to 4.9 GHz with my 6700K. Curiously, the RAM overclock didn't provide anywhere near as much of an improvement with the CPU at 4.9 as it did with the CPU at stock.
> 
> That's obviously a corner case of a corner case, though. It wouldn't even apply to unmodded Skyrim, let alone modern games running on modern engines.


----------



## akromatic

The question, though, is whether that improvement is before or after overclocking.

Sure, I get that faster RAM with an overclocked CPU is going to give a big improvement in frames, but you are all talking in the context of an overclocked CPU.

What I'm interested in is whether moving from 2133MHz to 3200MHz or 4000MHz would yield the same sort of improvement on a stock-clocked non-K CPU.

The point is: is it actually worthwhile to spend the premium on 4000MHz RAM over putting it toward the CPU/GPU?


----------



## Desolutional

I'd assume that with an overclocked system the effect will be notably lower in normal use. The best realistic tests for this, I find, are video encoding/transcoding or compression. Games like Fallout 4 and ARMA aren't fantastic tests given how poorly they perform across many configurations (bad optimisation).


----------



## TopicClocker

Quote:


> Originally Posted by *akromatic*
> 
> questions is though if that improvement is before overclocking or after overclocking.
> 
> sure i get getting faster ram with an overclocked CPU is going to have vast improvement in frames but you are all taking in the context of overclocked CPU.
> 
> what im interested in if moving from 2133mhz to 3200mhz or 4000mhz would actually yield the same sorts of improvements on a stock clocked non K cpu.
> 
> point is is actually worth while spending the premium on 4000mhz ram over topping up on the CPU/GPU


If you're CPU-bound, using faster RAM and/or overclocking your CPU will give you more performance. Digital Foundry looked at this with an i5 2500K and a 6600K, testing the 2500K at stock speeds coupled with faster RAM, and also overclocked while coupled with faster RAM.



They also tested the i7 3770K, 6700K, 5820K and the 5960X the same way, here's the 3770K:
Source: Core i7 Face-Off: which is the fastest gaming CPU?



And here's the i7 6700K, 5820K and 5960X:
Source: Is it finally time to upgrade your Core i5 2500K?


----------



## sumitlian

Quote:


> Originally Posted by *TopicClocker*


+1 !

This shows why you should not buy locked CPUs for gaming, especially when faster RAM is cheaper now.


----------



## akromatic

Unlocked CPUs are often gimped and lack features I need.

I prefer Xeons where possible.


----------



## KarathKasun

Quote:


> Originally Posted by *akromatic*
> 
> unlocked cpus are often gimped and lack features i need.
> 
> i prefer xeons where possible


The only thing the i7-6700k lacks is ECC.


----------



## Blameless

Quote:


> Originally Posted by *sumitlian*
> 
> But that chart accounts for 100% instruction hit rate, not the data. I think no CPU for now has the ability to get all the data from cache it requires rather than memory


Definitely easier to see a high instruction hit rate than a high data hit rate, but depending on the size of the cache and the data involved, it's possible to fit everything in cache in some scenarios.

You should see native DOS Quake benchmarks on a part with 15MiB of L3!


----------



## nyxagamemnon

Say, on the X99 platform, what would be the difference between quad-channel 2666 and 3200MHz? Too bad Digital Foundry never tested that; they're using only 3200MHz on the X99 systems.


----------



## TheBloodEagle

I wish we moved on to Quad Data Rate system RAM already.


----------



## Asmodian

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I wish we moved on to Quad Data Rate system RAM already.


Latency.

We need very large, very fast L4 caches first.


----------



## MrKoala

I guess the takeaway of the story is that higher throughput does help atm.


----------



## TheBloodEagle

Quote:


> Originally Posted by *Asmodian*
> 
> Latency.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> We need very large very fast L4 caches first.


Wouldn't it be less latency, since more work is getting done per cycle? Doesn't GDDR5X (QDR) have less latency than GDDR5? If anything, QDR theoretically has half the latency of DDR, no?


----------



## MrKoala

No.

The time delay between the actual cell and the pin is over an order of magnitude higher and not directly related.

If you take, say, DDR4-2133 CL15 and modify the RAM controller so that half of the DDR signal is ignored, it becomes SDR (from the controller's perspective). But the CL is not doubled; it's still 15 cycles.


----------



## TheBloodEagle

Quote:


> Originally Posted by *MrKoala*
> 
> No.
> 
> The time delay between the actual cell and the pin is over an order of magnitude higher and not directly related.
> 
> If you take a say DDR4 2133 CL15 and modify the RAM controller so that half of the DDR signal is ignored, it becomes SDR. But the CL is not doubled, it's still 15 cycles.


I'm not understanding how QDR would be worse than DDR. The image shows 4 read & writes per cycle compared to 2 read & writes of DDR. That to me says it does more work in the same cycle than DDR even if the amount of cycles is the same. Can you explain it further?


----------



## EniGma1987

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I'm not understanding how QDR would be worse than DDR. The image shows 4 read & writes per cycle compared to 2 read & writes of DDR. That to me says it does more work in the same cycle than DDR even if the amount of cycles is the same. Can you explain it further?


He just did. You can get more work done, but it does not lower latency in the slightest; it only increases the total data moved.

And GDDR5 is already QDR; that wasn't an improvement brought in with GDDR5X.


----------



## Asmodian

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I'm not understanding how QDR would be worse than DDR. The image shows 4 read & writes per cycle compared to 2 read & writes of DDR. That to me says it does more work in the same cycle than DDR even if the amount of cycles is the same. Can you explain it further?


The memory chips themselves are basically the same as DDR or SDR; it is the bus that is different, so you need to "pack" and "unpack" the reads and writes at either end of it. There is more buffering at the ends of the bus as you push more data across it without increasing the clock speed.


----------



## sumitlian

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I'm not understanding how QDR would be worse than DDR. The image shows 4 read & writes per cycle compared to 2 read & writes of DDR. That to me says it does more work in the same cycle than DDR even if the amount of cycles is the same. Can you explain it further?


I think I see where you're confused: this is throughput vs. latency.

Say four guys run in parallel from point A toward point C at a fixed speed, but there is one *rule*: they must stop for 2 seconds at point B along the way. The total time from B to C is those 2 seconds plus the travel time from B to C. Now assume only one guy makes the same trip. Does he reach C any sooner? No, not at all.

The point is, whether it's four guys at a time or one, the journey takes the same time because of the *rule*, given their speed is constant. Replace the guys with data and you can see that you can process more data at a time, but you cannot reduce the *rule*'s delay even when you process less data.

So what is the *rule*, the actual delay, in DRAM? It is the _ready time_ of the DRAM cells: the time the cells need to prepare themselves and place the data on the CPU's data bus. When you increase the main clock frequency you shorten the interval between the data *leaving the DRAM* and *reaching the CPU's data bus*, but *you cannot reduce the number of clock cycles of 'ready time', which is limited by the technology and architecture the DRAM is built on.*

Therefore it doesn't matter whether you use SDR or QDR, or dual vs. quad channel on the CPU/memory-controller side; you are still limited by the DRAM ready time, which is quoted to us as the number of clock cycles it takes to process the data (CL, tRAS, etc.). Don't confuse nanoseconds with clock cycles: *the DRAM ready time is fixed in clock cycles.* So if you increase the main clock frequency, the absolute latency is still the clock period** multiplied by that fixed cycle count.

**the reciprocal of the clock frequency.

I am no expert; I'll always be happy to be corrected / beaten / slapped if I got something wrong.


----------



## MrKoala

A more direct analogy would be driving from A to B using either a small sedan or a bus. Under the same speed limit arrival time is the same, but the bus carries more people in each run.


----------



## sumitlian

Quote:


> Originally Posted by *MrKoala*
> 
> A more direct analogy would be driving from A to B using either a small sedan or a bus. Under the same speed limit arrival time is the same, but the bus carries more people in each run.


Agreed! This is actually a much better analogy than mine, lol.

I think I went the long way because I wanted to differentiate 'the DRAM-specific delay in clock cycles' from 'the delay in time units between two cycles (1 / clock frequency)'.


----------



## KarathKasun

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I wish we moved on to Quad Data Rate system RAM already.


Eh, as long as the bus is fast enough it doesn't matter. QDR requires much more expensive motherboards and would suffer from noise introduced at the contacts.

GDDR5 DIMMs were designed and implemented in R&D, but the speeds/latencies were not good enough to be competitive as system memory.


----------



## Asmodian

Quote:


> Originally Posted by *MrKoala*
> 
> A more direct analogy would be driving from A to B using either a small sedan or a bus. Under the same speed limit arrival time is the same, but the bus carries more people in each run.


To extend the analogy, the bus has to wait for everyone to get on before it can leave. You move a lot more people per unit time but the first person arrives at the destination later when using the bus.

Happily, you can also increase the speed (MHz) to let the bus get there even earlier than the slower sedan would have, but at a given speed the bus will always deliver its first passenger later than the sedan.


----------



## Excession

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I'm not understanding how QDR would be worse than DDR. The image shows 4 read & writes per cycle compared to 2 read & writes of DDR. That to me says it does more work in the same cycle than DDR even if the amount of cycles is the same. Can you explain it further?


You can get more done in the same time, yes. But that's a measure of _throughput_, not latency. Just because QDR can perform _four_ operations in one cycle doesn't mean it can do _one_ operation in a quarter-cycle. Latency is a measure of how fast you can access memory, not a measure of how quickly you can move data once you already have accessed it.
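The throughput/latency split can be put in numbers with a toy model. To be clear, this is purely illustrative, not real DRAM timing: the function names and figures are mine, and the only thing it captures is that first-word latency is set by the cycle count and clock, while peak throughput scales with transfers per clock.

```python
def first_word_latency_ns(cas_cycles, bus_clock_mhz):
    """Time until the FIRST word arrives: set by the cycle count, not the data rate."""
    return cas_cycles / bus_clock_mhz * 1000

def peak_throughput_gbs(bus_clock_mhz, transfers_per_clock, bus_width_bytes=8):
    """Peak GB/s once a burst is streaming: scales with transfers per clock."""
    return bus_clock_mhz * 1e6 * transfers_per_clock * bus_width_bytes / 1e9

# Same chip, same 15-cycle CAS, same 1000 MHz bus -- only the data rate differs.
for name, rate in [("SDR", 1), ("DDR", 2), ("QDR", 4)]:
    print(name,
          first_word_latency_ns(15, 1000), "ns,",
          peak_throughput_gbs(1000, rate), "GB/s")
```

Going SDR → DDR → QDR doubles and quadruples the GB/s figure while the nanoseconds-to-first-word stays identical, which is exactly the distinction being made above.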


----------



## FreeElectron

I can't take Fallout 4 game's engine seriously.


----------



## feathers632

So... Ryzen (Piezen) benefits from higher ram speed because the "fabric" linking the two quad core processors depends on ram frequency for its transfer rate.

Still no match for an 8700k at 5ghz plus though eh?


----------



## boredgunner

Quote:


> Originally Posted by *feathers632*
> 
> So... Ryzen (Piezen) benefits from higher ram speed because the "fabric" linking the two quad core processors depends on ram frequency for its transfer rate.
> 
> Still no match for an 8700k at 5ghz plus though eh?


Current Infinity Fabric speed (1/2 RAM frequency) is actually a problem for a lot of games, so yeah. For Ryzen to be a match for that, it would need not only a similar frequency, but Infinity Fabric speed would need to be 1:1 with RAM speed, it would need better compatibility with faster RAM, and preferably faster L3 cache as well (at that point it should be a match core for core).


----------



## feathers632

Quote:


> Originally Posted by *boredgunner*
> 
> Current Infinity Fabric speed (1/2 RAM frequency) is actually a problem for a lot of games, so yeah. For Ryzen to be a match for that, it would need not only a similar frequency, but Infinity Fabric speed would need to be 1:1 with RAM speed, it would need better compatibility with faster RAM, and preferably faster L3 cache as well (at that point it should be a match core for core).


A friend of mine has his 2666MHz RAM running at 4GHz with his Piezen 1700 at 3.9. I find it hard to believe and I'm gonna ask him to show me a screenshot.

He has the same Giga Gaming 5 mobo I have.

It wouldn't surprise me to learn that his RAM is running at default speed and he just thinks it's at 4000, but we will see.

I'm debating whether to go running back to Intel and beg forgiveness. Piezen isn't a bad CPU at all, but I run VR on an HTC Vive, so every last fps helps. From benchmarks I see Ryzen can match a 7700K (5GHz) in some games or surpass it by a few fps, but an 8700K can be 20fps ahead of a 7700K.

If I go 8700K I would delid and use my liquid cooling.


----------



## Robilar

If he has the timings loose enough, it's possible to hit that speed. I'm running a 4x8GB kit of DDR4-3200 at its default CL14 timings. If I loosen the timings significantly and raise the CL, I can get the RAM up to 4000MHz, but I'm not sure there is any real benefit. I prefer lower CL and tighter timings.


----------



## Chargeit

RAM propaganda. Want to waste a few hundred on an upgrade that doesn't matter? Get faster RAM. The only difference I've ever noticed moving from slower to faster RAM was in the odd benchmark. I think its biggest benefit shows up as higher fps during a loading screen.


----------



## Asmodian

Quote:


> Originally Posted by *Robilar*
> 
> If he has the timings loose enough, its possible to hit that speed. I'm running a 4x8GB kit of DDR4 3200hz with CL14 timings default. If I loosen the timings significantly and up the CL, I can get the RAM up to 4000hz but I'm not sure there is any real benefit. I prefer lower CL, tighter timings.


Remember latency is a combination of timings and frequency. We want low latency, not just low timings.

DDR4 3200 CL 14 has the same latency as DDR4 4000 CL 17.5, so if you can get DDR4 4000 at CL17 it has lower latency than DDR4 3200 CL14.
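A quick way to verify these figures (the helper name is mine, and this considers only CL, ignoring all the other timings):

```python
def cas_latency_ns(cl, mt_s):
    """Absolute CAS latency in ns. mt_s is the DDR transfer rate (e.g. 3200);
    the I/O clock is half of that, hence the factor of 2000 rather than 1000."""
    return cl / mt_s * 2000

print(cas_latency_ns(14, 3200))    # 8.75 ns
print(cas_latency_ns(17.5, 4000))  # 8.75 ns -- identical, as stated above
print(cas_latency_ns(17, 4000))    # 8.5 ns -- slightly lower
```

So DDR4-4000 CL17 really does have both lower CAS latency and more bandwidth than DDR4-3200 CL14.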


----------



## The Robot

Quote:


> Originally Posted by *Chargeit*
> 
> Ram propaganda. Want to waste a few hundred on an upgrade that doesn't matter, get faster ram. Only difference I've ever noticed moving from slower to faster ram was in the oddball benchmark. Think it's biggest benefit shows up in higher fps during a loading screen.


Yeah, 3200MHz is the sweet spot right now price/performance-wise for both AMD and Intel. Anything more is e-peen and diminishing-returns territory.


----------



## GoLDii3

Too bad where I live 16GB of DDR4 @ 4GHz is almost $300. Just lol.


----------



## michaelius

Quote:


> Originally Posted by *feathers632*
> 
> So... Ryzen (Piezen) benefits from higher ram speed because the "fabric" linking the two quad core processors depends on ram frequency for its transfer rate.
> 
> Still no match for an 8700k at 5ghz plus though eh?


Of course not - Intel benefits from ram speed as much or more

https://www.purepc.pl/procesory/test_procesora_intel_core_i5_8600k_rzeznik_zwany_coffee_lake?page=0,45


----------



## czin125

Would be nice if they listed the NB clock for those too.


----------



## feathers632

The fps increase has nothing to do with the RAM performance itself, just the Infinity Fabric being tied to the RAM clock.


----------



## inedenimadam

FO4 has been known since the days after launch to get a solid boost from RAM overclocking.


----------



## Desolutional

Quote:


> Originally Posted by *inedenimadam*
> 
> FO4 has been known since the days after launch to get a solid boost from RAM overclocking.


That's because Fallout 4 uses the equivalent of an old man with a walking stick game engine.


----------



## Infrasonic

Quote:


> Originally Posted by *inedenimadam*
> 
> FO4 has been known since the days after launch to get a solid boost from RAM overclocking.


ANY CPU-bound game engine/software will benefit from faster RAM clocks.

Not just Fallout 4.


----------



## TFL Replica

Quote:


> Originally Posted by *Desolutional*
> 
> That's because Fallout 4 uses the equivalent of an old man with a walking stick game engine.


An old man with a smartphone. Absolutely clueless, but thinks he's "up to date".


----------



## inedenimadam

Quote:


> Originally Posted by *Desolutional*
> 
> Quote:
> 
> 
> 
> Originally Posted by *inedenimadam*
> 
> FO4 has been known since the days after launch to get a solid boost from RAM overclocking.
> 
> 
> 
> That's because Fallout 4 uses the equivalent of an old man with a walking stick game engine.
Click to expand...

No argument from me! Modified Gamebryo engine, right? Morrowind-era technology.









Quote:


> Originally Posted by *Infrasonic*
> 
> Quote:
> 
> 
> 
> Originally Posted by *inedenimadam*
> 
> FO4 has been known since the days after launch to get a solid boost from RAM overclocking.
> 
> 
> 
> ANY CPU-bound game engine/software will benefit from faster RAM clocks.
> 
> Not just Fallout 4.
Click to expand...

Yep, dynamic shadows are generated on the CPU for FO4, so it really isn't a surprise.


----------



## Jared Pace

Quote:


> Originally Posted by *michaelius*
> 
> Of course not - Intel benefits from ram speed as much or more
> 
> https://www.purepc.pl/procesory/test_procesora_intel_core_i5_8600k_rzeznik_zwany_coffee_lake?page=0,45




wow amazing

That review shows huge gains going from 2133mhz to 3200mhz. Wonder if going from 4200mhz -> 4600mhz will show double of those



http://cdn.wccftech.com/wp-content/uploads/2017/10/04.z370_spec_sheet_eng-1030x426.png


----------



## BoredErica

Quote:


> Originally Posted by *Jared Pace*
> 
> wow amazing
> 
> That review shows huge gains going from 2133mhz to 3200mhz. Wonder if going from 4200mhz -> 4600mhz will show double of those
> 
> 
> 
> http://cdn.wccftech.com/wp-content/uploads/2017/10/04.z370_spec_sheet_eng-1030x426.png


Why expect an extra 400MHz to matter much? Compare 3200/2133 vs 4600/4200, or 4600C19 vs 3200C15.


----------



## Jared Pace

oops i meant 3200 to 4600


----------



## StrongForce

Quote:


> Originally Posted by *Infrasonic*
> 
> ANY CPU-bound game engine/software will benefit from faster RAM clocks.
> 
> Not just Fallout 4.


I would say it would be useful with any engine in CPU-intensive situations too; a good example would be that scene in Crysis 3 with the tower. If you take into account that it would help in any CPU-intensive situation, it really makes sense to go for higher-speed RAM, not just for "sub-par" engines lol


----------



## Neo_Morpheus

The problem with increasing fps in games via fast RAM is the increase in thermals from the graphics card. With a CPU from the future, you would have to turn the cards right down.


----------



## czin125

You should be able to OC the 4600C19 1.50V kits to 4500 16-16-16-36 535 at 1.50V (the latter was demonstrated at Computex 2016 with a 6700K at stock, with 3x120mm cooling, on a test bench). Considering Coffee Lake is Skylake's third generation, it should be even easier to run if paired with a 2-DIMM board.

4500C16 should edge out 4000C14 by a bit.


----------



## Malinkadink

Quote:


> Originally Posted by *czin125*
> 
> You should be able to OC the 4600C19 1.50v kits to 4500 16-16-16-36 535 1.50v ( the latter was demonstrated at computex 2016 with a 6700K with 3x120mm cooling at stock on a test bench ). Considering Coffeelake is Skylake's third generation, it should be even easier to run if paired with a 2dimm board.
> 
> 4500C16 should edge out 4000C14 by a bit.


Other way around, 4000CL14 will edge out 4500CL16

14/4000*2000 = 7ns
16/4500*2000 = 7.11ns


----------



## Excession

Quote:


> Originally Posted by *Malinkadink*
> 
> Other way around, 4000CL14 will edge out 4500CL16
> 
> 14/4000*2000 = 7ns
> 16/4500*2000 = 7.11ns


That's a 1.5% increase in latency, yes, but you get 12.5% better bandwidth. I can't imagine that being a loss in real-world performance.
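The tradeoff is easy to check with two lines of arithmetic (same CL-over-transfer-rate formula used in the post above; no other assumptions):

```python
lat_4000c14 = 14 / 4000 * 2000   # ns, first-word CAS latency
lat_4500c16 = 16 / 4500 * 2000   # ns

latency_penalty = lat_4500c16 / lat_4000c14 - 1   # small, about 1.6%
bandwidth_gain = 4500 / 4000 - 1                  # 12.5%

print(f"{latency_penalty:.1%} more latency for {bandwidth_gain:.1%} more bandwidth")
```

A 12.5% bandwidth gain for a low-single-digit latency penalty is why the 4500C16 kit is usually the better pick outside of purely latency-bound workloads.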


----------



## czin125

But you do get something like 32/4.5 and 39/4.5 for 4500C16. For 4000C14, you'd have 28/4.0 and 35/4.0.
(32 + 5) / 4.5 = 8.22 (6th) — starts getting lower than 4000C14
(28 + 5) / 4.0 = 8.25 (6th)
39 / 4.5 = 8.667
35 / 4.0 = 8.750


----------



## Offler

Quote:


> Originally Posted by *Desolutional*
> 
> That's because Fallout 4 uses the equivalent of an old man with a walking stick game engine.


Quote:


> Originally Posted by *TFL Replica*
> 
> An old man with a smartphone. Absolutely clueless, but thinks he's "up to date".


It's not just because of the old engine. Anyway, effects like the ones mentioned in the first post can be measured with LinX or other tests designed to stress RAM and CPU.

For example, my RAM at 1600 CAS 6-6-6-18 vs 1600 CAS 9-9-9-27 made a difference of over 10% in GFLOPS, i.e. CPU performance.

If you want 10% more FPS in a situation where every part of the system is loaded at the same rate, you should OC everything by 10%: CPU, RAM, GPU... If it's achieved by OCing only one of the components, that usually shows the component was bottlenecking.

That's why I kept telling people doing either gaming FPS tests or CPU performance tests to always include their RAM settings, both frequency and latency. In most cases they left everything on "auto", and the results they gathered were much worse than what I achieved with some memory tuning.

If you really want to make a fake review claiming some CPU has more or less performance, the easiest thing is to tamper with the memory settings and then not mention anything about them.


----------



## VeritronX

With my old i7 4790K at stock I got a 9-10% boost in CPU performance in anything I cared to test by taking the RAM from 1600C9 to 2400C10. And with this new Ryzen 8-core I get a 30% boost in the Firestrike combined score going from 2400C15 to 3466C15. Both systems had 2x8GB kits of RAM; I never really used more than about 12GB of it for anything other than stability testing.


----------



## Infrasonic

Quote:


> Originally Posted by *StrongForce*
> 
> I would say it would be useful with any engine in CPU-intensive situations too, a good example would be that scene in Crysis 3 with the tower, but if you take into account it would help in any CPU intensive situation it really make sense to go for higher speed RAM, not just for "sub-par" engines lol


Thanks. That's... exactly what I said, though with fewer words.


----------



## diggiddi

Quote:


> Originally Posted by *Chargeit*
> 
> Ram propaganda. Want to waste a few hundred on an upgrade that doesn't matter, get faster ram. Only difference I've ever noticed moving from slower to faster ram was in the oddball benchmark. Think it's biggest benefit shows up in higher fps during a loading screen.


IIRC, in BF4 with Mantle, a stock FX 8350 with 2400MHz G.Skill Snipers was way smoother than the same CPU OC'd to 4.8GHz with 1600MHz RAM.
The only problem was being unable to record fps with Mantle running.


----------



## prjindigo

Quote:


> Originally Posted by *EniGma1987*
> 
> Normally I dont like to indulge persons such as yourself, but perhaps you should learn to choose your words more carefully? What you explained still doesn't result in what you actually said originally. Your further explanation also doesn't add up logically if you really think about it.


I do not think you understand reality at all.

Any devices on the chipset take priority over hard-drive traffic, and since the drive is x4 and the chipset link is x4, moving the mouse ALSO has to go through that x4.


----------



## TheBloodEagle

Quote:


> Originally Posted by *Chargeit*
> 
> Ram propaganda. Want to waste a few hundred on an upgrade that doesn't matter, get faster ram. Only difference I've ever noticed moving from slower to faster ram was in the oddball benchmark. Think it's biggest benefit shows up in higher fps during a loading screen.


What a hypocrite. Your main rig has 3600MHz RAM. If RAM speed doesn't matter, why the hell did you bother? Oh, and speaking of "an upgrade that doesn't matter", I'm sure the RGB LEDs boosted your FPS. Another fake enthusiast who thinks he knows best, saying one thing and doing another, pretending to have a clue.

http://www.overclock.net/lists/display/view_item/id/6769857


----------



## TheBloodEagle

Quote:


> Originally Posted by *Excession*
> 
> You can get more done in the same time, yes. But that's a measure of _throughput_, not latency. Just because QDR can perform _four_ operations in one cycle doesn't mean it can do _one_ operation in a quarter-cycle. Latency is a measure of how fast you can access memory, not a measure of how quickly you can move data once you already have accessed it.


Quote:


> Originally Posted by *sumitlian*
> 
> post


I appreciate the responses from everyone.

But don't you still have to consider the latency of the system as a whole, in an Amdahl's-law kind of way? QDR can read and write _at the same time_, in the same cycle, while DDR can only read *or* write. So although the latency within the chip hasn't changed electrically, the overall time for the entire sequence of events is probably shorter, no? Regardless of throughput. DDR5 seems to be focused solely on throughput and density; that's already being "fixed", and it's an enterprise want/need rather than ours as gamers and enthusiasts. So what are WE getting with DDR5? The CPU already waits tens to hundreds of cycles on RAM, and I'm not sure how long a GPU waits after that, instruction- and swapping-wise, but isn't there a big benefit in doing two things at once rather than waiting for one operation to complete first? Isn't there less "overhead", and isn't that part of the latency as a whole? I guess what I'm struggling with is the timing of actions, the sequence, and whether the whole thing is faster. I mean, would we LOSE anything with QDR? It seems like a gain overall.

I think this is called *"round trip"* latency.


----------



## spin5000

Quote:


> Originally Posted by *VeritronX*
> 
> .. And with this new ryzen 8 core I get a 30% boost in the firestrike combined score going from 2400C15 to 3466C15.


Ya, but those are terrible 2400 timings; also, you're testing almost 50% higher frequency without raising the timings, so obviously that combo will win by miles.

Test 2400MHz @ 11-11-11-24, or at least @ 11-11-11-27.


----------



## Blameless

Quote:


> Originally Posted by *TFL Replica*
> 
> An old man with a smartphone. Absolutely clueless, but thinks he's "up to date".


I'm a young man who is confounded by touch screens and stopped using my phone entirely when my wife upgraded it to one that didn't have a full keyboard.

What game engine am I?
Quote:


> Originally Posted by *Excession*
> 
> That's a 1.5% increase in latency, yes, but you get 12.5% better bandwidth. I can't imagine that being a loss in real-world performance.


It won't be, except in the very rare, completely latency limited, task.
Quote:


> Originally Posted by *Offler*
> 
> For example my ram 1600 CAS 6-6-6-18 vs 1600 cas 9-9-9-27 was difference over 10% in Gflops ~ CPU performance.


LINPACK is exceptionally memory performance limited.
Quote:


> Originally Posted by *spin5000*
> 
> Ya but those are terrible 2400 timings, also, you're testing almost 50% higher frequency without raising the timings so obviously that combo will win by miles.
> 
> Test 2400 MHz @ 11-11-11-24 or at least @ 11-11-11-27.


Not all memory will do 2400 11-11-11, though a fair bit will.

Almost everything will do 2400 CL12 or 13.


----------



## Offler

Quote:


> Originally Posted by *Blameless*
> 
> LINPACK is exceptionally memory performance limited.


Yes. But the rendering process is partially based on creating wireframe data and storing it in RAM, where it will be picked up by the graphics card.

I used problem size 9992 (700MB). Knowing how much data the 3D engine generates might let you simulate the exact problem size with Linpack, and that way you can measure how CPU- or RAM-dependent the CPU part of the render process is.


----------



## MrKoala

Quote:


> Originally Posted by *TheBloodEagle*
> 
> In my mind, if we talk about analogies, it's more like instead of having more than one person or multiple vehicles like mentioned it's more like DDR is a one-armed guy (gimped), while QDR is a guy that has two arms. The device, say "drums" is still the same, but the "musician" has way more finesse now and can handle multiple functions with two arms, rather than one. It's not "more people" per se, but the person can handle more, quicker, together, in sync.


You're still talking about throughput. Using your analogy, a pure latency-limited situation is when the musician has two limbs but there's only one drum to hit during that song.

Pure latency-limited workloads are very rare in real life work, but as a performance metric latency means what it means.


----------



## profundido

Quote:


> Originally Posted by *KarathKasun*
> 
> Increased memory speeds at relatively tight timings will net a performance benefit when any single core is pushed up to maximum load. It reduces the latency between requesting data and receiving data which can impact maximum per-core throughput.


Bullseye! Therefore people shouldn't be comparing memory modules by their speed in MHz as a rough indicator of performance, but rather by a relative factor such as speed/CL timing. Anything above 220 is fast! For instance, G.Skill 3600MHz / CL15 = 240.

So many people in the first 25 pages of this thread tested two different modules with a large difference in speed but roughly the same factor, and hence saw no difference...
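The factor is just the arithmetic from the post above; as a minimal sketch (the function name is mine). Note it's simply 2000 divided by the CAS latency in nanoseconds, which is why kits with equal factors show no measurable difference in latency-bound tests:

```python
def speed_cl_factor(mt_s, cl):
    """profundido's rough figure of merit: DDR transfer rate (MT/s) divided by CL.
    Equal to 2000 / (CAS latency in ns), so equal factors mean equal latency."""
    return mt_s / cl

print(speed_cl_factor(3600, 15))  # 240.0 -- "fast" by the >220 rule of thumb
print(speed_cl_factor(3200, 14))  # ~228.6
print(speed_cl_factor(2400, 15))  # 160.0 -- a loose JEDEC-style kit, well below 220
```

Like any single-number metric, it ignores bandwidth and the secondary timings, so treat it as the rough indicator it is.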


----------



## profundido

Quote:


> Originally Posted by *Lays*
> 
> Here's a test I ran a long time ago on my 4790k with 2800 CL9-12-12 vs 6700k with~4100 mhz 12-11-11


That is madness. I didn't even think these timings were possible at that speed. What exact memory modules did you use to achieve this 4100MHz CL12 result, if I may ask?


----------



## czin125

Quote:


> Originally Posted by *Blameless*
> 
> Not all memory will do 2400 11-11-11, though a fair bit will.
> 
> Almost everything will do 2400 CL12 or 13.


2400C11 is only 218. It should be even more doable than 3200C14 ( 228 at higher frequency )


----------



## Chargeit

Quote:


> Originally Posted by *TheBloodEagle*
> 
> What a hypocrite. Your main rig has 3600MHz RAM. If RAM speed doesn't matter why then hell did you bother then? Oh, and speaking of "an upgrade that doesn't matter", I'm sure the RGB LEDs boosted your FPS. Another fake enthusiast thinking he knows best, saying one thing, doing another, trying to pretend they have a clue.
> 
> http://www.overclock.net/lists/display/view_item/id/6769857


I have the RAM I wanted in my system. That doesn't mean I'd recommend fast/expensive RAM to someone wanting real-world performance gains. Do as I say, not as I do, and you'll be alright.


----------



## PontiacGTX

Quote:


> Originally Posted by *profundido*
> 
> Quote:
> 
> 
> 
> 
> 
> 
> 
> That is madness. I didn't even think these timings were possible at that speed. What exact memory modules did you use to achieve this 4100Mhz CL12 result if I may ask ?
Click to expand...

maybe F4-3600C16Q-32GTZR
CMD16GX4M4B3666C18


----------



## Blameless

Quote:


> Originally Posted by *czin125*
> 
> 2400C11 is only 218. It should be even more doable than 3200C14 ( 228 at higher frequency )


What clocks and timings ICs are capable of often doesn't follow any such relationship.


----------



## DR4G00N

Quote:


> Originally Posted by *profundido*
> 
> That is madness. I didn't even think these timings were possible at that speed. What exact memory modules did you use to achieve this 4100Mhz CL12 result if I may ask ?


Likely some sort of G.SKILL 16GTZ (the regular stuff, not LED) or GALAX HOF with Samsung B-die.

You have to bin kits for something that good, of course, and probably run around 1.9V VDIMM.


----------



## Asus11

so which is faster?

F4-3600C17D-32GTZR

or

F4-3200C14D-32GTZR

or do they work out to be about the same?


----------



## Seyumi

Quote:


> Originally Posted by *Asus11*
> 
> so which is faster?
> 
> F4-3600C17D-32GTZR
> 
> or
> 
> F4-3200C14D-32GTZR
> 
> or do they work out to be about the same?


It's been proven multiple times that higher speed outweighs the lower CAS latency of slower kits.

So yes, 3600 CAS 17 will be faster than 3200 CAS 14.


----------



## NorcalTRD

Quote:


> Originally Posted by *Seyumi*
> 
> It's been proven multiple times that higher speed outweights lower CAS latency on slower speeds.
> 
> So yes 3600 Cas 17 will be faster than 3200 Cas 14


What about 3200 Cas 14 vs 4266 cas 19


----------



## Seyumi

Quote:


> Originally Posted by *NorcalTRD*
> 
> What about 3200 Cas 14 vs 4266 cas 19


Yes, the 4266 C19 will beat the 3200 C14. There's one major issue though:

If you use a fairly overclocked CPU, you become limited in your RAM speeds. For example, my 5.2GHz 7700K can only handle 3600 RAM; anything higher leads to instability and blue screens. Obviously you would want the higher-clocked CPU over the faster RAM. Of course this is just the XMP profile; I'm sure with WEEKS of manual tweaking and testing I could maybe run something higher than 3600, but I couldn't be bothered. This is why I roll my eyes every time G.Skill comes out with higher-speed RAM. I'm pretty sure you can only run those speeds on a STOCK CPU with no overclock at all. And yes, I've experienced this on multiple platforms: Z77, Z87, Z170, X99, etc.


----------



## NorcalTRD

Quote:


> Originally Posted by *Seyumi*
> 
> Yes the 4266 C19 will beat the 3200 C14. There's 1 major issue though:
> 
> If you use a fairly overclocked CPU, you become limited with your ram speeds. For example, my 5.2ghz 7700k can only handle 3600 ram and anything higher leads to instability & blue screens. Obviously you would want to have a higher clocked CPU versus RAM. Of course this is just XMP profile, I'm sure with WEEKS of manual tweaking and testing I could maybe run something higher than 3600 but I couldn't be bothered. This is why I roll my eyes everytime G Skill comes out with higher speed ram. I'm pretty sure you can only run those speeds on a STOCK cpu and no overclock at all. And yes I've experienced this on multiple systems such as Z77, Z87, Z170, X99, etc.


Glad you mentioned that; it's the same reason I went with the 3200 CAS 14 over the 4266 CAS 19.
I intend to delid and overclock my 8700K to 5.0GHz+.


----------



## Asus11

Quote:


> Originally Posted by *Seyumi*
> 
> Yes the 4266 C19 will beat the 3200 C14. There's 1 major issue though:
> 
> If you use a fairly overclocked CPU, you become limited with your ram speeds. For example, my 5.2ghz 7700k can only handle 3600 ram and anything higher leads to instability & blue screens. Obviously you would want to have a higher clocked CPU versus RAM. Of course this is just XMP profile, I'm sure with WEEKS of manual tweaking and testing I could maybe run something higher than 3600 but I couldn't be bothered. This is why I roll my eyes everytime G Skill comes out with higher speed ram. I'm pretty sure you can only run those speeds on a STOCK cpu and no overclock at all. And yes I've experienced this on multiple systems such as Z77, Z87, Z170, X99, etc.


Maybe a better motherboard could help; I know the Impact is one of the best boards for accepting high memory speeds, and ASRock do a board as well.


----------



## BoredErica

If RAM speeds are everything, the Z370 Strix-I has the highest rated RAM overclocking according to Asus.


----------



## ZealotKi11er

Going from 1600 CL9 to 2400 CL10 transformed my 3770K.


----------



## TUFinside

Sadly, there is no way I can push my 2400MHz to 3000, not even 2800. All RAM slots are occupied, so there's too much strain. I'm running my 32GB at 2400 15-15-15-28.


----------



## The Robot

3200MHz CL14 is still the sweet spot.
https://www.techpowerup.com/reviews/Intel/Core_i7_8700K_Coffee_Lake_Memory_Performance_Benchmark_Analysis/


----------



## PontiacGTX

Quote:


> Originally Posted by *The Robot*
> 
> 3200MHz CL14 is still the sweet spot.
> https://www.techpowerup.com/reviews/Intel/Core_i7_8700K_Coffee_Lake_Memory_Performance_Benchmark_Analysis/


yes, performance gain beyond 3466 seems minimal


----------



## ZealotKi11er

Quote:


> Originally Posted by *PontiacGTX*
> 
> yes, performance gain beyond 3466 seems minimal


You have to test in CPU-limited games and locations. Just doing average runs will not show memory gains.


----------



## PontiacGTX

Quote:


> Originally Posted by *ZealotKi11er*
> 
> You have to test in CPU limiting games and places. Just doing average runs will not show memory gains.


Well yeah, but in most cases people who own an RX 580 or slower might not see a performance gain unless the game itself is CPU-limited.


----------



## ZealotKi11er

Quote:


> Originally Posted by *PontiacGTX*
> 
> well yeah but in most of cases people who owns a RX 580 or slower might not see a performance gain unless the game itself is cpu limited


Yes. If budget is a thing, then you do not look into getting the very best memory; 4266MHz is not that much better than 3200.


----------



## Asus11

Just purchased 32GB (2 x 16GB sticks) of 3200 C14 Samsung B-die, double-sided. Hoping they overclock as well as the 8GB sticks; couldn't find much info online about the 16GB sticks. Hoping for 3600 C16 or even 3866 lol.


----------



## ht_addict

Quote:


> Originally Posted by *Asus11*
> 
> just purchased some 32gb 16gb x 2 sticks 3200 c14 samsung b die double sided.. hoping they overclock as good as the 8gb sticks couldn't find much info online about 16gb sticks.. hoping for 3600 c16 or even 3866 lol.


I had Corsair Vengeance 2x16GB 3200 which were B-die. Not very good at overclocking. Now have G.Skill 2x8GB RGB 3600 (B-die). Haven't tried overclocking yet, but did tighten the timings a bit (16-16-16-16-36 to 14-15-15-15-36).


----------



## abso

Quote:


> Originally Posted by *ht_addict*
> 
> I had Corsair Vengeance 2x16GB 3200 which were B-die. Not very good at overclocking. Now have G.Skill 2x8GB RGB 3600 (B-die). Haven't tried overclocking but did tighten the timings a bit (16-16-16-16-36 to 14-15-15-15-36).


How do I check if I have B-die chips on my Corsair Vengeance? I have 2x8GB 3000 and AIDA tells me the chips are from Samsung. I can't get my system to boot past 3100MHz, but I think it is my Asus Z170 Pro Gaming board's fault. This board seems to have problems with >=3200MHz and Skylake. Now I decided to try and lower timings instead of overclocking. Just testing 14-14-14-29-2T (default 15-17-17-35). Is HCI Memtest still the best option to test when lowering timings?


----------



## Shiftstealth

Quote:


> Originally Posted by *abso*
> 
> How do I check if I have B-die chips on my Corsair Vengeance? I have 2x8GB 3000 and AIDA tells me the chips are from Samsung. I can't get my system to boot past 3100MHz, but I think it is my Asus Z170 Pro Gaming board's fault. This board seems to have problems with >=3200MHz and Skylake. Now I decided to try and lower timings instead of overclocking. Just testing 14-14-14-29-2T (default 15-17-17-35). Is HCI Memtest still the best option to test when lowering timings?


You probably have Samsung E-die. Try running Thaiphoon Burner.


----------



## abso

Quote:


> Originally Posted by *Shiftstealth*
> 
> You probably have Samsung E-Die. Try running thaiphoon burner.


Thanks, your guess was right, it is E-die/20nm. I just passed 500% HCI Memtest at 14-14-14-29-2T/3000MHz/1.36V with them. So it must really be the Asus board that won't go past 3100MHz no matter what the timings.


----------



## Shiftstealth

Quote:


> Originally Posted by *abso*
> 
> Thanks, your guess was right, it is E-die/20nm. I just passed 500% HCI Memtest at 14-14-14-29-2T/3000MHz/1.36V with them. So it must really be the Asus board that won't go past 3100MHz no matter what the timings.


Looking at your board's QVL, it is really scarce. There's no B-die listed at all at 3200CL14 for your board. Some speeds above 3000 only run on a single stick, others only with 2x4GB. That being said, I'm not sure the board is 100% to blame; the kit isn't of the best quality either, E-die is pretty basic stuff. Probably a little bit of column A, a little bit of column B.


----------



## Asus11

Quote:


> Originally Posted by *ht_addict*
> 
> I had Corsair Vengeance 2x16GB 3200 which were B-die. Not very good at overclocking. Now have G.Skill 2x8GB RGB 3600 (B-die). Haven't tried overclocking but did tighten the timings a bit (16-16-16-16-36 to 14-15-15-15-36).


C14 or C16? Apparently the C16 is an older iteration and the C14 sticks are meant to clock better... well, that's what I've heard.
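One way to compare kits like these is first-word latency, which is easy to work out yourself: CL cycles at the real memory clock (half the effective transfer rate). A quick Python sketch with the speeds being discussed here:

```python
# First-word latency in ns: CAS latency cycles at the real memory clock,
# where the real clock is half the effective DDR transfer rate (MT/s).

def first_word_latency_ns(mt_s, cl):
    return cl / (mt_s / 2) * 1000

for mt_s, cl in [(3200, 14), (3600, 16), (3866, 18), (3600, 14)]:
    print(f"DDR4-{mt_s} CL{cl}: {first_word_latency_ns(mt_s, cl):.2f} ns")
```

By this math 3600 C16 (~8.89 ns) is roughly the same true latency as 3200 C14 (8.75 ns) but with more bandwidth, and a kit that reaches 3600 C14 (~7.78 ns) would be a genuine win on both fronts.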


----------



## nrpeyton

Hi guys,

I've been doing some tinkering (and research) to try and tighten up my *secondary* *memory timings* and so far here are the results:

*BEFORE* 3600MHz (14-15-15-32, 1T @ 1.475V), with secondary timings as attached. Results (average of two tests):

Read: 49,836 MB/s
Write: 52,395 MB/s
Copy: 43,156 MB/s

*AFTER* (with secondary timings changed as attached). Results (average of two tests):

Read: 53,405 MB/s
Write: 55,343 MB/s
Copy: 44,114 MB/s

Anyone got any advice re: the secondary timings I've changed (in the bottom table)? Is there anything I've miscalculated that could cause a problem, or is there anything you think I could safely tighten up some more?

Thanks.
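For reference, here's a quick Python check of what those before/after read/write/copy numbers work out to in percentage terms:

```python
# Percentage gains from the before/after bandwidth averages posted above (MB/s)
before = {"read": 49836, "write": 52395, "copy": 43156}
after  = {"read": 53405, "write": 55343, "copy": 44114}

for key in before:
    gain = (after[key] / before[key] - 1) * 100
    print(f"{key}: +{gain:.1f}%")
```

So roughly +7% read, +6% write, and +2% copy just from the secondary timings - not bad for free.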


----------



## Shiftstealth

Quote:


> Originally Posted by *Asus11*
> 
> c14 or c16? apparently the c16 is an older iteration and the c14 are meant to clock better...well that's what I've heard


My B-die 3200CL14 without much tinkering went to 3466CL14. I pushed it to 3555CL14 but didn't stress it too hard. Only 1.4V. Will try 3600CL14 when I get some more time.

PS: On Ryzen it only went to 2933CL16 - Sigh.


----------



## Asus11

Quote:


> Originally Posted by *Shiftstealth*
> 
> My B-die 3200CL14 without much tinkering went to 3466CL14. I pushed it to 3555CL14 but didn't stress it too hard. Only 1.4V. Will try 3600CL14 when I get some more time.
> 
> PS: On Ryzen it only went to 2933CL16 - Sigh.


my set came yesterday.. can't test right now.. need to install Windows Pro/Ultimate, stuck on a version limited to 16GB atm lol.


----------



## TheBloodEagle

Still bothers me so much that no 3200MHz ECC kit exists even though that's the current JEDEC spec.


----------



## Evanlet

I can believe it. Going from my 3770K @ 4.4GHz with 1600MHz DDR3 to an 8700K @ 4.9GHz with 3200MHz DDR4, the difference in WoW in Dalaran's busiest parts went from 30fps to well over 150fps average on a 1080 at 4K.

The boost from the extra cores and the improved IPC of Coffee Lake over Ivy Bridge would be the biggest contributor of course, but I doubt that alone was adding 120+ fps.


----------



## MrKoala

Quote:


> Originally Posted by *Evanlet*
> 
> I can believe it. Going from my 3770K @ 4.4GHz with 1600MHz DDR3 to an 8700K @ 4.9GHz with 3200MHz DDR4, the difference in WoW in Dalaran's busiest parts went from 30fps to well over 150fps average on a 1080 at 4K.
> 
> The boost from the extra cores and the improved IPC of Coffee Lake over Ivy Bridge would be the biggest contributor of course, but I doubt that alone was adding 120+ fps.


It totally can add 120fps+.

A game is much more than a graphics tool, and the CPU load from its other subsystems doesn't scale with frame rate. A dramatic increase in frame rate from a limited CPU performance boost means your older CPU was running out of breathing room: non-graphical load was taking most of the available CPU time, leaving the graphics subsystem very little. In that case even a small boost in total CPU throughput is a big deal for frame rate.


----------



## Targonis

*RAM update for Ryzen*

I know this thread is a bit old at this point, but BIOS updates are allowing a lot of progress when it comes to RAM support for Ryzen. I use the Asus ROG Crosshair VI Hero, so I have gotten newer BIOS versions than what is on the site, and even Hynix M-die can hit 3200 speeds at this point with the 3101 and 3501 BIOS versions. People are reporting hitting 3600 and over in a number of situations as well. The results from benchmarks and games are significant and should not be discounted. Faster RAM speeds that might result in a 5 percent improvement on Intel can result in a 10-15 percent improvement on Ryzen.

DDR4-2133 or 2400 speeds should be seen as intentionally hurting the performance of Ryzen-based systems at this point.
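As I understand it, the reason memory speed matters more on Ryzen is that the Infinity Fabric clock is tied to the memory clock (half the effective transfer rate), so faster RAM also speeds up CCX-to-CCX traffic, not just memory access. A quick sketch of the math:

```python
# On first/second-gen Ryzen the Infinity Fabric runs at the real memory
# clock, which is half the effective DDR transfer rate (MT/s).

def fabric_clock_mhz(ddr_mt_s):
    return ddr_mt_s / 2

for speed in (2133, 2400, 3200, 3600):
    print(f"DDR4-{speed}: fabric clock {fabric_clock_mhz(speed):.0f} MHz")
```

Which is why DDR4-2133 doesn't just cost bandwidth on Ryzen - it also drags the fabric down to ~1066MHz versus 1600MHz at DDR4-3200.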


----------



## Pavulonix

Targonis said:


> Faster RAM speeds that might result in a 5 percent improvement on Intel can result in a 10-15 percent improvement on Ryzen.
> 
> DDR4-2133 or 2400 speeds should be seen as intentionally hurting the performance of Ryzen-based systems at this point.


Actually Intel CPUs also see huge gains from faster RAM in new games (up to +30-40%):
https://www.purepc.pl/pamieci_ram/test_pamieci_ddr4_2133_3600_mhz_na_intel_core_i5_8600k?page=0,3


----------



## feathers632

What are the highest ram clocks (sustainable) with Ryzen 2700?


----------

