# Ok... So who needs a 50-66TB server??



## spinFX

Looks all good. What's that going to be used for? Home lab + file server?


----------



## fg2chase

Plex mostly


----------



## parityboy

*@OP*

Does the 750D come with all of those drive cages or did you have to order extras?


----------



## fg2chase

Quote:


> Originally Posted by *parityboy*
> 
> *@OP*
> 
> Does the 750D come with all of those drive cages or did you have to order extras?


I had to order them from Corsair.


----------



## Liranan

Very nice setup. Hopefully this will serve you for years to come. My server is frustrating me; so far I've had to deal with several broken drives and am about to RMA yet another. Fortunately I haven't lost any data this time, but it's still annoying.

What is your average hard disk replacement rate?


----------



## parityboy

*@Liranan*

Are you just unlucky or do you think there might be a fundamental underlying issue? Vibration? Bad power?


----------



## PCSarge

Quote:


> Originally Posted by *parityboy*
> 
> *@Liranan*
> 
> Are you just unlucky or do you think there might be a fundamental underlying issue? Vibration? Bad power?


It may not be bad power as such, but a lack of amps for the drive motors.


----------



## parityboy

Quote:


> Originally Posted by *PCSarge*
> 
> may not be bad power, but lack of amps for the motors.


Could be, but surely that would simply cause the drives to drop off? It shouldn't physically damage the drive, should it?


----------



## fg2chase

Got the two step sisters here. Lol




Took the old one offline today as well.... years and years of faithful service.


----------



## Liranan

Quote:


> Originally Posted by *parityboy*
> 
> *@Liranan*
> 
> Are you just unlucky or do you think there might be a fundamental underlying issue? Vibration? Bad power?


The authorised Toshiba reseller I usually frequent didn't have any drives in stock at the time so I bought a drive from a less reputable seller and now I'm paying the price for it.


----------



## Liranan

Quote:


> Originally Posted by *fg2chase*
> 
> 18 ST3000DM008 in sort of a raid with parity using windows storage spaces (it works, been using it for years)


From what I understand, Storage Spaces takes up two thirds of the available drive space. That leaves 6 drives for data in an 18-drive array. Am I right in understanding this?


----------



## fg2chase

Actually I changed it from parity to a two-way mirror. It does not use 66% of the pool lol.
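For anyone else sizing a pool, the rough usable-capacity math looks like this (a sketch only; the parity column count below is an assumption, since Storage Spaces picks it based on the drive count, and real overhead also depends on slab allocation):

```python
# Back-of-the-envelope usable capacity for an 18-drive pool of 3TB disks.

drives = 18
size_tb = 3
raw_tb = drives * size_tb                     # 54 TB raw

# Two-way mirror: every byte is stored twice, so 50% efficiency.
mirror_tb = raw_tb / 2                        # 27 TB usable

# Single parity with 8 columns (assumed): 7 data + 1 parity per stripe.
columns = 8
parity_tb = raw_tb * (columns - 1) / columns  # 47.25 TB usable

print(f"raw={raw_tb} TB  mirror={mirror_tb} TB  parity~={parity_tb} TB")
```

Either way, single parity is nowhere near a two-thirds loss; the two-way mirror is the layout that halves the pool.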


----------



## fg2chase

Yeah, so the 1709 Windows update totally broke my server; I was up all night doing a reformat and starting over.

Luckily my 3x 8TB drives came to the rescue.


----------



## fg2chase

Did some important upgrades to these cards. They were getting up to 140F according to my temp gun, now they are showing 95F!!!!


----------



## fg2chase

I decorated for Christmas too


----------



## The Pook

Are you sure you need such a powerful GPU as that to run your server?


----------



## fg2chase

Quote:


> Originally Posted by *The Pook*
> 
> Are you sure you need such a powerful GPU as that to run your server?


lol... It was the only 1x PCI-e slot GPU I could find and it was $12


----------



## fg2chase

Upgraded the motherboard today to the Crosshair VI Hero. I like the features of the ROG boards, and this thing overclocked my Ryzen to 5GHz right out of the box. The Gigabyte board didn't want to overclock at all.

Overall I like the way it turned out. I also ordered a 1050Ti for it, since Plex supports hardware acceleration now.


----------



## Liranan

From what I've read, hardware acceleration leaves a lot to be desired; the quality is subpar.

How did you overclock that 1800X to 5GHz? I didn't think that was possible.

https://support.plex.tv/hc/en-us/articles/115002178853-Using-Hardware-Accelerated-Streaming

Quote:


> Windows and Linux devices using Intel hardware-accelerated encoding do not have any artificial limit to the number of simultaneous videos.
> Windows and Linux devices using NVIDIA GeForce graphic cards are limited to hardware-accelerated encoding of 2 videos at a time. This is a driver limitation from NVIDIA.


Only two when the GPU is more capable than the CPU? No thanks.

Edit: I can't find any details on AMD hardware acceleration. Do you know if AMD GPUs are supported? If so I might look into getting one for my server.


----------



## parityboy

Quote:


> Originally Posted by *Liranan*
> 
> How did you overclock that 1800X to 5GHz? I didn't think that was possible.


It's certainly possible..._on LN2._ On air or water, absolutely no chance. It's an issue of the manufacturing process, which is why they top out at ~4GHz.


----------



## Liranan

Quote:


> Originally Posted by *parityboy*
> 
> > Originally Posted by *Liranan*
> > 
> > How did you overclock that 1800X to 5GHz? I didn't think that was possible.
> 
> It's certainly possible..._on LN2._ On air or water, absolutely no chance. It's an issue of the manufacturing process, which is why they top out at ~4GHz.

I am aware that Samsung's LP process stands for Low Power, not High Performance. I'm just curious how he got to 5GHz when the best-binned Zen (Threadripper) maxes out at 4.2.

Zen+ (or 2.0) will hopefully reach 4.4 to 4.5, but that still isn't 5.


----------



## fg2chase

Quote:


> Originally Posted by *Liranan*
> 
> From what I've read hardware acceleration leaves a lot to be desired, quality is sub par.
> 
> How did you overclock that 1800X to 5GHz? I didn't think that was possible.
> 
> https://support.plex.tv/hc/en-us/articles/115002178853-Using-Hardware-Accelerated-Streaming
> 
> Only two when the GPU is more capable than the CPU? No thanks.
> 
> Edit: I can't find any details on AMD hardware acceleration. Do you know if AMD GPU's are supported? If so I might look into getting one for my server.


Ah crap, that is a typo. I meant 4GHz; it's my 8700K that is running at 5GHz and I had a brain fart... my mistake. Sorry guys.


----------



## Liranan

You got my hopes up then dashed my dreams on the rocks of reality.

How is that CPU when it comes to 4K transcoding?


----------



## fg2chase

Quote:


> Originally Posted by *Liranan*
> 
> You got my hopes up then dashed my dreams on the rocks of reality.
> 
> How is that CPU when it comes to 4K transcoding?


I tested that yesterday, and while the CPU does go to 100% usage, it doesn't seem to affect the other streams, just as I had hoped. It will go to 100% for 5-10 seconds and then throttle back a bit.


----------



## TheBloodEagle

Hey, just wanted to throw in that you could totally add another 120mm fan to the front if you remove 3 of the 5.25" bay panels and use something like this:



I did it on mine:



I also recommend a PCI slot cooler. Jonsbo just came out with a really nice one if you want some RGB goodness in it (red to match your fans).



But I think something like this would work better to exhaust the air instead of circulate.

http://www.performance-pcs.com/lian-li-internal-pci-cooler-140mm-fan-x-1-1000rpm-black.html


----------



## Iwamotto Tetsuz

Do you have any idea on the maximum read write speeds of the system drives?


----------



## sakae48

meanwhile i'm here with my 10TB of storage.. daym..


----------



## Iwamotto Tetsuz

I own 8x 1TB drives in RAID 0 with a 6TB drive for backup. Very fast.


----------



## fg2chase

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Hey, just wanted to throw in that you could totally add another 120mm fan to the front, if you remove 3 of the 5.25 bay panels and use something like this:
> 
> 
> 
> I did it on mine:
> 
> 
> 
> I also recommend a PCI slot cooler. Jonsbo just came out with a really nice one if you want some RGB goodness in it (red to match your fans).
> 
> 
> 
> But I think something like this would work better to exhaust the air instead of circulate.
> 
> http://www.performance-pcs.com/lian-li-internal-pci-cooler-140mm-fan-x-1-1000rpm-black.html




I tried that; it didn't work in the 750D. There is a fan in that top area that is kinda rigged up, and it does blow on the drives. The drives do not currently exceed 80F.

I put this in today too; the old 1x GeForce I had was crashing the system, so I threw this in. Supposedly it will help with transcoding.

I will test the I/O speeds of the drives soon. I never really concerned myself with that, tbh.

I use a temp gun to check the temps of the parts, and tbh they don't really get all that hot. I think this thing is moving enough air.


----------



## Liranan

I'm curious about GPU streaming, does it really work well? I am wondering whether it's worth getting an AMD GPU instead of NVIDIA, as AMD don't place an artificial 2-stream limit in their drivers (buy Tesla!).

As all GPUs made in the past few years can decode H.264, and newer ones can also decode H.265, I wonder whether it's necessary to get a GPU as powerful as the 1050Ti or whether a lower-end one will work for a few streams.

Edit: I was too enthusiastic in my assessment:

https://www.techspot.com/article/1131-hevc-h256-enconding-playback/

Quote:


> Here's a quick rundown of well-known hardware that includes dedicated HEVC decoding blocks, which definitely support efficient HEVC playback:
> 
> 
> Intel 6th-generation 'Skylake' Core processors or newer
> AMD 6th-generation 'Carrizo' APUs or newer
> AMD 'Fiji' GPUs (Radeon R9 Fury/Fury X/Nano) or newer
> Nvidia GM206 GPUs (GeForce GTX 960/950) or newer
> Other Nvidia GeForce GTX 900 series GPUs have partial HEVC hardware decoding support
> Qualcomm Snapdragon 805/615/410/208 SoCs or newer. Support ranges from 720p decoding on low-end parts to 4K playback on high-end parts.
> Nvidia Tegra X1 SoCs or newer
> Samsung Exynos 5 Octa 5430 SoCs or newer
> Apple A8 SoCs or newer
> Some MediaTek SoCs from mid-2014 onwards


Disappointing.


----------



## fg2chase

Quote:


> Originally Posted by *Liranan*
> 
> I'm curious about GPU streaming, does it really work well? I am wondering whether it's worth getting an AMD GPU instead of nVidia as AMD don't place a 2 stream artificial limit in their drivers (buy Tesla!).
> 
> As all GPU's made in the past few years can decode H.264 and newer ones can also decode H.265 I wonder whether it's necessary to get a GPU as powerful as the 1050Ti or whether a lower end one will work for a few streams.
> 
> Edit: I was too enthusiastic in my assessment:
> https://www.techspot.com/article/1131-hevc-h256-enconding-playback/
> 
> Disappointing.


Well, it looks like the 1050Ti was a good choice then.


----------



## fg2chase

Had some stability issues with the 4.0GHz OC... backed it down to stock clocks.

I wonder if Ryzen 2 will be out next year?


----------



## Iwamotto Tetsuz

Did you experience data corruption as a result?


----------



## fg2chase

Quote:


> Originally Posted by *Iwamotto Tetsuz*
> 
> Did you experience data corruption as a result?


nope


----------



## tiro_uspsss

Quote:


> Originally Posted by *Iwamotto Tetsuz*
> 
> Did you experience data corruption as a result?


If OP isn't running ReFS, then there's no way he'd know if he had/has data corruption. He's also not using ECC RAM, so good luck with that!


----------



## Iwamotto Tetsuz

Quote:


> Originally Posted by *tiro_uspsss*
> 
> If OP isn't running ReFS, then there's no way he'd know if he had/has data corruption. He's also not using ECC RAM, so good luck with that!


He could do a full drive check with data checking enabled, using the Windows built-in tool.
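For reference, a low-tech way to catch silent corruption without ReFS is to snapshot file hashes and re-check them later. A minimal sketch (the helper names here are made up for illustration, and this only detects rot, it can't repair it):

```python
# Record SHA-256 hashes of every file under a directory, then compare a
# later snapshot against it to find files whose bytes changed.
import hashlib
from pathlib import Path

def hash_file(path: Path, chunk: int = 1 << 20) -> str:
    # Hash the file incrementally so huge media files don't need to fit in RAM.
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def snapshot(root: Path) -> dict[str, str]:
    # Map every file path under root to its current content hash.
    return {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}

def changed(old: dict[str, str], new: dict[str, str]) -> list[str]:
    # Files present in both snapshots whose contents no longer match.
    return [p for p in old if p in new and old[p] != new[p]]
```

Dump `snapshot()` to JSON after each backup run; on the next run, any path reported by `changed()` that you didn't deliberately edit deserves a closer look.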


----------



## stephenn82

Quote:


> Originally Posted by *fg2chase*
> 
> yeah so the 1709 windows update totally broke my server, I was up all night doing a reformat and starting over.
> 
> Luckily my 3x 8TB drives came to the rescue.


Running this kind of madness, you should run it in a virtual server; if it breaks, you can just reload the last good config and be good to go.


----------



## fg2chase

Quote:


> Originally Posted by *stephenn82*
> 
> Running this kind of madness, you should run it in a virtual server; if it breaks, you can just reload the last good config and be good to go.


Not a bad idea, but it's up and running now, with Acronis running a continuous backup, so if anything goes awry I will just restore that.

It's up and working, and I don't want to screw with it now lol.


----------



## stephenn82

That is still good to go, other than running in a safe bubble. If anything crashes/dies client side, just redo. As long as it's not on the RAID... that's a PAIN IN THE TUCHUS to work on, man. Even a RAID 0 is recoverable... but SO... MUCH... EFFORT!!

Glad you got your Plex machine running again!


----------



## fg2chase

Updates... server is still running great!

I ran PassMark on my desktop (green bars) and server (red bars), and this is what it spat out.

Intel 8700K @ 5GHz
1080Ti SLI
Maximus X Hero
32GB G.Skill RGB
HX1000i
Fractal S36
960 PRO

Ryzen 1800X (Plex server)
Crosshair VI Hero
32GB G.Skill RGB
HX1000i
Crap load of 3TB drives for 54TB
960 PRO
1050Ti


----------



## fg2chase

Really waiting for the 2800X to come out so I can drop that in it.


----------



## burksdb

Nice! I just upgraded mine not too long ago.


----------



## fg2chase

burksdb said:


> Nice! i just upgraded mine not too long ago.


Your Plex server? What are the specs?


----------



## burksdb

Photo is slightly outdated, but starting at the top:
Rack is an HP 14U 10614 - picked up for $20

24-port 1G switch

24-port patch panel

1U - dual E5-2670 v1, 64GB DDR3 ECC, 1x 256GB SSD w/ IBM M1015 SAS card - running Ubuntu Server - this is my ZFS box

1U - dual E5-2670 v1, 64GB DDR3 ECC, 4x 256GB SSDs - running Ubuntu Server - I run my setup on Docker via docker-compose for everything: Emby, Nginx, NZBGet, Ombi, Pi-hole, Plex, PlexPy, Portainer, Radarr, Sonarr, Squid, Watchtower.

Shelf -
Mac Mini i5, 256GB SSD, running Apple server for iOS caching
Raspberry Pi 3 Model B running Hypriot for Docker ARM testing

4U - no longer being used, an empty case - Norco 4220

4U - Norco 4224 JBOD, 850W Corsair PSU with an HP SAS expander card - connected to the first 1U with an external SAS cable
10x 8TB Seagate Archive drives in RAIDZ2
10x 3TB WD Red drives in RAIDZ2

Any questions, let me know.


----------



## Cindex

What a couple of monster systems.... Glad I'm not the only one who likes Ryzen for servers though!


----------



## parityboy

*@burksdb*

Do you have plans to include the unused 4U? How full is the first 4U in terms of disk space usage?


----------



## burksdb

parityboy said:


> *@burksdb*
> 
> Do you have plans to include the unused 4U? How full is the first 4U in terms of disk space usage?


I'll most likely throw some more drives into the other 4U, unless they bring out ZFS pool expansion before my current setup is full.

I have about 15TB free between both pools. I need to move some stuff around.


----------



## Blindsay

fg2chase said:


> updates... Server is still running great!
> 
> I ran passmark on my desktop (green bars) and server (red bars) and this is what it spat out.
> 
> Intel
> 8700K @ 5Ghz
> 1080TI SLI
> Maximus X Hero
> 32GB G skill RGB
> HX1000i
> Fractal S36
> 960PRO
> 
> Ryzen 1800X (plex server)
> Crosshair VI Hero
> 32GB G Skill RGB
> HX1000i
> crap load of 3TB drives for 54TB
> 960PRO
> 1050Ti


Nice build!

Working on my Plex server as well (running Unraid as the base).

Do you have any photos of the rear of the case? Curious how you managed to run all of those SATA cables lol.

Also, which model Supermicro card are you using? I need a decent card that can do JBOD (I have an LSI right now that doesn't).


----------



## fg2chase

Blindsay said:


> Nice build!
> 
> Working on my plex server as well (Running Unraid as the base)
> 
> Do you have any photos of the rear of the case? Curious how you managed to run all of those SATA cables lol.
> 
> Also which model Supermicro card are you using? I need a decent card that can do JBOD (I have an LSI right now that doesnt)


Well, it was quite easy, as there are only 5 cables right now: SAS-to-SATA breakouts, and they are very small.


----------



## Blindsay

fg2chase said:


> Well it was quite easy as there are only 5 cables right now. SAS to SATA breakout's and they are very small.


Yeah, maybe it's because I am just using normal SATA cables, but I had a heck of a time getting the rear panel back on my case lol (Fractal Design R5).


----------



## fg2chase

Blindsay said:


> Yeah maybe its because i am just using normal sata cables but i had a heck of a time getting the rear panel back on my case lol (Fractal Design R5)


Yes sir, got some Supermicro SAS cards in this beast.


----------



## fg2chase

I started tearing down the old server today and added the 3TB drives from it to the new server; they are the same model. Now the capacity is 66TB (33TB usable, technically, since half goes to redundancy). Now I have 12 2TB drives I need to figure out what to do with. I also had to trim one of the drive cages with tin snips to clear the rad and the RAM.


----------



## Lord Xeb

@fg2chase What OS are you running? Windows server? FreeNAS? Unraid? Linux with some goodies? What kind of topology are you running? How is it all backed up?

Never mind, I should have looked closer.

>.> I was hoping for some unraid.


----------



## fg2chase

Lord Xeb said:


> @fg2chase What OS are you running? Windows server? FreeNAS? Unraid? Linux with some goodies? What kind of topology are you running? How is it all backed up?
> 
> Nevermind I should have looked closer.
> 
> >.> I was hoping for some unraid.


Yessir, just regular Windows 10; it works.

Started with Windows Home Server in 2008, and it's just kind of evolved to this point.

Started 100% fresh in November and just decided to keep it on Windows 10, using Windows Storage Spaces.

There are better options, but this is what I went with.


----------



## Lord Xeb

Keep a backup. I was in the data recovery business for a while. Your server would be EXPENSIVE to recover.


----------



## fg2chase

I got into the religion of backing up from a really young age. Trust me, I know. Aside from the duplication, there are four 8TB external drives that I back up to quarterly and keep at my office.

I have yet to need my offsite backups in ten years. I have never suffered a catastrophic loss; in fact, I have data (MP3s) on this server from 1996.


----------



## Lord Xeb

Well done sir. You impress me. :thumb:

Oh, fg, FYI: IR guns do not work on copper. Copper absorbs IR. >.>


----------



## fg2chase

Lord Xeb said:


> Well done sir. You impress me. :thumb:
> 
> Oh, fg, FYI IR guns do not work on copper. Copper absorbs IR. FYI. >.>


I'll be damned, learn something new every day. The temps weren't that far off from what HW temp shows, and I used it on the back of the PCB where the chipset would be, too.


----------



## Lord Xeb

Actually, I stand corrected: it reflects IR. That is why it is used in Yeti and Thermos coolers. Either way, IR guns don't work well on copper itself. You need to either paint it or measure it with a probe.


----------



## fg2chase

Lord Xeb said:


> Actually I stand corrected. It reflects IR. That is what it is used in Yeti and Thermus coolers. Either way, IR guns dont work well on copper itself. You need to either paint it or measure it with a probe.


Learn something new every day. Thanks for that.


----------



## fg2chase

Well, a week ago one of my RAM sticks went bad and corrupted the metadata on my storage pool. It corrupted it so badly that even reinstalling the OS didn't help, and I had to go to the office and get my offsite backups.

I lost about 60 days' worth of data because that is how old my backups were. I was running this RAM at the stock 3000MHz... man, that sucked.

They accepted my RMA.


----------



## levifig

Damn… I had no idea you could fit that many 3.5" drives in a 750D… I might've thought about my server differently, but ended up going with a Supermicro 3U build (dual Xeon). I'm pretty happy with my build, but had I seen this build a few months ago, I would've probably taken a similar route (except there's no way I would've gone without ECC RAM).

Nice job man.


----------



## fg2chase

levifig said:


> Damn… I had no idea you could fit that many 3.5" drives in a 750D… I might've thought about my server differently, but ended up going with a Supermicro 3U build (dual Xeon). I'm pretty happy with my build, but had I seen this build a few months ago, I would've probably taken a similar route! Nice job man.


Yeah man, I had an extra 750D and an H115i so I just went with it... I could even game on it, seeing as how it has a 1050Ti in it too, if for some reason my 1080Ti SLI rig or my Alienware 17 R4 with a 1080 both went down LOL...

I kinda like that it is unique and isn't just a rack server. It was a lot of fun, and I was able to mod it with just tin snips.


----------



## fg2chase

If you want to know how many drives a 750D can fit the answer is 24...


----------



## koulaid

This thread inspired me to build an ESXi server with a FreeNAS VM. It will also host other VMs, but for now mainly Plex and FreeNAS. Loved the look of the case, so I went out to get the same one. Waiting for my HDD cages to come in, and also waiting on that tax money to get the drives. Are you selling those 2TB drives? I may be interested! Will post pics soon.


----------



## EniGma1987

fg2chase said:


> Did some important upgrades to these cards. They were getting up to 140F according to my temp gun, now they are showing 95F!!!!




Could you post a link to what heatsinks you used for that upgrade? And did the fans come on them, or did you add those yourself?


----------



## fg2chase

EniGma1987 said:


> Could you post a link to what heatsinks you used for that upgrade? and did the fan come on them or did you add that yourself?


Crap, I don't remember which ones I got... copper ones from Newegg or Amazon, I think.


----------



## fg2chase

Upgraded to a 2700X this week.


----------



## parityboy

fg2chase said:


> Upgraded to a 2700X this week.


Nice. Has there been a tangible performance uplift, e.g. supporting an additional stream?


----------



## fg2chase

parityboy said:


> Nice. Has there been a tangible performance uplift, e.g. supporting an additional stream?


Well, users report that the transcoding is nearly instant now. I notice it too when I am out of town, even during peak loads from Thursday to Monday nights, when it is not uncommon to see 18-30 streams. I also notice that ripping a Blu-ray takes about 10 minutes less time! To be fair, I think my 1800X was a dud anyway; it never would stay stable at 4GHz.


----------



## EniGma1987

That is a pretty nice amount of streams. Are they pretty much all 720p? Or do you have 1080p and 4K streams going too?


----------



## fg2chase

EniGma1987 said:


> That is pretty nice amount of streams. Are they pretty much all 720p? Or do you have 1080 and 4k streams going too?



Mixture of 720p and 1080p. 4K isn't very common; I only have a few 4K titles.


----------



## fg2chase

Did some spring cleaning today


----------



## SpykeZ

That would be such a good Plex Server haha


----------



## fg2chase

SpykeZ said:


> That would be such a good Plex Server haha


It usually does a good job, has the typical little problems like any gaming rig would.


----------



## fg2chase

So I've been struggling with temps on this 2700X; it's been idling in the 50-55C range on all stock clocks. Sometimes during transcoding it's been hitting 90C!!! It even shut down twice. Did some research, and it may be that the Aer RGB fans don't have high static pressure, and maybe that was it.

I came up with this idea and did a repaste job, and now I'm idling in the high 30C range. Whew.

Ordered some fan grilles because it looks like these are here to stay.


----------



## zeroibis

Hey, I was originally looking at a 900D to fit a lot of HDDs in, but it looks like this is a better option, as I would be able to fit 24 drives easily, which is what I want.

What RPM are your drives running at, anyway, and what sort of temps do you see on them? Do the cages/trays do a good job of vibration damping?


----------



## fg2chase

zeroibis said:


> Hey, I was originally looking at a 900D to fit a lot of HDDs in but it looks like this is a better option as I would be able to fit 24 drives easily which is what I want.
> 
> What RPM are your drives running at anyways and what sort of temps do you see on them. Does the cage/trays do a good job of vibration damping?


Same. These are ST3000DM008 3TB 64MB drives, which are 7200RPM. The cages do a decent job, as each drive actually sits in plastic before you slide it into the cage. The temps are great; they stay right around room temp. When I modded the 750D by cutting out the 5.25" bays with tin snips, I also put a 120mm fan behind the drive bay covers. You will definitely need 3 of these for 24 drives: https://www.newegg.com/Product/Product.aspx?Item=N82E16816101792. I also modded the cards to put these on, using Arctic Silver: https://www.newegg.com/Product/Product.aspx?Item=N82E16835708006


----------



## zeroibis

Very nice, yeah, as I am looking more and more into your build, it appears that you basically have exactly what I am looking to do, minus the transcoding.

I am not going to be filling it from the start, but I already have 16 drives to toss in.

How much clearance do you have between the motherboard and the HDD cage? I assume I will be needing a right-angle 24-pin adapter, but I am not sure if there will be enough clearance for my motherboard, as I am using an old P6T7 WS SuperComputer right now.

Good to hear about the temps. I plan to use some ML120s as the intake and control the fans with an Aquaero so I can keep the system as quiet as possible when it is not under heavy IO load.

Also, thanks for the info on the HBA card!

Edit: noticed you are using the 1x slot for the GPU, very nice! Do you know if all GPUs support this, or is anything special required on the motherboard or GPU end?
Edit: NVM, I see that they actually make 1x GPUs. Holy crap, in all my time building and working on computers, not sure how I never knew this!
Edit: from the look of it, my motherboard would be 1" wider, which would likely be too much, so I guess it is time for a Ryzen build, which is great because finally I can move from 6gb to 10. (I will just pick up one of those ASRock boards with the 10GbE.)

Another question: is there any reason you decided not to use the built-in SATA on your motherboard? Your motherboard should have 8 SATA ports, so you should have only needed 2 HBA cards...


----------



## fg2chase

zeroibis said:


> Very nice, yea as I am looking more and more into your build it appears that you basically have exactly what I am looking to do minus the transcoding.
> 
> I am not going to be filling it from the start but I already have 16 drives to toss in.
> 
> How much clearance do you have between the motherboard and the HDD cage? I assume I will be needing a right angle 24 pin adapter. But I am not sure if there will be enough clearance for my motherboard as I am using an old p6t7ws supercomputer right now.
> 
> Good to hear about the temps, I plan to be using some ML120s as the intake and control the fans with an aquareo so I can keep the system as quiet as possible when it is not under heavy IO load.
> 
> Also thanks for the info on the HBA card!
> 
> Edit: noticed you are using the 1x slot for the GPU very nice! Do you know if all GPUs support this or is anything special required on the motherboard or GPU end?
> Edit: NVM I see that they actually make 1x GPUs, holy crap in all my time building and working on computers not sure how I never knew this!
> Edit: from the look of it my motherboard would be 1" wider which would likely be too much so I guess it is time for a Ryzen build which is great because finally I can move from 6gb to 10. (I will just pickup one of those asrock boards with the 10ge)
> 
> Another question: is there any reason that you decided not to use the built in SATA on your motherboard? Your motherboard should have had 8 sata ports so you should have only needed 2 HBA cards...


Actually, my GPU is in the 16x slot, but it does work in a 1x slot.

The RAID cards are in the other 1x-8x slots. With my board it is a tight fit; the hard drive cage is basically flush with the RAM. I did need a right-angle adapter for the 24-pin, correct.

Also, I added another RAID card because of the breakout cables: using the SATA connectors on the board would require 8 SATA cables, which are thick, whereas with the card I was able to use one cable that breaks out into 8 connectors. I briefly used a Gigabyte Gaming 3 board, which didn't have the clearance issues that the ROG board does. It's certainly a tight fit, but I pulled it off nicely, I think.

Hope this helps.


----------



## fg2chase

Here is an updated photo. The original ones are out of date.


----------



## fg2chase

Even better


----------



## fg2chase

It’s not ideal but it’s fine and it works.


----------



## zeroibis

Nice. Luckily I will probably avoid the RAM issues, and I'm running strictly an SMB file server, so I do not need more than 1 stick.

As for SATA cables, they do make thin ones: http://www.performance-pcs.com/silv...ra-thin-90-degree-6gb-s-sata-cable-300mm.html

What did you do for the SATA power?

Also good to know about a 16x card working in a 1x slot. I will try that out!


----------



## fg2chase

zeroibis said:


> Nice, luckily I will probably avoid the ram issues and I running strictly an SMB file server so I do not need more than 1 stick.
> 
> As for sata cables they do make thin ones: http://www.performance-pcs.com/silv...ra-thin-90-degree-6gb-s-sata-cable-300mm.html
> 
> What did you do for the sata power?
> 
> Also good to know about a 16x card working in a 1x slot. I will try that out!


For SATA power I used normal SATA power cables. I wasn't interested in individual cables at all, and I wouldn't really call the RAM an "issue"; it doesn't hurt anything for it to be flush like that.


----------



## zeroibis

fg2chase said:


> for SATA power I used normal sata POWER cables. I wasn't interested in individual cables at all and I wouldn't really call the RAM an "issue", it doesn't hurt anything for it to be flush like that.


Yeah, and given those SATA cables are $10 each, you are better off spending a little extra and just getting the HBA and related cables instead.

In my case I will need to worry about individual cables at the start, because I have an 8-drive 12TB RAID 6 I will be moving over. Eventually, as drive prices fall, I will be dropping that array, but I am not going to spend over $500 just to go from 8 drives to 2 or 4.


----------



## zeroibis

Something I just noticed: only two of your three cards can be running in PCIe x8 mode; the third is limited to PCIe 2.0 x4 speed. Obviously it still works, but I suspect you would not really see any performance decrease unless you were accessing all 8 drives off that card at the same time.
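For reference, the back-of-the-envelope lane math (both the per-lane and per-drive throughput figures below are rough assumptions, not measured numbers):

```python
# Can a PCIe 2.0 x4 link feed 8 spinning drives at full tilt?
# ~500 MB/s usable per PCIe 2.0 lane and ~200 MB/s peak sequential per
# 7200RPM 3TB drive are assumed, approximate figures.

lane_mb_s = 500
lanes = 4
link_mb_s = lanes * lane_mb_s      # ~2000 MB/s through the slot

drive_mb_s = 200
drives = 8
demand_mb_s = drives * drive_mb_s  # ~1600 MB/s if all 8 stream at once

# Even the worst case (all 8 drives doing sequential reads at the same
# time) fits under the link budget, so the disks stay the bottleneck.
print(f"link ~{link_mb_s} MB/s vs worst-case demand ~{demand_mb_s} MB/s")
```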


----------



## zeroibis

fg2chase said:


> Same, These are ST3000DM008 3TB 64MB which are 7200RPM drives, the cages do a decent job as the drive actually sits in plastic before you slide them into the cages. The temps are great, stay right around room temp. When I modded the 750D by cutting out the 5.25" bays with tin snips I also put a 120MM fan behind the Drive bay covers. You will definitely need 3 of these for 24 drives. https://www.newegg.com/Product/Product.aspx?Item=N82E16816101792 which I modded to put these on. https://www.newegg.com/Product/Product.aspx?Item=N82E16835708006 using artic silver.


With your drives hooked up to the AOC-SAS2LP-MV8, are you able to view the individual drives' SMART data with a program like CrystalDiskInfo? I am just trying to verify that this card runs in IT mode out of the box, in that it is a true JBOD HBA that does not have any RAID firmware in the way like a normal RAID card.


----------



## fg2chase

zeroibis said:


> With your drives hooked up to the AOC-SAS2LP-MV8, are you able to view the individual drives' SMART data with a program like CrystalDiskInfo? I am just trying to verify that this card runs in IT mode out of the box, i.e. that it is a true JBOD HBA that does not put any interface firmware between you and the drives like a normal RAID card.


Yes, I am. I can view each hard drive status individually.


----------



## fg2chase

zeroibis said:


> Something I just noticed, only two of your three cards can be running in PCIe x8 mode. The third is limited to PCIe 2.0 x4 speed. Obviously it still works but I suspect you would not really see any performance decrease unless you were accessing all 8 drives off that card at the same time.


Correct, there is no perceptible loss of performance. The drives themselves are the bottleneck here, not any of the other hardware.


----------



## zeroibis

fg2chase said:


> Same. These are ST3000DM008 3TB 64MB drives, which are 7200 RPM. The cages do a decent job, as each drive actually sits in plastic before you slide it into the cage. The temps are great; they stay right around room temp. When I modded the 750D by cutting out the 5.25" bays with tin snips, I also put a 120mm fan behind the drive bay covers. You will definitely need 3 of these for 24 drives: https://www.newegg.com/Product/Product.aspx?Item=N82E16816101792, which I modded to put these on: https://www.newegg.com/Product/Product.aspx?Item=N82E16835708006, using Arctic Silver.


Thanks again for all your help on this; today I finally started ordering parts for the build. I am not ordering the Supermicro cards until the last second in hope that the price will drop back to $98 again lol, but I have already ordered the heatsinks you recommended. The only difference is that I found that if I ordered them from PPCS I could get the ultra version for the same price (same as you got but with more pins).

Once I actually get started in a few weeks I will post up a build log. Obviously I will be linking and crediting your awesome build for the idea and support.


----------



## fg2chase

zeroibis said:


> Thanks again for all your help on this; today I finally started ordering parts for the build. I am not ordering the Supermicro cards until the last second in hope that the price will drop back to $98 again lol, but I have already ordered the heatsinks you recommended. The only difference is that I found that if I ordered them from PPCS I could get the ultra version for the same price (same as you got but with more pins).
> 
> Once I actually get started in a few weeks I will post up a build log. Obviously I will be linking and crediting your awesome build for the idea and support.


Happy to help man. I am actually planning on ordering another one of those Supermicro cards to have a spare on the shelf. When you replace the stock heatsinks, make sure to have plenty of alcohol on hand to clean the old crappy paste off; I ended up using thermal paste.


----------



## zeroibis

fg2chase said:


> Happy to help man. I am actually planning on ordering another one of those Supermicro cards to have a spare on the shelf. When you replace the stock heatsinks, make sure to have plenty of alcohol on hand to clean the old crappy paste off; I ended up using thermal paste.


Yep, I got a big tube of Kryonaut that will be going in there. In my case I am going with a Ryzen 3, but I got one of those nice Ryzen 7 heatsinks from my current system, so I will use that on the CPU, not that it needs it since I am just running a strict file server. Went with Ryzen so I could get ECC support on a cheap CPU.


----------



## Mr Underhill

OK, you got 18 3TB drives? Now if you upgraded to the new Seagate 14TB HDDs you'd have 252TB of storage.


----------



## camry racing

I would already start swapping the 3TB drives for at least 8TB ones. I did that with my server; now I'm running three 8TB drives and one 10TB drive for parity. I used to have three 4TB drives and one 6TB drive.


----------



## EniGma1987

Mr Underhill said:


> OK, you got 18 3TB drives? Now if you upgraded to the new Seagate 14TB HDDs you'd have 252TB of storage.





For a cost of about $10,000.


I mean, sure, it's a great price for the total storage capacity, but for a home user that's pretty rough.
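The arithmetic both posts are doing, made explicit. The per-drive price below is my assumption to land near the $10,000 figure, not a quoted number:

```python
# Capacity and cost math for the hypothetical 14TB upgrade discussed above.
drives = 18
size_tb = 14
price_per_drive = 525          # assumed street price of a 14TB drive; not a quote

total_tb = drives * size_tb            # raw capacity, no redundancy deducted
total_cost = drives * price_per_drive

print(total_tb, total_cost)    # 252 TB raw, roughly $9,500
```

Note this is raw capacity; any parity scheme would take one or two drives' worth off the top.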


----------



## fg2chase

Mr Underhill said:


> OK, you got 18 3TB drives? Now if you upgraded to the new Seagate 14TB HDDs you'd have 252TB of storage.


No, I have 23 of the 3TB drives. As of right now I have plenty of hard drives.


----------



## zeroibis

fg2chase said:


> No, I have 23 of the 3TB drives. As of right now I have plenty of hard drives.


Clearly you need at least 1 more to make it even...


----------



## fg2chase

zeroibis said:


> Clearly you need at least 1 more to make it even...


Well, no, there is an 850 EVO SSD occupying that slot. My Plex DB and web server are running on that.


----------



## zeroibis

fg2chase said:


> Well, no, there is an 850 EVO SSD occupying that slot. My Plex DB and web server are running on that.


We both know you've got SATA ports on that motherboard, time to use those up too lol!

I assume you're running the OS on the SSD as well, or do you have another drive connected directly to the motherboard as your boot drive?


----------



## fg2chase

zeroibis said:


> We both know you've got SATA ports on that motherboard, time to use those up too lol!
> 
> I assume you're running the OS on the SSD as well, or do you have another drive connected directly to the motherboard as your boot drive?


The host OS is on a 512GB 960 Pro.


----------



## EniGma1987

You ever thought about using a 512GB SSD and caching software to keep recently used or most used things cached? I wonder what it would do for number of users who could stream at once with new releases.


----------



## zeroibis

EniGma1987 said:


> You ever thought about using a 512GB SSD and caching software to keep recently used or most used things cached? I wonder what it would do for number of users who could stream at once with new releases.


I plan on using a 250GB 970 EVO as a write cache. I am holding off till the Black Friday deals because the price on that thing keeps dropping.


----------



## fg2chase

EniGma1987 said:


> You ever thought about using a 512GB SSD and caching software to keep recently used or most used things cached? I wonder what it would do for number of users who could stream at once with new releases.



I considered this, but it's not needed. I'm using Diskeeper, which has caching. The server is fast and snappy.


----------



## EniGma1987

Does the 750D have mount points for the extra hard drive cages, or did you have to do your own mounting to get them all in there? I was looking at pictures of the 750D, but it doesn't look like the mounting points go all the way up the second row.


----------



## fg2chase

EniGma1987 said:


> Does the 750D have mount points for the extra hard drive cages, or did you have to do your own mounting to get them all in there? I was looking at pictures of the 750D, but it doesn't look like the mounting points go all the way up the second row.


The drive bays will physically stack on top of one another until you run out of room. To do what I did, you will need to cut the 5.25" bay out with some tin snips or something.


----------



## zeroibis

How did you fit the 24 pin connector in?


----------



## OCTDBADBRO

I would be careful using Seagate Barracudas in a 24/7 NAS environment. I had 5 of them in my old RAID 6 array; 2 died and 1 fell out of the array within months of each other, about 2.5 years before this happened. Nice looking server, though. I'm currently looking at Seagate IronWolf, HGST NAS, and WD Red (or Gold, which is overkill).


----------



## fg2chase

OCTDBADBRO said:


> I would be careful using Seagate Barracudas in a 24/7 NAS environment. I had 5 of them in my old RAID 6 array; 2 died and 1 fell out of the array within months of each other, about 2.5 years before this happened. Nice looking server, though. I'm currently looking at Seagate IronWolf, HGST NAS, and WD Red (or Gold, which is overkill).


I am not overly worried; I keep offline and offsite backups. I had some 2TB Seagates in my last server that ran 24/7 from late 2009 to 2017.


----------



## fg2chase

zeroibis said:


> How did you fit the 24 pin connector in?


I wish I had taken a photo, but I found a 90° connector for it, so now the 24-pin connects from the side right behind the drive cages.


----------



## cps68500

Great build! Did you use a SATA power splitter to supply power to all the drives? How many drives did you put on each type 3 Corsair cable?


----------



## zeroibis

Hey, just wanted to check whether you were also on driver 4.0.0.2020-WHQL for the AOC-SAS2LP-MV8.


----------



## fg2chase

cps68500 said:


> Great build! Did you use a SATA power splitter to supply power to all the drives? How many drives did you put on each type 3 Corsair cable?


Actually, I had enough SATA connectors from having a few HX1000s, so every drive has its own connector.


----------



## fg2chase

zeroibis said:


> Hey, just wanted to check if you were also on the driver 4.0.0.2020-whql for the AOC-SAS2LP-MV8


Good question, I have no idea honestly. I feel like I should not change it if it is working well.


----------



## EniGma1987

What screws did you use to secure the extra HDD bays? My tower only came with enough screws to add 3 extra bays' worth, and the HDD upgrade kits didn't come with extra screws. The screws for these look more like mini sheet-metal screws than the typical computer screw.


----------



## zeroibis

EniGma1987 said:


> What screws did you use to secure the extra HDD bays? My tower only came with enough screws to add 3 extra bays' worth, and the HDD upgrade kits didn't come with extra screws. The screws for these look more like mini sheet-metal screws than the typical computer screw.



The HDD upgrade kits should have come with screws; that is how I screwed mine in. The only difference is that in my setup I used standoffs to push the top cages by the motherboard forward to make room as needed.


----------



## Lady Fitzgerald

EniGma1987 said:


> What screws did you use to secure the extra HDD bays? My tower only came with enough screws to add 3 extra bays' worth, and the HDD upgrade kits didn't come with extra screws. The screws for these look more like mini sheet-metal screws than the typical computer screw.


3.5" HDDs usually use 6-32 x 1/4" or 5/16" screws and 2.5" HDDs, SSDs, and ODDs usually use M3-5 screws. Actual length of both will vary depending on the thickness of the cage.


----------



## assaulth3ro911

How many movies and shows do you have on your PLEX server, and how many people are you serving simultaneously, or over a month? Is that Ryzen truly enough to transcode all of that?


----------



## anticommon

I had that same Antec case; just sold it the other day with some X58 hardware I had. I also have the 750D, which needs a new fascia (broke the clips taking the front off for full disassembly... grrr) but is otherwise sitting empty (along with having a tempered glass mod for the side panel). Should really do something with it... hmm.


----------



## fg2chase

assaulth3ro911 said:


> How many movies and shows do you have on your PLEX server, and how many people are you serving simultaneously, or over a month? Is that Ryzen truly enough to transcode all of that?


Yes, Ryzen shrugs off these transcodes. It is also aided by a Quadro P2000.


----------



## EniGma1987

The P2000 is a great card for Plex. It can encode 13 1080p H.265 streams in high quality mode, which is a great deal for $350. It is important for people looking at Plex to note that they must use a Quadro card if they want to make much use of hardware encoding; regular GeForce cards are driver-locked to a maximum of 2 encode sessions no matter what. This Quadro is the best bang-for-buck deal right now for getting a lot of Plex streams for minimum money.
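Taking the numbers in this post at face value, the economics are easy to quantify. A quick sketch using the poster's figures (stream count and price as stated above; not my own benchmarks):

```python
# Cost per simultaneous encode session, per the figures quoted above.
P2000_PRICE = 350          # USD, as stated above
P2000_STREAMS = 13         # simultaneous 1080p H.265 encodes claimed above
GEFORCE_SESSION_CAP = 2    # consumer driver session limit at the time

cost_per_stream = P2000_PRICE / P2000_STREAMS
print(round(cost_per_stream, 2))   # ~26.92 USD per concurrent stream
```

By contrast, a GeForce card at any price tops out at 2 sessions under the consumer driver limit, which is what makes the Quadro the bang-for-buck pick here.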


----------



## EniGma1987

I was using the same Supermicro AOC-SAS2LP-MV8 cards to connect my drives as @fg2chase was; I got them because of this thread, actually. Unfortunately a major Windows 10 OS update caused me nothing but problems on these cards, so I replaced them with an LSI 9305-24i. It costs a good bit, but it allows connecting far more drives on a single card, which is nice since I no longer have to take up 3 PCIe slots with cards. It is also faster than the other card and runs much cooler. The card has been performing like a champ for the 2 months I have had it. Just wanted to let people know who are looking at this thread for hardware ideas on connecting a ton of drives.


----------



## fg2chase

EniGma1987 said:


> I was using the same Supermicro AOC-SAS2LP-MV8 cards to connect my drives as @fg2chase was; I got them because of this thread, actually. Unfortunately a major Windows 10 OS update caused me nothing but problems on these cards, so I replaced them with an LSI 9305-24i. It costs a good bit, but it allows connecting far more drives on a single card, which is nice since I no longer have to take up 3 PCIe slots with cards. It is also faster than the other card and runs much cooler. The card has been performing like a champ for the 2 months I have had it. Just wanted to let people know who are looking at this thread for hardware ideas on connecting a ton of drives.


Hmm, that makes me wonder if that's why my server refuses to update past 1703....

BSOD every time.


----------



## EniGma1987

fg2chase said:


> Hmm, that makes me wonder if that's why my server refuses to update past 1703....
> 
> BSOD every time.





Could be. I don't remember which version I was updating to, but I could only use the version that came with my retail installation media; any major OS version update beyond that caused constant crashes. I figured out it was the SAS/SATA card because when I removed it entirely from the system, everything worked fine. It might be a pain to test, but if you have a spare drive you could remove your current OS drive and put the spare in, do a fresh Win 10 install, and then try to update the OS. If/when it crashes, you could remove the card from the system, including its drivers, and then try the update again. If it works, then the card is most likely the issue for you as well. Then you could put the old OS drive back in; that way you haven't messed anything up and don't have to bother with reinstalling any software you have been using.


I was looking at replacing the cards with an LSI 9300-8i, as it has the newer 12Gb/s SAS connections and newer drivers available. I realized, though, that the 9305-24i was cheaper than buying the three lower-end cards I would need to connect the 22 drives I eventually want to use, so I just went with the high-end card.


----------



## pdasterly

fg2chase said:


> yes, Ryzen shrugs off these transcodes, it is also aided by a quadro P2000.


Nice build. I started to do something like this but went with a NAS instead. I had to choose either a 1050 Ti or the 10Gb NIC; I chose the NIC.
The 4 SSDs are in RAID 0 and the HDDs are in RAID 5.
SSDs are for cache only.

QNAP TS-873 with 32GB RAM
2x 250GB M.2
8x 10TB HDD
10Gb NIC w/ 2x 250GB M.2 NVMe

UX-500P expansion unit
5x 8TB drives

https://www.qnap.com/en-us/product/ts-873
https://www.qnap.com/en-us/product/ux-500p
https://www.qnap.com/en-us/product/qm2-m.2ssd-10gbe

100TB


----------



## fg2chase

EniGma1987 said:


> Could be. I don't remember which version I was updating to, but I could only use the version that came with my retail installation media; any major OS version update beyond that caused constant crashes. I figured out it was the SAS/SATA card because when I removed it entirely from the system, everything worked fine. It might be a pain to test, but if you have a spare drive you could remove your current OS drive and put the spare in, do a fresh Win 10 install, and then try to update the OS. If/when it crashes, you could remove the card from the system, including its drivers, and then try the update again. If it works, then the card is most likely the issue for you as well. Then you could put the old OS drive back in; that way you haven't messed anything up and don't have to bother with reinstalling any software you have been using.
> 
> 
> I was looking at replacing the cards with an LSI 9300-8i, as it has the newer 12Gb/s SAS connections and newer drivers available. I realized, though, that the 9305-24i was cheaper than buying the three lower-end cards I would need to connect the 22 drives I eventually want to use, so I just went with the high-end card.


I considered this; I really just need to do a backup and then move it all to Linux/Unraid or something.


----------



## zeroibis

EniGma1987 said:


> I was using the same Supermicro AOC-SAS2LP-MV8 cards to connect my drives as @*fg2chase* was; I got them because of this thread, actually. Unfortunately a major Windows 10 OS update caused me nothing but problems on these cards, so I replaced them with an LSI 9305-24i. It costs a good bit, but it allows connecting far more drives on a single card, which is nice since I no longer have to take up 3 PCIe slots with cards. It is also faster than the other card and runs much cooler. The card has been performing like a champ for the 2 months I have had it. Just wanted to let people know who are looking at this thread for hardware ideas on connecting a ton of drives.





fg2chase said:


> Hmm, that makes me wonder if that's why my server refuses to update past 1703....
> 
> BSOD every time.





EniGma1987 said:


> Could be. I don't remember which version I was updating to, but I could only use the version that came with my retail installation media; any major OS version update beyond that caused constant crashes. I figured out it was the SAS/SATA card because when I removed it entirely from the system, everything worked fine. It might be a pain to test, but if you have a spare drive you could remove your current OS drive and put the spare in, do a fresh Win 10 install, and then try to update the OS. If/when it crashes, you could remove the card from the system, including its drivers, and then try the update again. If it works, then the card is most likely the issue for you as well. Then you could put the old OS drive back in; that way you haven't messed anything up and don't have to bother with reinstalling any software you have been using.
> 
> 
> I was looking at replacing the cards with an LSI 9300-8i, as it has the newer 12Gb/s SAS connections and newer drivers available. I realized, though, that the 9305-24i was cheaper than buying the three lower-end cards I would need to connect the 22 drives I eventually want to use, so I just went with the high-end card.





fg2chase said:


> I considered this, I really just need to do a backup and then move it all to Linux/unraid or something.



I am also running the same Supermicro AOC-SAS2LP-MV8 cards with NO PROBLEMS. I am on 1803 and updated without issue.


Also, I can check what drivers I am using, but the latest on the Supermicro site is from 2/4/2019. I am on 4.0.0.2020 and the latest driver is 4.0.0.2022.


----------



## The_Rocker

Personally, I just had a Synology DS1817+ with 8x 10TB Seagate IronWolfs when I wanted loads of space.

With two-disk fault tolerance it gives around 60TB of usable space, supports caching with M.2 drives, and offers 10Gb networking if you don't want to bond the 4x 1Gb ports.

Much smaller form factor, much less power, and the awesome DSM OS.

The DS1819+ is the new kid on the block, and of course there are drives bigger than 10TB now.


----------



## EniGma1987

zeroibis said:


> I am also running the same Supermicro AOC-SAS2LP-MV8 cards with NO PROBLEMS. I am on 1803 and updated without issue.
> 
> 
> Also I can check what drivers I am using but the latest on the supermicro is from 2/4/2019 11:30 AM. I am on 4.0.0.2020 and the latest driver is 4.0.0.2022



Ah, so they finally released new drivers then. I had to switch off of them before those were released, but I'll try adding one of the cards back in and using the new drivers sometime in the next couple of weeks.


----------



## zeroibis

The_Rocker said:


> Personally I just had a Synology DS1817+ with 8 x 10TB Seagate Ironwolfs when I wanted loads of space.
> 
> With 2 disk fault tolerance, around 60TB of usable space, supports caching with M2 drives and also 10Gb networking, if you don't want to bond the 4 x 1Gb.
> 
> Much smaller form factor, much less power and the awesome DSM OS.
> 
> The DS1819+ is the new kid on the block, and of course there are drives bigger than 10TB now.



Can't run Backblaze unless you're a lottery winner with that config.


----------



## zeroibis

EniGma1987 said:


> Ah, so they finally released new drivers then. I had to switch off of them before those were released, but I'll try adding one of the cards back in and using the new drivers sometime in the next couple of weeks.



Not sure which ones you were using before, but I have had the drivers I am using (4.0.0.2020) installed for over 6 months now.


----------



## fg2chase

zeroibis said:


> I am also running the same Supermicro AOC-SAS2LP-MV8 cards with NO PROBLEMS. I am on 1803 and updated without issue.
> 
> 
> Also I can check what drivers I am using but the latest on the supermicro is from 2/4/2019 11:30 AM. I am on 4.0.0.2020 and the latest driver is 4.0.0.2022


This is very likely my issue. I will perform a full system backup today and then install that driver.


----------



## parityboy

*@fg2chase*

I have a whitebox server running Proxmox VE 6.0; the hardware is an mITX board with an i5-8400 (6C/6T) and 16GB RAM. I have Serviio running in an Ubuntu 18.04 LTS VM with 2GB of system memory. With 2 vCPUs, transcoding a single HD stream (Serviio uses ffmpeg) to medium quality has the CPUs pegged at nigh-on 100% load. Four vCPUs average 95% load, and increasing to 6 vCPUs brings the CPUs to around 80% load.

I re-read this thread and couldn't find this: what CPU load are you seeing on average? Did you ever test with a single stream as a reference point?


----------



## EniGma1987

parityboy said:


> *@fg2chase*
> 
> I have a whitebox server running Proxmox VE 6.0; the hardware is an mITX board with an i5-8400 (6C6T) and 16GB RAM. I have Serviio running in an Ubuntu 18.04 LTS VM with 2GB of system memory. With 2 vCPUs, transcoding a single HD stream (Serviio uses ffmpeg) to medium quality has the CPUs pegged at nigh-on 100% load. 4 vCPUs averages 95% load and Increasing to 6 vCPUs has the CPUs at around 80% load.
> 
> I re-read this thread and couldn't find this: what CPU load are you seeing on average? Did you ever test with a single stream as a reference point?


I don't know what all that software is, but typically people use something like Plex for streaming, which supports GPU encoding as well as direct play if the player supports it. With GPU acceleration you could be using QuickSync on your i5-8400, and you would not only get higher performance but also see very little CPU load.


----------



## parityboy

EniGma1987 said:


> IDK what all that software is, but typically people use something like Plex for streaming and that supports GPU encoding as well as direct play if the player supports it. With GPU acceleration you could be using QuickSync from your i5-8400 and you would not only have higher performance but see very little CPU load.


Serviio is a media streaming server analogous to Plex, Emby, Jellyfin etc, while ffmpeg is the utility used underneath by Serviio (and no doubt others) to handle video transcoding when necessary. I've tried Plex and Emby and didn't get on well with them; Plex doesn't seem to be that good at doing DLNA on the local LAN, while Emby seemed to be unstable. Serviio seems to give me the least amount of operational issues and it comes with a couple of Android apps for server control and remote media browsing & playback (e.g. over an LTE connection).

Re: GPU acceleration I completely forgot about QuickSync!  I _may_ be able to pass the iGPU through to the Serviio VM; if so, ffmpeg should be able to detect and use the iGPU for transcoding. I've heard many conflicting reports concerning the quality of GPU transcoding - have you any experiences you can share? With Plex, can the CPU and iGPU be used in parallel when decoding multiple streams, or is it an either/or choice?

Cheers.


----------



## skupples

nvm...


----------



## EniGma1987

parityboy said:


> Re: GPU acceleration I completely forgot about QuickSync!  I _may_ be able to pass the iGPU through to the Serviio VM; if so, ffmpeg should be able to detect and use the iGPU for transcoding. I've heard many conflicting reports concerning the quality of GPU transcoding - have you any experiences you can share? With Plex, can the CPU and iGPU be used in parallel when decoding multiple streams, or is it an either/or choice?


Intel's iGPUs were pretty bad for video quality when they first launched, but Intel has kept increasing the number of profiles they build in. I believe they are at 9 as of the Kaby Lake launch and have maintained that through each gen since. The newer, higher profiles have better and better quality with each new QuickSync release. So if you can get your VM to pass through the iGPU, you should be able to select which preset profile the iGPU uses. Any of the highest 3 profiles will look fine and have good performance. The top profile actually seems to give the best GPU encode quality of any between Intel, Nvidia, and AMD.

With Plex, when using GPU acceleration it will default to that for any file types the GPU supports, and anything that is not a supported type will fall back to the CPU for transcoding. I don't know if Serviio allows direct play or not, but using direct play will keep transcoding from happening at all and will maintain the original video quality. On Plex, it will choose this as the first option if the player end supports the file type and encoding method in its hardware and software.


edit: reading a bit about Serviio, it seems it originally used DLNA to stream on the network but now has the ability to stream without DLNA. And I saw you mention using DLNA on Plex in your previous post. Is there a reason you are using this? I tried enabling it once when I first started using Plex but didn't like how my other PCs picked up the Plex stuff in the "My Computer" area, so I turned it off. Plex doesn't seem to use DLNA for its normal operation, so I'm just wondering what you use it for? Does it offer additional device support for things that don't have an app or something?


----------



## parityboy

EniGma1987 said:


> Intel's iGPUs were pretty bad for video quality when they first launched, but Intel has kept increasing the number of profiles they have built in. I believe they are at 9 now with Kaby Lake launch, and have maintained that through each gen since. The newer, higher profiles have better and better quality each new QS release. So if you can get your VM to pass through the iGPU, you should be able to select which preset profile the iGPU should be using. Any of the highest 3 profiles will look fine and have good performance. The top profile actually seems to be the best GPU quality of any between Intel, Nvidia, and AMD.
> 
> With Plex when using GPU acceleration, it will default to that for any supported file types the GPU can do, and anything that is not a supported type will fall back to the CPU for transcoding. I dont know if Serviio allows direct play or not, but using direct will keep transcoding from happening at all and will maintain original video quality. On Plex, it will choose this as a first option if the player end supports the file type and encoding method built in on the player hardware and software.
> 
> 
> edit: reading a bit about Serviio it seems it originally used DLNA to stream on the network, but now has the ability to stream without DLNA. And I saw you mention using DLNA on Plex in your previous post. Is there a reason you are using this? I tried enabling it once when I first started using Plex but didnt like how my other PCs picked up the plex stuff in "My Computer" area so I turned it off. Plex doesnt seem to use DLNA for its normal operation, so Im just wondering what you use it for? Does it offer additional device support for things that dont have an app or something?


Many thanks for the iGPU info 

The nice thing about using DLNA is that as a standard, users are not restricted to using a server-specific app. Any DLNA/UPnP-compliant renderer (e.g. smart TVs, DLNA apps on Android/iOS, Kodi) can see the server and stream media from it. DLNA uses IP multicast and is therefore limited to the local LAN (but there are ways around that). You're absolutely right about other PCs on the network seeing the server (and its content), but proper network segmentation can solve that issue. DLNA also offers a control mechanism whereby a device such as a smartphone can be used to tell a streaming server to send video to different devices (_renderers_ in DLNA parlance) around the home.
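To make the multicast discovery described above concrete, here is a minimal sketch of the SSDP M-SEARCH datagram a DLNA control point sends to find media servers (following the standard UPnP/SSDP message format; it only builds the message, sending it over UDP is left as a comment):

```python
# SSDP discovery, the multicast mechanism DLNA rides on.
# Control points send this M-SEARCH to the well-known multicast group.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:MediaServer:1",
                  mx=2):
    """Return an SSDP M-SEARCH request as bytes, per the UPnP convention."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",              # max seconds a device may wait before replying
        f"ST: {search_target}",   # search target: DLNA media servers here
        "", "",                   # header block ends with a blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines).encode("ascii")

msg = build_msearch()
# To actually discover devices, send via:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (SSDP_ADDR, SSDP_PORT))
# and read the unicast HTTP-style responses that come back within MX seconds.
```

Because replies come back unicast only to hosts on the same multicast domain, this is also why DLNA is naturally confined to the local LAN, as noted above.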


----------



## fg2chase

parityboy said:


> *@fg2chase*
> 
> I have a whitebox server running Proxmox VE 6.0; the hardware is an mITX board with an i5-8400 (6C6T) and 16GB RAM. I have Serviio running in an Ubuntu 18.04 LTS VM with 2GB of system memory. With 2 vCPUs, transcoding a single HD stream (Serviio uses ffmpeg) to medium quality has the CPUs pegged at nigh-on 100% load. 4 vCPUs averages 95% load and Increasing to 6 vCPUs has the CPUs at around 80% load.
> 
> I re-read this thread and couldn't find this: what CPU load are you seeing on average? Did you ever test with a single stream as a reference point?


Hey man, I just saw this. I forgot all about this site... I don't monitor CPU usage really but I do know that the Quadro P2000 does most of the heavy lifting when it comes to transcoding so the CPU usage usually remains pretty low.


----------



## parityboy

fg2chase said:


> Hey man, I just saw this. I forgot all about this site... I don't monitor CPU usage really but I do know that the Quadro P2000 does most of the heavy lifting when it comes to transcoding so the CPU usage usually remains pretty low.


Thanks for that. I tried passing through my iGPU last night, but it appears I need to create a fresh virtual machine with UEFI support, so that's been queued for a later date. In further testing, adding a second transcoded stream with all 6 vCPUs active pushes the usage back up to around 99% load. For the moment it's only me using the media server, so it's not an issue, but if I were to support other users (i.e. family members) I'd need a lot more horsepower.

For the moment I'm limited to a single mITX host, and the single PCIe slot on the motherboard is taken by a storage controller, so an add-in GPU is a no-go. Furthermore, if I were to upgrade to a CPU I could still cool in such a small space, I'm probably limited to 65W TDP, which means something like a Ryzen 2700; that of course doesn't have an iGPU and would also mean a motherboard replacement. Of course, I could wait until AMD releases their 8C/16T Navi-based APU (which I think is inevitable). 

And yeah, no surprise you forgot about OCN. This place is quieter than a church on Tuesday....where did everyone go? Reddit?


----------



## parityboy

*@thread*

Quick update: while perusing the 'net, I've discovered that QuickSync isn't fully supported by the open-source driver in the Linux kernel and that the proprietary Intel driver requires a patched build of _ffmpeg_ to work. The options are VA-API (assuming I can get the iGPU to pass through) or a better CPU (I'm leaning towards the latter), but apparently some boards will not boot without a GPU active, and the Ryzen 1700 I was considering (currently in my workstation) doesn't have one.

"More research is needed" (tm).
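For anyone following along, a VA-API transcode on the command line typically looks something like this. This is only a sketch: it assumes the iGPU's render node shows up at `/dev/dri/renderD128` inside the guest, and `input.mkv`/`output.mp4` are placeholder file names.

```shell
# Hardware-accelerated H.264 transcode via VA-API (sketch, not tested on this box).
# /dev/dri/renderD128 is the usual render node for the first GPU; adjust as needed.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i input.mkv \
       -c:v h264_vaapi -b:v 8M \
       -c:a copy \
       output.mp4
```

If the encoder fails to initialize, running `vainfo` on the host is a quick way to check which profiles the iGPU actually exposes through the driver.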


----------



## fg2chase

I have made a few upgrades. It physically looks the same, but it has grown to over 92TB and now has a Ryzen 3800X/Quadro P2000.


----------



## EniGma1987

fg2chase said:


> I have made a few upgrades. It physically looks the same, but it has grown to over 92TB and now has a Ryzen 3800X/Quadro P2000.


Have you noticed data corruption in your storage pool? I never had issues on my Plex server before, when everything was on single drives. I converted to storage pools recently and have had 4 or 5 cases where whole folders became inaccessible with file system errors and needed an error check to fix. I had this issue on both the Plex server and my own personal computer with my new game drive in a pool. The Plex server runs Server 2019 Datacenter edition; my own PC runs Win 10 x64 v2004. So kinda weird. Just wondering if you noticed similar issues when using a Storage Spaces pool?


----------



## parityboy

fg2chase said:


> I have made a few upgrades. It physically looks the same, but it has grown to over 92TB and now has a Ryzen 3800X/Quadro P2000.


A couple of questions:

*1)* How did you make the jump to 92TB? I see 3TB drives in that screenshot and previously your max capacity was 66TB. Did you replace them with 4TB drives?

*2)* Considering you have a P2000 for your transcoding needs, what tangible benefits does the 3800X bring? Once the GPU hits a certain load, can Plex fallback to the CPU for subsequent transcodes?


----------



## fg2chase

EniGma1987 said:


> Have you noticed data corruption in your storage pool? I never had issues on my Plex server before, when everything was on single drives. I converted to storage pools recently and have had 4 or 5 cases where whole folders became inaccessible with file system errors and needed an error check to fix. I had this issue on both the Plex server and my own personal computer with my new game drive in a pool. The Plex server runs Server 2019 Datacenter edition; my own PC runs Win 10 x64 v2004. So kinda weird. Just wondering if you noticed similar issues when using a Storage Spaces pool?


I have not noticed any corruption at all, actually, and some of this data goes back to 2008, when I was using Windows Home Server v1, which was 'known' for corruption. Storage Spaces has come a long way since then and is now baked into Windows Server 2019.


----------



## fg2chase

parityboy said:


> A couple of questions:
> 
> *1)* How did you make the jump to 92TB? I see 3TB drives in that screenshot and previously your max capacity was 66TB. Did you replace them with 4TB drives?
> 
> *2)* Considering you have a P2000 for your transcoding needs, what tangible benefits does the 3800X bring? Once the GPU hits a certain load, can Plex fallback to the CPU for subsequent transcodes?


I am now sitting at over 120TB; I have replaced half of the 3TB drives with 8TB ones and will continue to do so as storage needs rise.

The plan now is to max out the board with a 3950X. I am still noticing a lot of CPU and GPU usage when I am hosting movies and TV while also recording live TV from my cable provider. A 3950X will be the last CPU I put in there for the life cycle of the system. I suspect I will start a new server around 2028 or so, if my last server is any indication: that one ran from 2008 to 2018 on an AMD 1090T, if I recall correctly.
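For what it's worth, the capacity math checks out if we assume the original 66TB pool was 22 x 3TB drives (an assumption based on the 66TB figure in the thread title, not something stated in this post): swapping half of them for 8TB drives lands just over 120TB of raw capacity.

```shell
# Assumed layout: 22 bays, originally all 3TB; 11 of them swapped to 8TB.
echo $(( 11 * 3 + 11 * 8 ))   # prints 121 (TB, raw)
```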


----------



## fg2chase

BuzzLighter said:


> OP, as I understand, you know a lot about this stuff. I want to open an online shop and it seems that it would require a hell lot of free space. I don't have a permanent IT specialist in my team, so I asked one of my friends, who claims to be an IT specialist and he told me that I need an instant dedicated server for this purpose. Even told me where to find one. And now I'm collecting information. As I understood by this friend of mine, I need something like 70TB of a free space. Is it only possible to get it only if I buy/rent dedicated server?


This depends on what your business is doing. What did he say you needed 70TB of space for? Fault tolerance? I need some more information in order to give you an educated response.

What is your business doing? I just deployed a small PowerEdge server with 10TB of space for a small attorney's office; they are only using 700GB of Word documents and such.


----------

