# [Build Log] NUS Server



## tycoonbob

Hello everyone. I have finally started building my new NUS (Network Unified Storage) server (about time, right??), and this will be the build log. I do not plan to purchase everything at once and have it all arrive a few days later...so this build will take me several weeks, if not months. Sit back and enjoy the ride!

*The Purpose of This Build Log*
This build log will be written in a work log style, showing progress for each day of work. I will be posting about my server build, moving my equipment to my rack, building my mDC (miniDataCentre), which will be an 8x8 room that will house all my equipment, wiring my house with Cat6 cable, and anything related to these tasks. I see many challenges ahead that I look forward to tackling.
I also plan to record videos during this project. I will have a separate video giving an in-depth review of each separate component in the below NUS build, as well as the completed build. I will also be recording videos of other tasks being completed, or challenges that I overcome.
(All videos will be recorded with my Sony NEX-5N camera, at 1080i, 60fps in AVCHD format)

*The Plan*
So what am I trying to achieve here? This server will be my storage server, but it will be much more than that. It will also be an iSCSI host for my two Hyper-V Host Servers, which I am going to cluster. I currently run about 14 VMs between these two Hyper-V Host Servers, and hope to run as many as 25 while using this new NUS Server for VM storage. Each Hyper-V Host Server has an 8-core CPU with 32GB of RAM, as well as a pair of 60GB SSDs in RAID 1. I will also be using this storage server for media distribution to various multimedia devices. Primarily, it will provide content to an HTPC that I will build afterwards. It will also provide content (FLAC/MP3 audio, and 720/1080p video) over my WAN, via Subsonic. Lastly, it will be a centralized location for backups, and possibly off-site backups for friends and family members.
Once this server is built, it will be moved to a rack that will house all my servers and networking equipment. Getting the rack is also part of this project, as is building an 8ft by 8ft room in the corner of my garage to house everything. Lastly, I will also be wiring my house with Category 6 cabling, terminating to keystone jacks throughout the house (4 behind the TV for all media devices, 1 for a Digital Aquatics Lifeguard device to monitor my 125gal Malawi Cichlid aquarium, and 4 more ports in my office), as well as a patch panel in the mDC. The rack will hold (but not be limited to) my firewall (running Untangle), all servers, LAN and DMZ switches, battery backup unit(s), a patch panel, and my new NUS Server.

*Component List*
*Server Chassis:* Norco RPC-4224 [Purchased; Received]
*Motherboard:* ASUS P8B-E/4L ATX [Purchased; Received]
*Processor:* Intel Xeon E3-1220 V2 Ivy Bridge 3.1Ghz [Purchased; Received]
*RAM:* 4x Kingston KVR1333D3E9/4G [Purchased 2; Received 2]
*Raid Controller:* LSI MegaRAID SAS 9261-8i [Purchased; Received]
*Raid Controller BBU:* LSI LSI00264 LSIiBBU08 BAT1S1P [Purchased; Received]
*SAS Expander:* HP SAS Expander [Purchased; Received]
*SAS Expander:* Chenbro CK23601 [Purchased; Received]
*Power Supply:* Rosewill FORTRESS-550 [Purchased; Received]
*Hard Drive(s) [Boot]:* x2 Mushkin Enhanced Chronos Deluxe 60GB SATA III [Purchased 2; Received 2]
*Hard Drive(s) [Storage]:* x8 Hitachi HGST Deskstar NAS 6TB (Raid 10) [Not Purchased]
*Server Rack:* Wright Line (Eaton) 45U Four Post Rack [NOT USED] [Purchased; Received]
*Other:*
---x2 Syba SY-MRA25023 2.5" Hot Swap caddy for PCI Slot [Purchased 2; Received 2]
---x7 Tripp Lite S506-18N SFF-8087 Cable [Purchased 7; Received 7]
---Norco 3x120mm Fan Bar [Purchased; Received]
---x3 Delta AFB1212GHE-CF00 120mm Case Fan [Purchased 1; Received 1]
---x2 Norco C-P1T7 1-to-7 SATA Extension Splitter [Purchased 2; Received 2]
---Scythe Grand Kama CPU Cooler [Purchased; Received]
---Crucial Ballistix Active Cooling Fan (RAM) [NOT USED] [Purchased; Received]
---Norco RL-26 Slide Ball Bearing Rails [Not Purchased]
---x2 bgears b-Blaster 80 80mm Case Fans (Rear Exhaust) [Purchased 2; Received 2]

*Operating System:* Windows Server 2012 [Release Candidate publicly available]

*Build Log*

_June 18, 2012_
Received my server chassis (Norco RPC-4224), 3x120mm Norco fan bar, and 1 Delta 120mm case fan. I have inspected the chassis pretty well, and I am impressed. I really like the layout of this chassis, and I should have plenty of room to get in here and work. I have also connected my first Delta 120mm case fan to one of my existing servers, and man can this thing move air. It's rated at 240.96 CFM of airflow, which is amazing. Yes, at 62 dBA it is quite noisy...but once racked inside my soundproofed mDC, that shouldn't be a concern. Until then, I may use the 4x80mm fan bar that came with the chassis. Not as powerful, but a bit more quiet.
I also got my Wright Line (now Eaton) 45U server rack.

_June 19, 2012_
Video review of my rack has been recorded and uploaded.

_June 22, 2012_
Video review of my Norco RPC-4224 Server Chassis has been recorded and uploaded.

_June 25, 2012_
Ordered my new Power Supply (Rosewill FORTRESS-550), and x2 1-to-7 Molex extensions (Norco C-P1T7). Also found out that the Norco RPC-4224 has a 15% discount going on, which is worth about $60. I contacted Newegg about it, since I already ordered mine, and they gave me $15 off my new order (PSU and 1-to-7 Molex), which is awesome. Go Newegg!
Looking forward to this PSU as well. I know Rosewill isn't top of the line by any means...but I have always had good luck with Rosewill, and they just released their new FORTRESS series of PSUs, which are 80+ Platinum. Yes, Platinum. I can't find any reviews on this new PSU, but I'm getting a 20% discount on it (and 20% on the Molex splitters too), and it's 80+ Platinum...which I have never had before. 89% efficiency minimum, and up to 94%. According to PSU calculators, with 24 7200 RPM SATA III HDDs I need 497W minimum, with 520W recommended, so this should be just enough once fully loaded (which will be a while). New items to review once I return from vacation! Woohoo.
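For anyone curious where numbers like that 497W come from, the arithmetic behind those PSU calculators can be sketched roughly. Every per-component wattage below is an assumption for illustration, not a measured value:

```python
# Rough PSU sizing sketch (assumed wattages, not measured values).
# A 7200 RPM 3.5" drive can briefly draw ~25 W at spin-up but only a
# handful of watts once spinning, which is why the steady-state and
# worst-case numbers differ so much.

DRIVES = 24
DRIVE_SPINUP_W = 25   # assumption: worst-case 12V+5V spin-up draw per drive
DRIVE_ACTIVE_W = 8    # assumption: steady-state draw per drive
BASE_SYSTEM_W = 150   # assumption: CPU, board, RAM, HBA, fans under load

all_at_once = BASE_SYSTEM_W + DRIVES * DRIVE_SPINUP_W
steady_state = BASE_SYSTEM_W + DRIVES * DRIVE_ACTIVE_W

print(f"simultaneous spin-up: {all_at_once} W")   # prints 750 W, far beyond 550 W
print(f"steady state:         {steady_state} W")  # prints 342 W, comfortably under 550 W
```

Staggered spin-up (which a RAID controller and backplanes can coordinate) is what keeps the real peak closer to the steady-state number than the simultaneous-spin-up one.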

_June 29, 2012_
Ordered the motherboard and 2 x 4GB sticks of RAM. The RAM was $33.99 each, and the motherboard is normally $239.99, but there was a 15% discount on server motherboards, so I only paid $204.
PSU and Molex Expansion cables made it here.

_July 3, 2012_
Received motherboard and RAM.

_July 4, 2012_
-Videos of the Motherboard, PSU, RAM, RAM Cooling Fans, and Molex Expanders have been uploaded.
-Crucial Ballistix Active Cooling Fan added to build list. I got this free when I bought a 2x8GB (16GB) RAM kit from Newegg. Since I bought two kits, I got two of these fans...and one is in my VMHOST01 server. This second one will go in my NUS server. More fans can't hurt, right?
-NEW video of my server chassis has been uploaded.

_July 5, 2012_
-Uploaded video of my Delta 120mm Case Fan.
-Started building the NUS server. Got the motherboard and PSU mounted, RAM seated, RAM cooler attached, Molex Expanders in place, and PSU connected up. The new 3x120mm fan bar is not in place yet, due to the noise of my 120mm fan. I will swap that bar in once the garage is ready for my rack (1-2 months, hopefully). I verified power was reaching the motherboard, which makes me very excited! Just need a CPU and a pair of SSDs and I can load the OS. Yippie!!!
-Several build progress photos have been added to post 3...check them out.

_July 13, 2012_
-Pulled the trigger, and bought my CPU and a cooler to go with it. Should have that either Tuesday (17th) or Wednesday (18th), so check back for videos of these two products. This also means that I will be able to load an OS on here now (temporarily, since I still don't have the SSDs for this build) and test out the compatibility between all the components!

_July 15, 2012_
-Tracking information shows that the CPU and cooler will be here on Tuesday, July 17th.
-With some downtime, and after reading this thread (Actively Cooled Modem/Router Club), I decided to modify my Thomson Cable Modem to allow for better airflow. Check out this thread ([Modem Mod] Added a 60mm fan to my Thomson Cable Modem).

_July 17, 2012_
-Received CPU and CPU cooler, and got both into server.
-3 New photos, and 3 new videos uploaded below (3 photos in the build log, and 1 video of the CPU, 1 video of the CPU Cooler, and 1 video in the build log section in post 3 below).
-Connected 2 2.5" 250GB Hard Drives and am installing Windows Server 2012 to check things out, performance-wise!

_July 26, 2012_
-Sorry for the lack of updates; money is not as available as I'd like, and neither is time. Hopefully I can order my HP SAS Expander this weekend, or at least my SSDs.
-Slapped an HDD in one of the caddies and popped it in each slot, and verified the backplanes are at least supplying power.
-Modded my Netgear WNDR3700 router. Check it out. [Router Mod] 80mm fan on my Netgear WNDR3700
-Ordered my Raid Controller and Battery Backup Unit. Should receive hopefully by Aug. 3.

_July 29, 2012_
-Finally made a storage drive decision (updated part list above).
x20 3TB Hitachi 7K3000 drives for storage. Raid 50 (if my controller can do online expansion for spanned arrays) or Raid 6 (if it can't). If I don't do the Raid 50, I may consider doing two separate Raid 6 arrays and using Server 2012 Storage Spaces to make them appear as one logical pool.
x4 600GB Hitachi Ultrastar 15K600 drives for VM storage. Raid 5, which would allow for approximately 1.8TB of storage...which should be plenty for 15-25 HA VMs. Most VMs will be fixed disk, 30-50GB...and a few will be larger (such as my System Center 2012 Configuration Manager server), which will be more like 250GB to account for software updates, SQL databases, etc. I expect not to even use 1TB of this right away.
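The capacity figures above follow from standard RAID arithmetic. A minimal sketch, with sizes in GB and a hypothetical helper name:

```python
def raid_usable(drives: int, size_gb: int, level: str) -> int:
    """Usable capacity in GB for a few common RAID levels (sketch)."""
    if level == "raid5":
        return (drives - 1) * size_gb     # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_gb     # two drives' worth of parity
    if level == "raid10":
        return (drives // 2) * size_gb    # mirrored pairs, half the raw space
    raise ValueError(f"unsupported level: {level}")

# 4 x 600GB in RAID 5, as planned for VM storage:
print(raid_usable(4, 600, "raid5"))   # prints 1800, i.e. ~1.8TB
```

Note these are raw decimal capacities; the OS will report somewhat less once binary units and filesystem overhead are counted.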

_August 1, 2012_
-Still moving very slowly, and with work taking so much of my time I haven't had a chance to do much with this. I have been researching information about building my mDC room, and taking measurements...but haven't done much more than that.
-Added new items to my NUS build. I added x2 SYBA SY-MRA25018s, which are 2.5" HDD hot-swap trays that mount to an unused PCI slot, giving hot swap at the back of the case. I will be using two of these in my last two PCI slots, which should leave 3 open slots if I ever want to add anything else to the motherboard. These trays use a regular SATA data cable, so they will still be connected to the motherboard, and each will be populated with an SSD...which will be in RAID 1 for the OS. I think it's pretty neat, and at around $20 each it's not bad. This is better than rigging something up to mount the two drives inside the case, and if I need to get to them, I don't have to take the top off. Just for the record, I was also considering something like THIS, which is neat and allows for 2.5" HDDs to be mounted in a PCI slot, but with no hot swap. The SYBA has a few reviews out there, and the main flaw is that the tray CAN be tight, depending on the drive that goes in...but oh well. I still think it will be a great addition!

_August 2, 2012_
-Received LSI MegaRAID SAS 9261-8i Raid Controller and BBU.
-New photos and a video uploaded of the Raid Controller.
-New build log photos uploaded.

_August 3, 2012_
-Installed latest version of LSI MegaRAID drivers.
-Loaded up LSI WebBIOS, cleared all configurations of the Raid Controller, and reset all settings to default.
-Installed LSI MegaRAID Storage Manager v12.05.03.00, and checked the status of everything, and it all looks great.
-Updated the firmware:
Old Firmware Version--2.90.03-0933, New Firmware Version--2.130.353.1663
Old Firmware Package Version--12.9.0-0037, New Firmware Package Version--12.12.0-0111
Old WebBIOS Version--6.0-18-Rel, New WebBIOS Version--6.0-49-Rel
Old Version--3.18.00 (Build Jun 17, 2010), New Version--3.24.00 (Build Oct 26, 2011)
As you can see, the firmware was kind of old, but it updated in a matter of 2 minutes (plus a reboot).
-Battery stats:
Full Capacity--1460 mAh
Voltage--4087mV
Battery Replacement--Not Required
As you can see, the battery is in great condition. It is currently charging, with about an hour to go. It's running around 102.2°F, and I hope to drop that to the low 90s once I switch out the fans in my chassis. Once the battery finishes its charge, I'm going to kick off a manual learn cycle.
-I have no SAS cables, so I can't connect any drives, but the controller does see its two ports, and potentially 8 drives. I will be out of town all next week, but I will order a cable during that time so I can do some further testing next weekend, to ensure all is good.
-Windows Server 2012 goes GA on Sept. 4, which is the latest date I'll get it. RTM is already released, and I'm just waiting for it to hit MSDN so I can grab it (or if it leaks before then, I will get it)...then I will install Server 2012 on my NUS, which is currently running Server 2008 R2 Standard. I will get the new OS on there before I build any arrays for home-production use.

_August 10, 2012_
-Added 2 new items to my part list. x2 Syba SY-MRA25023, which is a 2.5" hot-swap caddy that mounts in a PCI slot. I will be using two of these for my SSD boot drives, which will be in a RAID 1. This will give me the ability to hot swap in case of a failure, without pulling off the front cover. It's also a great way to have the drive mounted in place and not free-floating in the case (since there are no 2.5" HDD mounts inside my Norco RPC-4224). I ordered one, which is more or less a test, to see how well it works. At $20 each, it's not bad. I also added my SFF-8087 cables to my part list, now that I have decided what I will use. I decided not to use cheap cables, so I am going with the Tripp Lite S506-18N, which is an 18" SFF-8087 to SFF-8087 cable. I will have a total of 8 of these: 6 from the HP SAS Expander to the backplanes, and 2 from the HP SAS Expander to the LSI 9261-8i Raid Controller (link aggregation). I will be testing the link aggregation to see if I can actually notice an increase in performance, and if not, I won't be using it that way. If I ever fill this box up and run out of room, I will build a SAS expander case with another Norco chassis, and use some kind of SFF-8087 to SFF-8088 converter so I can connect another HP SAS Expander in another Norco chassis for more drives (if I need it, which would be a year or two away).
-I also ordered the PCI bracket for my LSI 9261, since it is a low-profile card and came with a low-profile PCI bracket. ~$10, and I should have it next week. So all in all, I bought three new things, all needed. Once I get my SAS cable, I will be able to fully test all my backplanes (which I have already power tested) and build a simple array to do some performance testing of my controller.

_August 11, 2012_
-Finally got my hands on a copy of Server 2012 RTM. No, it's not cracked...it's going to be unlicensed for now (180-day trial), and licensed once it hits TechNet next month. Going to get it installed, pre-configured, and ready to build my first array once my SFF-8087 cable arrives on Tuesday (08/14/2012) (along with the 2.5" hot swap caddy and PCI bracket for my raid controller).

_August 14, 2012_
-Having some issues with the leaked Server 2012 RTM, as it likes to blue screen right after the install welcome screen. Going to wait until tomorrow (August 15, 2012) as that is when RTM is released to TechNet/MSDN Subscribers. I thought that was early September, but I was wrong...so tomorrow!
-Received my SFF-8087 cable, and the 2.5" hot swap caddy, which is called a PCI Mobile Rack. Videos and pictures will be uploaded later this evening, and this thing is awesome!

_August 17, 2012_
-Disappointing news. I finally got my hands on a copy of Windows Server 2012, direct from the Volume Licensing page...but Server 2012 will not load on this motherboard. ASUS support wasn't really helpful, stating that I will just have to wait for a BIOS update since it's not supported. Logical, I guess...but saddening. I guess I will be using Server 2008 R2 SP1 for now then, which should do me just fine really. Oh well. Going to install my OS on here now, and maybe look at migrating over my current 2TB drives and see if performance is increased on this controller!

_September 3, 2012_
-Yeah, it's been a while. Money is tight, lots of traveling with work.
-Just purchased my 2 80mm rear case fans (bgears b-Blaster 80), which are 62 CFM, 39 dBA fans. Some may say that is noisy, but when all is done this server will have 3 Delta AFB1212GHE-CF00s, which are 240 CFM, 62 dBA fans...approximately 66 dBA combined between those 3 fans. Again, this server will be tucked away in my mDC, with soundproofed walls. I want maximum airflow, for maximum cooling.
-Also purchased one of my Mushkin Enhanced Chronos Deluxe 60GB SSDs. So I will be rebuilding the box once I get it.
-Lastly, since I will need 8 total HDDs before I can build my RAID 60 and get my storage going, I have decided to migrate my existing storage to a RAID 5 with the 3 Hitachi 7K3000 2TBs that I currently have. I bought a fourth, which arrived damaged and I will be RMAing, but I will at least have 6TB of RAID 5 storage, and will be getting use out of this storage server beyond folding. I will definitely be doing some performance testing with these drives (RAID 0, 1, and 5)...as well as rebuild testing.
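As an aside, the "approximately 66 dBA combined" figure for three 62 dBA fans checks out: uncorrelated noise sources add on a power (logarithmic) scale, not linearly. A quick sketch of the standard formula:

```python
import math

def combined_dba(levels):
    """Combine uncorrelated noise sources (in dBA) on a power basis."""
    return 10 * math.log10(sum(10 ** (lvl / 10) for lvl in levels))

# Three Delta AFB1212GHE-CF00 fans at 62 dBA each:
print(round(combined_dba([62, 62, 62]), 1))           # prints 66.8

# Adding the two 39 dBA rear fans barely moves the needle,
# since they are ~23 dB (a factor of ~200 in power) quieter:
print(round(combined_dba([62, 62, 62, 39, 39]), 1))   # prints 66.8
```

In general, n identical sources add 10*log10(n) dB to one source's level, so doubling the fan count costs about 3 dB.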

_September 10, 2012_
-Finally ordered my HP SAS Expander. Hopefully will have it later this week.
-Received my 2 bgears b-Blaster 80 80mm case fans, and love these things. They aren't really that loud compared to what I already have running in my office, and they move some great air. I can tell a noticeable difference with the heatsink on my Raid card, so that is good. Also got my SSD, but have not installed it yet, as I am waiting for a BIOS update to allow Server 2012 to be installed on this motherboard. It's a shame that a BIOS update was just released, but didn't fix this.
-I have ordered all the SFF-8087 cables that I need.
-Lastly, I got my hands on a copy (from MSDN) of Windows Storage Server 2012, which is designed to run on OEM hardware. It's a slightly stripped-down version of Server 2012, with improved iSCSI support and SIS (data deduplication, I believe). I want to use this, but until it will install I can't really test it. I will be sure to take some pics of my SSD when I take pics of the SAS Expander.
-After a slight debacle, I have decided to order 3 120mm Scythe Ultra Kaze fans. They push around 133 CFM of air at 45 dBA. This is about 15 dBA quieter than the Delta, and about 100 CFM less. After plenty of thought, I decided that the Delta was indeed overkill that I could cut back on. The Ultra Kaze is about half the price, and will still push plenty of air at a noise level no louder than what I've already got going on in my home office. 3 120mm 133 CFM fans in the fan bar and 2 80mm 62 CFM exhaust fans should move plenty of air. The other problem with the Delta is the power draw. I read somewhere that this fan can pull as much as 50W, which I don't think is true...but it made me think that even if they pulled 25W each, that's 75W that I probably don't have to spare on this PSU. My choice of PSU is probably going to be a temporary thing, but it will do just fine for now.

_September 12, 2012_
-Received my 3 120mm Scythe Ultra Kaze fans, and man these things are great. 133 CFM each...and no louder than the 4 Delta AFB0812Hs they replaced. The Delta AFB0812H is rated at 35.3 CFM, so technically one Ultra Kaze 120mm moves almost as much air as all 4 of those Deltas. Again, I can notice a temperature difference. Mainly, my raid controller heatsink is much cooler (this thing has come close to burning me before).
-Looks like my HP SAS Expander and 6 SFF-8087 cables should be here tomorrow or Friday, which will complete my server, minus drives and OS.
-After contacting ASUS support, they have confirmed that Server 2012 is planned to be supported on this motherboard, but another line of motherboards takes priority. Such a shame, and it probably means I will be running Windows Storage Server 2008 R2 until Windows Storage Server 2012 is supported. Oh well, not a big deal, as both will give me the iSCSI support I am looking for. It shouldn't be hard to upgrade in a few months.
-More pictures and videos will come soon, I promise. Been a busy week, but this weekend is when I can take a few pictures of the fans (and hopefully my HP SAS Expander), along with videos.

_November 21, 2012_
It's been a while! I have done a few things differently on my NUS server, but still no progress on my mDC. I am guessing it will be after the holidays before I get started on my mDC.
-FINALLY loaded Windows Server 2012 on my NUS01, utilizing a single 60GB SSD. I am still debating if I want to get a second and do a RAID 1 for the OS, but I am probably not going to. As far as the OS goes, I had to upgrade the BIOS and then Server 2012 loaded perfectly, which made me happy. Specifically, I am running Windows Storage Server 2012, which is an OEM-only OS, so thank you MSDN! The reason I went with Windows Storage Server is simple: WSS is focused on storage. The new Server Manager in 2012 makes it so easy to manage everything from one pane. When I say everything, I am talking Volumes, Drives, VDs, iSCSI LUNs, Data Dedup, etc. So easy and pretty! I'm sure I can take some screenshots sometime and share.
-I also rebuilt my domain this past weekend, and did a few things differently. Namely, I set up two 250GB iSCSI LUNs. I have my TORRENT01 VM, which is a very lightweight VM running only uTorrent 3.2.0: a 20GB VHD for the host OS and software, and a 250GB iSCSI LUN attached for torrent storage. Similar is my NZB01 server, which hosts my Usenet stuff (SABnzbd, SickBeard, CouchPotato, and Headphones): a 20GB OS/software VHD, and a 250GB iSCSI LUN for downloads (which, with the automation, I never have to touch, since things are auto-renamed and moved to the correct location). Lastly, I also set up NIC teaming with the built-in NIC Teaming in Server 2012. My NUS server has 4 Intel gigabit NICs, but I am currently (and temporarily) using an unmanaged gigabit switch. Since my switch is unmanaged, I can't set up LACP...so I used switch-independent NIC teaming, which allows for aggregated OUTBOUND traffic. File streaming to one HTPC, iSCSI traffic, and other file traffic...it does great. Once I get my Cisco Smart Switch, I will set up LACP, which allows for aggregated outbound AND inbound traffic.
-I also got a new access point to replace my aging Netgear WNDR3700 (with DD-WRT). Ubiquiti's UniFi APs are great, and I finally got around to buying one. Much better range and speed than my previous AP.

_October 26, 2013_
It's been a really long time (almost a year) since I've done anything to my storage box. It's been chugging along great for the past year with 5 x 2TB in a RAID 5, which is not ideal but it's what my budget has allowed for. I've got about 800GB free in my current array, and while I do have other 2TB drives, I am not willing to expand this array. 8TB with 5 drives is the biggest I will take a RAID 5, but I have been pleasantly surprised at how stable it's been.
The reason for updating this post is to outline my new storage strategy, which has changed yet again. Originally, I was going to do an 18 drive RAID 60 with 3TB drives, 2 3TB hot spares, and a 4 x 2TB RAID 10 for VM storage via iSCSI. I have since decided that I likely won't need ~42TB of storage within the next 3-5 years, and have decided to stick with 3TB drives (Toshiba DT01ACA300s) and do a RAID 10 instead. I have 2 DT01ACA300s right now and hope to order two more in the next few weeks, just to get my 3TB RAID 10 storage started. 4 x 3TB in RAID 10 will not be enough for me to replace my existing RAID 5, but once I expand that to 6 x 3TB in a RAID 10, I can migrate data from my RAID 5 and blow that array away. That will leave me with several 2TB drives, so I will likely build a separate RAID 10 for some other purpose. I will add drives as needed (in pairs, to expand my RAID 10 -- hopefully I can get to the point of expanding 4 drives at a time) with the end goal of 20 3TB drives in a RAID 10 (~30TB usable; actually I think it will be around 27.2TB), with 2 3TB drives for warm spares, and two SSDs used for CacheCade (not sure if I will do RAID 0, 1, or JBOD on that). I will probably aim for 512GB SSDs by that point since it will be 6-12 months down the road, so hopefully I can get them for $200 by then. That should fill up the chassis and keep me happy for a while. I figure if I can get 5 years out of that, surely we will have mainstream 5-6TB drives by then and I can re-evaluate my storage situation (10Gb/s Fibre or more, SATA IV at 12Gb/s, 6TB drives -- that would be cool).
In the coming weeks I will post some pictures of the drives I'm using, and do some performance testing once I have 4 x 3TB drives to play around with (RAID 0, 1, 5, 6 numbers for sequential and random R/W).
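On the "~30TB usable, actually around 27.2TB" point: the gap is almost certainly the decimal-vs-binary unit difference. Drives are sold in decimal terabytes (10^12 bytes), while Windows reports binary tebibytes (2^40 bytes) and labels them "TB". A quick check:

```python
# 20 x 3TB drives in RAID 10: mirrored pairs give half the raw capacity.
drives, size_tb = 20, 3
usable_tb = (drives // 2) * size_tb      # 30 decimal TB usable
usable_bytes = usable_tb * 10**12        # decimal TB as sold on the box
usable_tib = usable_bytes / 2**40        # binary units as the OS reports them

print(f"{usable_tb} TB == {usable_tib:.1f} TiB")   # prints: 30 TB == 27.3 TiB
```

So the array really is "30TB", and the ~27.2-27.3 figure is the same capacity in the OS's binary units (before any filesystem overhead).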

_July 12, 2014_
Not much has changed with this build, except for my storage configuration. Long story short, I've been running with 5 x 2TB in a RAID 5 for my primary data, along with 2 x 3TB in RAID 1 for backups. I recently freed up 2 2TB drives from a PC build of mine, so I am in the process of migrating my 5 x 2TB RAID 5 (~7.2TB) to a 7 x 2TB RAID 6 (~9.1TB). Pretty easy to start the migration, but it's definitely nerve-wracking! The migration/reconstruction has been running now for 35 minutes, and claims there are 3 days, 7 hours, and 26 minutes left until completion. I don't find this estimate reliable though, as back when the elapsed time was ~20 minutes, it said the remaining time was over 4 days. I'm heading to bed here in a few, so I'm hoping to see some good progress within the next 10-12 hours or so.
*Fingers crossed*

If I have no problems, and everything works like it should, I will feel much better about my data being on a RAID 6 vs a RAID 5. I'm not a big fan of RAID 5, and I preach about RAID 10, but money has prohibited me from building my RAID 10 array with 3TB drives. I have 2 of my 3TB drives, but other builds and servers (the R610, my new PC build, my girlfriend's PC build, etc.) have consumed my PC funds. Once I can get my hands on 6 more matching 3TB drives, I will be building an 8 x 3TB Toshiba DT01ACA300 RAID 10 array (~11TB usable) to move all my data over. I will then build another RAID 10 with my existing 2TB drives (or at least 6 of them) for iSCSI storage and backups.

Just wanted to share an update after all this time! Everything has been running great for over two years now!


_July 15, 2014_
The rebuild is done, and now I have 7 x 2TB in a RAID 6, with ~9TB of usable space in that array.
*Reconstruction:* 2 days, 21 hours, 53 minutes, 44 seconds
*Background Initialization:* 0 days, 13 hours, 25 minutes, 39 seconds
*Total time:* 3 days, 11 hours, 19 minutes, 23 seconds

That 3 days felt a lot longer, but I feel a bit better about my data until I can get my RAID 10 going (yes, I have backups -- my important data has 3 copies actually).

_November 14, 2014_
Planning a hardware and software refresh of this device. Will be swapping out the HP SAS Expander with a Chenbro CK23601, adding another Syba 2.5" drive bracket with another 60GB SSD for the OS (RAID 1...finally). I will also be rebuilding my storage arrays with a RAID 10 volume, starting with 4-6 6TB drives. OS-wise, I am looking to switch to OpenMediaVault, most likely.

_November 20, 2014_
Received the Chenbro SAS Expander, second Mushkin 60GB SSD, and second Syba 2.5" drive tray. Still waiting on the iKVM, which should arrive in the next few days. I hope to order 8 6TB drives in the coming weeks, and once they're received I'll be able to swap in this new hardware, rebuild the server on a new OS (likely OpenMediaVault), and build a new RAID 10 storage array!
New pictures have been added in post #3, 60GB Mushkin SSD and Chenbro CK23601 Expander.

_November 22, 2014_
Received my iKVM chip, and pictures added. I can't wait to put all this new gear in once I can buy some new HDDs!


----------



## tycoonbob

Reserved.


----------



## tycoonbob

*Photos*
This section will contain High Quality photos of each component, as well as build progress. I know several people will enjoy the pictures more than the videos.

Spoiler: Norco RPC-4224 Server Chassis

Spoiler: ASUS P8B-E/4L Motherboard

Spoiler: ASUS ASMB5-iKVM

Spoiler: Kingston KVR1333D3E9/4G RAM

Spoiler: LSI MegaRAID SAS 9261-8i Raid Controller

Spoiler: Chenbro CK23601 SAS Expander

Spoiler: Rosewill FORTRESS 550W Power Supply

Spoiler: Mushkin Enhanced Chronos Deluxe 60GB SSD Boot Drives

Spoiler: Wright Line (Eaton) 45U Four Post Rack--NOT USED

Spoiler: Syba SY-MRA25023 2.5" Hot Swap Caddy

Spoiler: Tripp Lite S506-18N SFF-8087 Cable

Spoiler: Norco 3x120mm Fan Bar

Spoiler: Delta AFB1212GHE-CF00 120mm Fan--NOT USED

Spoiler: Norco C-P1T7 1-to-7 SATA Extension

Spoiler: Crucial Ballistix Active Cooling Fan (RAM)--NOT USED

Spoiler: Build Progress

Spoiler: Configuration

*Videos*
This section will contain High Quality videos of each component, as well as build progress. These videos are embedded, which is handy for those who don't want to go to YouTube to watch each video.

Spoiler: Norco RPC-4224 Server Chassis

Spoiler: ASUS P8B-E/4L Motherboard

Spoiler: Intel Xeon E3-1220 V2 Processor

Spoiler: Kingston KVR1333D3E9/4G RAM

Spoiler: LSI MegaRAID SAS 9261-8i Raid Controller

Spoiler: Rosewill FORTRESS 550W Power Supply

Spoiler: Wright Line (Eaton) 45U Four Post Rack

Spoiler: Syba SY-MRA25023 2.5" Hot Swap Caddy

Spoiler: Tripp Lite S506-18N SFF-8087 Cable

Spoiler: Delta AFB1212GHE-CF00 120mm Fan

Spoiler: Norco C-P1T7 1-to-7 SATA Extension

Spoiler: Scythe Grand Kama CPU Cooler

Spoiler: Crucial Ballistix Active Cooling Fan (RAM)

Spoiler: Progress Videos

----------



## tycoonbob

Reserved.


----------



## killabytes

What's the purpose of the SAN? Personal use or...?


----------



## tycoonbob

Quote:


> Originally Posted by *killabytes*
> 
> What's the purpose of the SAN? Personal use or...?


This is for personal use. I have a Windows Active Directory domain built out here at home, with several things going on. It is also used as a home lab/learning environment.


----------



## parityboy

OK, subbed.


----------



## killabytes

Quote:


> Originally Posted by *tycoonbob*
> 
> This is for personal use. I have a Windows Active Directory domain built out here at home, with several things going on. It is also used as a home lab/learning environment.


Awesome. I haven't jumped into an AD environment at home, mostly because all my experience is from work.

I look forward to your updates.


----------



## tycoonbob

Thanks guys. I'm really looking forward to getting this all up and going, but it's going to take some time and work. I just shot two videos of my rack out in the garage; gonna check them out and see if they are any good. If so, I will upload it and link it above. Plan to do this with all components I get, as well as a build vlog. Gonna be fun!


----------



## Lt.JD

I am *very* interested. I have played with SAN's at work but haven't thought to use it in a consumer setting due to cost.


----------



## tycoonbob

Quote:


> Originally Posted by *Lt.JD*
> 
> I am *very* interested. I have played with SAN's at work but haven't thought to use it in a consumer setting due to cost.


I'm sure there are people that would like to argue that this isn't a SAN, which is partially true. A SAN is a storage solution that makes storage accessible to servers so that devices appear like locally attached devices to the OS, which will be the case with me using iSCSI Targets for my VMs. This will give my VM Host Servers block-level access to this storage, or at least part of it. I will also be using this as a NAS to store files, at the file level. The correct term for this, I believe, is NUS...Network Unified Storage. NUS is a relatively new term, but is basically a hybrid supporting fibre channel SAN (which I won't be using), IP-based SAN (which I will be using in the form of iSCSI), and NAS (which I will also be using).

I know none of that is what you asked, but I wanted to go ahead and get this information out there, before the trolls come out.









This is a whitebox NUS!


----------



## Lt.JD

Quote:


> Originally Posted by *tycoonbob*
> 
> I'm sure there are people that would like to argue that this isn't a SAN, which is partially true. A SAN is a storage solution that makes storage accessible to servers so that devices appear like locally attached devices to the OS, which will be the case with me using iSCSI Targets for my VMs. This will give my VM Host Servers block-level access to this storage, or at least part of it. I will also be using this as a NAS to store files, at the file level. The correct term for this, I believe, is NUS...Network Unified Storage. NUS is a relatively new term, but is basically a hybrid supporting fibre channel SAN (which I won't be using), IP-based SAN (which I will be using in the form of iSCSI), and NAS (which I will also be using).
> I know none of that is what you asked, but I wanted to go ahead and get this information out there, before the trolls come out.
> 
> 
> 
> 
> 
> 
> 
> 
> This is a whitebox NUS!


Ahh that's very cool. At my work we don't have a NUS machine just SAN's and NAS's. I will be following this very closely, I'm hoping I can get this done in the next couple of years. Any reason you went with Hyper-V? What OS's are you using for the VM's and the NAS?


----------



## tycoonbob

Quote:


> Originally Posted by *Lt.JD*
> 
> Ahh that's very cool. At my work we don't have a NUS machine just SAN's and NAS's. I will be following this very closely, I'm hoping I can get this done in the next couple of years. Any reason you went with Hyper-V? What OS's are you using for the VM's and the NAS?


The *NUS* will be running Windows Server 2012. I have more personal experience with Hyper-V over VMware and XenServer combined, and quite honestly...I think Hyper-V 3.0 is better than the rest. Then again, I am a Systems Engineering Consultant working for a Microsoft Gold Partner, so I get paid to say things like that.









My two VM Host servers are currently running the Windows Server 2012 RC, and will continue running Server 2012 with each new version. All my VMs are either Server 2008R2, Server 2012, Windows 7, or Windows 8, with the exception of my SNMP monitoring server, which runs OpenSUSE 12.1. I like to have something running Linux to play around with from time to time.


----------



## Lt.JD

Ahh well my specialty would be Vmware and Open-Source OS's so I'm excited to see how this all works.


----------



## Beezie

Will be very interesting to follow your project.
Are you going to show us some of the WS12 set-up and config with Hyper-V? You said 14-24 VMs; will you set up an entire Windows enterprise network with Exchange Server, SQL, and web within your home, and a second DC for failover?


----------



## tycoonbob

Quote:


> Originally Posted by *Beezie*
> 
> Will be very interesting to follow your project.
> Are you going to show us some of the WS12 set-up and config with Hyper-V? You said 14-24 VMs; will you set up an entire Windows enterprise network with Exchange Server, SQL, and web within your home, and a second DC for failover?


Of course I am going to show some Server2012 stuff! I currently have 13 VMs running between 2 Hyper-V servers...and they are:
DC01 - Server 2012 RC (Roles - AD DS, DNS, DHCP)
DC02 - Server 2012 RC (Roles - AD DS, DNS, DHCP)
MC01 - Server 2012 RC (Minecraft server. =] )
MEDIA01 - Server 2012 RC (Subsonic for LAN/WAN Music/Video streaming, Plex for HTPC LAN streaming)
MX01 - 2008R2 Ent (Exchange 2010, RD Gateway, RD Session Host for RDApps)
SCACPRI - 2008R2 Ent (System Center 2012 App Controller)
SCCMPRI - 2008R2 Ent (System Center 2012 Configuration Manager, SQL Server 2008 R2 Ent -- this is my shizz)
SCORCHPRI - 2008R2 Ent (System Center 2012 Orchestrator, SQL Server 2008 R2 Ent -- love this, but still learning)
SCVMMPRI - Server 8 "Beta" (System Center 2012 Virtual Machine Manager -- Need to rebuild with the latest CTP2, and Server 2012 RC)
SNMP01 - OpenSUSE 12.1 (Qwest Software Foglight Network Monitoring System -- SNMP/network monitoring/reporting)
TORRENT01 - 2008R2 Ent (uTorrent 3.2b server)
WS01 - 2008R2 Std (Web Server via Apache, with MySQL, PHP, WordPress - hosting 3 sites)
WIN7TEST01 - Win7 Ult (App-V Sequencing, Deployment testing, whatever)

I do this stuff for a living, and for fun (Hyper-V, Microsoft System Center, Servers, Storage, and Networking). I will definitely do videos and talk about my Hyper-V environment/virtual networks, how I have MPIO and LACP configured, switch and firewall configurations, and even cable management.


----------



## tycoonbob

Video review of my Norco RPC-4224 has been uploaded.


----------



## ZFedora

Awesome! I really enjoyed watching your videos too, looks like it's gonna be a good project!


----------



## tycoonbob

Quote:


> Originally Posted by *ZFedora*
> 
> Awesome! I really enjoyed watching your videos too, looks like it's gonna be a good project!


Thanks ZFedora. I'm really excited about this build log, and it's been fun so far! I'm out of town all this week for vacation, and will be out of town the following week for work (ah, the life of an IT Consultant), but I hope that by then I will at least have ordered my motherboard and run some Cat6 throughout the house. It's going to be slow, but definitely worth it!


----------



## Citra

OT: but is Cat6 really faster than Cat5e?


----------



## ZFedora

Quote:


> Originally Posted by *Citra*
> 
> OT: but is Cat6 really faster than Cat5e?


Yes, Cat6 can theoretically support 10Gbps transfers but is limited at certain lengths. CAT6A is also shielded twisted pair, or STP, which prevents EMI.


----------



## tycoonbob

Quote:


> Originally Posted by *ZFedora*
> 
> Yes, Cat6 can theoretically support 10Gbps transfers but is limited at certain lengths. CAT6A is also shielded twisted pair, or STP, which prevents EMI.


Cat5e can do gigabit speeds, but it's not certified for gigabit speeds. Cat5e is rated at up to 100MHz, up to a max length of 100 meters (~330 ft). Cat6 is rated at up to 250MHz, at the same distance I believe. Cat6a is rated at up to 500MHz if using Plenum Shielded Twisted Pair cables.

Ethernet cabling is not rated in measurements of 10MB, 10/100MB, 10/100/1000MB, or 10GB...but instead in its frequency rating. To ensure maximum speed, not only do you have to use quality cables, you also have to use quality jacks/keystones, rated at the same speed as the cable. Using Cat5e cable with a Cat6a keystone jack will still only get Cat5e speeds. Cat6 can do 10GB speeds, but at a much shorter distance (~120-180 ft, depending on whether it's bundled with other cables or not), whereas Cat6a can do 10GB speeds at the same distance as previous versions (~330 ft).
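The ratings above can be collected into a small lookup as a sketch; the bandwidth figures are the commonly cited TIA numbers, and the 10GBASE-T distances are approximations, not guarantees for any particular installation:

```python
# Commonly cited cable category ratings (bandwidth in MHz, rough max
# 10GBASE-T run in meters). Ballpark figures only; real-world limits
# depend heavily on termination quality and cable bundling.
CABLE_RATINGS = {
    "cat5e": {"mhz": 100, "max_10g_m": 0},    # gigabit only in practice
    "cat6":  {"mhz": 250, "max_10g_m": 55},   # roughly 37-55 m for 10GBASE-T
    "cat6a": {"mhz": 500, "max_10g_m": 100},  # full 100 m (~330 ft) channel
}

def supports_10g(category: str, run_length_m: float) -> bool:
    """Rough check: can this cable category carry 10GBASE-T over the given run?"""
    rating = CABLE_RATINGS[category.lower()]
    return 0 < rating["max_10g_m"] and run_length_m <= rating["max_10g_m"]
```

So a 100 m Cat6a run passes, while the same run on Cat6 would not; this mirrors the "shorter distance" caveat in the post.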


----------



## Mikey976

very interested in how this setup pans out, subbed!


----------



## Citra

Quote:


> Originally Posted by *ZFedora*
> 
> Yes, Cat6 can theoretically support 10Gbps transfers but is limited at certain lengths. CAT6A is also shielded twisted pair, or STP, which prevents EMI.


Quote:


> Originally Posted by *tycoonbob*
> 
> Cat5e can do gigabit speeds, but it's not certified for gigabit speeds. Cat5e is rated at up to 100MHz, up to a max length of 100 meters (~330 ft). Cat6 is rated at up to 250MHz, at the same distance I believe. Cat6a is rated at up to 500MHz if using Plenum Shielded Twisted Pair cables.
> Ethernet cabling is not rated in measurements of 10MB, 10/100MB, 10/100/1000MB, or 10GB...but instead in its frequency rating. To ensure maximum speed, not only do you have to use quality cables, you also have to use quality jacks/keystones, rated at the same speed as the cable. Using Cat5e cable with a Cat6a keystone jack will still only get Cat5e speeds. Cat6 can do 10GB speeds, but at a much shorter distance (~120-180 ft, depending on whether it's bundled with other cables or not), whereas Cat6a can do 10GB speeds at the same distance as previous versions (~330 ft).


Ah I see.

Thanks guys!


----------



## tycoonbob

_June 25, 2012_
Ordered my new Power Supply (Rosewill FORTRESS-550), and two 1-to-7 Molex extensions (Norco C-P1T7). Also found out that the Norco RPC-4224 has a 15% discount going on, which is worth about $60. I contacted Newegg about it, since I already ordered mine, and they gave me $15 off my new order (PSU and 1-to-7 Molex), which is awesome. Go Newegg!
Looking forward to this PSU as well. I know Rosewill isn't top of the line, by any means...but I have always had good luck with Rosewill, and they just released their new FORTRESS series of PSUs, which are 80+ Platinum. Yes, Platinum. Can't find any reviews on this new PSU, but I'm getting a 20% discount on it (and 20% on the Molex splitters too), and it's 80+ Plat...which I have never had before. 89% efficiency minimum, and up to 94%. According to PSU calculators, with 24 7200 RPM SATA III HDDs, I need a 497W minimum...with 520W recommended, so this should be just enough once fully loaded (which will be awhile). New items to review once I return from vacation! Woohoo.
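As a sanity check on those calculator numbers, here's a rough sketch; the per-drive and base-system wattages below are assumptions (typical 3.5" 7200 RPM figures), not measurements of any specific model, and PSU calculators add surge and rail headroom on top, which is why they report higher figures:

```python
# Back-of-envelope steady-state PSU load estimate. All wattages are
# assumed typical values, not measured from the actual build.
IDLE_W_PER_DRIVE = 5.0      # 3.5" 7200 RPM drive, spinning idle
ACTIVE_W_PER_DRIVE = 9.0    # same drive under seek/write load
BASE_SYSTEM_W = 150.0       # CPU, board, RAM, fans, RAID card (assumed)

def estimated_load_w(num_drives, active=True):
    """Estimated sustained draw in watts for the whole box."""
    per_drive = ACTIVE_W_PER_DRIVE if active else IDLE_W_PER_DRIVE
    return BASE_SYSTEM_W + num_drives * per_drive

print(estimated_load_w(24))                # 366.0 W under load
print(estimated_load_w(24, active=False))  # 270.0 W at idle
```

Steady-state, 24 drives sit comfortably under 550 W; the pinch point is spin-up surge, not sustained load.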

---

Stay tuned!


----------



## parityboy

*@OP*

As much as the PSU might deliver 550W sustained, make sure you get a controller that supports staggered spin-up. 50W isn't really a lot of headroom for a cold boot with 24 drives...
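The arithmetic behind that concern, as a rough sketch (the per-drive spin-up wattage and base-system draw are assumptions, not measured figures):

```python
# Why staggered spin-up matters: 3.5" drives can briefly pull ~25-30 W
# (mostly on the 12 V rail) while the platters spin up. Assumed figures.
SPINUP_W_PER_DRIVE = 25.0
BASE_SYSTEM_W = 150.0

def cold_boot_surge_w(num_drives, stagger_group=None):
    """Peak draw if `stagger_group` drives spin up at a time (all at once if None)."""
    group = num_drives if stagger_group is None else stagger_group
    return BASE_SYSTEM_W + group * SPINUP_W_PER_DRIVE

print(cold_boot_surge_w(24))                   # 750.0 W -- well over 550 W
print(cold_boot_surge_w(24, stagger_group=4))  # 250.0 W with 4-at-a-time staggering
```

With all 24 drives starting simultaneously the surge blows well past the PSU rating; staggering in small groups keeps the peak trivial.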


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@OP*
> 
> As much as the PSU might deliver 550W sustained, make sure you get a controller that supports staggered spin-up. 50W isn't really a lot of headroom for a cold boot with 24 drives...


Yeah, this is something I was thinking about last night. Once I fill this up with 24 drives, I will get a new PSU if I have problems. Cold boots will be rare, and will check for staggered spin up before buying a controller.

Sent from Tapatalk.


----------



## tycoonbob

Ordered my motherboard and 2 4GB sticks of RAM. Getting closer! Just need my 2 SSDs for my boot drives and a CPU, and I can start building. Hopefully two weeks or less until I can order those. PSU was received, as well as the SATA Expanders, so keep a look out for more videos soon!


----------



## tycoonbob

New posts for July 3, and July 4. New videos (motherboard, RAM, RAM Cooling, PSU, and Molex Expander) also added! I removed the video links, and just embedded the Videos in Post 3...in Spoiler tabs, for easier viewing. Pictures will be added soon.
Server Chassis video was rerecorded, using a tripod for better stability.

Have a happy (and safe) July 4th to all Americans!


----------



## Lt.JD

Good stuff, can't wait for the next update.


----------



## tycoonbob

Quote:


> Originally Posted by *Lt.JD*
> 
> Good stuff, can't wait for the next update.


Thanks, Lt! So very exciting for me.

I have uploaded about 60 photos, in the photos section on post 3. Be sure to click them to view full size images. I have also started the build, and those photos are also in post 3. Video of my new fan has also been uploaded, and is in post 3...so check it out.


----------



## swat565

What made you go with Windows 2012 for the NAS/SAN? Mind you, 2012 is most likely free for you, but it seems like you'd get better performance and a simpler setup with a more bare-bones Linux install with iSCSI and CIFS? Also, are you doing any sort of high availability for the SAN?

Awesome build though (and very jealous), and I love the Norco 4224; I've used it in a few NAS/SAN builds.


----------



## tycoonbob

Quote:


> Originally Posted by *swat565*
> 
> What made you go with windows 2012 for NAS/SAN? Mind you 2012 is most likely free for you, but seems like your performance would be better/simpler setup with more bare-bones Linux install with iSCSI and CIFS? Also are you doing any sort of high availability for the SAN?
> Awesome build though(and very jealous), and I love the norco 2440 I've used in a few NAS/SAN builds.


My home environment is all Windows once again, and that's my personal preference. Now I plan to run 2012, since I can get a free copy...but I won't be relying on Storage Spaces, exactly. Since I am using a hardware controller, disk I/O performance won't really depend on the OS. Secondly, I will also be using MPIO between this server and my Hyper-V servers.

Nothing is simpler than Windows to me.


----------



## tycoonbob

It's been a week, but more progress has been made. I ordered my CPU and CPU cooler, and should have those mid next week. I will finally be able to power this on and test it all out!


----------



## dushan24

Sorry if this was already posted.

But what sort of interconnect are you using for the SAN, I assume Gigabit Ethernet via CAT6, if you have multiple NICs are you gonna team them for more bandwidth?

What sort of throughput are you expecting?


----------



## tycoonbob

Quote:


> Originally Posted by *dushan24*
> 
> Sorry if this was already posted.
> But what sort of interconnect are you using for the SAN, I assume Gigabit Ethernet via CAT6, if you have multiple NICs are you gonna team them for more bandwidth?
> What sort of throughput are you expecting?


I believe I posted about my network plan in the first post, but yes...the NUS server has 4 gigabit NICs, and everything else in my house is (purposely) gigabit. I currently have a 16-port unmanaged gigabit switch, and I will be getting a 24-port smart switch in the future. With the NUS server, I am actually going to set up MPIO with the 4 NICs, and on my Hyper-V Hosts I am going to set up MPIO with 2 ports. I will be running some Cat6 STP throughout my house in the coming weeks. Maybe Cat6a, if I can find some quality cable cheap enough. As far as the throughput, I really don't know. Server to server over my existing 16-port gigabit switch is a constant and stable 120ish MB/s, assuming I'm not streaming or anything. I'm not looking to maximize my throughput, but as long as I can run 15-25 VMs with iSCSI Targets from this NUS server, I will be more than happy. I have also thought about getting some 20Gb/s InfiniBand HBAs and linking those back to back (without a switch), which would be an investment of $200-300 (for 3 HBAs, plus cables).
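That 120ish MB/s matches the theoretical ceiling of a single gigabit link; a quick sketch (the protocol-efficiency factor is an assumed approximation for Ethernet/IP/TCP framing at a 1500-byte MTU):

```python
# Rough payload ceiling for gigabit links: line rate minus framing overhead.
LINK_BPS = 1_000_000_000     # gigabit line rate in bits/s
PROTOCOL_EFFICIENCY = 0.95   # assumed Ethernet+IP+TCP overhead factor (approx.)

def max_payload_mb_s(links=1):
    """Approximate best-case payload throughput in MB/s across aggregated paths."""
    return links * LINK_BPS * PROTOCOL_EFFICIENCY / 8 / 1_000_000

print(round(max_payload_mb_s(1), 2))  # ~118.75 MB/s for one link
print(round(max_payload_mb_s(4), 2))  # ~475.0 MB/s ceiling for 4-NIC MPIO
```

So a single NIC tops out right around the observed 120 MB/s, and the 4-NIC MPIO setup raises the ceiling to roughly four times that, assuming the disks can keep up.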

Things may change once I get to this point though. I plan to finish up my NUS before working on building my mDC room, and running my cables, and having my ISP come out and running a new cable, etc.

Thanks for the questions!


----------



## dushan24

Switched fabric (such as InfiniBand) is cool but not worth it in my opinion. If you're all CAT6, try finding a 2nd hand 10G switch on the cheap.

Even that is overkill. We use gigabit ethernet through a managed switch over CAT6 in our data centre. That is an infrastructure with 4 SAN's, 8 ESXi hosts and a few other things and we don't saturate the pipe.

All we do is:
- Use Multi-Path I/O on the iSCSI initiator in all the Windows VMs
- Use link aggregation (NIC teaming) on each of the servers
- Segment the traffic into 2 VLANs
- Port trunking on the switches.


----------



## tycoonbob

Quote:


> Originally Posted by *dushan24*
> 
> Switched fabric (such as InfiniBand) is cool but not worth it in my opinion. If you're all CAT6, try finding a 2nd hand 10G switch on the cheap.
> Even that is overkill. We use gigabit ethernet through a managed switch over CAT6 in our data centre. That is an infrastructure with 4 SAN's, 8 ESXi hosts and a few other things and we don't saturate the pipe.
> All we do is:
> Use Multi Path I/O on the iSCSI initiator in all the Windows VM's
> Use link aggregation (NIC teaming) on each of the servers
> Segment the traffic into 2 VLAN's
> Port trunking on the switches.


10GigE is still too expensive. It would be $500 minimum for the switch, then the cost of a 10GigE adapter for each host in my cluster, plus the NUS server. You can get IB HBA's for $75/ea and not use a switch...but it's much less supported, and iSCSI over IB isn't officially supported. Going 10GigE would kinda defeat the purpose of me getting the motherboard that I got, and since I will have 25 VMs max (probably), 4 NICs MPIO on the NUS and 2 NICs MPIO on each host should be plenty for my network.

I probably won't deal with VLANs, but I did think about it. Now I am a little confused about what you said you are using at work...MPIO on your iSCSI initiator, but LACP on your other servers? Other servers as in your hosts, or other servers in your environment? You can't mix MPIO and LACP on the same ports...they are two different things, though both provide failover.

Anyway, I work for an IT consulting company so I see different environments all over the place. I've worked in anything from doing a P2V on all two servers in an environment, to working in environments such as Papa Johns, Humana, Nissan, and Toyota (of America). It's amazing seeing all the different ways things are done, but I am confident that what I am planning for is more than what I would really need.


----------



## dushan24

Quote:


> Originally Posted by *tycoonbob*
> 
> Now I am a little confused on what you said you are using at work...MPIO on your iSCSI Initiator, but LACP on your other servers? Other servers as in your hosts, or other servers in your environment? You can't MPIO to LACP...as they are two different things, but both are still failovers.


Each server has two 4 port Ethernet cards. We aggregate (team) the ports on each card. This yields additional bandwidth. This is obviously done on the host level.

We then use MPIO with the 2 discrete NIC teams at the hypervisor level to map LUNs on the SAN as storage repositories for the given ESXi host.

Additionally each VM is configured with multiple vNICs so we can again use MPIO via the iSCSI initiator to map LUNs directly to the VM. We have also implemented 2 VLANs to segregate VM and SAN traffic.

I see your confusion as my previous description implied we were using MPIO and LACP together on a single NIC.

PS: Edited to correct mistakes (I'm tired).


----------



## dushan24

Quote:


> Originally Posted by *tycoonbob*
> 
> 10GigE ether is still too expensive. Would be $500 minimum for the switch, then cost of 10GigE adapter for each host in my cluster, plus the NUS server. You can get IB HBA's for $75/ea, and not use a switch...but it's much less supported, and iSCSI over IB isn't officially supported. Going 10GigE would kinda defeat the purpose of me getting the motherboard that I got, and since I will have 25 VMs max (probably), 4 NICs MPIO on the NUS and 2 NICs MPIO on each host, should be plenty for my network.


That's a fair point about the cost.

But as you said iSCSI over InfiniBand may be troublesome.

I'd just stick with gigabit Ethernet.


----------



## tycoonbob

Quote:


> Originally Posted by *dushan24*
> 
> Each server has two 4 port Ethernet cards. We aggregate (team) the ports on each card. This yields additional bandwidth. This is obviously done on the host level.
> We then use MPIO with the 2 discrete NIC teams at the hypervisor level to map LUNs on the SAN as storage repositories for the given ESXi host.
> Additionally each VM is configured with multiple vNICs so we can again use MPIO via the iSCSI initiator to map LUNs directly to the VM. We have also implemented 2 VLANs to segregate VM and SAN traffic.
> I see your confusion as my previous description implied we were using MPIO and LACP together on a single NIC.
> PS: Edited to correct mistakes (I'm tired).


Ah, I see. I have never thought about layering NICs like that. That could really push some good throughput, but definitely not needed at my home lol. Separate VLANs to segregate VM and SAN traffic is pretty common from what I have seen, so I assumed that. Thanks for the clarification.

And yes, I am just gonna stick with my quad GigE....the only reason I went with that motherboard, otherwise it would have been a socket C32 or G34, instead of Intel.


----------



## dushan24

Quote:


> Originally Posted by *tycoonbob*
> 
> Ah, I see. I have never thought about layering NICs like that. That could really push some good throughput, but definitely not needed at my home lol. Separate VLANs to segregate VM and SAN traffic is pretty common from what I have seen, so I assumed that. Thanks for the clarification.
> And yes, I am just gonna stick with my quad GigE....the only reason I went with that motherboard, otherwise it would have been a socket C32 or G34, instead of Intel.


Cool mate,

I'm interested to see how this pans out.


----------



## u3b3rg33k

As someone who does a lot of work with data cabling and related networking equipment, I would like to steer your towards jperf (the gui front end for iperf). if you have trouble getting it to run on windows, I made a custom .bat file to make things easy (PM me if you want it). With this lovely tool you can prove (or disprove) the actual capacity of your network - for example, I found that on the ASUS P5WDG2-WS PRO boards, one of the controllers seems to be limited to around 40MB/s of continuous throughput, while the other controller will do full throughput, on the same cabling and switch - it saved me a lot of headache (the fluke said the cabling was good to go).
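If jperf/iperf isn't handy, a crude stand-in can be improvised; this is a rough sketch, not a substitute for iperf. It streams bytes over a TCP socket and times the transfer; the demo runs over loopback to be self-contained, but in practice you'd run the receiver on the far host and point the sender at its address:

```python
# Crude iperf-style throughput probe: time a bulk TCP transfer.
# Loopback demo only -- loopback numbers will far exceed wire speed.
import socket
import threading
import time

def _receiver(srv, total):
    # Accept one connection and count every byte until the sender closes.
    conn, _ = srv.accept()
    while chunk := conn.recv(65536):
        total[0] += len(chunk)
    conn.close()

def measure_mb_s(payload_mb=64, host="127.0.0.1"):
    """Send `payload_mb` MB over TCP and return the observed rate in MB/s."""
    srv = socket.socket()
    srv.bind((host, 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    total = [0]
    t = threading.Thread(target=_receiver, args=(srv, total))
    t.start()
    data = b"\x00" * (1024 * 1024)
    cli = socket.create_connection((host, port))
    start = time.perf_counter()
    for _ in range(payload_mb):
        cli.sendall(data)
    cli.close()
    t.join()                     # wait until the receiver has drained everything
    srv.close()
    elapsed = time.perf_counter() - start
    return total[0] / elapsed / 1_000_000

print(f"{measure_mb_s():.0f} MB/s")
```

Run the same idea across two real machines and you get a quick sanity check on whether the cabling/switch path can actually sustain gigabit payload rates.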

For the average user, your transfer speed limitations are not network related, but are bottlenecked somewhere else (often it's non-SSD systems, SSD systems with small drives that can't maintain 100+ MB/s writes). I would go so far as to say properly terminated category 5e wiring is more than sufficient. you can make or break a network by how well you terminate the wire onto the jacks / patch panel. I've seen up to 11dB difference between the work done by a good installer, and the work done by someone who really doesn't care. It's simple to do right - just keep the twist as tight as possible. You can often get category 5 jacks to pass data at gigabit speeds if you know what you're doing (not that I'd recommend it).

For the most part, the real benefits of higher spec'd wire don't come into play in a home installation - the big benefits are when there's a tray full of wire and you're looking for more overhead to deal with noise from the other data wires, RFI, and so on.

Insofar as cat 6A is concerned, 6A wire does NOT have to be shielded wire (it can be but it's not required). I usually see 6A specified for the most foolish of installations - one I just certified was done in an industrial kitchen for the POS terminals and the order screens - and they spec'd exposed plenum wire with exposed service loops! in a kitchen! Egads!


----------



## dushan24

Quote:


> Originally Posted by *u3b3rg33k*
> 
> As someone who does a lot of work with data cabling and related networking equipment, I would like to steer your towards jperf (the gui front end for iperf). if you have trouble getting it to run on windows, I made a custom .bat file to make things easy (PM me if you want it). With this lovely tool you can prove (or disprove) the actual capacity of your network - for example, I found that on the ASUS P5WDG2-WS PRO boards, one of the controllers seems to be limited to around 40MB/s of continuous throughput, while the other controller will do full throughput, on the same cabling and switch - it saved me a lot of headache (the fluke said the cabling was good to go).
> For the average user, your transfer speed limitations are not network related, but are bottlenecked somewhere else (often it's non-SSD systems, SSD systems with small drives that can't maintain 100+ MB/s writes). I would go so far as to say properly terminated category 5e wiring is more than sufficient. you can make or break a network by how well you terminate the wire onto the jacks / patch panel. I've seen up to 11dB difference between the work done by a good installer, and the work done by someone who really doesn't care. It's simple to do right - just keep the twist as tight as possible. You can often get category 5 jacks to pass data at gigabit speeds if you know what you're doing (not that I'd recommend it).
> For the most part, the real benefits of higher spec'd wire don't come into play in a home installation - the big benefits are when there's a tray full of wire and you're looking for more overhead to deal with noise from the other data wires, RFI, and so on.
> Insofar as cat 6A is concerned, 6A wire does NOT have to be shielded wire (it can be but it's not required). I usually see 6A specified for the most foolish of installations - one I just certified was done in an industrial kitchen for the POS terminals and the order screens - and they spec'd exposed plenum wire with exposed service loops! in a kitchen! Egads!


You're right about the first bottleneck usually not being network related.

And your info on cables was interesting.

I'd be interested to see the script, PM sent.


----------



## tycoonbob

Quote:


> Originally Posted by *u3b3rg33k*
> 
> As someone who does a lot of work with data cabling and related networking equipment, I would like to steer your towards jperf (the gui front end for iperf). if you have trouble getting it to run on windows, I made a custom .bat file to make things easy (PM me if you want it). With this lovely tool you can prove (or disprove) the actual capacity of your network - for example, I found that on the ASUS P5WDG2-WS PRO boards, one of the controllers seems to be limited to around 40MB/s of continuous throughput, while the other controller will do full throughput, on the same cabling and switch - it saved me a lot of headache (the fluke said the cabling was good to go).
> For the average user, your transfer speed limitations are not network related, but are bottlenecked somewhere else (often it's non-SSD systems, SSD systems with small drives that can't maintain 100+ MB/s writes). I would go so far as to say properly terminated category 5e wiring is more than sufficient. you can make or break a network by how well you terminate the wire onto the jacks / patch panel. I've seen up to 11dB difference between the work done by a good installer, and the work done by someone who really doesn't care. It's simple to do right - just keep the twist as tight as possible. You can often get category 5 jacks to pass data at gigabit speeds if you know what you're doing (not that I'd recommend it).
> For the most part, the real benefits of higher spec'd wire don't come into play in a home installation - the big benefits are when there's a tray full of wire and you're looking for more overhead to deal with noise from the other data wires, RFI, and so on.
> Insofar as cat 6A is concerned, 6A wire does NOT have to be shielded wire (it can be but it's not required). I usually see 6A specified for the most foolish of installations - one I just certified was done in an industrial kitchen for the POS terminals and the order screens - and they spec'd exposed plenum wire with exposed service loops! in a kitchen! Egads!


The only reason I would go with Cat6a for this project is because I got a great deal on it...and for future-proofing. Yes, Cat5e can unofficially run gigabit, if done right...but what about in the future, 3-5 years from now, if 10GigE adapters are reasonably affordable? I'd rather have Cat6 STP, which would perform better than Cat5e...and not to mention I already have a spool of Cat6 STP, so that's what I will be using. I'm not an expert at terminations, but I do know to keep twists tight, and how to punch down. I did upgrade to STP over UTP though, but the highest density of cabling would be the intake to my mDC...where I will have 15-20 Cat6a cables, as well as 2 coax cables...which may be near some 120v cabling. I know UTP would be fine, but why not go with better stuff for future-proofing? I'm not looking for the bare minimum...as if I was, I would be using Lack Racks (instead of a real rack), external NAS devices such as Drobos or QNAPs, and all unmanaged switches. I would still be using my Netgear WNDR-3700 (with DD-WRT) as my gateway device, but I'm not...since I run Untangle.

I do appreciate your comments, and am interested in that script. I currently am able to maintain 120MB/s writes to my Raid5 array, which *is* limited by my network. If I write locally from a Raid 1, to my Raid 5...I can beat that 120MB/s. Quite honestly, as long as I can maintain 100MB/s writes in a single transaction...I will be more than happy. Since I will be running MPIO with 4 NICs on the storage side, and MPIO on the host side with 2 NICs...along with 15-25 VMs, I want to see something else be the bottleneck instead of the network...which will probably be my storage (which will probably end up as a Raid 50, with 7200RPM HDDs...on a real raid controller, mind you). Please send me that script, if you don't mind. Very interested!


----------



## tycoonbob

New update posted, as I modded my cable modem out of boredom.

[Modem Mod] Added a 60mm(?) fan to my Thomson Cable Modem


----------



## u3b3rg33k

Quote:


> Originally Posted by *tycoonbob*
> 
> The only reason I would go with Cat6a for this project, is because I got a great deal on it...and for future-proofing. *Yes, Cat5e can unofficially run gigabit, if done right*...what about in the future, 3-5 years from now if 10gigE adapters are reasonably affordable? I'd rather have Cat6 STP, which would provide better than Cat5e...and not to mention I already have a spool of Cat6 STP, so that's what I will be using. I'm not an expert at terminations, but I do know to keep twists tight, and how to punch down. I did upgrade to STP over UTP though, but the highest density of cabling would be the intake to my mDC...where I will have 15-20 Cat6a cables, as well as 2 coax cables...which may be near some 120v cabling. I know UTP would be fine, but why not go with better stuff for future proofing? I'm not looking for the bare minimum...as if I was, I would be using Lack Racks (instead of a real rack), external NAS devices such as Drobos or QNAPs, and all unmanaged switches. I would still be using my Netgear WNDR-3700 (with DD-WRT) as my gateway device, but I'm not...since I run Untangle.
> I do appreciate your comments, and am interested in that script. I currently am able to maintain 120MB/s writes to my Raid5 array, which *is* limited by my network. If I write locally from a Raid 1, to my Raid 5...I can beat that 120MB/s. Quite honestly, as long as I can maintain 100MB/s writes in a single transaction...I will be more than happy. Since I will be running MPIO with 4 NICs on the storage side, and MPIO on the host side with 2 NICs...along with 15-25 VMs, I want to see something else be the bottleneck instead of the network...which will probably be my storage (which will probably end up as a Raid 50, with 7200RPM HDDs...on a real raid controller, mind you). Please send me that script, if you don't mind. Very interested!


Maybe I just have trouble letting it go, but as far as I'm concerned, gigabit support on 5E is pretty official - even Belden recognizes it as suitable for horizontal cabling: http://www.belden.com/techdatas/english/1583a.pdf, and Belden LOVES selling you cable that is spec'd higher than the standard, which can be a double edged sword at times - if you do put in cable with more headroom, who's to say that the next tech will work over it? Not that I think it's bad. It's also useful when you exceed the recommended distance of 100m - I have a few cameras installed that are on the end of 400ft of shielded flooded 5e, and aside from the actual distance being out of spec, everything else is good to go - the shielding even hides the ticking from the cattle fence charger.

http://www.newark.com/pdfs/techarticles/belden/DifferenceBetweenCat6Cat5Standards.pdf
belden loves to toot their own horn - honestly, if your network is having trouble passing data over 5*e* @ 100Mb/s speeds, upgrading to Cat 6 is probably not the answer.

Here's the script that goes in a .bat:

```bat
set PathTemp=%Path%
set Path=C:\Program Files (x86)\Java\jre6\bin;%Path%
start javaw -classpath jperf.jar;lib\forms-1.1.0.jar;lib\jcommon-1.0.10.jar;lib\jfreechart-1.0.6.jar;lib\swingx-0.9.6.jar net.nlanr.jperf.JPerf
set Path=%PathTemp%
set PathTemp=
exit
:: start javaw -classpath jperf.jar;lib\forms-1.1.0.jar;lib\jcommon-1.0.10.jar;lib\jfreechart-1.0.6.jar;lib\swingx-0.9.6.jar net.nlanr.jperf.JPerf
exit
```

I love me some jperf, and win7 seems to dislike it without that (FYI, not my work, I found it and love it). I use it for "proofing" cable and fiber runs on occasion, especially when people tell me "that won't work", or "your cable/fiber is bad" (how you can say the latter when the link lights are active on both sides is beyond me). Two Intel PRO/1000 GT cards (more often integrated Broadcom or Realtek laptop adapters) passing 100+ MB/s over it usually gets people to shut up and fix the programming in their switch.

Glad to hear you're getting 120MB/s+ speeds out of the fileserver - fun fact, that's near payload capacity for 33MHz/32bit PCI - I've also seen that be a bottleneck on older machines (long live PCI-X 133MHz?).
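If you want to sanity-check that PCI figure, the arithmetic fits in a couple of lines of Python (theoretical peak only; real-world bus overhead eats into it a bit):

```python
# Legacy 33 MHz / 32-bit PCI moves at most 4 bytes per clock cycle,
# so the theoretical ceiling of the shared bus is:
clock_hz = 33_000_000
bus_bytes = 4  # 32-bit data path
peak_mb_s = clock_hz * bus_bytes / 1_000_000
print(peak_mb_s)  # 132.0 -- so a sustained ~120 MB/s transfer is already near the bus limit
```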


----------



## tycoonbob

Quote:


> Originally Posted by *u3b3rg33k*
> 
> Maybe I just have trouble letting it go, but as far as I'm concerned, gigabit support on 5E is pretty official - even belden recognizes it as suitable for horizontal cabling: http://www.belden.com/techdatas/english/1583a.pdf, and belden LOVES selling you cable that is spec'd higher than the standard, which can be a double edged sword at times - if you do put in cable with more headroom, who's to say that the next tech will work over it? Not that I think it's bad. It's also useful when you exceed the recommended distance of 100m - I have a few cameras installed that are on the end of 400ft of shielded flooded 5e, and aside from the actual distance being out of spec, everything else is good to go - the shielding even hides the ticking from the cattle fence charger.
> http://www.newark.com/pdfs/techarticles/belden/DifferenceBetweenCat6Cat5Standards.pdf
> belden loves to toot their own horn - honestly if your network is having trouble passing data over 5_*e*_ @ 100Mb/s speeds, upgrading to cat 6 is probably not the answer.
> Here's the script that goes in a .bat:
> set PathTemp=%Path%
> set Path=C:\Program Files (x86)\Java\jre6\bin;%Path%
> start javaw -classpath jperf.jar;lib\forms-1.1.0.jar;lib\jcommon-1.0.10.jar;lib\jfreechart-1.0.6.jar;lib\swingx-0.9.6.jar net.nlanr.jperf.JPerf
> set Path=%PathTemp%
> set PathTemp=
> exit
> :: start javaw -classpath jperf.jar;lib\forms-1.1.0.jar;lib\jcommon-1.0.10.jar;lib\jfreechart-1.0.6.jar;lib\swingx-0.9.6.jar net.nlanr.jperf.JPerf
> exit
> I love me some jperf, and win7 seems to dislike it without that (FYI, not my work, I found it and love it). I use it for "proofing" cable and fiber runs on occasion, especially when people tell me "that won't work", or "your cable/fiber is bad" (how you can say the latter when the link lights are active on both sides is beyond me). Two Intel PRO/1000 GT cards (more often integral broadcom or realtek laptop adapters) passing 100+ MB/s over it usually gets people to shut up and fix the programming in their switch.
> Glad to hear you're getting 120MB/s+ speeds out of the fileserver - fun fact, that's near payload capacity for 33MHz/32bit PCI - I've also seen that be a bottleneck on older machines (long live PCI-X 133MHz?).


There is always a debate around Cat5e vs Cat6, but I have no reason not to go with Cat6. I like that Cat6 is _officially_ certified for gigabit. I like that my Cat6 is STP, for less interference. It has a higher bandwidth, which means little to me other than "it's better than Cat5e".

As far as that script goes, it doesn't seem to work for me. For starters, I don't have JRE 6 installed...and I'm not quite sure what it's supposed to do. Two exits in the script, and no export? Is it supposed to tell me something?


----------



## u3b3rg33k

you need jperf (http://www.softpedia.com/get/Network-Tools/Network-Testing/JPerf.shtml) as well as the jre installed.

Are you using shielded jacks and patch cords with your STP cable?


----------



## trueg50

Pretty neat build! I never knew there was a block-level (SAN?) storage component to Server 2012.

I know of a bunch of popular SAN-type storage OS/software, but I have only used Unified storage on EMC VNX's with Celerra/EMC Datamovers.


----------



## tycoonbob

Quote:


> Originally Posted by *u3b3rg33k*
> 
> you need jperf (http://www.softpedia.com/get/Network-Tools/Network-Testing/JPerf.shtml) as well as the jre installed.
> Are you using shielded jacks and patch cords with your STP cable?


Yeah, I'm not much with jperf...but will install it later and give it another go.

I have not yet run my Cat6, nor bought my keystones. I was planning on getting shielded jacks...which I found for around $9/ea.

Quote:


> Originally Posted by *trueg50*
> 
> Pretty neat build! I never knew there was a block-level (SAN?) storage component to server 2012.
> I know of a bunch of popular SAN-type storage OS/software, but I have only used Unified storage on EMC VNX's with Celerra/EMC Datamovers.


Yes, it has been built into Windows for a while now: Microsoft iSCSI Initiator/Target. It allows for block-level access to the storage. I plan to use that for my VMs, and just good old SMB for file storage. Unified storage is considered a device that can do both, supporting both IP iSCSI and FC.


----------



## u3b3rg33k

Quote:


> Originally Posted by *tycoonbob*
> 
> Yeah, I'm not much with jperf...but will install it later and give it another go.
> I have not yet run my Cat6, nor bought my keystones. *I was planning on getting shielded jacks*...which I found for around $9/ea.
> Yes, it has been built in with Windows for a while now. Microsoft iSCSI Initiator/Target. Allows for block-level access to the storage. I plan to use that for my VMs, while just good old SMB for storage. *Unified storage is considered a device that can do both, support both IP iSCSI and FC.*


FC is just a medium, not a file transfer scheme. You mean smb/afp/nfs?

As for shielded jacks, you either need grounded patch panels or shielded patch cords (grounding via the PC/switch). Otherwise the signals that are picked up by the shield won't be removed, and you can see worse performance than you would with UTP.


----------



## tycoonbob

Quote:


> Originally Posted by *u3b3rg33k*
> 
> FC is just a medium, not a file transfer scheme. you mean smb/afp/nfs?
> As for shielded jacks, you either need grounded patch panels or shielded patch cords (grounding via the PC/switch). otherwise the signals that are picked up by the shield won't be removed, and you can see worse performance than you would with UTP.


A NUS is a device that has both file-based and block-based support in a single platform...supporting FC SAN, IP-based SAN (iSCSI), and NAS (SMB). I did not mean smb/afp/nfs, as those are all file-based access, not block-based the way FC SANs are.

Also, new post, photos, and videos have been added. Server is alive with Server 2012!


----------



## tycoonbob

Sorry for the disappearance, but time is limited and so is money. I hope to order my HP SAS Expander this weekend, and if not then I will order my SSDs and get those mounted.

New update in Post 1, and check out my new router mod.

[Router Mod] 80mm fan on my Netgear WNDR3700

EDIT:
Just ordered my MegaRAID SAS 9261-8i, along with a LSI BAT1S1P Battery Backup Unit. Lightly used, 10 day DOA warranty, $308 for both, shipped! Great deal in my opinion, as long as it works. Will have to order a SAS SFF-8087 cable or two so I can test it out, along with all my backplanes...and hopefully the SAS Expander in a few more weeks!


----------



## tycoonbob

New update in post 1, where I finally figured out what drives I am going to use.


----------



## Lt.JD

This build is looking real good. Keep up the work.


----------



## tycoonbob

Quote:


> Originally Posted by *Lt.JD*
> 
> This build is looking real good. Keep up the work.


Thanks!

Things are slow, got a week long work trip in Memphis next week, and work is keeping me busy. I should have my controller tomorrow, and if I can get my hands on a SFF-8087 cable, I can test it out. New update in the first post, and added a new part to the build list...a nifty hot swap tray for a 2.5" drive, that mounts in a PCI bay. I will use two of these for my SSDs, which will be in Raid 1. Here is a great review of them, over at Overclockers.com:
http://www.overclockers.com/syba-25-hdd-enclosure-pci-slot/

Let me know what you think, and if you know of a better alternative!


----------



## tycoonbob

New post, new photos (Raid Controller and build log), new video (raid controller)!!!


----------



## tycoonbob

New update in post 1.

Ordered some new parts (a single SFF-8087 cable, PCI slot 2.5" hot swap caddy, and a PCI Bracket for my raid controller). The only main component I still need to order to complete the build, is my HP SAS Expander then I can start buying drives.


----------



## Taisho

Hey. I have been following this build closely for 14 days now, because I am going to build a server like this on my own. But I was wondering: why are you going with the HP SAS expander? And how are you going to connect the SAS expander and the RAID card? One more thing: when the box is filled with drives and you expand the server with a new Norco case, how will you do this? I know this is a lot of questions, but I really hope you will reply in a helpful way.

I hope you understand what I am trying to explain, and please respond in the best possible way.


----------



## tycoonbob

Quote:


> Originally Posted by *Taisho*
> 
> Hey. I have been following this build closely for 14 days now, because I am going to build a server like this on my own. But I was wondering: why are you going with the HP SAS expander? And how are you going to connect the SAS expander and the RAID card? One more thing: when the box is filled with drives and you expand the server with a new Norco case, how will you do this? I know this is a lot of questions, but I really hope you will reply in a helpful way.
> I hope you understand what I am trying to explain, and please respond in the best possible way.


Sorry I didn't get back to you sooner, I just had a 6.5 hour drive home from Memphis last night...ugh. Anyway, the HP SAS Expander is what I will need to connect all my drives to my raid controller.

I am doing an actual hardware raid, and there are quite a few on this forum who will try to shut that down right away and push ZFS (software raid), but I have my reasons for using hardware raid. The HP SAS Expander, if you want to think about it this way, is nothing more than a SAS switch. It has 8 internal SFF-8087 connections, six of which (using SFF-8087 cables) will connect to the backplanes of my Norco RPC-4224. Then I will connect a 7th SFF-8087 cable from the HP SAS Expander to my Raid Controller...bringing all the backplanes to my raid controller. Make sense?

I could have spent the money and bought a raid controller with 6 SFF-8087 ports, but those cards are over $1000...instead of $400 for my controller and another $200 for the SAS Expander, so I spent $600 instead of $1000+.

Now as far as what I will do if I ever fill this box up...I will have a free SFF-8087 port on my Raid Controller, which is inside the chassis. I will first use something like this:
http://www.pc-pitstop.com/sas_cables_adapters/AD8788-1.asp

and connect a SFF-8087 cable to that and the free port on my raid controller. This will give me an external SFF-8088 port on my chassis. I would build a new box using something like the Norco RPC-4224 again, and mount a power supply, and this:
http://www.chenbro.com/corporatesite/products_detail.php?sku=76

That adapter (which is around $300-400) is the SAS Expander, as well as a power-switching board to kick on power to the HDDs, etc. I would use 6 SFF-8087 cables from that UEK to the backplanes of the second Norco RPC-4224. The UEK has an external SFF-8088 port, which I would connect with a SFF-8088 cable to my main box, adding 24 more drives to that raid controller. That UEK actually has an In and an Out SFF-8088 port, so if I wanted to build another Norco RPC-4224 chassis using another UEK, I could chain that up for effectively 72 drives connected to my controller. Another would be 96 drives on that single raid controller...and another puts me at 120 drives, which is just shy of the maximum drives the controller can see (128). So by using this (and about $800 per expansion enclosure), I could have up to 120 drives if I ever wanted to expand that far. Using 3TB drives, that's 360TB of raw storage...which I don't think I will need anytime soon.
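If you want to double-check the daisy-chain math, here's a quick Python sketch (the 24-bay and 128-drive figures are the ones from this post; it's just arithmetic, nothing vendor-specific):

```python
# How far one controller can scale by chaining 24-bay Norco RPC-4224 enclosures.
bays_per_chassis = 24
controller_max_drives = 128  # drive limit of the raid controller
drive_tb = 3

# Only whole chassis make sense, so round down to full 24-bay boxes:
max_full_chassis = controller_max_drives // bays_per_chassis
total_drives = max_full_chassis * bays_per_chassis
raw_tb = total_drives * drive_tb
print(max_full_chassis, total_drives, raw_tb)  # 5 chassis, 120 drives, 360 TB raw
```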


----------



## Taisho

Thanks a lot, but you did not answer why the HP SAS expander; I could see that Chenbro also makes a SAS expander (http://www.xcase.co.uk/product-p/expander-ck23601.htm), so why not this one? I am not trying to be smart or anything, just want to know why.

And it did make sense how you wanted to connect the drives. I am just not sure I got the last part right.

When you add a second Norco box, you will add a PSU and that Chenbro thing and connect that to your RAID controller. But what about the third box? You don't have any more ports on the RAID card, so how will you add those to the controller? I know it's really far away from your setup, but I really need to know.

The Chenbro has an output and an input on the outside; what are those needed for? I guess one of them goes straight to the RAID controller, and the other to a second expander?

When you add the UEK to the second box and connect that to your RAID controller, you will just get a third box and a UEK and connect that UEK to the first UEK, which is the one going to your controller. So only one of the UEKs is connected to the RAID controller. I am really trying to understand this.

So the first UEK goes to the RAID controller, and the second UEK goes to UEK number 1, which goes to the controller. So you just keep connecting UEKs to each other until you reach the card limit? What about transfer speeds, will they not drop when it's all chained together? Because if you get a fourth box, all the data will need to go through all the UEKs before it gets to the server?

I know it's complicated now, but hey, I think you can explain it; it really helped in the first post.


----------



## tycoonbob

Quote:


> Originally Posted by *Taisho*
> 
> Thanks a lot, but you did not answer why the HP SAS expander; I could see that Chenbro also makes a SAS expander (http://www.xcase.co.uk/product-p/expander-ck23601.htm), so why not this one? I am not trying to be smart or anything, just want to know why.
> And it did make sense how you wanted to connect the drives. I am just not sure I got the last part right.
> When you add a second Norco box, you will add a PSU and that Chenbro thing and connect that to your RAID controller. But what about the third box? You don't have any more ports on the RAID card, so how will you add those to the controller? I know it's really far away from your setup, but I really need to know.
> The Chenbro has an output and an input on the outside; what are those needed for? I guess one of them goes straight to the RAID controller, and the other to a second expander?
> When you add the UEK to the second box and connect that to your RAID controller, you will just get a third box and a UEK and connect that UEK to the first UEK, which is the one going to your controller. So only one of the UEKs is connected to the RAID controller. I am really trying to understand this.
> So the first UEK goes to the RAID controller, and the second UEK goes to UEK number 1, which goes to the controller. So you just keep connecting UEKs to each other until you reach the card limit? What about transfer speeds, will they not drop when it's all chained together? Because if you get a fourth box, all the data will need to go through all the UEKs before it gets to the server?
> I know it's complicated now, but hey, I think you can explain it; it really helped in the first post.


That is pretty much right. UEK #1 connects to the Raid Controller. UEK #2 connects to UEK #1 at the "In" port. UEK #3 connects to UEK #2 at the "In" port. Etc.

I chose the HP SAS Expander over Chenbro because I trust HP more than Chenbro. I think the HP SAS Expander is better quality than the Chenbro, and more available. If I went with Chenbro, I'd need the CK23601, which is over $300. Also, the ports on the Chenbro just wouldn't work as well for me. I'm sure the Chenbro CK23601 is a good SAS Expander, but the HP SAS Expander just works perfectly with the Norco RPC-4224.


----------



## tycoonbob

New photos and videos uploaded and in Post #3. Check out the photos for the Syba SY-MRA25023 and the Tripp-Lite SAS cable. Video uploaded for them both as well.

I will do another Build Log video later tonight!

Check em out and let me know what you think (that Syba SY-MRA25023 PCI Mobile Rack thing is awesome)!


----------



## Taisho

Thanks, that helped me.

Did you get the thing up and running? What are you planning to do with your storage drives? Are you gonna go with RAID 5 or 6, and how big are the volumes you're planning?


----------



## tycoonbob

Quote:


> Originally Posted by *Taisho*
> 
> Thanks, that helped me.
> 
> Did you get the thing up and running? What are you planning to do with your storage drives? Are you gonna go with RAID 5 or 6, and how big are the volumes you're planning?


Well, I will have 4 15K RPM 600GB SAS drives in a Raid 5 (~1.7TB), which will be storage for all my VMs. That will leave 20 hot swap bays, which will be filled with 3TB 7200 RPM drives in a Raid 60 (with 2 hot spares). I am going to do 2 Raid 6 sets of 4 drives each to start (8 drives total), and expand out when needed, 2 drives at a time. 20 3TB drives in a Raid 60 (2 hot spares) should be approximately 42TB of available storage, and could sustain a total of 4 drive failures (2 drives in each Raid 6 set), plus I will have the 2 hot spares available. I have less than 10TB of data currently, so it should take some time to fill up 42TB. At that point, I will build a SAS Expander chassis and fill it with 4 (or maybe 5) TB drives, and build another Raid 60 (with 24 drives--2 hot spares), which would be 72TB (4TB drives) or 90TB (5TB drives).
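The usable-space math works out like this (a rough Python model; real arrays lose a little more to formatting overhead, and the drive counts are the ones from this post):

```python
def raid60_usable_tb(total_drives, hot_spares, raid6_sets, drive_tb):
    """RAID 60 usable capacity: each RAID 6 set gives up two drives to parity."""
    in_array = total_drives - hot_spares          # spares sit outside the array
    per_set = in_array // raid6_sets              # drives per RAID 6 set
    data_drives = (per_set - 2) * raid6_sets      # subtract 2 parity drives per set
    return data_drives * drive_tb

# 20 bays of 3 TB drives, 2 hot spares, striped across two RAID 6 sets:
print(raid60_usable_tb(20, 2, 2, 3))   # 42 TB, matching the estimate above

# The future 24-drive expansion chassis with 4 TB or 5 TB drives:
print(raid60_usable_tb(24, 2, 2, 4))   # 72 TB
print(raid60_usable_tb(24, 2, 2, 5))   # 90 TB
```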

I have not gotten it up and running yet, other than testing with Server 2008 R2. Sept.4 is when Server 2012 goes General Availability, so I am waiting until then to install the OS.


----------



## tycoonbob

Few new updates. Received 1 SSD (debating if I want to do Raid 1 or not for the OS) and 2 rear exhaust fans, and ordered my HP SAS Expander and 6 more SFF-8087 cables. I also ordered another 2TB 7K3000 drive (making 4), with which I am going to set up a Raid 5 to do some performance testing. I will be running a 4-drive Raid 5 (6TB storage) until I run into some cash (probably my quarterly bonus at work) to be able to buy some Toshiba 3TB drives (DT01ACA300, once released). Affording all these drives is the biggest hurdle, but all in time. I'm also rethinking using 3 120mm Delta AFB1212GHE-001 fans. They are super powerful, but I'm wondering if they will have adverse effects since I already have some pretty good 80mm exhaust fans. The AFB1212s are about $25 each, and push over 200CFM at over 60dBA. I can get 120mm fans that do 130CFM at 45dBA for $12...and probably wouldn't notice any performance difference between them and the AFB1212s; it would just be easier on my wallet. I am looking at other fans right now, but still not 100% sure.

Lastly, I have decided that I want to use Windows Storage Server 2012 instead of Windows Server 2012 Standard. I got a copy from MSDN, with a license key...so it will be a completely legit license, and will do great for home. You can't buy Storage Server like you can any other OS; you can only get it from an OEM with hardware.

Check the latest updates in the first post for more, but no new pictures/videos have been posted yet. I have this box up and running Server 2008 R2, and all it's doing is folding on the CPU, lol. I have been playing around with the Raid Controller though, with 2 80GB and 2 160GB drives. The LSI MSM software is excellent!


----------



## parityboy

*@OP*

When you say you're going to use iSCSI targets for your VMs, does that mean that the "OS disk" of the VM will in fact be a volume exported as an iSCSI target? Or will the iSCSI target(s) be data storage to be accessed from within the VM's guest OS?


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@OP*
> When you say you're going to use iSCSI targets for your VMs, does that mean that the "OS disk" of the VM will in fact be a volume exported as an iSCSI target? Or will the iSCSI target(s) be data storage to be accessed from within the VM's guest OS?


I am going to use the Windows iSCSI Software Target to carve out LUNs from my 15K RPM SAS array, and all of my VMs' OS disks will run from CSVs on that. This way, I can cluster my two Hyper-V hosts for HAVMs (Highly Available Virtual Machines). If a VM needs an additional storage drive, I would give it another CSV.

On a side note, I got my HP SAS Expander in the box yesterday and spent a good 9 hours moving my data around so I could put my 3 2TB Hitachi 7K3000s into this new box and get a Raid 5 going. I have a 4th 7K3000 that will be added to this array shortly, so that will be 6TB of available space, until I can afford to buy 8 Toshiba DT01ACA300s (basically rebranded Hitachi 7K3000s) to start my Raid 60, and expand when needed. Looks like those Toshibas are starting to get around too. www.allhdd.com has them now, $175 for bulk packaging, and $185 for retail.


----------



## GrimNights

Very interested in this, subbed.


----------



## parityboy

Quote:


> Originally Posted by *tycoonbob*
> 
> I am going to use Windows iSCSI Software Target to carve out LUNs from my 15K RPM SAS array, and all of my VMs OS Disk will be running from CSVs from that. This way, I can cluster my two Hyper-V Hosts for HAVMs (Highly Available Virtual Machines). If they need an additional storage drive, I would give them another CSV.


CSV = ...Storage Volume?


----------



## tycoonbob

Quote:


> Originally Posted by *GrimNights*
> 
> Very Interested in this subbed


Thanks!

I got my 3 2TB drives in the box, and set up a Raid 5. I started a full initialization around 12 EST, and it reports 3.5 hours left. ~5.5 hours to initialize a 4TB Raid 5 is not bad, IMO. Will be adding a 4th 2TB to the array in a week or two, and hopefully be buying my replacement drives in a month or so. I've gotten lazy, but I do have some pics of the HP SAS Expander to post, and plan to do a new update video. Once I straighten some stuff out with my Hyper-V Hosts, and get my SAS drives, I will definitely be doing some videos/walkthroughs of the iSCSI/MPIO configuration!


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> CSV = ...Storage Volume?


CSV = Cluster Shared Volume.

Basically, the VHD lives inside the LUN, and the LUN is connected to both my Hyper-V hosts. The OS drive is on the storage server and the VM is running on Host01...but if Host01 goes down for some reason, then Host02 is going to pick up and continue running that VM with minimal downtime (a matter of seconds, ideally)...or at least that is the plan. I'm hoping that it will work well with the hardware I have.


----------



## parityboy

Ahhh, cool. Cheers for that.


----------



## Taisho

Any updates?


----------



## tycoonbob

Quote:


> Originally Posted by *Taisho*
> 
> Any updates ?


Afraid not much. The server is pretty much done, minus installing Server 2012 on the SSD (waiting for Server 2012 drivers/BIOS update for the motherboard), and loading it up with drives. Money is kinda tight right now, so can't really afford to go out and buy 8 drives to start my Raid 60, nor the 4 15K RPM SAS drives. I did get 3 Hitachi 7K3000 2TB drives set up in a Raid 5 in the new box, and migrated all my data to it. Yesterday I added a 4th Hitachi 7K3000 2TB drive, and the OCE (Online Capacity Expansion) is still running. Looks to be about 25 hours to add a 2TB drive to my Raid 5, which had ~4TB of data on it to begin with. I at least have 6TB of storage, which is great for now.

Keep in mind that the storage drives I want (Toshiba DT01ACA300 3TB 7200RPM 64MB Cache) are just now appearing in the American market place (ebay mostly), and the prices are great currently ($160ish). I'm hoping to start buying them here soon as getting my Raid 60 going, then I can get my 15K RPM SAS Raid 5 going, and cluster my Hyper-V hosts. That would be great.

I have made no progress at all on building out my 8' X 8' closet, since money is tight...I wish I could win the lottery.


----------



## tycoonbob

Posted a new update on the first post.

I know it's been a while, but time and money haven't been on my side. Still running 4 2TB Hitachi 7K3000s in a Raid 5, and working on saving to buy my first 8 Toshiba DT01ACA300 3TB 7200RPM drives so I can start my Raid 60!

Upgraded my OS and boot drive finally, along with better configuration of things. All the info is at the bottom of the first post, so check it out and let me know your questions. Windows Storage Server 2012 is nice.


----------



## mitchtaydev

Subbed! The build is looking great ... congratulations.

I'll use this for inspiration as I am considering doing something similar, albeit at a lower scale than this, as I intend to start my CCNA and MCITP next year and have dreams of building a mini homelab. I only wish rackmount gear was affordable where I live, because that case and rack look beautiful.


----------



## tycoonbob

Quote:


> Originally Posted by *mitchtaydev*
> 
> Subbed! The build is looking great ... congratulations.
> I'll use this for inspiration as I am considering doing something similar albiet lower scale than this as I intend to start my CCNA and MCITP next year and have dreams of building a mini homelab. I only wish rackmount gear was affordable where I live because that case and rack look beautiful.


Thanks! It's been a stable and powerful storage server so far, that I have no doubts on anything in my build. Good luck on your MCITPs and CCNA; I expect to test for my CCNA early next year.


----------



## mitchtaydev

Thanks. I'm only in the planning phase at the moment so still in the process of speccing everything at a high level in order to define my requirements and plan to start building infrastructure sometime Q2 next year. So far I am looking to build a SAN and two identical VMHosts to play with failover clustering and high availability. I already have my Cisco routers and switches. If you have any advice from your own experiences it would be greatly appreciated.

Sent from my GT-I9300T using Tapatalk 2


----------



## tycoonbob

Well, if you are looking to build a SAN and 2 hypervisor hosts, I do have a few recommendations I could share.

-Make sure your hypervisor hosts are identical. Not a requirement by any means, but can make things easier. Having the same amount of resources (RAM and CPU Cores) as well as the same CPU, will make things like migration and VM placement a tad easier.
-Plenty of gigabit ethernet ports! I have 4 on my SAN and 3 on each host. Utilize iSCSI with MPIO for great performance. I would say 3 ports on your SAN and 2 ports on each hypervisor host would make for a great setup. The third port on each hypervisor host could be for failover, or a DMZ network (DMZ is what I use mine for). To go even better, use a port (or two in LACP) on your SAN for increased file storage (FYI, if your SAN is doing file-level storage, it's not just a SAN anymore...it would be a NUS--Network Unified Storage, since you are serving BOTH file and block level storage).
-Don't mismatch HDDs (if you are doing hardware raid). It could complicate things.
-If you get a hardware controller, make sure you have a BBU on it...and a UPS on your storage server. This will save you from losing data in cache during a power outage.
-Use smart or managed switches. Unmanaged switches can't set up LACP, which can be a bummer. At least you can do switch independent NIC teaming in Server 2012.

That's about all I've got off the top of my head. It may sound like a lot, building something like this for a lab/home, but I find that I learn, do, tear stuff up, rebuild my domain and start over...every few months. Nothing wrong with that, at all. Just make sure that any "production" PCs in your house are not joined to your domain...else your lady friend or family may get upset when something blows up and they can't log into their PC.


----------



## parityboy

*@tycoonbob*
Quote:


> -Plenty of gigabit ethernet ports! I have 4 on my SAN and 3 on each host. *Utilize iSCSI with MPIO for great performance*.


Can you explain this a little further? Does it mean multiple NICs handle requests for multiple iSCSI targets, from the VM host to the SAN node? So each iSCSI target can be reached using more than one network path, from VM host to SAN node?


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> Can you explain this a little further? Does it mean multiple NICs handle requests for multiple iSCSI targets, from the VM host to the SAN node? So each iSCSI target can be reached using more than one network path, from VM host to SAN node?


What you described is LACP (Link Aggregation Control Protocol). LACP is one method of NIC teaming where you have multiple paths to the same content, but each transaction (I/O flow) only gets the bandwidth of one NIC port (1Gbps each way in this case, since a full-duplex gigabit port provides 1Gbps down and 1Gbps up at the same time). LACP also has to be configured on your switches (albeit Server 2012 can set up switch-independent teaming that affects only outbound traffic--useful if you have unmanaged switches).

MPIO works a little differently. Multi-Path Input/Output can allow a single transaction (or I/O flow) to be split up and traverse multiple paths to the same place, effectively doubling (or more) your speed. If you have a Raid array that can perform around 400MB/s, and you want to present it as an iSCSI target to another PC/server at that full 400MB/s, you would need 4 gigabit ports linked using MPIO (since each gigabit connection can push at least 100MB/s, up to ~120MB/s).

LACP is great if you have multiple endpoints accessing the same content, i.e. a file share being accessed by 100 end-user PCs. Each transaction will only go over 1 gigabit link/port, but there will be 4 lanes to load-balance the traffic.

MPIO is great for iSCSI/FC storage since it effectively doubles/triples/quadruples/etc the I/O flow (or read/write transaction, however you want to word it).

MPIO isn't configured the same way as LACP though; MPIO is configured on your iSCSI Target and Initiators, and it's something that is built into Microsoft operating systems.
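To make the MPIO sizing rule concrete, here's a tiny Python sketch (the ~110MB/s-per-link figure is a practical gigabit number between the 100-120MB/s range above, not an exact constant):

```python
import math

def mpio_nics_needed(target_mb_s, per_link_mb_s=110):
    """How many gigabit paths MPIO needs to expose a given array throughput.

    Assumes each gigabit link sustains roughly per_link_mb_s of payload and
    that MPIO spreads a single I/O flow across all paths.
    """
    return math.ceil(target_mb_s / per_link_mb_s)

print(mpio_nics_needed(400))  # 4 links for the ~400 MB/s array example above
print(mpio_nics_needed(120))  # 2 links once you exceed a single port's ceiling
```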


----------



## parityboy

So would it be fair to say that MPIO could be described as "iSCSI RAID 0"? Whereas LACP could be described as "iSCSI RAID 1"?

*EDIT:*

Found something interesting here regarding the Linux bonding driver:
Quote:


> mode=6
> 
> Adaptive load balancing: includes balance-transmit load balancing plus receive load balancing for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.


Quite useful if you don't have a managed switch.


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> So would it be fair to say that MPIO could be described as "iSCSI RAID 0"? Whereas LACP could be described as "iSCSI RAID 1"?
> *EDIT:*
> Found something interesting here regarding the Linux bonding driver:
> Quite useful if you don't have a managed switch.


I don't do much with Linux so I can't really comment on that. Windows Server 2012 introduced native NIC teaming, and one of the options is switch independent, which works similarly.

Regarding calling MPIO iSCSI Raid 0...that's very incorrect. Raid 0 implies performance at the cost of reliability...which is not true for MPIO.

LACP is like opening more lanes, with each transaction limited to the speed of a single lane...whereas MPIO makes the lane itself bigger based on the number of ports used.

Long story short, if it's a SAN with iSCSI or FC targets, use MPIO. If it's a NAS...use LACP. Both add failover.


----------



## parityboy

*@tycoonbob*

I used the RAID 0 analogy from a performance standpoint _only_, where a single transaction involves all of the disks, not just one as with RAID 1. I accept that MPIO also has inherent failover.


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> I used the RAID 0 analogy from a performance standpoint _only_, where a single transaction involves all of the disks, not just one as with RAID 1. I accept that MPIO also has inherent failover.


I understand, but I still think it's a bad comparison. LACP and MPIO have nothing at all to do with the disks, in terms of using one or multiple disks.


----------



## parityboy

*@tycoonbob*

I think we're getting our wires crossed. I was talking about the _model_ (one transaction/one resource, one transaction/multiple resources). I wasn't trying to connect disks to interfaces, physically or otherwise.


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> I think we're getting our wires crossed. I was talking about the _model_ (one transaction/one resource, one transaction/multiple resources). I wasn't trying to connect disks to interfaces, physically or otherwise.


If by resources you mean physical network path (NIC and cable), then your model is incorrect, since both MPIO and LACP use multiple resources (NICs, cables, paths, etc.) to get traffic where it needs to go. One provides multiple tunnels while the other provides a larger tunnel.


----------



## parityboy

*@tycoonbob*

Ahhh. I was under the impression that with LACP, a single transaction (such as a single file transfer) only used one NIC at a time. From post #83:
Quote:


> LACP is one method of NIC Teaming where you have multiple paths to get to the same content, but each transaction (I/O flow) only has the bandwidth of one NIC port


So if _multiple clients_ asked for the same file, _multiple NICs_ would be used by the teaming driver in a round-robin fashion, and each transaction would only use one NIC at a time. Is this correct?

I'm just trying to understand the model.


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> Ahhh. I was under the impression that with LACP, a single transaction (such as a single file transfer) only used one NIC at a time. From post #83:
> So if _multiple clients_ asked for the same file, _multiple NICs_ would be used by the teaming driver in a round-robin fashion, and each transaction would only use one NIC at a time. Is this correct?
> I'm just trying to understand the model.


Each transaction would use one NIC, but you could have two transactions going at the same time by using two NICs. If you had 4 NICs set up in LACP, you could have 4 different users pulling a file with full gigabit speeds, instead of sharing 1 gigabit link.

Not to complicate things further, but I also don't recommend round robin unless it's a file share with very specific files (such as medical images in a healthcare setting). Reason being:
2 NICs (NIC1, NIC2) and 3 users (User1, User2, User3)...

User1 starts to copy a Win8 iso from the file share (~4GB), LACP assigns NIC1 to do the copy.
User2 starts to copy a Ubuntu iso from the file share (~600MB), LACP assigns NIC2 to do the copy.
User3 starts to copy MS Office installer iso from the file share (~1GB), and LACP assigns NIC1 to do the copy.

This is how Round Robin works...but it doesn't make sense. NIC2 is less busy than NIC1, so why use NIC1 for User3? With the Least Queue Depth method instead, LACP would have assigned NIC2 for User3's file copy, since NIC2 has the least queue depth (data waiting to be copied, by size)...so it would actually be this:

User1 starts to copy a Win8 iso from the file share (~4GB), LACP assigns NIC1 to do the copy.
User2 starts to copy a Ubuntu iso from the file share (~600MB), LACP assigns NIC2 to do the copy.
User3 starts to copy MS Office installer iso from the file share (~1GB), and LACP assigns NIC2 to do the copy.

Making the file share more efficient.
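To make the two policies concrete, here's a toy Python model of that three-user example. The file sizes and NIC names are the hypothetical ones from above, and this only illustrates the assignment logic -- a real teaming driver tracks flows and load very differently:

```python
from itertools import cycle

# Hypothetical copies from the example above: Win8, Ubuntu, Office ISOs (GB).
copies = [("User1", 4.0), ("User2", 0.6), ("User3", 1.0)]

def round_robin(jobs, nics=("NIC1", "NIC2")):
    """Assign each new copy to the next NIC in order, ignoring current load."""
    rr = cycle(nics)
    return [(user, next(rr)) for user, _size in jobs]

def least_queue_depth(jobs, nics=("NIC1", "NIC2")):
    """Assign each new copy to whichever NIC has the least data queued."""
    queued = {nic: 0.0 for nic in nics}
    assignments = []
    for user, size in jobs:
        nic = min(queued, key=queued.get)  # least-loaded NIC
        queued[nic] += size
        assignments.append((user, nic))
    return assignments

print(round_robin(copies))        # User3 lands on the busy NIC1
print(least_queue_depth(copies))  # User3 lands on the quieter NIC2
```

Running it reproduces the two outcomes in the post: round robin puts User3's 1GB copy behind User1's 4GB copy on NIC1, while least queue depth sends it to NIC2, which only has 600MB in flight.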

MPIO wouldn't be used in the scenario above, since you would have to set it up on each end user computer, which just isn't practical. LACP is set up on the server and switch, whereas MPIO is set up on the host and target (the switch doesn't matter).

Hope this helped.


----------



## parityboy

*@OP*

Yep big help, so thank you.








Quote:


> User1 starts to copy a Win8 iso from the file share (~4GB), LACP assigns NIC1 to do the copy.
> User2 starts to copy a Ubuntu iso from the file share (~600MB), LACP assigns NIC2 to do the copy.
> User3 starts to copy MS Office installer iso from the file share (~1GB), and LACP assigns *NIC1* to do the copy.


That's the reason I likened sending requests to a bonded interface (LACP) to sending requests to a RAID 1 array - I assumed LACP only used the same round-robin type scheme for allocating requests. As you say, Least Queue Depth is far more efficient, but RAID 1 doesn't use it as far as I know.


----------



## mitchtaydev

@tycoonbob

Thanks for the recommendations and explanation between LACP and MPIO. I have saved this entire post for reference material and further reading.

@parityboy

Thank you for that link regarding Linux network bonding information ... an intriguing read.

I have found that I have yet more questions, but I don't wish to hijack this thread so when the times comes I'll open another myself and post the technical design and build documentation for my project and ask all my questions there.

Thanks guys.


----------



## tycoonbob

It's my pleasure. I love learning new technology as much as sharing it with other people, so I will definitely keep an eye out for your build thread with questions (and PM me if I haven't seen it).


----------



## parityboy

*@mitchtaydev*

You're welcome. Indeed it is intriguing, because it'll save me a few notes - I won't have to buy a managed switch just to get full-duplex channel bonding.

*@tycoonbob*

The switch independent NIC teaming...is it NIC-independent also, like in Linux? I know that traditionally in the Windows world, NIC teaming was in the particular NIC driver, therefore you had to have two NICs of the same model. Does the NIC teaming in 2012 operate at a higher level, and therefore won't care what NICs you're using? I.e., you can mix Broadcom and Intel?


----------



## mitchtaydev

@Parityboy

Btw, may I just say that your Location is Awesome! I wish I lived where you are. haha


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@mitchtaydev*
> You're welcome. Indeed it is intriguing, because it'll save me a few notes - I won't have to buy a managed switch just to get full-duplex channel bonding.
> *@tycoonbob*
> The switch independent NIC teaming...is it NIC-independent also, like in Linux? I know that traditionally in the Windows world, NIC teaming was in the particular NIC driver, therefore you had to have two NICs of the same model. Does the NIC teaming in 2012 operate at a higher level, and therefore won't care what NICs you're using? I.e., you can mix Broadcom and Intel?


That's the beauty of Server 2012.

Previously in the Windows world, NIC teaming was done via software that came from the vendor of the NIC...for example, if you added a PCIe card with 2 Broadcom NICs, you had to use a Broadcom piece of software to team them.

In Server 2012, NIC Teaming is done right from the server manager...doesn't matter what brand they are, or even what link speed they are.

In the NIC Teaming menu, you type in a Team Name, select what NICs you want to be members (assuming drivers are installed and the host OS can see the hardware, any NIC could be teamed)...and select additional properties:
*Teaming Mode*
-Static Teaming (Similar to LACP, but is used when you have a managed switch that doesn't support LACP--switch dependent. Useful when you have a server with a heavy outbound AND inbound network load, or when you have VMs that have a network load greater than the bandwidth of a single NIC in the team.)
-Switch Independent (Works with any switch, because nothing has to be configured on the switch. Outbound traffic is aggregated since it's controlled by the host, but inbound traffic is not aggregated--You can team across multiple switches, which is ideal for network fault tolerance!)
-LACP (Configure LACP on the host as well as the switch, for aggregated outbound AND inbound traffic--can team only to one switch.)

*Load Balancing Mode*
-Address Hash (Load balances outbound traffic across all active NICs, but only receives inbound traffic via ONE of the NICs in the team--Great for servers with lots of outbound and little inbound traffic, such as web servers or file servers with lots of reads.)
-Hyper-V Port (VMs will be distributed across the network team, and each VM's outbound and inbound traffic is handled by a specific active NIC in the team--Great for load balancing VMs, when no VM generates more traffic than a gigabit link can handle. It's like each VM has its own NIC in the team, and nothing else uses it...a dedicated link per VM, if you will.)

*Standby Adapter* (Specify if any of the NICs are for standby, otherwise all NICs in the team are active)

Lastly, you can have up to 32 NICs in a single team.


----------



## mitchtaydev

Quote:


> Originally Posted by *tycoonbob*
> 
> That's the beauty of Server 2012.
> Previously in the Windows world, NIC teaming was done via software that came from the vendor of the NIC...for example, if you added a PCIe card with 2 Broadcom NICs, you had to use a Broadcom piece of software to team them.
> In Server 2012, NIC Teaming is done right from the server manager...doesn't matter what brand they are, or even what link speed they are.
> In the NIC Teaming menu, you type in a Team Name, select what NICs you want to be members (assuming drivers are installed and the host OS can see the hardware, any NIC could be teamed)...and


Can you clarify that you can mix and match NICs with NIC teaming via Server Manager?
What I mean is, can you use any NIC and link speed for teaming so long as all NICs in the group are identical? Or can the NICs in a group also be different?

Also, do you know if there is a performance difference (positive or negative) between using the NIC Teaming built into Server 2012 and the NIC vendor's driver?

I imagine that the difference would be negligible, as the load is likely shifted from the driver to the kernel proper ... unless the driver utilises on-chip functionality for offloaded processing and synchronization?

Sorry if I haven't made myself clear.


----------



## tycoonbob

Quote:


> Originally Posted by *mitchtaydev*
> 
> Can you clarify that you can mix and match NICs with NIC teaming via Server Manager?
> What I mean is, can you use any NIC and link speed for teaming so long as all NICs in the group are identical? Or can the NICs in a group also be different?
> Also, do you know if there is a performance difference (positive or negative) between using the NIC Teaming built into Server 2012 and the NIC vendor's driver?
> I imagine that the difference would be negligible, as the load is likely shifted from the driver to the kernel proper ... unless the driver utilises on-chip functionality for offloaded processing and synchronization?
> Sorry if I haven't made myself clear.


I don't know for sure if there is a performance difference between using the built-in Windows Server 2012 NIC Teaming, versus the previous methods using vendor software.

You can mix and match any NICs into a team, no matter the brand or speed...as long as a NIC can be seen and used by the OS, it can be put into a team with any other kind of NIC the OS can see. I wouldn't recommend teaming a gigabit NIC with a 10/100 NIC though, as I am not sure if you would cap out at 10/100 speeds, or if it's smart enough to weight the queue depth based on NIC speed.


----------



## parityboy

*@tycoonbob*

Thanks for the info. 32 NICs in a team eh? That's _eight_ PCIe slots, filled with quad-port adapters. Ever seen a server board with eight PCIe slots?







*EDIT:* Oh, I just thought of another question: how does NIC-independent teaming affect ToE (TCP Offload Engine), if at all, considering that ToE is driver controlled?

*@mitchtaydev*

Yeah I like this location too.


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> Thanks for the info. 32 NICs in a team eh? That's _eight_ PCIe slots, filled with quad-port adapters. Ever seen a server board with eight PCIe slots?
> 
> 
> 
> 
> 
> 
> 
> *EDIT:* Oh, I just thought of another question: how does NIC-independent teaming affect ToE (if at all) considering that ToE is driver controlled?
> *@mitchtaydev*
> Yeah I like this location too.


It would be some expensive hardware to get 32 NICs in a single team, that's for sure.

As far as ToE...to be honest, I have no idea how that is affected in the available different NIC Teaming scenarios. I'm sure it's there, but it's not something I have come across yet. Sorry.


----------



## tycoonbob

_October 26, 2013_
It's been a really long time (almost a year) since I've done anything to my storage box. It's been chugging along great for the past year with 5 x 2TB in a RAID 5, which is not ideal but it's what my budget has allowed for. I've got about 800GB free in my current array, and while I do have other 2TB drives, I am not willing to expand this array. 8TB with 5 drives is the biggest I will take a RAID 5, but I have been pleasantly surprised at how stable it's been.
The reason for updating this post is to outline my new storage strategy, which has changed yet again. Originally, I was going to do an 18 drive RAID 60 with 3TB drives, 2 3TB hot spares, and a 4 x 2TB RAID 10 for VM storage via iSCSI. I have since decided that I likely won't need ~42TB of storage within the next 3-5 years, and have decided to stick with 3TB drives (Toshiba DT01ACA300's) and do a RAID 10 instead.

I have 2 DT01ACA300's right now and hope to order two more in the next few weeks, just to get my 3TB RAID 10 storage started. 4 x 3TB in RAID 10 will not be enough for me to replace my existing RAID 5, but once I expand that to 6 x 3TB in a RAID 10, I can migrate data from my RAID 5 and blow that array away. That will leave me with several 2TB drives, so I will likely build a separate RAID 10 for some other purpose.

I will add drives as needed (in pairs, to expand my RAID 10 -- hopefully I can get to the point of expanding 4 drives at a time) with the end goal of 20 3TB drives in a RAID 10 (~30TB usable; actually I think it will be around 27.2TB), with 2 3TB drives as warm spares and two SSDs used for CacheCade (not sure if I will do RAID 0, 1, or JBOD on that). I will probably aim for 512GB SSDs by that point since it will be 6-12 months down the road, so hopefully I can get them for $200 by then.

That should fill up the chassis and keep me happy for a while. I figure if I can get 5 years out of that, surely we will have mainstream 5-6TB drives by then and I can re-evaluate my storage situation (10Gb/s Fibre or more, SATA IV at 12Gb/s, 6TB drives -- that would be cool).
In the coming weeks I will post some pictures of the drives I'm using, and do some performance testing once I have 4 x 3TB drives I can play around with (RAID 0, 1, 5, 6 numbers for seq and random R/W).
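On the ~27.2TB figure above: the gap between "30TB usable" and what the OS will actually report is mostly the drive-label terabyte (10^12 bytes) versus binary TiB (2^40 bytes) conversion. A quick Python sketch (the helper name is mine, and it ignores controller/filesystem overhead):

```python
def raid10_usable_tib(n_drives, drive_tb):
    """Usable space of a RAID 10: half the drives hold mirrors.

    Converts marketing terabytes (10**12 bytes) into the binary
    TiB (2**40 bytes) that the OS reports.
    """
    usable_bytes = (n_drives // 2) * drive_tb * 10**12
    return usable_bytes / 2**40

print(round(raid10_usable_tib(20, 3), 1))  # -> 27.3 (the "~27.2TB" estimate)
print(round(raid10_usable_tib(4, 3), 1))   # -> 5.5 (the 4-drive starter array)
```

So the end-goal array lands around 27.3TiB usable, which matches the ~27.2 estimate once you account for a bit of array overhead.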



Spoiler: New Pictures


----------



## tycoonbob

*Update!*

_July 12, 2014_
Not much has changed with this build, except for my storage configuration. Long story short, I've been running with 5 x 2TB in a RAID 5 for my primary data, along with 2 x 3TB in RAID 1 for backups. I recently freed up two 2TB drives from a PC build of mine, so I am in the process of migrating my 5 x 2TB RAID 5 (~7.2TB) to a 7 x 2TB RAID 6 (~9.1TB). Pretty easy to start the migration, but it's definitely nerve-wracking! The migration/reconstruction has been running for 35 minutes now, and claims there are 3 days, 7 hours, and 26 minutes left until completion. I don't find this estimate reliable though, as back when the elapsed time was ~20 minutes, it said the remaining time was over 4 days. I'm heading to bed here in a few, so I'm hoping to see some good progress within the next 10-12 hours or so.
*Fingers crossed*

If I have no problems, and everything works like it should, I will feel much better about my data being on a RAID 6 vs a RAID 5. I'm not a big fan of RAID 5, and preach about RAID 10, but money has prohibited me from building my RAID 10 array with 3TB drives. I have 2 of my 3TB drives, but other builds and servers (R610, my new PC build, my girlfriend's PC build, etc.) have consumed my PC funds. Once I can get my hands on 6 more matching 3TB drives, I will build an 8 x 3TB Toshiba DT01ACA300 RAID 10 array (~11TB usable) and move all my data over. I will then build another RAID 10 with my existing 2TB drives (or at least 6 of them) for iSCSI storage and backups.

Just wanted to share an update after all this time! Everything has been running great for over two years now!


----------



## Sean Webster

It will take about 3 days, good luck! lol. Mine took about 2 and a half days when I went from a 6 drive array to an 8 drive RAID 6. :/


----------



## tycoonbob

Quote:


> Originally Posted by *Sean Webster*
> 
> It will take about 3 days, good luck! lol. Mine took about 2 and a half days when I went from a 6 drive array to a 8 drive RAID 6. :/


Yeah, I was hoping more like 20-24 hours. Looks like mine will be right around 3 days after all. 9 hours in, still going good, no errors, and 2 days 10 hours left.


----------



## tycoonbob

So the migration/expansion has been running for ~22 hours. Here's another screenshot.



It's very odd, because it says it's only been running for 4h 23m, which I know is wrong. The server hasn't been rebooted, and the expansion/migration can't be stopped, but I've been gone all day. All files are intact, no errors in MegaRAID Storage Manager, so I assume all is good. I'd love to see it finish up in 9h, which means it would be done by the time I roll out of bed tomorrow.








I don't believe that time though, and feel like (based on current progress) it will be more like Monday evening before it finishes. Time will tell!


----------



## tycoonbob

Checking it today (~14.5 hours later), it's showing good progress on the percentage bar, but times that I still think are highly inaccurate:


----------



## Sean Webster

I've had mine say it was 99% done and there was 2 minutes left and after about 3 hours it finally finished...So I would have to agree with you haha


----------



## tycoonbob

Wohoo, 3 minutes left!



It's very interesting how VERY inaccurate the Estimated time left has been. I'm running a fairly recent version of MSM, and my drivers aren't more than a year old, so you would think this is something LSI would have fixed. Not to mention the Elapsed time. It's almost like the Elapsed time starts over every 20 hours, or so.

It's actually been running right under 55 hours so far. Based on that, it should be less than 24 hours to completion; maybe around 20 hours. Almost there!


----------



## Sean Webster

So how long did that 3 minutes take? haha


----------



## tycoonbob

Quote:


> Originally Posted by *Sean Webster*
> 
> So how long did that 3 minutes take? haha


At least 8.5 hours, so far....haha.



Looks to be another 6 hours or so before completion, so hopefully before midnight tonight (EST). I've come this far with no problems, let's hope it can finish up with no problems!


----------



## tycoonbob

Reconstruction completed last night around 10:40PM, and Background Initialization started then as well. Background Init should be done in about 2 more hours, so things are looking great!


----------



## tycoonbob

Finished!


----------



## subassy

Hey tycoon

On the elapsed time thing, is there a possibility that the progress/time indicator only counts while work is actually happening? Like maybe the system will do work for a while, which the timer counts, then some other processes run or the hardware catches up...then more work, which the timer counts, and then catch-up again? It seems like I've seen weird elapsed-time counters like that before, although I don't remember where.

By the way this is very impressive. I understood almost all of it


----------



## tycoonbob

Quote:


> Originally Posted by *subassy*
> 
> Hey tycoon
> 
> On the elapsed time thing is there a possibility that progress/time indicator is just for when work is going? Like maybe the system will do work for much time, which the timer counts, then some other processes will run/the hardware will catch up or whatever...then more work which the timer counts and then catch up again? It seems like I've seen weird time elapsed kind of timers like that before although I don't remember where.
> 
> By the way this is very impressive. I understood almost all of it


Thanks.

I don't think that's what happened with the Elapsed time counter, as it should have been doing something the entire time (every single block of data was read and re-written with a new parity calculation, and a second copy of that parity bit was also made). So the more data you have, the longer the rebuild would take, but I actually did see the Elapsed time go up to 20 hours, then the next morning it was back down to 7 hours. Pretty clear indication it wasn't accurate. Based on that specific scenario, it seemed like it would get up to 20 hours, 59 minutes, 59 seconds, and instead of going to 21:00:00 it would roll over to 00:00:01. Odd, but not that big of a deal.

Over all, here are the times:
*Reconstruction*: 2 days, 21 hours, 53 minutes, 44 seconds
*Background Initialization*: 0 days, 13 hours, 25 minutes, 39 seconds
*Total time*: 3 days, 11 hours, 19 minutes, 23 seconds
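Both the totals and the rollover theory above can be sanity-checked with Python's `timedelta` (the 21-hour wrap is purely my guess from the numbers in this thread, not anything documented by LSI):

```python
from datetime import timedelta

# The two phases reported by MegaRAID Storage Manager.
recon = timedelta(days=2, hours=21, minutes=53, seconds=44)
bg_init = timedelta(hours=13, minutes=25, seconds=39)
total = recon + bg_init
print(total)  # -> 3 days, 11:19:23

# Rollover guess: if the Elapsed counter wraps every 21 hours, an
# actual elapsed time of 28 hours would display as only 7 hours --
# consistent with seeing 20h one night and 7h the next morning.
wrap = timedelta(hours=21)
shown = timedelta(hours=28) % wrap
print(shown)  # -> 7:00:00
```

The reported phase times do sum exactly to the total, so only the live Elapsed/Estimated counters were off, not the final accounting.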


----------



## parityboy

*@tycoonbob*

In the context of this reconstruction, what would "background initialisation" be? I thought that was reserved for fresh arrays?


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> In the context of this reconstruction, what would "background initialisation" be? I thought that was reserved for fresh arrays?


So the Reconstruction process rewrites every single bit/block of data, striping it across all drives and calculating the parity of that data. The Initialization is the resilvering of the parity bits, basically verifying and updating segments that were written during the Reconstruction process. The Initialization just verifies everything and makes sure it's healthy and clean...at least, that's how I've always understood it.


----------



## parityboy

*@tycoonbob*

Ahh, yeah makes sense the way you've explained it. Cheers.


----------



## tycoonbob

_November 14, 2014_
Planning a hardware and software refresh of this device. Will be swapping out the HP SAS Expander with a Chenbro CK23601, adding another Syba 2.5" drive bracket with another 60GB SSD for the OS (RAID 1...finally). I will also be rebuilding my storage arrays with a RAID 10 volume, starting with 4-6 6TB drives. OS-wise, I am looking to switch to OpenMediaVault, most likely.

_November 20, 2014_
Received the Chenbro SAS Expander, second Mushkin 60GB SSD, and second Syba 2.5" drive tray. Still waiting on the iKVM, which should arrive in the next few days. I hope to order 8 6TB drives in the coming weeks, and once they're received I'll be able to swap in this new hardware, rebuild the server on a new OS (likely OpenMediaVault), and build a new RAID 10 storage array!
New pictures have been added in post #3, 60GB Mushkin SSD and Chenbro CK23601 Expander.


----------



## Sean Webster

Quote:


> Originally Posted by *tycoonbob*
> 
> November 14, 2014
> Planning a hardware and software refresh of this device. Will be swapping out the HP SAS Expander with a Chenbro CK23601, adding another Syba 2.5" drive bracket with another 60GB SSD for the OS (RAID 1...finally). I will also be rebuilding my storage arrays with a RAID 10 volume, starting with 4-6 6TB drives. OS-wise, I am looking to switch to OpenMediaVault, most likely.
> 
> November 20, 2014
> Received Chenbro SAS Expander, second Mushkin 60GB SSD, and second Syba 2.5" drive tray. Still waiting on the iKVM, which should arrive in the next few days. I hope to order 4 6TB drives within the next few weeks, and once received I'll be able to swap in this new hardware, rebuild the server on a new OS (likely OpenMediaVault), and build a new RAID 10 storage array!
> New pictures have been added in post #3, 60GB Mushkin SSD and Chenbro CK23601 Expander.


I may have to grab those Syba 2.5" drive brackets too...I'm gonna swap out my 60GB OS drive for a 128GB Samsung 830 and continue to use my 480GB SanDisk Extreme II for VMs. Also going to finally update to Server 2012 R2. 

What was the purpose of the Chenbro CK23601 over the HP? I remember reading it somewhere, but forgot lol. Was it HDD activity lights not working? Update me if it works...that would make finding the bad HDDs so much easier. My Intel expander doesn't do it either.


----------



## tycoonbob

Quote:


> Originally Posted by *Sean Webster*
> 
> I may have to grab those Syba 2.5" drive brackets too...I'm gonna swap out my 60GB OS drive for a 128GB samsung 830 and continue to use my 480GB sandisk extreme II for VMs. Also going to finally update to server 2012 R2.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What was the purpose of the Chenbro CK23601 over the HP? I remember reading it somewhere, but forgot lol. Was it HDD activity lights not working? Update me if it works...that would make finding the bad HDDs so much easier. My Intel expander doesn't do it either.


Yeah, I love the Syba drive rack and glad I finally ordered my second to RAID1 my OS drives. I'm still not positive on what OS I'm going to, but leaning toward OpenMediaVault.

The reason for the change is that the HP has little quirks with my current LSI card, like the HDD lights not working. I'm HOPING that the Chenbro (since it's based on an LSI SoC) will play more nicely with the LSI 9261 controller I have. I'll definitely be sharing updates once I get this gear in, but it will probably be in the next 4-6 weeks.


----------



## tycoonbob

_November 22, 2014_
Received my iKVM chip, and pictures added. I can't wait to put all this new gear in once I can buy some new HDDs!


----------



## TRusselo

Hey just found this awesome thread. Thanks for sharing your experience.

I am currently expanding the storage on my Philco server (see sig), adding 4x 5TB drives. I purchased an older LSI card that ended up not supporting drives larger than 2TB, so now I have an LSI MegaRAID 9260CV-8i on its way. It will do me for a while, but I already know I will need a SAS expander someday.

Why did you recently switch from the HP expander to the Chenbro?
I take it the Chenbro is still compatible with the LSI 9260/1 card? Or are you going JBOD/ZFS?

I am also building my first real server, starting with an HP ProLiant DL360 G5 that I picked up for $175 -- 16GB ECC mem, 2x E5345 2.33GHz, no drives, 2x 1-port 4GB Fibre HBAs, dual redundant PSUs, DVD/CD-RW. So far it seems like a great machine; if you know anything about them, please share/PM.


----------

