# [Build Log] Virtualized home server



## Aximous

Hello OCN!

This is the first build log I've decided to start. To be honest, I was too lazy to take pictures and write things up in the past, but hey, I'm doing this rather than studying for tomorrow morning's exam, so here we go!

*The reasoning:*
My existing NAS filled up, so I decided it was time to build a proper one; the old one was basically just an HDD in the HTPC. I'm repurposing my old rig's hardware for this build, so it should hopefully be pretty reliable.

*The hardware:*

- Rampage Extreme, first gen, socket 775, X48. A pretty good board, to be honest; I was kind of sad when I upgraded, and it's a bit of a shame to run it in a server, but I build with what I have and can't afford a server board right now.
- Q9450, which should be plenty of horsepower; the plan is to run it at stock clocks, undervolted as much as possible.
- Corsair CX430v2
- 8GB 1333 Kingston ECC unbuffered RAM, part number: KVR13E9K2/8I
- A bunch of HDDs: what I have lying around plus some new WD Greens. Currently one each of: WD20EZRX, WD20EARX, Seagate 7200.11 500GB.
- Hyper 212+ for the CPU

Some of the above hardware isn't here yet, but hopefully everything will work (I'm a little worried about the RAM, as the board is very picky about it). I'll also update with the final hardware's part numbers.

*Goals for the build:*
As the title suggests, the server will be virtualized: I'll be running ESXi on it with several guest OSes, which will have to fulfill the following tasks:

- pfSense as a router, as my old WRT54GL seems to be slowly giving out; it also seems much more robust than any retail router. Notable requirements are QoS and VPN.
- Ubuntu server:
  - PXE server, as I'll be making the HTPC diskless
  - Media streaming and file sharing
  - Client backup
  - SVN and/or Git repos
  - SFTP/SSH access
  - MySQL and Postgres server
  - probably a LAMP stack, not sure yet on this one
  - Email server
  - Probably some monitoring too, not sure yet if I want it
- unRaid for handling the storage needs and providing some redundancy.
My plan is to get this thing up and running after I finish my exams by the beginning of February.

I'll try to provide solutions here for all the problems I encounter; from what I've read up on, I'm bound to run into quite a few of them, so hopefully some of you will find this useful. I'll also post a link to most of the bash scripts and config files I end up writing.
*The current state of the build:*

Running it with a spare PSU and some RAM from my rig while I wait for the new parts to arrive. I'll put it in my current case when I get a new one.

*Index:*

PSU, RAM, HDD arrival
ESXi installation
pfSense installation and config
unRaid installation and config


----------



## Norse

You will need to make sure pfSense boots straight away when ESXi starts up, and set ESXi to a static IP, as it obviously won't get an IP until the pfSense system is booted.


----------



## Aximous

pfSense auto-starting is a no-brainer.

Haven't really thought about the IP of the box, but it will be either a static IP or a static DHCP lease; I'll test which one works better.


----------



## Norse

You will have to use a static IP, otherwise it won't be able to get an IP when it boots, so you won't be able to manage it.


----------



## Aximous

I'll do that then, thanks for the heads up.


----------



## Norse

Are you going to be running any kind of RAID to prevent a whole datastore being lost? The PERC 5s and PERC 6s are good, cheap, and handle 8 SATA drives.

If you want any help with ESXi just tell me; I'm still learning it but am fairly confident.


----------



## Aximous

That's what unRaid is there for: I'll be passing the HDDs via RDM to that VM, and it will provide the redundancy. I quite enjoy discovering what I can do with ESXi; I'll be sure to throw you a PM if I'm stuck, thanks.


----------



## Norse

Quote:


> Originally Posted by *Aximous*
> 
> That's what unraid is there for, I'll be passing the hdd's via RDM to that VM and that will provide redundancy. I quite enjoy discovering what I can do with ESXi, I'll be sure to throw you a PM if I'm stuck, thanks


I am not sure how well that will work. Effectively you are using unRaid to make a network-accessible datastore? So the unRaid system will have to boot before everything else (before pfSense), and then the others boot afterwards once the unRaid datastores are available.

You are effectively doing the below, I think?

Physical drive > ESXi > small datastore for the unRaid OS > multiple datastores just for unRaid > iSCSI? > ESXi datastore > VMs


----------



## saranya21

Or just pass the whole controller card through ESXi and assign it to the unRaid VM, which should then have access to all the drives (as if they were native) on the controller card. At least I think that's how it works.


----------



## Aximous

I didn't invest in a controller card, so I'll be using onboard. The VMs won't be on the unRaid array, so that won't be a problem; AFAIK it's not possible to keep the images in a location that only comes online with one of the VMs. I'll use a small drive to hold the VMs, and the rest will be passed to the unRaid VM.

The layout would be: a small physical drive to hold the VMs and act as the datastore for all of them; a pendrive to hold unRaid (you can't bypass that, because you need its GUID for the license); and the rest of the drives RDM'd to unRaid and mounted as NFS shares everywhere else, which should be fine with the correct boot order for the VMs. With RDM there won't be datastores on those drives; the whole point is that ESXi won't do anything with them other than pass them through to the VM as they are.


----------



## Norse

I suggest you at least use onboard RAID for the VM datastore, else if that drive fails you'll lose everything on it.


----------



## killabytes

Why use ECC RAM if your board doesn't support it?


----------



## Aximous

Quote:


> Originally Posted by *Norse*
> 
> i suggest you at least use onboard for the VM datastore else if the drive fails you'll lose all the stuff


I'm not sure what you mean here. If you're referring to unRaid on a pendrive: I'll probably end up doing a workaround so all the VMs end up on the drive, though the pendrive will still be required in the system. Actually it's pretty unlikely that it will kill the drive, since unRaid basically reads it at startup and saves the settings when a setting changes, and that's all the usage it gets. If you're referring to all the VMs being stored on a single drive with no redundancy: I'll be running regular backups of them to the unRaid array.
Quote:


> Originally Posted by *killabytes*
> 
> Why use ECC RAM if your board doesn't support it?


The board does support ECC RAM; they just didn't update the memory support list. Hopefully it will work; if not, I can return it anyway.


----------



## Norse

Quote:


> Originally Posted by *Aximous*
> 
> I'm not sure what do you mean here. If you are referring to unraid on pendrive, I'll probably end up doing a workaround so it ends up as all the VMs on the drive, though the pendrive will still be required in the system. Actually it is pretty unlikely that it will kill the drive since it basically reads it when starting up and saves the settings when some setting changes and that's all the usage it does. If you are referring to all the VMs being stored on a single drive with no redundancy, I'll be running regular backups of them to the unraid array.
> The board does support ECC ram, only that they didn't update the memory support list. Hopefully it will work, if not I can return it anyway.


Well, ESXi needs to be able to store the files (i.e. the virtual drives and such) somewhere in a datastore, which is where the OS and such will be installed.

You basically go:

Physical drive > a form of RAID (not required, but a good idea) > ESXi datastore > VM set up using the datastore to store its virtual drive, which the OS is then installed on

You seem to be going:

Physical drive > ESXi > unRaid > making a fileshare? > OSes using the files?

I'm not sure how you're going to use unRaid for anything other than some file storage?


----------



## Aximous

Yes, the unRaid array will only hold media, family photos and backups. The other VMs will work independently from it; they'll just be backed up to the array every now and then. I'll be doing the first scenario you wrote, without RAID. And yes, the unRaid array will be shared, and the Linux VM will stream content from it, back clients up to it, etc., but it won't store any of its own stuff on there. I hope this clears it up.


----------



## Ecstacy

This is exactly like something I was planning to build: ESXi with pfSense, FreeNAS or ZFSGuru off a flash drive, a 6-drive RAIDZ2 array for file storage (mostly media), a VM for torrenting, a VM hosting a small webserver (mainly for me to learn on), and another VM for testing and fooling around with. I was thinking of having all the virtual machines installed on an SSD and backed up to the ZFS array periodically, with the backups and my sensitive data (less than 100 GB) also backed up to an external hard drive for extra security. If for some reason the ZFS array and the external hard drive were to both fail (or get stolen, or my house burned down), I'd have my extremely sensitive data (less than 1 GB) encrypted and stored in the cloud.

I can't afford this now, but I'm interested as to how this turns out. I've never used ESXi and I don't know much about Linux/BSD, but it should work. Keep us updated.


----------



## Aximous

Thanks for your interest!

I was thinking about ZFS too, but losing the ability to easily expand the array turned me off; I can't afford to buy all the drives I'd need right now, and with unRaid I can expand as I go. Your proposed setup looks pretty good. I'm not that worried about my storage dying and losing the data; I've had only one HDD develop bad sectors in the last 10(?) years, and I could still get the data off it. Though you can never be too safe.


----------



## Aximous

Update time!

The PSU, RAM, HDD and the pendrive arrived, and I installed the RAM to check compatibility.

It's running Prime95 right now, no problems so far, at rated speed and timings. I'm also testing how low I can undervolt the CPU: it's currently at 1.19V at stock speed and running pretty cool, 58°C max with only one fan on low.

Once I'm done with the final voltages, and if it's 48-hour Prime stable, I'll go back to setting up everything else; that will probably have to wait a while though, as I have some pretty bad exams coming up.

BTW sorry for the quality of the pics, I forgot my DSLR at my parents' place.


----------



## Aximous

48-hour stress testing passed: everything is rock stable, temps are nice and cool, and I'm pretty satisfied with the results. Temps never exceeded 60°C even with 23°C room temps, and it stayed pretty silent too.

Next up is the ESXi setup, although I probably won't be able to work on this for the next 2 weeks because of my exams. Expect frequent updates after that; I'll either be working on this or be pretty damn hungover.


----------



## Norse

Quote:


> Originally Posted by *Aximous*
> 
> 48 hour stress testing passed, everything is rock stable, temps are nice and cool, I'm pretty satisfied by the results, temps never exceeded 60°C even with 23°C room temps and it stayed pretty silent too.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Next up will be ESXi setup, although I probably won't be able to work on this in the next 2 weeks because of my exams
> 
> 
> 
> 
> 
> 
> 
> Expect frequent updates after that, I'll probably be working on this or I'll be pretty damn hungover.


ESXi 5.1, I'm assuming? Though the free version seems to max out at 8 cores per VM. When I "removed" the license key to test things, it seemed to revert back to a 60-day trial.


----------



## Aximous

Yes, 5.1. I'm fine with 8 cores per VM, to be honest.


----------



## Aximous

Turns out the HDD I was going to install everything on died, so I couldn't start working on this again... Gotta wait until I get a new one next week.


----------



## Aximous

Finally found some time to work on this again; writing my bachelor's thesis really takes a toll on my free time. Anyway, I got a new HDD, so here goes!

Installing ESXi

Just put the image on a flash drive with UNetbootin or a similar USB installer tool. Installation is pretty straightforward, although an important thing to watch out for is to *remove all other HDDs* from the system, as the installer will format them and assign them as datastores, which we don't want! After installation is done, activate it in the vSphere client under the Configuration tab, in the Licensed Features menu, with the serial you get after downloading.

*Installing NIC driver:*
Since there's no official support for the Marvell 8085s on my board, I have to install 3rd-party drivers for them. You can get them here. To install them:

- Put the unzipped VIB on the datastore
- Put the server in maintenance mode
- In a shell, install it with the following commands:

```shell
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/your-datastore/net-sky2-1-1.1.0.x86_64.vib
```


Reboot.
After this, the now-recognized NICs show up in the Configuration menu; time to assign them.

In the Networking menu click Add Networking and select the 2 available NICs for the new vSwitch. When the vSwitch is created, go to its properties, edit the port group, select the NIC Teaming tab and set the load balancing option to enable teaming.
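For the command-line inclined, the same vSwitch and teaming setup can be sketched with esxcli; vSwitch1/vmnic1/vmnic2 are example names, not necessarily what your host calls them:

```shell
# Example names: vSwitch1, vmnic1, vmnic2 -- check yours with
# "esxcli network nic list" first.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
# "Route based on the originating port ID" load balancing across the uplinks:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=portid
```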

This is how it looks:


Actually I cheated here a little: I have my WAN connection plugged into the LAN vSwitch so I can manage the host initially; after the static IP is set I'll swap the cables around and everything will be in order.

We need to set a static IP for the host, since the management network is on the vSwitch that will be used for pfSense's LAN connection, and the host wouldn't get an IP until that VM is up and running. Setting it is easy: go to the Configuration tab, Networking menu (pictured above), and click Properties on the vSwitch containing the management network. Edit the management network, check "Use the following IP settings" on the IP Settings tab, and set values appropriate for your network. After this you'll need to restart the vSphere client, as it will obviously lose the connection; you'll also probably need to temporarily set a static IP on the same network on the NIC connected to the host to be able to connect again, but after pfSense is set up it can go back to automatic DHCP.
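The same static IP change can also be made from the ESXi shell, which is handy if you lock yourself out of the GUI. A sketch; vmk0 is the usual management interface name, and the addresses are examples from my network:

```shell
# Static IPv4 on the management vmkernel interface (vmk0 is typically
# the default management interface; the addresses here are examples).
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.2 -N 255.255.255.0
# Default gateway -- pfSense's LAN address once it's up:
esxcfg-route 192.168.1.1
```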

*Creating Raw Device Mapping (RDM) for HDDs for the unRaid VM:*
The point here is to give the VM direct access to the hard drives, instead of creating a datastore on the drives and assigning that to the VM. I'm using this option since my board doesn't support VT-d, so I can't pass through the port/controller itself, which would be the best option.

First get shell access on the server, then list the disk info and serial numbers by typing:

```shell
ls -l /dev/disks
```




Sample output:

```
~ # ls -l /dev/disks/
-rw-------    1 root     root     160041885696 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185
-rw-------    1 root     root       4161536 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:1
-rw-------    1 root     root     4293918720 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:2
-rw-------    1 root     root     154804231680 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:3
-rw-------    1 root     root     262127616 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:5
-rw-------    1 root     root     262127616 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:6
-rw-------    1 root     root     115326976 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:7
-rw-------    1 root     root     299876352 Mar 10 00:17 t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:8
-rw-------    1 root     root     2000398934016 Mar 10 00:17 t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495
-rw-------    1 root     root     2000398901248 Mar 10 00:17 t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495:1
lrwxrwxrwx    1 root     root            74 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:1 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:1
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:2 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:2
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:3 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:3
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:5 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:5
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:6 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:6
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:7 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:7
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d57434156324c323431313835574443205744:8 -> t10.ATA_____WDC_WD1600AAJS2D00M0A0________________________WD2DWCAV2L241185:8
lrwxrwxrwx    1 root     root            74 Mar 10 00:17 vml.0100000000202020202057442d574d43333030313430343935574443205744 -> t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495
lrwxrwxrwx    1 root     root            76 Mar 10 00:17 vml.0100000000202020202057442d574d43333030313430343935574443205744:1 -> t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495:1
```




The part that interests us here is this one:

```
vml.0100000000202020202057442d574d43333030313430343935574443205744 -> t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495
```

Let me explain this a bit: we can identify the disk by the last two chunks of the name. The first is obviously the part number, and the last one is the serial number; this information is necessary when you have multiple disks.
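Since the serial is always the last underscore-separated chunk, a tiny shell trick pulls it out of the device name (the device below is the one from my listing; the `2D` sequences appear to be hex-escaped '-' characters):

```shell
# The serial is everything after the last underscore in the t10.* name.
dev='t10.ATA_____WDC_WD20EZRX2D00DC0B0_________________________WD2DWMC300140495'
serial=${dev##*_}
echo "$serial"    # prints WD2DWMC300140495
```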

To create an RDM for this device, we use the following command:

```shell
vmkfstools -z /vmfs/devices/disks/<vml.xxx> /vmfs/volumes/datastore1/unRaid/<RDM name>.vmdk
```

For example, the RDM for my 2TB WD Green, which you can see in the sample output above:

```shell
vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574d43333030313430343935574443205744 /vmfs/volumes/datastore1/unRaid/WDC_WD20EZRX2D00DC0B0_WD2DWMC300140495.vmdk
```

As you can see, I named it p/n_s/n so the passthroughs can be identified. Obviously we need to do this for each drive that will be used in unRaid.
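To avoid typing that once per drive, here's a small sketch that just prints the vmkfstools command for every whole-disk entry it finds (datastore1/unRaid is the path from my setup; review the output before running anything):

```shell
# Print a vmkfstools RDM-creation command for each whole disk under the
# given directory, skipping ":N" entries (those are partitions).
list_rdm_cmds() {
  for disk in "$1"/t10.*; do
    [ -e "$disk" ] || continue                # the glob matched nothing
    case "$disk" in (*:*) continue ;; esac    # skip partition entries
    echo "vmkfstools -z $disk /vmfs/volumes/datastore1/unRaid/$(basename "$disk").vmdk"
  done
}
# On the host, review the output of:  list_rdm_cmds /vmfs/devices/disks
```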

*Preparing for VMs*
First things first: we have to copy the installation images to the datastore, which can be done on the Summary tab under Resources > Storage. After that we can go to the Virtual Machines tab to create the VMs: right-click in the white area and select New Virtual Machine... I'll go through the settings for each VM in the posts about the VMs themselves. An important thing with VMs is to be mindful of the resources you allocate; ESXi has a very good resource manager, but if you assign more than what you physically have available, it won't be able to help.

Installation and basic configuration is pretty much done with this; things left for later are setting the static IP and setting up the automatic VM startup/shutdown order. Next up: VM installation and configuration.


----------



## Aximous

Installing pfSense

This is the first VM I'm going to set up, and the VM I created for it is pretty lightweight. Use FreeBSD as the guest OS for the VM, and be sure to assign both vSwitches to it. I gave it 2 cores and 512MB of RAM; it's probably overkill, but I can spare that much, and for this one I like to reserve the resources, since it's essential that it runs smoothly at all times. After the VM is created, go to its settings and assign the installer image to its optical drive as a Datastore ISO file; make sure to check "connect at power on" too. After this we can open a console and start up the VM!

I won't go into the installation, as it is very easy and well documented on the pfSense website. After installation is done and the VM starts for the first time, be sure to note the interface details, as we will be asked to identify the interfaces, and that's hard to do by name alone; we can identify them by their MAC addresses, which can be checked in the VM settings. After the interfaces are set up, pfSense will start and the rest of the configuration can be done in the web interface.

Go to 192.168.1.1 (the default address of pfSense) in a browser; the default login is admin/pfsense. The first thing to do is change the password to something sensible: go to System/User Manager, click the icon with an 'e' in it next to the admin user's line and type in a new password. Next, install the VMware tools: go to System/Packages, switch to the Available Packages tab, search for Open-VM-Tools and click the + button on its line.

When these are done, we need to set a static IP for the WAN interface, so we can forward all ports and traffic to that IP from the crappy modem the ISP provides. Go to Interfaces/WAN, set Type to Static, type in an IP address (preferably not in the DHCP range of the modem) and add the default gateway. Save and apply, and the only thing left to do here is to go to the modem and forward all ports to the IP you set.

Next we need to set pfSense to autostart with the hypervisor. This can be done on the Configuration tab under Virtual Machine Startup/Shutdown: click Properties, check "Allow virtual machines to start and stop automatically with the system" and "Continue immediately if VMware Tools starts", and move pfSense up into the automatic startup group. After this it will start automatically with the system, so we'll get an IP automatically and can access the host.
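If you prefer the shell, I believe the same autostart setup can be done with vim-cmd, roughly like this; the VM id 1 and the delays are examples, and I haven't verified the exact argument order, so treat it as a sketch:

```shell
# Find the pfSense VM's id first:
vim-cmd vmsvc/getallvms
# Enable autostart on the host:
vim-cmd hostsvc/autostartmanager/enable_autostart true
# Register VM id 1 as startup entry 1: 120s start delay, guest shutdown
# with a 60s stop delay (argument order is my assumption):
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 PowerOn 120 1 guestShutdown 60 systemDefault
```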

Configuration

*Set up static IPs and WoL*
If you already have clients connected, you can go to Status/DHCP Leases and click the "add static IP" and "add WoL mapping" buttons to do this. If you don't, go to Services/DHCP Server, where you can add static leases at the bottom. For WoL, go to Services/Wake on LAN and add the clients there.

*Creating aliases*
Aliases are quite useful: they let you handle things by name, be it port groups, hosts or networks. They can be set up in Firewall/Aliases; Name is the string you'll be able to use. Aliases can give clients names to use instead of IPs, or create groups of ports so you can forward a single name that contains all the ports you need.

*Block unRaid from the internet*
Since unRaid isn't a secure OS and doesn't really need anything from the internet, it's not a bad idea to block its internet access. This can be done in the Firewall/Rules menu, LAN tab: add a new rule, set the action to Block (obviously), set the source to the static IP we set for the unRaid host, and set the protocol to any.

*DynDNS*
Since I need access to my stuff while I'm away, I need a dynamic DNS service; I use DynDNS for this. It can be set up in Services/Dynamic DNS: add a new client and fill in the fields; the interface should be set to WAN.

*Port forwards*
Again, this is pretty important for remote access; port forwards can be set up in Firewall/NAT, Port Forward tab. Adding a new rule is pretty straightforward: interface WAN, protocol whatever your traffic needs, destination port range set to the port you need, and the IP and port in the redirect target fields. Aliases can be used here, which is quite handy. I also enabled UPnP, because a lot of the software I use may run in multiple instances on the network, so forwarding fixed ports for it isn't that great an idea.

*Preparations for PXE*
We need to tell pfSense that we want to use a PXE server on the network for it to work. Go to Services/DHCP Server, Advanced options for network booting: enable it and set the IP of the server and the filename, usually "pxelinux.0".
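The other half lives on the PXE server itself: pfSense only hands out the boot server IP and filename, so the Ubuntu VM needs a TFTP daemon serving pxelinux.0. A rough sketch; the package names and paths are my assumptions for Ubuntu of this era, check your release:

```shell
# On the Ubuntu VM: install a TFTP server and the pxelinux bootloader.
apt-get install tftpd-hpa syslinux
# Put pxelinux.0 in the TFTP root (the path varies by release, often
# /var/lib/tftpboot or /srv/tftp -- check /etc/default/tftpd-hpa).
cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
```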

*QoS*
Now this is my favorite feature of all non-basic routers, but in pfSense I find it a big PITA to set up. My goal was to prioritize traffic so big downloads like Steam go to the background and important stuff gets priority over them. The approach that works for me is the CBQ scheduler in pfSense. The "Single LAN multi WAN" wizard creates a pretty good baseline; just make sure to select CBQ as the scheduler for both interfaces and select the services that need shaping on your network. After this it's time to fix what the wizard didn't get right: first make sure the bandwidth cap is set for both interfaces, and that the queues look something like in the picture below; if you didn't get queues on one of the interfaces, or they're nested under another level, fix them by hand. I also found that it didn't work that well until I set a queue limit and checked the two Random Early Detection options. Also make sure that "Borrow from other queues when available" is checked on the queues you don't want a hard limit on.


After this is done, it's time to set some more firewall rules for shaping: go to Firewall/Rules, LAN tab. The rules the wizard created are under the Floating tab; they are pretty much fine (some of the ports may need changing, but otherwise they work OK), and you can use them, with a few changes, as examples for the new LAN rules. So, the new rules: rules on the LAN interface with the same ports set as source ports and the same Ackqueue/Queue options set at the bottom as the corresponding floating rule. If you've set up filtering rules, make sure to put these in the correct order: pfSense processes rules from top to bottom, and if it finds a rule that accepts the packet, that's it, it won't check any more. This is how they look in my config; obviously you can't see everything, but the concept is there:


With all this set up, you should get uninterrupted traffic on your important things while your downloads and such run in the background. For example, I just tried P2P traffic using the full bandwidth while playing BF3, and I was getting pings in the 30s, so I'm pretty satisfied with the results.

A very useful tool for debugging your queue rules is the Status/Queues menu, which displays info about the traffic recognized by your queues. The same thing is available on the console with pftop (then press the left arrow once); I find the console version more useful. Note that the data isn't identical, because the two have different refresh intervals, and the console one works in bytes while the web interface works in bits.

Also, in Windows you can use Resource Monitor to find out which ports your applications use:
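If you'd rather use the command line than Resource Monitor, netstat gives the same information; run it from an elevated prompt, and note that port 3074 below is just an example:

```shell
REM Windows cmd: list sockets with the owning process id (run as admin)
netstat -ano
REM or filter for a single port, e.g.:
netstat -ano | findstr :3074
```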


----------



## mitchtaydev

Nice build log! It's good to see some progress. I've just set up my home lab as well, running ESXi, and was reading through your posts a couple of weeks ago. I use pfSense too, though that server isn't virtualised. I like the software, but to be honest I find the interface a little cumbersome; what do you think of it?

As an aside, do I see you using Hungarian notation in your names/descriptions? If so, can you elaborate on what the prefixes signify? Oh, and I'm not asking that because you're from Hungary!

I am guessing q[name] is a queue, and m_ is member of service?


----------



## Aximous

Quote:


> Originally Posted by *mitchtaydev*
> 
> Nice build log! It good to see some progress. I've just setup my home lab as well running ESXi and was reading through your posts a couple of weeks ago. I use pfsense as well, though that server isn't virtualised. I like the software but to be honest I can find the interface a little cumbersome, what do you think of it?
> As an aside, do I see you using hungarian notation in your Names/Descriptions? if so can you elaborate on what the prefixes signify?
> oh, and i'm not asking that because you are from hungary!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I am guessing q[name] is a queue, and m_ is member of service?


Thanks for stopping by!

To be honest, I HATE the pfSense interface: I find many parts of it counterintuitive and hard to figure out, and many of its menus aren't well documented online, only in some books. I really like the features and how robust the software itself is, but the interface could really improve.

The notations are actually what the wizard creates by default, and I decided to stick with them since I couldn't think of anything better. I went with q for queue and didn't really think much about the m_ prefix; the rest of the name says what it is, and the prefix helps distinguish those rules from the others. I tend to avoid using anything but English in my work (I'm a software developer) and I usually carry that over to configurations and the like.

If you have insights or corrections on any of the writeups, please post; I'm by no means an expert on any of this.


----------



## Hoppo2Def

Really looking forward to this. I've been contemplating building an ESXI server with pfSense, Untangle, and either freenas or unRaid. Thank you for giving me yet another reason to tax my bank account.


----------



## Aximous

Quote:


> Originally Posted by *Hoppo2Def*
> 
> Really looking forward to this. I've been contemplating building an ESXI server with pfSense, Untangle, and either freenas or unRaid. Thank you for giving me yet another reason to tax my bank account.


Thanks a lot!

Good luck with your build, be sure to post a link when you start it!


----------



## Aximous

Installing unRaid

Now this one is a little different from the rest: installation takes more work, and configuration needs less.

*Creating the vmdk*
unRaid is designed to boot from a pendrive and doesn't come with an ISO, just a zip containing the OS and a script to make a flash drive bootable. This needs to be turned into a vmdk image, which can be done with WinImage: first install unRaid on a flash drive as usual, then in WinImage select "Create Virtual Hard Disk image from physical drive..." in the Disk menu and create a vmdk with it. Copy that to the datastore, plug the flash drive in too, and it's time to create the VM!
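If you have a Linux box handy instead of WinImage, I believe qemu-img can produce the same vmdk straight from the prepared flash drive; /dev/sdX is a stand-in for your device name, double-check it with lsblk before running:

```shell
# Read the prepared unRaid flash drive and convert it to a vmdk.
# /dev/sdX is an assumption -- verify the device name first!
qemu-img convert -f raw -O vmdk /dev/sdX unraid-flash.vmdk
```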

*Creating the VM*
This is based on FreeBSD x64 too, I've given it 4 cores and 4GB ram, I really don't know if it requires this much or not, but hell I know the rest doesn't and at least this one can use the ram to cache stuff. The interesting part comes with the storage set the controller to LSI Logic SAS, and add the vmdk we just created to it and then check the modify the settings before finishing. In the settings popup we need to add the RDM images for the raid drives and a USB controller for the flash drive so the license can work. First the USB controller, nothing fancy here, select EHCI+UHCI and finish, next add the USB device select the flash drive containing unRaid and add it. Now the VM should be able to read the GUID of the flash drive so the license can work, note that I don't have a license yet so I can't be sure that this works at the moment, but as far as I know it should. Now it's time to add the RDMs, Add hard disk, existing virtual disk, browse the RDM image and in the next step be sure set it to Independent, Persistent. Add all the images this way and unRaid will see the drives, although note RDM doesn't pass through SMART info so unRaid won't see that and also it won't be able to spin down the HDDs sadly, but that's just the nature of using ESXi without Vt-d







The last thing is to make sure the SCSI controller is set to LSI Logic SAS; I had it change its mind at some point, and this matters because of the RDMs, so be sure to check it.
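For anyone wondering how those RDM images get made in the first place: on a whitebox like this they are typically created on the ESXi host with vmkfstools. A hedged sketch only; the device name and datastore paths below are made-up examples, not from this build, so list your own disks first:

```
# Hypothetical sketch (example device/paths, check yours):
ls /vmfs/devices/disks/
# -r = virtual compatibility RDM (matches the setup above: no SMART
# passthrough); -z would be physical compatibility.
mkdir -p /vmfs/volumes/datastore1/rdms
vmkfstools -r /vmfs/devices/disks/t10.ATA_____WDC_WD20EZRX_EXAMPLE \
    /vmfs/volumes/datastore1/rdms/wd20ezrx.vmdk
```

The resulting pointer vmdk is what gets added to the VM as an existing virtual disk.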

*Configuring unRaid*
When first starting up, disable the firewall rule in pfSense that blocks internet access to this VM, as we need to install some packages. First up is VMware tools; grab the latest version (or the one appropriate for your unRaid version) from here. To install it, open a console session on the VM and go to /boot/config:

Code:





cd /boot/config

create a folder named plugins:

Code:





mkdir plugins

go into that directory:

Code:





cd plugins

and download the plugin file from the link above:

Code:





wget http://unraid.zeron.ca/plugins/open-vm-tools/open_vm_tools-2012.10.14.874563_unRaid5.0rc11-i686-8Zeron.plg

After this and a reboot, VMware tools will be installed on every bootup. I know this sounds strange, but unRaid works like this: the OS loads from the flash drive into RAM, so anything you want to use has to be installed on every boot.
Another useful addon I use is unMenu, I won't describe installing it as they have a good guide on their site.

_Clearing disks_
With these out of the way it's time to do the settings and set up the array, but first: if you're adding empty disks it is recommended to pre-clear them. For this I recommend booting from the flash drive directly instead of using the VM, as the script relies on SMART info, which it won't get under ESXi. Download the script from the first post of this thread. Feel free to follow that guide, but here's how I did it. Once the script is on the pendrive, boot from it, log in as root and go to the script:

Code:





cd /boot

find the disk you want to clear:

Code:





preclear_disk.sh -l

and then start clearing it:

Code:





preclear_disk.sh /dev/hdk

where /dev/hdk is the device identifier of the disk. This takes a *LONG* time, clearing my 2TB WD Green drive took almost a day, so be prepared for it.

_Settings_
Time to do the settings. First, enable time sync in VMware tools, out-of-sync clocks are never good! In the Settings menu enable the sharing services you need, nothing to note there. In disk settings I enabled auto start and 4K-aligned MBR, though I don't think that makes a difference on these HDDs. In share settings be sure to enable user shares, as those will provide the backbone of sharing the library.

_Disks and shares_
It's time to finally add disks to the array: go to Main and select the disks for the appropriate slots, and make sure the parity drive is at least as large as the largest drive in the array! If everything looks good, start the array and let's set up the shares. Go to the Shares menu, and here comes the interesting part: unRaid takes the folders it finds in the roots of the disks, exposes them on the network as user shares, and distributes their contents across the disks according to the rules you set. You won't get a classic RAID-style huge volume containing everything unless you create one user share and put everything under it, but that kinda defeats the purpose.
Creating a new share needs some consideration regarding the settings:
Allocation method:

High-water: Fills the disks in steps, so that at the end of each step there is an equal amount of free space left on each disk.
Most-free: Pretty self-explanatory name, it writes to the disk with the most free space.
Fill-up: Fills the disks consecutively. When using this, min free space needs to be set, else it will fill each disk completely and performance will be poor.
High-water seems like the best choice for me, so I'm sticking with that for all the shares.
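To make the allocation methods concrete, here's a toy Python sketch; this is my own illustration of the behaviour described above, not unRaid's code, and the disk numbers are made up:

```python
# Toy illustration of unRaid-style allocation methods (not unRaid's code).

def most_free(free):
    """Write to the disk with the most free space."""
    return max(range(len(free)), key=lambda i: free[i])

def high_water(free, sizes):
    """Step-fill: the water mark starts at half the largest disk and
    halves whenever no disk is above it; write to the first disk whose
    free space is still above the mark."""
    mark = max(sizes) // 2
    while mark > 0:
        for i, f in enumerate(free):
            if f > mark:
                return i
        mark //= 2
    return most_free(free)

free  = [1500, 800, 2000]   # GB free per disk (example)
sizes = [2000, 1000, 2000]  # GB total per disk (example)
print(most_free(free))           # -> 2, the emptiest disk
print(high_water(free, sizes))   # -> 0, first disk above the 1000 GB mark
```

With high-water the writes stay on one disk for a while before moving on, which keeps fewer disks spinning than most-free.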

Split level, now this is the interesting part. It sets the directory depth below which directories are forced onto one disk: above that level they can be written to any disk, but below it everything is kept together on a single disk. This is useful for TV show seasons or categorized photos, for example, so when watching or browsing you don't have to wait for another disk to spin up; a season's files are all on the same disk, while other seasons can sit on other disks. A more detailed writeup is available here. An example: my TV shows are laid out as /series/series_folder/season_folder/season_files, and the goal is to keep the contents of each season folder on one disk but let the season folders balance out between disks; with this structure, a split level of 2 does that. The explanation that worked best for me: split level 2 means level 2 (series_folder) is the last level that can have multiple instances across disks. The same series_folder can exist on multiple disks, but anything under it, meaning each season_folder, can exist in only one instance on one disk. So you could have Season 01 on disk1 and Season 02 on disk2, and all of their contents will stay on those disks. You should set this for all your shares so the array gets properly balanced. If for some reason you want one of your shares kept on a single disk, set the split level to 0; in that case you have to create the folders by hand on the disk you want them on.
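The split-level rule can be sketched in Python; this is a toy illustration of the behaviour described above for the TV show layout, not unRaid's actual implementation:

```python
# Toy illustration of split level (not unRaid's code).
# With split level N, the first N directory levels of a path may exist
# on multiple disks; the level below that is pinned to a single disk.

def pin_key(path, split_level):
    """Return the directory prefix that must live on one disk,
    or None if every directory in the path may be split across disks."""
    parts = path.strip("/").split("/")[:-1]  # directories only, drop filename
    if len(parts) <= split_level:
        return None                          # still at or above the split level
    return "/".join(parts[:split_level + 1])

# tv shows: series/series_folder/season_folder/files, split level 2
print(pin_key("series/Some_Show/Season 01/ep01.mkv", 2))
# -> 'series/Some_Show/Season 01' : the whole season stays on one disk
print(pin_key("series/Some_Show/note.txt", 2))
# -> None : files directly under the series folder can go to any disk
```

So two paths with the same pin key always land on the same disk, while paths with different keys (other seasons) are free to balance out.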

You can set included/excluded disks if you want but I don't really see a use for this in my case.

The only thing left is the sharing options, and these are pretty straightforward. The one important point: if you set NFS to private you need to set a rule for it, like *(rw), and the secure setting needs users created to allow access.
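For reference, the rule field uses the standard NFS exports syntax; the subnet below is just an example of a tighter alternative to the *(rw) mentioned above:

```
# allow only the LAN subnet read-write access (example subnet)
192.168.1.0/24(rw)
# or allow any host read-write, as above
*(rw)
```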

I feel like this got a little long and dry but I felt like I'd rather explain everything in depth than be shallow.


----------



## Aximous

I guess it's time to resurrect this project; it quite literally died after the CPU socket (I guess) gave out. I've got a new mobo and CPU since then, and finally found some time to work on this again. There are some changes, but the goal remains the same.


----------



## Aximous

First major change is the new motherboard. I got a pretty good deal on a used Supermicro board and CPU combo: an X8DTL-i with two Xeon L5520s. Nice low-power CPUs, plenty of horsepower which I'll probably never use, but whatever, it was still very cheap for what it is.

Here's the board:


Since this is a 2P board I had to make a second EPS connector for the PSU. I did this using the 6-pin PCI-E connector plus the 12V line from the molex run: shortened them, crimped the right terminals, and put an 8-pin connector on the end, and voila, it works like a charm.

It doesn't have IPMI sadly, but I can live with that, not like I have to manage it very frequently, I have a 15" LCD and a PS/2 keyboard that I'll hook up and fire up when I need to do something with it.

The board passed 12 hours P95 blend, which is enough for me, it's not overclocked obviously also it's server grade stuff so it should be okay. With a Hyper 212+ and a Hyper 212 EVO the highest temp I saw was 57°C at around 24°C room temp, that looks pretty good to me.

I'm waiting on a PCI sata card for the ESXi datastore drive, so I can have the whole sata controller on the ICH10R passed through to the unraid VM, when I receive that I can start working on the VMs again.

Also I got a new case, a CM elite 331, installed stuff in it, also had to do some ghetto hacking to have the coolers installed on the motherboard, I'll have pictures of these up on Sunday or Monday.

PS: Anyone looking at this board, you NEED active cooling on the northbridge or it WILL overheat, just saying


----------



## DaveLT

Do they really need the second EPS connector? Seeing as two L5520s only pull 120W, I doubt the point of it.


----------



## Aximous

That's true, by the power draw I don't think it's necessary, but afaik it is wired so that each CPU gets its own EPS connector, that's why it needs it. I may be wrong on this one, but I made the connector anyway, not like it was a big deal.


----------



## Aximous

So here's how I managed to mount the coolers, those are standard M3 motherboard standoffs, the other cooler has standoffs that are taller, so for that I had to screw together 2 standoffs and sand one of them down to the correct size. I got carried away while working on those so I forgot to take pictures.

I know it is ghetto, but it works well, as I said earlier during P95 the processors never exceeded 60°C so it works. Way cheaper than buying coolers that fit natively.

Here's a picture of the system put together, please ignore the sata cables, those aren't final yet as I'm waiting for the sata card. The rest of the cables I consider pretty much final. I used the drive cage from my Storm Stryker for the HDDs and a Xilence HDD silencer for the datastore drive. Also modded one of the sata power cables to be the correct length for the HDDs in the cage.

(please ignore the crappy picture with flash, it's dark)

Installed a 120mm in the front to have a little more positive pressure in the case, both fans in the front are filtered with silverstone filters, so hopefully the system won't be dusty. To be able to clean those easily I won't have the front panel installed so I'll have to think about something else for the power and reset switches, I have an old case, I'll probably try to salvage the switches from that.

Also installed a tp-link PCI gigabit NIC which I'll probably use as the management network for ESXi if my plan with pfSense and passing through NICs works out.

Tomorrow I'm getting an APC ES UPS, though I'll have to wait for the data cable to come in the mail later the week as well as the sata card.


----------



## void

Nice, been following the build keenly.


----------



## StatikGP

Cool thread. I've been a virtualization/network administrator for a large business for several years, with expert knowledge of VMware ESXi and Hyper-V. Hit me up if you have questions.


----------



## DaveLT

Hmm ... why did you have to mod the standoffs? I thought Hyper 212s supported 1366 natively?


----------



## Aximous

Quote:


> Originally Posted by *void*
> 
> Nice, been following the build keenly.


Thanks, I really appreciate the interest!








Quote:


> Originally Posted by *StatikGP*
> 
> Cool thread. I've been a virtualization/network administrator for a large business for several years, with expert knowledge of VMware ESXi and Hyper-V. Hit me up if you have questions.


Thanks, I'm sure I'll have some questions and stuff that I do wrong, I'm by no means an expert on this stuff, all I know is pretty much from forums, random blogs and what I learned myself.
Quote:


> Originally Posted by *DaveLT*
> 
> Hmm ... why did you have to mod the standoffs? I thought Hyper 212s supported 1366 natively?


Because the board has threaded holes for the cooler, not normal holes like on desktop boards, so I couldn't use the backplate and standoff that came with the cooler.


----------



## Aximous

Little update: got the UPS today. I bought it used with a dead battery, so I've put a brand new battery in it; it should last quite some time. It's 550VA, which gives 330W max load; while not plenty, that should be enough until I get a ridiculous number of HDDs, so I'm not really concerned.

Also here is a picture of the sata power cable I mentioned before, turned out pretty nice I think:

Only the first connector is plugged in, that's why it looks a little weird, but it lines up nicely when all of the connectors are in.

The sata card and the data cable for the UPS should be in the mail tomorrow, so hopefully I'll receive them this week, meanwhile I have 2 exams for my driving license, also there's the BF4 beta tomorrow







So I think no updates until I receive the stuff in the mail.

(Again excuse the flash in the pictures)


----------



## Aximous

I got the sata card in the mail today, so it's time to get some work done on this box. But first here are some pictures of it:

It's SATA1 and old tech and whatnot, but here's the whole reason for this card: the SIL3512 controller, the only controller I've found that is recognized by ESXi and doesn't cost a fortune.

Here it is installed in the case and connected up to the HDD to be used as datastore.


The card works as advertised, so I started installing ESXi 5.5, because why not upgrade? Turns out VMware dropped a lot of 'officially unsupported' drivers from 5.5 that were in 5.1, including the r8169 driver for the TP-Link card I'm using. Fortunately the 5.1 drivers work in 5.5 and they're available in the VMware repos, so they just need to be put back in place. To do this, PowerCLI is needed. After installing it, run it as administrator to avoid permission problems, and execute the following commands:

Code:





#Connects to the software depot. Takes a few seconds to connect.
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
#Takes the standard ESXi 5.5 iso and clones it so we can essentially slipstream in the missing drivers.
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-1331820-standard" -name "ESXi-5.5.0-1331820-Whitebox" -vendor "withNICs"
#Add the missing drivers.
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.5.0-1331820-Whitebox" -SoftwarePackage "net-r8169"
#Take our newly modified profile and spit out an iso to use. This will take a few minutes. Be patient.
Export-ESXImageProfile -ImageProfile "ESXi-5.5.0-1331820-Whitebox" -ExportToISO -filepath D:\ESXi-5.5.0-1331820-Whitebox.iso

After this, the ISO created in the last step installs just fine and the TP-Link card is recognized again. Well, I had to pull and reseat the cards to get both recognized at the same time, not sure why, but at the moment both cards are working; hopefully that won't change as long as I don't touch them.









The above is based on the guide I found here, and as you can see there this isn't the only driver one can add, but I only needed this one so why bother with more.

That's it for this update, this is the only thing I managed to get done today as the cards not getting detected was very very frustrating...







Anyway ESXi is installed, I have the old VMs on the datastore, hopefully they can either start up or I can extract the config files from them.


----------



## Aximous

Was messing around with pfSense today. My plan was to pass both onboard Intel NICs directly through to the VM and use a vSwitch for the rest of the VMs, getting lower CPU usage and overall better network performance that way. But after literally 10 hours of watching interfaces drop, kernel panics, and VMs getting IP addresses but not being able to ping anything, I decided to throw in the towel. I'll just use regular vSwitches and be done with it; I ran it like that before with crap Marvell NICs and the performance was good enough, I got ~110MB/s bursts while copying over the network.

Would've been nice if I could get this working but this was just too frustrating. If anyone is good with pfSense and is willing to help me with this I may try this again but at the moment I had enough of this and don't see enough reason to struggle with it more.


----------



## The_Rocker

Quote:


> Originally Posted by *Aximous*
> 
> Was messing around with pfSense today. My plan was to pass both onboard Intel NICs directly through to the VM and use a vSwitch for the rest of the VMs, getting lower CPU usage and overall better network performance that way. But after literally 10 hours of watching interfaces drop, kernel panics, and VMs getting IP addresses but not being able to ping anything, I decided to throw in the towel. I'll just use regular vSwitches and be done with it; I ran it like that before with crap Marvell NICs and the performance was good enough, I got ~110MB/s bursts while copying over the network.
> 
> Would've been nice if I could get this working but this was just too frustrating. If anyone is good with pfSense and is willing to help me with this I may try this again but at the moment I had enough of this and don't see enough reason to struggle with it more.


To do this you need to run two vswitches. One for the WAN side of pfSense and one for the LAN side. One of your physical NIC's connected to each vswitch. Then your incoming physical network must plug into the NIC associated with the vswitch you are using for the WAN side of pfsense.

Then assign your vm's to the lan side vswitch.


----------



## Aximous

Yes, that's clear I was running it like that before, I was trying to passthrough the NICs with vt-d to reduce overhead on the VM side. Ended up doing it the way you wrote and works fine, but turns out something is not right with hardware passthrough in 5.5 as the ICH10R wasn't working either in the unraid VM, rolled back to 5.0 and it works fine. I'm trying 5.1 now, if the ICH10R works there I may try the NICs again.


----------



## Aximous

Well, rolled back to ESXi 5.1u1 and installed pfSense and unRaid so far. I gave up on passing through the NICs to pfSense, but the ICH10R passthrough for unRaid is working very well. I'm using vmxnet3 virtual NICs and performance is very nice with them. A few notes on this for pfSense:

pfSense doesn't support vmxnet3 out of the box, so an e1000 adapter must be used for the install; it can be removed afterwards.
Hop into the console and do these steps to install VMware tools (open-vm-tools doesn't support vmxnet3, so the official tools are the way to go). Start the VMware tools installation from the vSphere client beforehand, so the tools CD is mounted in the VM:

Code:





pkg_add -r perl compat6x-amd64

mount_cd9660 /dev/acd0 /mnt
cd /tmp
tar zxvf /mnt/vmware-freebsd-tools.tar.gz
cd vmware-tools-distrib
./vmware-install.pl -d --clobber-kernel-modules=vmxnet3,pvscsi,vmmemctl

To make it load every time, you'll have to modify the file _/etc/rc_ to search for additional libraries
within the _/usr/local/lib/compat/_ folder too. Open it with your favourite text editor and locate the line

Code:





/sbin/ldconfig -elf /usr/lib /usr/local/lib /lib

and append the previously specified directory at the end, turning it into

Code:





/sbin/ldconfig -elf /usr/lib /usr/local/lib /lib /usr/local/lib/compat

Sometimes VMware tools might not start; that's because the file _/etc/vmware-tools/not_configured_ exists, and deleting it lets tools start again. If the issue comes back, commenting out this part of the startup script can solve it; the startup script is _/usr/local/etc/rc.d/vmware-tools.sh_.

With this done, the e1000 adapter can be removed and the vmxnet adapters can be configured.

For the unraid VM I used the image from this thread. Just point the vmdk as the hard drive of the VM and give it the pendrive. As for vmware tools just download the latest version from here and put the _.plg_ file in the _config/plugins_ folder on the pendrive and the _.tgz_ file in the _packages_ folder (create it if it doesn't exist) and it should be ready.


----------



## Aximous

Received the data cable and managed to get graceful shutdown with the UPS working. For that I deployed vMA (the vSphere Management Assistant); I won't go into details, it's a pretty straightforward process. After it's deployed (I've set a static IP on both it and the host, so they can communicate even if the DHCP server on the network is down for some reason), hop onto SSH or the console and do the following:

Code:





# Install the apc daemon.
sudo zypper install apcupsd
# Open up the required ports.
sudo iptables -I INPUT -p tcp --dport 3052 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 3052 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 3551 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 3551 -j ACCEPT

Now it's time to configure the daemon for the particular UPS. Mine is a dumb one with a USB cable, so the relevant lines in the conf file are the following (the rest is fine at the defaults); edit _/etc/apcupsd/apcupsd.conf_:

Code:





UPSCABLE usb
UPSTYPE usb
DEVICE
MINUTES 6

I recommend testing the battery life and the shutdown time of the host to decide when to start shutting down. For me the UPS holds out for 12 minutes and the host takes 2.5 minutes to shut down, so shutting down with 6 minutes remaining should be OK.
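The timing reasoning above is simple arithmetic, but worth sketching. This is my own rule of thumb, not from apcupsd: give the host its shutdown time plus a safety margin, and don't cut it closer than half the measured runtime.

```python
# Sketch of the UPS shutdown-timing reasoning (illustrative numbers).
battery_runtime_min = 12.0   # measured: how long the UPS holds the load
host_shutdown_min   = 2.5    # measured: time for ESXi + guests to stop

def shutdown_threshold(runtime, shutdown_time, margin=1.0):
    """Remaining-minutes value for apcupsd's MINUTES setting.
    Take whichever triggers earlier (i.e. the larger value): the
    shutdown time plus a margin, or half the measured runtime."""
    return max(shutdown_time + margin, runtime / 2)

print(shutdown_threshold(battery_runtime_min, host_shutdown_min))  # -> 6.0
```

With the numbers from this build that lands on the same MINUTES 6 used in the config above.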

It's time to test if everything's working: run _apcaccess_ as vi-admin. If it spits out a bunch of lines about the UPS, it's fine; if it gives a connection error, then either the configuration isn't good or the ports aren't open. If everything is in order, the shutdown action needs to be configured to shut down the host, not just the VM; edit _/etc/apcupsd/apccontrol_:

Code:





# Find the line: ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
# and replace it with this (fill in your host IP and root password):
su - vi-admin -c "vicfg-hostops --server <esxi host ip> --username root --password <root password> -o shutdown -f"

For this to work, the VMs running on the host have to be in the startup/shutdown list and the shutdown action should be Guest Shutdown for a graceful shutdown; running VMware tools in the guests is also required.

Now apparently this UPS doesn't support hibernating (or at least I couldn't find it) so starting back up has to be manual unless the battery actually runs out.


----------



## Dream Killer

I have my ESXi set up this way. Don't forget to map your hard drives as raw in physical compatibility mode to the fileserver VM. This gives the VM raw access and it will be able to read SMART and monitor temperature; otherwise ESXi will mask the I/O and your VM may not respond to hardware errors on the drive.



ps: my server does not have vt-d either.


----------



## StatikGP

I would advise against mapping drives as raw LUNs. It's not a best practice by any means, especially if you plan on backing up these machines or are looking for easy expansion down the road.


----------



## Aximous

I have the storage controller passed through to the VM with vt-d, so that's not a problem fortunately.


----------



## Aximous

Sold the RAM from this and got a new 24GB kit for less than that, and also got a second identical 160GB drive for the OS. I wanna give Xen another go and try to set up a RAID 1 on those.

Turns out one of the sticks isn't recognized, but still 20 gigs is fine, and I'm getting back the price of one stick.

I'm having my final exam on the 8th so probably I won't be messing around with xen until then.


----------



## tycoonbob

Quote:


> Originally Posted by *Aximous*
> 
> Turns out one of the sticks isn't recognized, but still 20 gigs is fine, and I'm getting back the price of one stick.


I'd recommend replacing the bad DIMM instead of just returning it. Not sure about that board specifically, but it's best practice to use the same DIMM model, size, and count per CPU (each CPU has its own bank of DIMMs). That board may support tri-channel RAM, but if one CPU only has 2 DIMMs you lose tri-channel and will see less performance overall from the server.

Just a thought.


----------



## Aximous

In the long run I'm planning on replacing it, but at the moment I'm not really concerned about it, I was running single channel up until now so it's a step up from that anyway.


----------



## tycoonbob

Quote:


> Originally Posted by *Aximous*
> 
> In the long run I'm planning on replacing it, but at the moment I'm not really concerned about it, I was running single channel up until now so it's a step up from that anyway.


----------



## Aximous

Managed to get OpenELEC working in a VM today, well, not the latest one, only version 3.2.4.

Setup
For this to work I created a new VM with *Virtual Machine Version 7*, *Other 2.6.x Linux (64-bit)* as the operating system, and the virtual disk on an *IDE node*. I didn't bother much with the resources: 2 CPU cores, 1 gig of RAM and 4 gigs for the HDD. I also added the HD4550 in the server as a PCI device to the VM for display output.

Installation
For this I followed this guide. It worked mostly without hitches; the only thing I had to change was adding the *nomodeset* kernel parameter to *extlinux.conf*. After rebooting, Xorg fails to start, but the system itself is working, so log in with SSH to get Xorg going. For this I ran *aticonfig --initial --output=/storage/.config/xorg.conf* and added *blacklist vmwgfx* to */storage/.config/modprobe.d/blacklist.conf*. This blacklists the kernel module for the VMware SVGA adapter; I'm not sure if it's necessary, but it works like this now.

With this setup I have display on the monitor hooked up to the VGA, but at the moment I can't really test playback since I don't have any HDMI devices with audio at hand, only the tv downstairs but obviously I can't haul the server down there or the tv up here so that's a no-go. I'll figure something out for testing in the coming days, maybe one of those HDMI breakout cables.

Other thing I'll have to figure out is vmware tools somehow.


----------



## Methos07

I've got a SCO 5.06 VM I manage at work, and I've definitely given up on vmware tools at this point. ha. Good log, love vmware.


----------



## Aximous

I really don't care that much about VMware tools; the only reason I want it is that I have my UPS shut down the server on power failure, and VM auto-shutdown won't work without tools.

Maybe I'll try compiling the virtual project of openelec and try that since I'm using the older version anyway.


----------

