# 'Best' VM Host OS?



## Usario

Definitely some distro of Linux.


----------



## SectorNine50

Ever looked at VMware?

https://www.vmware.com/tryvmware/index.php?p=free-esxi&lp=1

That's the free edition.

The number of VMs you can run varies GREATLY depending on what you are doing on each one.
What are the purposes for each VM?


----------



## ro00

Are you talking host as in web host? Then definitely the CentOS Linux distro. I'd recommend 512MB for each VM to run something like LAMP; after that, your limitation is really CPU usage. I have some servers that can run 25 VMs and others only 11, so it depends on what exactly you'll be doing. There isn't really a SET number.


----------



## stren

Depends what you mean by best. The least hardware overhead would probably be ESXi, and there is a free version, though it's really aimed at servers: management is done through a client running on a separate machine, and the box itself only gives you a basic text configuration console. It's also picky about hardware. If you're just getting started, I'd suggest a 64-bit Linux distro (e.g. CentOS) and VMware Player. Play with that a while and see how you do.

What are you trying to do with the VMs? It's easy to run many VMs that don't do anything; it may be hard to run several intensive ones. It's like asking how long a piece of string is.


----------



## killabytes

Thanks guys,

I didn't list off what each VM would be due to the fact that I'm not 100% sure yet. I may decommission my other servers if I can transfer the load to a VM. Here is how it would stand at the moment.

1. Untangle in Bridge Mode, only scanning limited network traffic.
2. Possible web server replacement, my current web server is a P3. No need for a huge VM for this.
3. Back-up machine, something like Amanda or close to it.
4. Unsure?

Host OS means the actual OS that the VM will be running off of, not a web hosting OS. I use Ubuntu for that.


----------



## SectorNine50

I will tell you that the biggest factor in running VM's is going to be RAM.

Based on what I see there, you should be okay with 8GB, but you are going to want to make a "map" of sorts for how much RAM you want each machine to have, and how much you think it will use.

Do you have a SAN, or are you going to be running all of this off of the host's drives?

Nice thing about a SAN is ability to add more hosts "on the fly," and being able to move the machines between the hosts without moving their entire .vmdk file to the new host.

However, if you use an OS like Openfiler on one of the hosts, you can actually turn that host's drives into a SAN and have the same effect as an external one. A great alternative for a potentially growing cluster that is tight on money right now.


----------



## rocketman331

My vote is ESXi, but if your hardware isn't compatible, I'd recommend Proxmox.

Proxmox is easy and has a lot of the features the others have.


----------



## killabytes

Quote:


> Originally Posted by *SectorNine50;14182095*
> I will tell you that the biggest factor in running VM's is going to be RAM.
> 
> Based on what I see there, you should be okay with 8GB, but you are going to want to make a "map" of sorts for how much RAM you want each machine to have, and how much you think it will use.


I would break it down to something like this...

1. Untangle: 1GB, maybe 2GB. It's not going to see a lot of traffic.
2. Web server: 512MB to 1GB at the most.
3. Backup: again, 1GB at the most.
4. ?

The host will need all it can get, obviously. Upgrading RAM isn't a huge deal either; DDR2 is wicked cheap now. If I need 16GB, so be it.
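
For what it's worth, a budget like that is easy to sanity-check against the host's RAM. A quick sketch (the per-VM numbers are just the estimates above; the hypervisor overhead figure is a guess, not a measured value):

```python
# Rough RAM budget for the planned VMs.  Sizes in GB; the per-VM
# numbers are the estimates from this thread, and the hypervisor
# overhead figure is a guess, not a measured value.
planned_gb = {
    "untangle": 2.0,   # "1GB, maybe 2GB" -- budget the high end
    "web": 1.0,        # "512MB to 1GB at the most"
    "backup": 1.0,     # "again, 1GB at the most"
}
hypervisor_overhead_gb = 1.0  # rough allowance for the hypervisor itself

total_gb = sum(planned_gb.values()) + hypervisor_overhead_gb
host_ram_gb = 8.0
headroom_gb = host_ram_gb - total_gb

print(f"allocated {total_gb:.1f}GB of {host_ram_gb:.1f}GB "
      f"({headroom_gb:.1f}GB headroom for VM #4)")
```

With 8GB in the host, that leaves a comfortable margin for the mystery fourth VM.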


----------



## SectorNine50

Quote:


> Originally Posted by *killabytes;14182128*
> I would break it down to something like this...
> 
> 1. Untangle: 1GB, maybe 2GB. It's not going to see a lot of traffic.
> 2. Web server: 512MB to 1GB at the most.
> 3. Backup: again, 1GB at the most.
> 4. ?
> 
> The host will need all it can get, obviously. Upgrading RAM isn't a huge deal either; DDR2 is wicked cheap now. If I need 16GB, so be it.


If you are going to be doing PHP and/or SQL on the web server, I'd recommend at *least* 1GB.

The more RAM, the better!

Also, check out the bit I added to the end of my post above. I think Openfiler would be a cool solution for you.


----------



## killabytes

You're right, it runs PHP, MySQL, Cacti and a bunch of other things. More RAM would be better. I don't currently have a SAN. I do have 2 extra HDDs inside the 1U case that will be used for VM storage. The only network storage I have is my 10TB Windows Server 2k3 box, which I'm debating switching to WHS.

Darn it, now you got me wanting a SAN!


----------



## SectorNine50

Quote:


> Originally Posted by *killabytes;14182209*
> You're right, it runs PHP, MySQL, Cacti and a bunch of other things. More RAM would be better. I don't currently have a SAN. I do have 2 extra HDDs inside the 1U case that will be used for VM storage. The only network storage I have is my 10TB Windows Server 2k3 box, which I'm debating switching to WHS.
> 
> Darn it, now you got me wanting a SAN!


SANs are sweet, but very, very expensive. For now, I would just install Openfiler or FreeNAS as the first VM, map it to the two extra drives in the 1U case, and then install the other VMs onto the iSCSI target that Openfiler/FreeNAS creates.

The good news is, you can have multiple iSCSI LUNs attached to the host. This means if you get an external SAN later, you can transfer things to that and go from there.

EDIT:
Might want to double check what features are available with ESXi. To move machines between hosts and LUNs, you might have to power down the machine. Not a big deal, but thought I'd point it out, as we have vMotion here at work that allows us to move VMs while they are powered on.


----------



## killabytes

Quote:


> Originally Posted by *SectorNine50;14182305*
> SANs are sweet, but very, very expensive. For now, I would just install Openfiler or FreeNAS as the first VM, map it to the two extra drives in the 1U case, and then install the other VMs onto the iSCSI target that Openfiler/FreeNAS creates.
> 
> The good news is, you can have multiple iSCSI LUNs attached to the host. This means if you get an external SAN later, you can transfer things to that and go from there.
> 
> EDIT:
> Might want to double check what features are available with ESXi. To move machines between hosts and LUNs, you might have to power down the machine. Not a big deal, but thought I'd point it out, as we have vMotion here at work that allows us to move VMs while they are powered on.


What would the advantage be in setting up a SAN on the 1U instead of tossing the 'swap' space for the VMs on the other drives?

Powering down isn't really an option. I host a few websites for some folks that may be cranky, lol.


----------



## The Master Chief

I'd use XenServer. Install the guests, manage them through XenCenter, then connect when you need to through RDP for Windows guests, and the Linux equivalent for Linux guests (I can't think of its name).

However, that is a type 1 hypervisor (bare metal), if you wanted a type 2 then I would go with VMware... However, I'm thinking a bare metal hypervisor would be best for your needs.

It's what I would use, but that's just me.

VM's use about half the resources that a regular install needs.


----------



## SectorNine50

Quote:


> Originally Posted by *killabytes;14182523*
> What would the advantage be in setting up a SAN on the 1U instead of tossing the 'swap' space for the VMs on the other drives?
> 
> Powering down isn't really an option. I host a few websites for some folks that may be cranky, lol.


Basically, it would help you "future proof" a bit. Since the "SAN" can be hooked up to more than one host at a time, it would allow you to add hosts after the fact. You could then move the VMs between the hosts freely without having to remove them from inventory, copy over the .vmdk files, and re-add them to inventory.

If you are wondering why the SAN is necessary for this, it's because the hosts can't see each other's built-in LUNs (i.e. Host1's 100GB HDD isn't accessible by Host2's VMs).

If you don't really think you are ever going to add more hosts to your cluster, it probably doesn't make much of a difference.


----------



## trueg50

Quote:


> Originally Posted by *killabytes;14182523*
> What would the advantage be in setting up a SAN on the 1U instead of tossing the 'swap' space for the VMs on the other drives?
> 
> Powering down isn't really an option. I host a few websites for some folks that may be cranky, lol.


The SAN would be the storage location for the VMs. Swap files for the VMs could exist on the SAN or on another datastore.

The advantage of the SAN would be enhanced flexibility, so you can shuffle the VMs from one host to another. VMware calls it "vMotion": it lets you do something like move all the VMs from one server to another (with no downtime), power down the server, swap out RAM, etc., then move the VMs back.

Sadly, vMotion isn't available unless you pony up some cash (in excess of $2,000 per year, I believe); however, the SAN/NAS or other datastore would still be a good idea.


----------



## killabytes

Love it. That's the plan then. Use the extra HDDs in the server as a _SAN_ to prevent downtime.

Now I just need to wait for the 1U low profile active cooler. I hate blower fans! So loud!


----------



## trueg50

Quote:


> Originally Posted by *killabytes;14183534*
> Love it. That's the plan then. Use the extra HDDs in the server as a _SAN_ to prevent down time.
> 
> Now I just need to wait for the 1U low profile active cooler. I hate blower fans! So loud!


Doesn't quite work like that. For the SAN you would need a separate server for the drives + processing. You would also need the licensing for additional features like vMotion.


----------



## killabytes

Quote:


> Originally Posted by *trueg50;14183789*
> Doesn't quite work like that. For the SAN you would need a separate server for the drives + processing. You would also need the licensing for the additional features like Vmotion.


Even if I was to use Openfiler?


----------



## Lord Xeb

ESX or ESXi >.>

But really, any form of Linux is best for hosting VMs. It is light and very stable (especially older variants that have had time to mature).

Grab Fedora if you wanna do any kind of VM.


----------



## trueg50

Quote:


> Originally Posted by *killabytes;14183981*
> Even if I was to use Openfiler?


I believe so.

You need direct access to the hardware; I don't know how well passing the array controllers directly to the Openfiler VM will work or how many controllers the 1U has.

Fortunately, ESXi is great at handling storage, so you can take all your drives, throw them in a RAID 5/6 (or 1) depending on how many drives you have, and they become your datastore for all the VMs. Later on, if you build a SAN/NAS out of old parts, you can just map that in ESXi and voila, you have another datastore.
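
For sizing that RAID datastore, the usable-capacity math is standard. A sketch (ignores VMFS/filesystem overhead and assumes equal-sized drives):

```python
def usable_capacity_gb(num_drives, drive_size_gb, level):
    """Approximate usable space for common RAID levels.
    Ignores VMFS/filesystem overhead; assumes equal-sized drives."""
    if level == 1:      # mirroring: half the drives hold copies
        return (num_drives // 2) * drive_size_gb
    if level == 5:      # one drive's worth of parity
        if num_drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (num_drives - 1) * drive_size_gb
    if level == 6:      # two drives' worth of parity
        if num_drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (num_drives - 2) * drive_size_gb
    raise ValueError(f"unsupported RAID level: {level}")

# e.g. four 500GB drives:
print(usable_capacity_gb(4, 500, 5))  # RAID 5 -> 1500
print(usable_capacity_gb(4, 500, 6))  # RAID 6 -> 1000
```

With only the two spare drives in the 1U, RAID 1 is the realistic option; 5/6 only come into play if more drives get added.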


----------



## SectorNine50

Quote:


> Originally Posted by *trueg50;14184192*
> I believe so.
> 
> You need direct access to the hardware; I don't know how well passing the array controllers directly to the Openfiler VM will work or how many controllers the 1U has.
> 
> Fortunately ESXi is great with handling storage, so you can take all your drives, throw them in a RAID 5/6 (or 1) depending upon how many drives you have, then they become your datastore for all the VM's. Later on if you build a SAN/NAS out of old parts, you can just map that in ESXi and voila you have another datastore.


Openfiler will use the VMware hypervisor to access the local drives and create an iSCSI store that any computer on the network can access. Openfiler will run as a VM (it is its own operating system, and therefore has access to the drives just like any other VM), and once you have it configured, you add the iSCSI target to the host. Once you've done that, you can install VMs to the iSCSI target, effectively building an on-board SAN that any future host will be able to access.

Please trust me on this, I work with ESX at work on a daily basis, and have experience running Openfiler.


----------



## trueg50

Quote:


> Originally Posted by *SectorNine50;14184736*
> Openfiler will use the VMWare hypervisor to access the local drives and create an iSCSI store that any computer on the network can access. Openfiler will run as VM (it is it's own operating system, therefore has access to the drives just like any other VM), and once you have it configured, you add the iSCSI target to the host. Once you've done that, you can install VM's to the iSCSI target, effectively building an on-board SAN that any future host will be able to access.
> 
> Please trust me on this, I work with ESX at work on a daily basis, and have experience running Openfiler.


Yeah that works too! I don't think this will need to be very high performance, so it should be fine.

Where are the LUNs sitting for the Openfiler SAN? Are you just giving the Openfiler VM a couple of vdisks and using those for storage?


----------



## Shadowww

Sorry, couldn't be arsed to read all the replies.

If the VMs are mostly Windows: Microsoft Hyper-V Server (it's free).
Mostly Linux and other UNIX: preferably VMware ESXi, or some Linux distro with Xen.
Mostly Linux: VMware ESXi or, preferably, some Linux distro with KVM (ideally RHEL 6.1/CentOS 6.0/SL 6.0).


----------



## SectorNine50

Quote:


> Originally Posted by *trueg50;14186996*
> Yeah that works too! I don't think this will need to be very high performance, so it should be fine.
> 
> Where are the LUN's sitting for the Openfiler SAN? Are you just giving the Openfiler VM a couple of Vdisks and using that for storage?


Yup, exactly!

Depending on how big of a block size you give your drives (and how big your drives are), you can do one giant drive or several smaller ones. Openfiler will use its vdisks as iSCSI LUNs; it's actually pretty slick.
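
On the block-size point: with VMFS3-era ESXi, the datastore block size caps the maximum size of a single file such as a .vmdk. The figures below are the commonly cited VMFS3 limits; treat them as an assumption and verify against your ESXi version's documentation:

```python
# Commonly cited VMFS3 limits: datastore block size (MB) -> largest
# single file (e.g. a .vmdk) the datastore can hold, in GB.  These
# figures are an assumption from the VMFS3 era -- verify them
# against your own ESXi version's documentation.
MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(vdisk_gb):
    """Smallest VMFS3 block size that can hold a vdisk this large."""
    for bs_mb in sorted(MAX_FILE_GB):
        if vdisk_gb <= MAX_FILE_GB[bs_mb]:
            return bs_mb
    raise ValueError("vdisk too large for a VMFS3 datastore")

print(min_block_size_mb(300))  # a 300GB vdisk needs a 2MB block size
```

The practical upshot: pick the block size when you create the datastore, based on the largest vdisk you ever expect to need.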


----------



## killabytes

Wow guys. Thanks for all the info. Really. It's well needed.


----------



## darknight670

Well, for pure virtualisation ESXi is a good choice. Tiny overhead, and you can pass through entire controllers...


----------



## parityboy

Related question: when it comes to providing a datastore for virtual drives, so far I see a couple options:

1) Make the disk images available as vmdk files via an NFS share, so that they'd be accessed as if they were local disk image files.

2) Make the disk images available as network block devices, using SAN technologies like iSCSI. The drives are accessed as if they are local physical hard disks. The disk images themselves won't necessarily be vmdks on loopback - they could be LVM logical volumes.

Which way is "best"? Do network block devices yield higher performance than NFS-accessed files?


----------



## SectorNine50

Quote:


> Originally Posted by *parityboy;14308230*
> Related question: when it comes to providing a datastore for virtual drives, so far I see a couple options:
> 
> 1) Make the disk images available as vmdk files via an NFS share, so that they'd be accessed as if they were local disk image files.
> 
> 2) Make the disk images available as network block devices, using SAN technologies like iSCSI. The drives are accessed as if they are local physical hard disks. The disk images themselves won't necessarily be vmdks on loopback - they could be LVM logical volumes.
> 
> Which way is "best"? Do network block devices yield higher performance than NFS-accessed files?


In short: yes, network block devices will yield higher performance due to lower overhead than NFS.

However, in both situations the .vmdk would be mounted the same way. They are disk images, but the host presents them to the VM as physical disks, no matter how they're stored.

In terms of LVMs, VMware does not make individual logical volumes for each virtual machine, even if it's on an iSCSI target. It builds a single logical volume on the iSCSI target and organizes the virtual disk files inside it. So basically, you can go through the folders on the iSCSI device and look at all the .vmdk files that were created for each machine (which turns out to be great for when a snapshot gets "orphaned").


----------



## dhenzjhen

Linux with Xen is free and stable, but if you want to pay for a license, get VMware. If you just want to play with virtualization, you might want to try VirtualBox.


----------



## SectorNine50

Quote:


> Originally Posted by *dhenzjhen;14308383*
> Linux with Xen is free and stable, but if you want to pay for a license, get VMware. If you just want to play with virtualization, you might want to try VirtualBox.


VMware has a free version of their hypervisor, which in my opinion is the best choice in terms of simplicity, expandability, and management, thanks to the vSphere software.


----------



## JedixJarf

ESXi.


----------



## Lord Xeb

As I have stated before, Linux. And IF you can get ahold of it, ESXi. That OS is solely designed to be a virtualization host and nothing more.


----------



## SectorNine50

Oye... Why are we rehashing all of the VM Host options all over again...?


----------



## killabytes

Quote:


> Originally Posted by *Lord Xeb;14308437*
> As I have stated before, Linux. And IF you can get ahold of it, ESXi. The os is solely designed to be used as a virtual host and nothing more.


Quote:


> Originally Posted by *SectorNine50;14308475*
> Oye... Why are we rehashing all of the VM Host options all over again...?


People only read the first post.


----------

