# Exploring the idea of OpenStack or Proxmox at home..



## tycoonbob

Yeah, it's been on my mind a lot lately.

My current setup consists of a Dell R610 running Server 2012 R2 with Hyper-V, plus my custom storage box. Out of my ~16 VMs, most are Linux-based (CentOS), while 4 are Windows-based: 2 Windows Server 2012 R2 domain controllers, 1 Windows Server 2012 R2 VM running a torrent client (which could easily be moved to Linux), and a Windows 8.1 VM I use as a backup if I can't get to my physical Windows 8.1 PC when away from home. Aside from those Windows VMs, I also have 2 PCs running Windows 8.1 (mine and the lady's, though hers isn't even on my domain), and my storage server running Windows Server 2012. So yeah, I find myself slowly moving to more and more Linux stuff, and I'm getting smarter and more efficient with my resources.

OpenStack is very appealing to me because it's more than just a hypervisor (which is all I have now). It's full management with a web-based front-end: it provides templates to easily spin up VMs, manages my virtual network, and even manages storage (object storage with OpenStack Swift, and block level with OpenStack Cinder). OpenStack can utilize just about any hypervisor, so that leaves me pretty open. While I love Hyper-V, the latest OpenStack release (Icehouse) doesn't support it yet. Other options are Xen-based (XenServer, XCP, Xen), KVM/QEMU, and ESXi. I hate Xen, so that's out of the question. I could use VMware, but I'm really leaning toward KVM/QEMU, since KVM can run Windows Server 2012 R2/Win 8.1 VMs and is pretty lightweight. With OpenStack and KVM I'd have the features I need (templates, live migration/failover, iSCSI or NFS storage) plus other features that I would probably like. My problem with deciding whether to make this big change is hardware. If I go to OpenStack, I want to get more lower-powered compute nodes and scale out, instead of up.

At one time I had 2 C1100s, each with dual L5520s and 48GB of RAM. It was wasteful, because I used maybe ~5% of the CPUs and maybe 40% of the combined RAM. What I'd like to do is build up to 5 boxes (2 to start), each with a single Xeon L5639/L5640, or even a single Xeon L5520, and 16-32GB of RAM. However, I'd also like dual onboard NICs along with a PCIe slot to add more NICs. Dual PSUs would be nice, but with multiple nodes and failover, I'm not too concerned about that. Also, I'd like to see a power draw of around 50W from each, compared to the 140W of my R610.

I would retain my current storage box, and likely find a way to share out storage through both Cinder and Swift, to see whether block level (iSCSI, via Cinder) or object storage (via Swift) works better for my setup.

I'm looking for ideas for my compute nodes, though, and looking to spend somewhere around $300-400 per node. I can get used Xeon L5640s on eBay for about $80 each, but I can't seem to find much in the way of single-CPU LGA1366 motherboards/server barebones. I want small servers, such as 1U half-rack. I've been looking at Dell R210s, and can get a barebones R210 with rails for $125-150 each. Since I want L-series Xeons (not E-series or X-series), I would likely want a Xeon L3426 to go in an R210, but those run about $275. That's $400 without RAM and drives (I expect a 60GB SSD, for about $50). RAM is basically $10 per GB, so 16GB would be about $160. That would be ~$600 per server, which seems like a waste when I could get a C1100 with dual L5520s and 24GB of RAM for ~$400.

Does anyone know of a used OEM server that fits what I'm looking for, or a way I could build one? Or am I just dreaming that I could find this sort of setup? I've looked at some Dells and HPs and haven't really found what I'm looking for, but I haven't looked much at IBM, SuperMicro, Rackable, and whatever else might be out there.


----------



## tycoonbob

Well, I've fallen in love with the new Intel Atom CPUs. Specifically, the 8 core/8 thread C2750.

This thing is great (albeit pricier than I want, but it may be worth it):
SuperMicro SYS-5018A-FTN4 - $530 (Newegg)
- 1U half-rack chassis
- Intel Atom C2758 @ 2.4GHz (8 cores/8 threads; supports VT-x and supposedly VT-d), TDP of 20W!!!
- 4 x 204-pin SO-DIMM slots, supporting up to 32GB of ECC RAM
- Quad gigabit NICs
- Dedicated IPMI NIC
- 200W PSU

Basically buy this and RAM. That would put me at about $650 per system with 16GB of RAM added, but the power draw on these is somewhere around 30-40W. According to some benchmarks, the C2750 is supposed to be more powerful than a single Xeon L5520 (or at least on par), and should do just fine running 8-12 VMs.

SuperMicro also has another version of this box, that has the Atom C2550 (4 core / 4 thread) at 2.4GHz, with a TDP of only 14W. This model supports 64GB of ECC RAM, and costs about $400.
http://www.newegg.com/Product/Product.aspx?Item=N82E16816101874

I can get the motherboard/CPU combo that's in the first server for about $338 from Newegg:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182855

And can pick up a SuperMicro chassis (with 200W PSU) for about $75-95 on NewEgg:
http://www.newegg.com/Product/Product.aspx?Item=9SIA24G1E38394
http://www.newegg.com/Product/Product.aspx?Item=N82E16811152131

That would be about $425, plus the cost of RAM and a 60GB SSD.

While it's higher than what I was looking for, it gives me pretty much everything I want and more: 4 onboard NICs, a PCIe slot, super low power (lower than anything else I'll find) which could be very useful if I grow my OpenStack environment to 4-6 of these as compute nodes, a dedicated IPMI NIC...pretty cool.

To go even cheaper, the SuperMicro A1SRi-2558F (C2558, quad-core Atom) motherboard can be had for about $235 on eBay right now:
http://www.ebay.com/itm/FREE-SHIP-Supermicro-A1SRI-2558F-B-Intel-Atom-C2558-DDR3-SATA3USB3-0-V-4GbE-/201129179564?pt=LH_DefaultDomain_0&hash=item2ed43bb9ac

I think something like this would be good with 16GB of RAM as a compute node. Obviously the 8-core (C27xx-series Atom) with 32GB of RAM would be more powerful, but there could be cost savings in going with the lower-powered model and more of them.


----------



## Simmons572

I'm very curious to see what you decide on. Sub


----------



## cones

If you really got 6 nodes that drew 50W each, wouldn't that be a way higher draw than your single R610? I don't see how that would help with power usage at all. The Atom is better from a power perspective. Don't have much to add, mostly wanted to sub. So OpenStack is essentially a web front-end for hypervisors? What's making you move away from just Windows now?


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> If you really got 6 nodes that drew 50W each, wouldn't that be a way higher draw than your single R610? I don't see how that would help with power usage at all. The Atom is better from a power perspective. Don't have much to add, mostly wanted to sub. So OpenStack is essentially a web front-end for hypervisors? What's making you move away from just Windows now?


From Wikipedia:
[...]
OpenStack is a free and open-source software cloud computing platform. It is primarily deployed as an infrastructure as a service (IaaS) solution. The technology consists of a series of interrelated projects that control pools of processing, storage, and networking resources throughout a data center, able to be managed or provisioned through a web-based dashboard, command-line tools, or a RESTful API. It is released under the terms of the Apache License. [...]

While 6 nodes at 50W each is more than a single R610 at ~150W, my original plan was to grow to 3 R610s to allow for redundancy/clustering. That would obviously be 450W, whereas 6 Atom nodes would be 300W, and it would be a while before I got to 6 nodes (if ever). I plan to start with 2, and see myself growing to 3-4 compute nodes total.


----------



## cones

Wasn't aware you wanted to grow with the R610s. That Wikipedia quote helped; I just hadn't looked it up myself yet.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Wasn't aware you wanted to grow with the R610s. That Wikipedia quote helped; I just hadn't looked it up myself yet.


No worries. It's easier for me to copy and paste instead of trying to explain it myself, lol.

If my current R610 goes down, my VMs are offline until the R610 is fixed. I don't like that. I consider my home systems to be "production", as they run various things such as my media database (XBMC centralized database), PVR software, Wireless Access Point controller, DNS, soon to be home automation, etc. If my current server goes down, my network is essentially down.


----------



## Simmons572

Quote:


> Originally Posted by *tycoonbob*
> 
> No worries. It's easier for me to copy and paste instead of trying to explain it myself, lol.
> 
> If my current R610 goes down, my VMs are offline until the R610 is fixed. I don't like that. I consider my home systems to be "production", as they run various things such as my media database (XBMC centralized database), PVR software, Wireless Access Point controller, DNS, soon to be home automation, etc. If my current server goes down, my network is essentially down.


This explains a lot then. I can understand the need for redundancy now. I guess I'm just so used to consumer-grade "plug-n-play" equipment that I don't really think about the back end. The only thing I really needed to do when I switched over to pfSense was configure my ports for my Minecraft server...

I guess I will learn more about DNS and the other stuff as I continue to venture into networking.


----------



## cones

I think most of us could learn more about DNS. I'm still trying to figure out how to have an address go to an internal IP. Like above, that makes sense why you want redundancy, since it sounds like you can't do anything when that server is down.

Would you be able to use NUCs to accomplish this? They may have hardware passthrough now.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> I think most of us could learn more about DNS. I'm still trying to figure out how to have an address go to an internal IP. Like above, that makes sense why you want redundancy, since it sounds like you can't do anything when that server is down.
> 
> Would you be able to use NUCs to accomplish this? They may have hardware passthrough now.


I looked at the NUCs, but there's a nice premium on them because they're tiny and mainstream. They aren't rackmountable either, so that's a negative in my book. Oh, and only a single NIC.

The more I think about it, the more the Atom builds sound really appealing. With a name like Atom, part of me wants to think low performance, but that doesn't appear to be the case at all here. These specific Atom models I'm looking at are actually server-grade, and I bet we'll see more of this style of build in datacenters (especially for things like web hosting and public clouds).

To have an address resolve to an internal address, you need your own internal DNS server(s) with an A record created for it. If you configure your computer to talk to your DNS server, it will look there first for name resolution. If the server doesn't find a record, it will then check any configured forwarders (e.g., forwarding to Google's DNS -- 8.8.8.8, 8.8.4.4) or fall back to the root hints. I don't use any forwarders personally, and my setup works great as long as my domain controllers stay up (I have 2 VMs, each running Active Directory Domain Services, Microsoft DNS, and Microsoft DHCP).
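That lookup order can be sketched in a few lines of Python. This is just the decision logic for illustration, not a real resolver, and all the names and addresses here are made up:

```python
# A records on the internal DNS server (hypothetical names/IPs).
LOCAL_RECORDS = {
    "nas.home.lan": "10.0.0.20",
    "dc01.home.lan": "10.0.0.10",
}

def forwarder(name):
    """Stand-in for forwarding the query upstream (e.g. to 8.8.8.8)."""
    return "203.0.113.5"  # pretend public answer (TEST-NET-3 address)

def resolve(name):
    # 1. The client asks its configured internal DNS server first.
    if name in LOCAL_RECORDS:
        return LOCAL_RECORDS[name]
    # 2. No local record: the server hands the query to its forwarder
    #    (or walks the root hints if no forwarder is configured).
    return forwarder(name)

print(resolve("nas.home.lan"))   # answered locally -> 10.0.0.20
print(resolve("example.com"))    # answered upstream -> 203.0.113.5
```

The point being: internal names never leave your network, and everything else falls through to whoever sits upstream.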


----------



## cones

I figured the NUCs would be more of a premium than they're worth. Don't want to distract, but I run pfSense and need to figure out where in the settings to set that up (DNS on my network goes pfSense -> Google).


----------



## TheBloodEagle

I'm really quite new to all this and a lot of it is going over my head, but would this be a great way to create a render farm?


----------



## tycoonbob

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I'm really quite new to all this and a lot of it is going over my head, but would this be a great way to create a render farm?


Honestly, I have no idea. Depending on how you set up a render farm, I guess you could do it in OpenStack, assuming you have multiple virtual nodes that share the same job. That said, I would think it would be better to use the physical hardware as render nodes instead of virtual nodes on top of a physical machine. I could be completely wrong, though.


----------



## tycoonbob

Well, just an update.

I've installed CentOS 6.5 in a VM I've named "CONTROLLER01" and now have Keystone (Identity Service), Glance (Image Service), and Horizon (web front-end) installed for OpenStack. I've also got CentOS 6.5 installed on that DE5100, named "COMPUTE01", and configured Nova (Compute -- the hypervisor side). Took me maybe 30 minutes to get this far using the OpenStack documentation, which wasn't bad.

Oddly, the web front-end sees COMPUTE01 as having 3GB of RAM, instead of the 4GB the computer shows. Not a huge deal at this time, and I'm liking this web front-end so far. Maybe compute nodes reserve 1GB for themselves or something, which is no big deal when I'm considering 32GB compute nodes.

I'll get nova-network configured sometime this weekend, but likely won't play with any of the other modules just yet. There is so much going on that I'd rather focus on the primary modules first, lol. If all goes well, I definitely see myself selling my R610 and buying/building 2-3 of those 8-core Atom C2758 servers. I really think 3 would be the most I'd need (8 cores, 24 or 32GB of RAM each), and they would consume less power than my current R610 while providing more power. (The Atom C2758 is on par with the Xeon L5520 -- better in some benches, not as good in others -- so I figure 3 C2758s should provide more CPU power than 2 L5520s while consuming much less power.)

I've got one test image running CirrOS, just to see what it's like.


----------



## mansbigbrother

What's the status on this?


----------



## tycoonbob

Yeah, it's been a while.

I've decided to move forward with the Avoton Atom-based servers, but money is the limiting factor right now.

I've got one VM built as a controller, and am using that AOpen PC as a KVM node. I've probably reimaged everything 3 times now, just learning the different pieces and the best way to install everything cleanly before I get new hardware.

It'll happen, but will probably be a bit longer...unfortunately.


----------



## The_Rocker

I'm keen to know more about this 8-core Atom you speak of! I need to replace the pfSense box we currently have in the office, and something as low-power as possible would be perfect.

I have thought about using OpenStack in a production environment I look after, as a replacement for VMware. However, it sort of looks like OpenStack is best used with a distributed storage system like Ceph, as opposed to a traditional iSCSI SAN like we currently use. Also, the virtualised networking is an awesome feature, but I'd need to get used to that, since the environment in question would need to be modified to have our public address range presented to the OpenStack servers directly rather than NAT'ed behind a pfSense cluster.

With regards to choosing a hypervisor, I would suggest KVM. I have been trying it out and it seems just as efficient as ESXi, or Hyper-V for that matter.

In fact, you might want to go take a look at Proxmox!


----------



## tycoonbob

Quote:


> Originally Posted by *The_Rocker*
> 
> I'm keen to know more about this 8-core Atom you speak of! I need to replace the pfSense box we currently have in the office, and something as low-power as possible would be perfect.
> 
> I have thought about using OpenStack in a production environment I look after, as a replacement for VMware. However, it sort of looks like OpenStack is best used with a distributed storage system like Ceph, as opposed to a traditional iSCSI SAN like we currently use. Also, the virtualised networking is an awesome feature, but I'd need to get used to that, since the environment in question would need to be modified to have our public address range presented to the OpenStack servers directly rather than NAT'ed behind a pfSense cluster.
> 
> With regards to choosing a hypervisor, I would suggest KVM. I have been trying it out and it seems just as efficient as ESXi, or Hyper-V for that matter.
> 
> In fact, you might want to go take a look at Proxmox!


Thanks for the feedback. If I do decide to move my home network to OpenStack (which I really want to, if I can allocate the funds), I would use either Hyper-V or KVM for my nodes, as both are supported by OpenStack. I know Hyper-V inside and out, and I have licenses for it. I also know ESXi, but I don't want to use the free version and I have no way to get personal licenses. KVM is basically the open-source leader (ignoring Xen...which I hate with a passion).

The Atoms look pretty interesting, and the benchmarks show them being better than the Xeon L5520 in some benches and as good in others (the L5520 is what my current Hyper-V box is based on). From my research, I can have comparable resources while consuming less power...which is great if I want to get 3 or so virtualization nodes (which was always my original plan). 3 of these Avoton Atoms just sound so much better than 3 R610s, at least in a home environment.

Also, you can use a traditional SAN (iSCSI or FC) with OpenStack. OpenStack has 2 storage modules, Cinder and Swift. Cinder is a block-level provider (iSCSI/FC), while Swift is an object storage provider (think S3 rather than NFS). From my understanding, you can use whatever backend storage you want and serve it as block devices (with Cinder) or as objects (with Swift). I plan on using Cinder if I go with Hyper-V, but I may use Swift (or Cinder and Swift) if I choose KVM.
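To make the block-vs-object distinction concrete, here's a toy Python sketch (no OpenStack involved; the `BytesIO` and `dict` are just stand-ins I'm using for illustration). Block storage hands the consumer a raw device it can update in place at any byte offset; object storage only deals in whole objects:

```python
import io

# Block storage (Cinder-style): read/write at arbitrary offsets.
# A BytesIO stands in for an iSCSI LUN here.
lun = io.BytesIO(b"\x00" * 1024)
lun.seek(512)
lun.write(b"filesystem journal entry")   # in-place partial update

# Object storage (Swift-style): the unit of access is the whole
# object; you PUT and GET complete objects by name, no partial
# overwrite. A dict stands in for a Swift container.
container = {}
container["backups/vm01.img"] = b"...entire object, replaced whole..."
blob = container["backups/vm01.img"]
```

That's why VMs boot from Cinder volumes (a filesystem needs in-place writes) while backups and images are a natural fit for Swift.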

I still have plenty of research to do, but I need to get my hands on one of those Avoton servers first (~$700, or so).


----------



## tycoonbob

Thought I'd share this:



So far it seems like SuperMicro is the primary market player in these things, at least for combo motherboards and barebones. I'm trying to decide the best path for me, and it seems that building one from a combo motherboard and an SM chassis will save 10-12% or so. The only problem is that I'm not sure the SM 502L-200B chassis (~$75) will work, as the I/O ports seem off (based on images from Newegg). I've read elsewhere that this chassis should work, so I've sent an email off to SM.

I sold a few tech toys locally today, so I have some money freed up to purchase an Avoton build to play with, without decommissioning my current R610 and its workloads. I have a good lead on the SYS-5018A-MHN4 barebones server (C2758 CPU, 4 x 240-pin DIMM slots, 4 NICs), unless the A1SRM-2758F-O motherboard fits in the 502L-200B chassis, in which case I'll go with that. I'll be spending ~$400-450 for motherboard/CPU/chassis/PSU, and will likely pick up 2 16GB ECC DIMMs from eBay (can be had for ~$125-140 each). Should do me great, and I can't wait to hear back from SM so I can make a purchase.

I plan to start out with Server 2012 R2 and Hyper-V, just to get a feel for the hardware and see what performance is like. From there I will probably move to CentOS 6.5 or 7, and build an OpenStack node with local storage (the 512GB SSD that's in my R610), using that AOpen miniPC as the controller.

I should be able to sell my R610 locally for $750-800, which will recoup my money from this purchase, plus $50-100.

Can't wait!


----------



## tycoonbob

And just when I think I'm on to something, I'm having a hard time justifying the additional cost of the Avoton-based builds.

A1SRM-2758F-O motherboard/CPU combo --- $337.79
Chassis --- $100.00
32GB RAM --- $250.00
*Total* --- $687.79

Dell R610 (dual L5520, 24GB RAM) --- $330.00

Cost difference --- $357.79

I could buy 2 more R610s for less than the cost of 1 of these Avoton builds!

So the whole point of looking at these Avoton builds is electricity savings, so I decided to actually calculate it out. Where I live, I pay 7.952 cents per kWh (kilowatt-hour).
My current R610 pulls about 110W (according to the front LCD panel of the server).
110 X 24 / 1000 = 2.64kWh = 20.99328 cents per day

Electricity cost to run 1 Dell R610 in my current configuration:
$0.2099328 / day
$6.297984 / month

I don't have any definitive facts on the Avoton C2758 power draw, but seeing how it's a 20W TDP CPU with a 200W PSU, I will guess 40W power draw.
40 x 24 / 1000 = 0.96kWh = 7.63392 cents per day

Electricity cost to run 1 Avoton C2758 build:
$0.0763392 / day
$2.290176 / month

So in one month, I would save ~$4.008 by using the Avoton build versus my R610. At a cost difference of $357.79 (Avoton build vs R610), it would take 89 months (or 7.5 years) before I could say that I've saved money.

(Aiming extremely low) If the Avoton C2758 build only consumed 20W:
20 x 24 / 1000 = 0.48kWh = 3.81696 cents per day

Electricity cost to run 1 Avoton C2758 build at 20W:
$0.0381696 / day
$1.145088 / month

In one month I'd save $5.152896 versus my R610. It would take 69 months (or 5.75 years) before I could say that I've saved money by buying the Avoton build.

Now these numbers only apply if I were to run 1 of the debated servers. Because all of the costs scale the same way, the savings would be larger with more servers, but so would the initial investment to pay off. For example...

Let's say I run 3 of each (either 3 R610s, or 3 Avoton builds):

R610
110 X 24 / 1000 = 2.64kWh = 20.99328 cents per day per server, x 3 = 62.97984 cents per day for 3 servers
$0.6297984 / day
$18.893952 / month

C2758 @ 40W
40 x 24 / 1000 = 0.96kWh = 7.63392 cents per day per server, x3 = 22.90176 cents per day for 3 servers
$0.2290176 / day
$6.870528 / month

In one month I'd save $12.023424 using 3 Avoton C2758 builds versus using 3 R610s. $357.79 is the cost difference per Avoton server versus the R610, so that's $1,073.37 more invested for all 3. At a savings of $12.023424/month, that would still take 89 months (7.5 years) to break even.

I suspect I will replace this hardware before 7 years are up, so the cost savings just aren't there for me anymore with the Avoton builds. If I could get a C2758 with 24GB of RAM for $300, I'd definitely go for it. But at the current costs, no thanks.
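The break-even arithmetic above can be boiled down to a small Python sketch, using the numbers from this post ($0.07952/kWh, 24/7 duty cycle, 30-day months):

```python
RATE = 0.07952  # $ per kWh (my local rate)

def monthly_cost(watts, rate=RATE):
    """Electricity cost for a box drawing `watts` continuously for 30 days."""
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * rate * 30

def breakeven_months(price_premium, old_watts, new_watts):
    """Months until the power savings pay off the extra purchase cost."""
    monthly_saving = monthly_cost(old_watts) - monthly_cost(new_watts)
    return price_premium / monthly_saving

print(round(monthly_cost(110), 2))               # R610 at 110W  -> 6.3
print(round(monthly_cost(40), 2))                # Avoton at 40W -> 2.29
print(round(breakeven_months(357.79, 110, 40)))  # -> 89 months
```

Note the node count cancels out: N Avotons vs N R610s has the same break-even as 1 vs 1, which is why both scenarios above land on 89 months.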

On another note, I'm currently in negotiations to purchase 2 Dell R610s (dual Xeon L5520s, 24GB RAM, SAS 6ir, etc) just like the one I already have. I'm hoping I can get them both shipped for ~$650. If I can, I will be selling one to a friend locally, but the other will likely be used as an OpenStack Compute node so I can really start some testing! I'm excited that things are possibly finally moving on this project.


----------



## tycoonbob

Another update...

Just purchased 2 Dell R610s (dual Xeon L5520, 24GB RAM) for $684 total ($315 + $27.04 shipping, each). I've already sold one of the two locally for $750, so I'm coming out with a free R610 and ~$65!

Hopefully I'll get these new ones in later this week, and I'll set one of them up as a KVM host for OpenStack! I'll also use some of that $750 to purchase a 60GB SSD for the OS, and another 512GB SSD (Crucial MX100) for VMs.


----------



## vpex

Quote:


> Originally Posted by *tycoonbob*
> 
> Hopefully I'll get these new ones in later this week, and I'll set one of them as a KVM host for OpenStack! I'll also be using some of that $750 to purchase a 60GB SSD for the OS, and another 512GB SSD (Crucial MX100) for VM's.


From what I can tell, the 60GB capacity point has been largely abandoned by drive manufacturers lately. The most notable recent 60GB drive is the Kingston V300, which is interesting for all the wrong reasons. Once you get past the bait-and-switch they pulled, it would be more than enough for your usage.

For larger-capacity SSDs, I don't know what your time frame is, but it could be worth seeing how the ADATA SP610 pans out. Right now it's up for pre-order, and I'm hoping the pricing will drop post-release; otherwise it's competing with the 840 EVO, which it will lose to. It's a much more suitable comparison to the MX100, and it's currently ~$30 more (512GB). The drive is marginally faster than the MX100 apart from writes (limited by the controller only having 4 channels). Still, the MX100 is a very nice drive.


----------



## tycoonbob

Quote:


> Originally Posted by *vpex*
> 
> From what I can tell, the 60GB capacity point has been largely abandoned by drive manufacturers lately. The most notable recent 60GB drive is the Kingston V300, which is interesting for all the wrong reasons. Once you get past the bait-and-switch they pulled, it would be more than enough for your usage.
> 
> For larger-capacity SSDs, I don't know what your time frame is, but it could be worth seeing how the ADATA SP610 pans out. Right now it's up for pre-order, and I'm hoping the pricing will drop post-release; otherwise it's competing with the 840 EVO, which it will lose to. It's a much more suitable comparison to the MX100, and it's currently ~$30 more (512GB). The drive is marginally faster than the MX100 apart from writes (limited by the controller only having 4 channels). Still, the MX100 is a very nice drive.


Thanks for your input, but I will pass. All the servers in my home use a 60GB Mushkin Enhanced Chronos SSD, which may not be the fastest, but they have all been rock stable. PCs in my home use 120GB Mushkin Enhanced Chronos SSDs for the same reason. I do see that the 120GB version is down to around $65 while the 60GB is over $100...so I guess I'll use a 120GB as the OS drive.

As far as the Crucial MX100 vs. the Samsung 840 EVO: Crucial is cheaper, and also rock solid. I already have one that I use in my R610, running 18 VMs on it without a hitch. It gives me all the speed I need, and honestly I couldn't care less about the sequential read and write speeds of these SSDs; I only care about the IOPS. Spec-wise, the MX100 has about the same random IOPS as the 840 EVO, so any small gain there is negligible for the money. They both have a 3-year warranty as well.
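For anyone unclear on why random IOPS matter more than sequential MB/s for a VM datastore: IOPS is just small scattered reads (or writes) completed per second. fio is the proper tool for measuring this, but here's a rough, hypothetical Python sketch of the idea (cache-warm numbers like this wildly overstate what a real drive sustains, so treat it as an illustration only):

```python
import os
import random
import tempfile
import time

def rough_random_read_iops(path, block=4096, reads=2000):
    """Very rough random-read estimate: time `reads` 4KiB reads at
    random offsets and report reads/second. Illustrative only; it
    does not bypass the OS page cache the way a real benchmark must."""
    size = os.path.getsize(path)
    offsets = [random.randrange(0, size - block) for _ in range(reads)]
    with open(path, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(block)
        elapsed = time.perf_counter() - start
    return reads / elapsed

# Usage sketch against a scratch file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))
print(f"{rough_random_read_iops(tmp.name):.0f} reads/sec (cache-warm)")
os.remove(tmp.name)
```

A pile of VMs all doing their own little reads and writes looks exactly like this access pattern, which is why two drives with identical sequential specs can feel completely different as a datastore.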

I guess I should also mention that I don't care for Samsung as a company. I have my reasons, and I respect them as a great company with great products, but I like the MX100 better.


----------



## vpex

No problem, I'm happy to help.

I agree 100% with all the points you made. I also agree with choosing the MX100 over the 840 EVO; why pay more when you don't need to? If you don't like Samsung, that's fine as well.

I just fear my post wasn't as clear as I intended. I wasn't advocating the EVO over the MX100; I was saying that looking into the ADATA SP610 might be worthwhile.


----------



## tycoonbob

Couple packages arrived today. Must be my 2 Dell R610's that I ordered.


So yeah. One of these new R610s I'm selling, but the other has already been installed with CentOS 6.5 and is ready to become an OpenStack Compute node! These 2 R610s each came with 2 146GB 10K SAS drives, so I have 2 of them in RAID0 for my Compute node until I order another 512GB Crucial MX100. This is all for testing, anyway.

I've also just reinstalled CentOS on my AOpen miniPC, which will become my Controller node.

Oh, and these two R610s came with iDRAC6 Enterprise cards, which was totally unexpected! Sure, they only sell for $30 or so on eBay, but it was a nice surprise, since my original R610 doesn't have one -- though it will soon enough. I should be selling that third R610 later this week/weekend, and then I'll purchase that Crucial MX100, as well as a boot SSD for my new R610. I may purchase another R610 for reselling as well; undecided.

I probably won't get to install OpenStack modules until tomorrow, and hope to share some updates tomorrow too!


----------



## tycoonbob

I've spent about an hour so far today configuring 1 Compute node and my Controller node. Everything is up and working with these two servers, and I have successfully launched a few instances based on CirrOS.

Long story short, OpenStack can be difficult to install. It seems to run pretty well with my current setup, but I'm just not feeling it. I'm going to spend a couple of days playing with it, but I'm also going to check out OpenNebula, which seems to have a more polished web UI, more options, and just SEEMS more complete (whether or not it is, I don't know yet). OpenStack is a great product for creating a cloud-based solution, but I know I won't utilize all of those features. I basically just want something that lets me manage VMs and create VMs from templates, from a web UI.

Anyway, here are some screenshots of my OpenStack setup in its current state. I have two users: admin (which is an admin), and dhorn (which is a standard user).

Flavors (VM hardware templates I created):


Images (Where available images show -- I've only added CirrOS so far):


Launching 10 instances as admin:




Log of one of those instances:


VNC Console session of one of those instances:


Overview of instances as admin:


Launching 3 (screenshot shows 5, I only launched 3) instances as user:


Overview of instances as user:


System admin overview:


Hypervisor overview:


Anyone have any questions, or anything specific you want to see? I'll be playing around with it more later, and hopefully adding a CentOS 6.5 image and a Windows 7 image.


----------



## cones

Quote:


> Originally Posted by *tycoonbob*
> 
> ...I basically just want something that I can manage my VM's and create VM's from templates, from a web ui...


Have you heard of WebVirtMgr? I haven't tried it myself yet, but I came across it recently.


----------



## parityboy

*@tycoonbob*

That's actually a question I was going to ask: OpenStack, like Eucalyptus, is a _cloud_ solution i.e. automatically spinning up extra compute when certain conditions are met. For simply managing a few VMs, wouldn't something like ConVirt be a better solution, especially considering that you're using KVM?

I've actually played with ConVirt myself briefly and it seems OK. CloudMin is also an option, but again it seems aimed towards being used by people other than the actual sys admins, i.e. the general public or perhaps other departments in a large organisation.
Quote:


> Cloudmin is designed for use by VPS hosting companies that sell virtual systems to their customers, but is also suited for anyone who wants to get into virtualization for application management, testing, controlling a cluster of Virtualmin hosts, or just to learn about cloud computing.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Have you heard of WebVirtMgr? I haven't tried it myself yet, but I came across it recently.


I had not heard of that before, but it looks quite interesting. I'll definitely keep it in mind; thanks!
Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> That's actually a question I was going to ask: OpenStack, like Eucalyptus, is a _cloud_ solution i.e. automatically spinning up extra compute when certain conditions are met. For simply managing a few VMs, wouldn't something like ConVirt be a better solution, especially considering that you're using KVM?
> 
> I've actually played with ConVirt myself briefly and it seems OK. CloudMin is also an option, but again it seems aimed towards being used by people other than the actual sys admins, i.e. the general public or perhaps other departments in a large organisation.


I have used CloudMin before, but I wasn't crazy about it. I do use Webmin on a few of my VM's, too.

I have no heard of ConVirt, but I'll take a closer look at it. I'm definitely liking the looks of WebVirtMgr, but this is assuming I make the switch to KVM. With OpenStack, I could use Hyper-V, KVM, Xen, or ESXi...but would likely had made the switch to KVM. There are other things that I would utilize, but at the core, I'm mainly interested in a web based GUI to manage VM's and create/deploy VM's based on templates.
With OpenStack I was thinking of adding some local storage to my compute nodes (like two 512GB SSDs in each, JBOD) and using Ceph to create distributed storage to deliver with OpenStack Cinder.

OpenNebula published this quadrant:


(which I don't take too seriously), but it puts CloudStack and OpenNebula more toward vCloud and datacenter virtualization, which is what I'm more interested in, versus something AWS-esque (aka infrastructure provisioning -- IaaS). That image warrants a closer look into OpenNebula for me, which I will be doing. The features I would actually use in OpenNebula will help me decide if it's worth the trouble versus setting up a KVM cluster and using something like WebVirtMgr.

Clustered hypervisors with block-level storage is what I want underneath, with some sort of web-based overlay for management/provisioning. I'm also curious how well Windows-based OSes work on KVM (Server 2012R2 and Windows 8.1 primarily). I'm mostly using CentOS for my VMs now, but do still have a few Windows servers I need (AD DS/DNS/DHCP, etc).


----------



## cones

I am using Windows Server 2012R2 on KVM; I believe the specs are current in my sig for the machine. It works fine for me, though I'm not using it on the best hardware. My only issue is that it is Windows and it likes RAM, which I do not have much of on there. Since I gave it ~2.5GB of RAM, that is just always gone from my system when that VM is running; not sure if it's an issue with KVM or what. If you end up trying WebVirtMgr, post what you think of it, since I might not try it myself for a while.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> I am using Windows Server 2012R2 on KVM; I believe the specs are current in my sig for the machine. It works fine for me, though I'm not using it on the best hardware. My only issue is that it is Windows and it likes RAM, which I do not have much of on there. Since I gave it ~2.5GB of RAM, that is just always gone from my system when that VM is running; not sure if it's an issue with KVM or what. If you end up trying WebVirtMgr, post what you think of it, since I might not try it myself for a while.


My Server 2012R2 VMs currently live on Hyper-V, and each has 1GB of RAM, which works great. If I can get the same performance on KVM, I'll be just fine.

I should be able to try WebVirtMgr pretty soon, actually. Since I am using KVM on my OpenStack Compute node, I can install WebVirtMgr in a VM and connect to that node just to see what it's like. I'll try that out this evening or tomorrow.


----------



## cones

Mine idles right around 1.5GB; I blame MediaBrowser for that, though.


----------



## mansbigbrother

What version of CentOS are you using? I was playing around with the minimal install ISO last night, but the lack of networking was getting to be a pain.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Mine idles right around 1.5GB; I blame MediaBrowser for that, though.


Yeah, that would explain it.
I have a PowerShell script I use to remotely monitor RAM usage of my Windows-based VMs, which shows actual RAM usage. DC01 and DC02 are my Server 2012 R2 domain controllers, each running AD DS, DNS, and DHCP. Each has 2 vCPUs assigned, along with 1024MB of RAM. DC01 is using 396-456MB of RAM, and DC02 is using 381-439MB of RAM. I could easily drop these VMs to 512MB, but I would run into slowness when I had to log on or something. I can confirm that Server 2012 R2 can definitely run on 512MB of RAM, assuming it's only running basic infrastructure services.

I also have a torrent VM, which runs Server 2012 R2 with 1024MB of RAM assigned. It runs uTorrent 2.1, and is using ~637MB of RAM.

RDS01 (RD Gateway, DFS-N, and FTP/IIS) also has 2 vCPU and 1024MB of RAM assigned, and is only consuming 526MB.

All 4 of these VMs have been powered on for about 30 days, and no users are logged in. They are in their normal running state.

Quote:


> Originally Posted by *mansbigbrother*
> 
> What version of CentOS are you using? I was playing around with the minimal install ISO last night, but the lack of networking was getting to be a pain.


CentOS 6.5 netinstall is what I use for what I consider production use. I'm still not quite comfortable with CentOS 7 (firewalld, systemd, etc). I actually have not had any networking problems at all. I do not use NetworkManager in any way, and only use the network service. I also only use static IPs configured in /etc/sysconfig/network-scripts/ifcfg-eth0 (assuming my NIC is eth0). I set my hostname and gateway settings in /etc/sysconfig/network, also.
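For anyone fighting the minimal-install networking pain, a static setup of that kind looks roughly like this (the addresses and hostname here are placeholders, not my actual values):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example addresses)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.50
NETMASK=255.255.255.0
DNS1=192.168.1.10
NM_CONTROLLED=no

# /etc/sysconfig/network (example hostname/gateway)
NETWORKING=yes
HOSTNAME=vm01.example.lan
GATEWAY=192.168.1.1
```

Then a `service network restart` brings it up without NetworkManager involved.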


----------



## tycoonbob

So I reinstalled CentOS 6.5 on that R610, and set up WebVirtMgr (well, WebVirtMgr is in a VM, and the R610 is just a KVM host).

WebVirtMgr seems pretty basic, but the UI is clean. It is kinda lacking, actually, in that you can't update/edit items; you have to delete and recreate them. Just little things like that. I haven't loaded any instance with it yet, but will get to play with it a lot more tomorrow.

I also came across Archipel, which I hope to test tomorrow. The UI looks much more advanced, and it uses an XMPP server on your KVM host to communicate with KVM. Interesting, for sure. Archipel uses libvirt to manage things, so it can use any hypervisor supported under libvirt (KVM, Xen, ESX, Hyper-V, OpenVZ, and a few others), and gives me all the storage options I need. I really think Archipel is going to be 99% of what I'm looking for! Can't wait for the lady to go to work tomorrow, since I'm off! (don't tell her I said that though..)


----------



## moto211

Tycoonbob,

Have any of the solutions you've tried had the ability to automatically power on or power down physical hosts via ipmi/drac/ilo when more or less compute resources are needed?


----------



## parityboy

*@tycoonbob*

...and this is why sharing works.

Never heard of Archipel, but I will _certainly_ be giving it a go this weekend.

I think a combination of Archipel and Ansible will do most of what I need on my test machine.


----------



## tycoonbob

Quote:


> Originally Posted by *moto211*
> 
> Tycoonbob,
> 
> Have any of the solutions you've tried had the ability to automatically power on or power down physical hosts via ipmi/drac/ilo when more or less compute resources are needed?


I have not come across anything like this in my specific testing, but I would think a feature like this would be more along the lines of OpenStack, CloudStack, OpenNebula, etc., instead of WebVirtMgr or Archipel.
Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> ...and this is why sharing works.
> 
> Never heard of Archipel, but I will _certainly_ be giving it a go this weekend.
> 
> I think a combination of Archipel and Ansible will do most of what I need on my test machine.


I said that same thing to myself last night, as I'm starting to use Ansible more and more.

OpenStack is complicated to set up, and is definitely more than I need, but Archipel looks very promising. WebVirtMgr looks like a great project and all, but I don't think it's quite up to the level of what I'm looking for. I should be giving Archipel a go today myself, and now I just need to figure out if I want to set it all up with Hyper-V, or look at making the switch to KVM. Decisions, decisions...


----------



## cones

You know, you might as well just try everything, since you seem to have already tried a lot at this point.


----------



## tycoonbob

Yeah, not sure I have all the free time in the world just to try everything, haha.

I've got the Archipel client and ejabberd installed on a box, and am testing out various KVM-based hypervisor hosts. The Archipel project has something they call ANSOS, which is basically oVirt in a live distro preconfigured to talk to Archipel. You provide an SMB share with a config file, and use custom kernel boot parameters to point to that file at boot. I had a little trouble getting the network set up, and now I have a problem with that Samba share mounting on boot. I believe I am going to ditch ANSOS for now, and just install the Archipel agent on a CentOS-based KVM host.

As of right now I can say that the web interface is a little more feature-packed than WebVirtMgr, but until I can connect it to a KVM host, I can't really test it out and comment.


----------



## moto211

So, some research has revealed that vSphere Enterprise with Distributed Resource Scheduling and Dynamic Power Management has the ability that I was asking about earlier (it can shift guests between hosts and power down/up physical hosts on the fly as needed). Anyone know if there's a free open source alternative to this? Or even a cheaper alternative that's more friendly to a home lab budget? The 4xR610's or C6100 (still haven't decided which) is going to set me back hard enough without spending thousands on VMware.


----------



## tycoonbob

Quote:


> Originally Posted by *moto211*
> 
> So, some research has revealed that vSphere Enterprise with Distributed Resource Scheduling and Dynamic Power Management has the ability that I was asking about earlier (it can shift guests between hosts and power down/up physical hosts on the fly as needed). Anyone know if there's a free open source alternative to this? Or even a cheaper alternative that's more friendly to a home lab budget? The 4xR610's or C6100 (still haven't decided which) is going to set me back hard enough without spending thousands on VMware.


I'm not aware of any alternatives for what you want; sorry.

So I have moved on from Archipel (sorry for no screenshots), and am trying Proxmox (no idea why I have never tested this before). Proxmox looks like it will give me what I need out of the box, and I should even be able to use Ceph for distributed block-level storage. Thus, one 512GB SSD per physical server, with Ceph pooling them into clustered storage, giving me HA VMs.

I haven't used the UI for Proxmox yet, but should get to try it out this evening.


----------



## tycoonbob

Just to give an update, I am still playing with Proxmox and getting familiar with it. I've been having some weird trouble getting network bonding working correctly, with either Linux bonding or OVS bonding (would prefer to use OVS bonding). Long story short, I have this network config on Proxmox:


And I only have a network cable plugged into eth0, yet I am able to ping/load the web UI on 172.16.1.202, as well as 172.16.1.222. I should not be getting a response on .202, and when I plug in either eth2 or eth3, I lose network connectivity completely. I have a thread over on the Proxmox forum asking for assistance, but at least with 1 NIC I can still play with creating VMs and whatnot, on local storage.
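For reference, a plain Linux-bond bridge of the kind Proxmox generates would look something like this in /etc/network/interfaces (the interface names and addresses here are just an illustrative sketch, not my exact config):

```
# /etc/network/interfaces -- hypothetical Linux bond + bridge sketch
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 172.16.1.202
    netmask 255.255.255.0
    gateway 172.16.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

One possible explanation for the weirdness: if two bridges sit on the same subnet, the host may answer ARP for both addresses over the one live link, which might be why both IPs respond with only eth0 plugged in.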

Proxmox looks promising to me, though!


----------



## parityboy

*@tycoonbob*

What's OVS?


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> What's OVS?


Sorry, it's OpenVSwitch. Looks like a promising project, which has been integrated with Proxmox.


----------



## The_Rocker

Quote:


> Originally Posted by *tycoonbob*
> 
> Sorry, it's OpenVSwitch. Looks like a promising project, which has been integrated with Proxmox.


I jumped on the new build of Proxmox as soon as I heard about OpenVSwitch support. However, I quickly found out that it is very 'early days' support, and has basically been bolted on underneath Proxmox as opposed to being built into the Proxmox UI, etc...

That's the one thing that lets Proxmox down (although for those of us with prior Linux experience it isn't difficult to understand): its networking setup in a multi-VLAN environment starts to get really messy, with loads of bonds, bridges, and interfaces gluing it all together.

Compare it to the likes of networking in vSphere which is a relaxing holiday in the sun lol... :roll eyes:

EDIT: I have used proxmox for a while before OVS was included.


----------



## parityboy

*@tycoonbob*

So OpenVSwitch is like a managed switch but entirely in software? The fact that it supports QoS tells me that it behaves like a Layer 3 managed switch. Would that be correct?


----------



## vpex

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> So OpenVSwitch is like a managed switch but entirely in software? The fact that it supports QoS tells me that it behaves like a Layer 3 managed switch. Would that be correct?


It's a multilayer virtual switch, similar to something like the Cisco Nexus 1000V.


----------



## tycoonbob

Yeah, what they said.

So I finally created my first VMs with Proxmox, and overall I am happy. Sure, it took me a little longer than it would in Hyper-V, but I got a CentOS 6.5 and a Server 2012R2 VM built. Using virtio for my storage controller and NIC, and host for my CPU type, performance seems to be good. I'm not going to do specific numbers comparing to my Hyper-V server, since my Hyper-V server is running with SSDs and this Proxmox box only has 2 10K SAS drives in RAID0. Disk and network performance seems good overall, though.

I've also converted my CentOS 6.5 VM into a template, and created 5 full clones of that (at the same time). Going to see how well that process works for me.


----------



## finish06

Quote:


> Originally Posted by *tycoonbob*
> 
> Yeah, what they said.
> 
> So I finally created my first VM's with Proxmox, and overall I am happy. Sure, it took me a little longer than it would take if I was doing it in Hyper-V, but I got a CentOS 6.5 and a Server 2012R2 VM built. Using virtio for my storage controller and NIC, and host for my CPU, performance seems to be good. I'm not going to do specific numbers comparing to my Hyper-V server, since my Hyper-V server is running with SSDs and this Proxmox box only has 2 10K SAS drives in RAID0. Disk and network performance seems good, overall, though.
> 
> I've also converted my CentOS 6.5 VM into a template, and created 5 full clones of that (at the same time). Going to see how well that process works for me.


Can you update us with some pics? Also, did you ever try openNebula?


----------



## tycoonbob

Quote:


> Originally Posted by *finish06*
> 
> Can you update us with some pics? Also, did you ever try openNebula?


I have not tried OpenNebula, and probably won't. Proxmox seems very fitting for me, minus the network issue I'm having.

With Proxmox I've been able to easily and quickly create VM templates and deploy new VMs in less than a minute. I haven't used anything but local storage, but the idea of using Ceph clustered storage from 3 Proxmox nodes sounds very appealing. I also really like using SPICE as the console viewer, which I find better than VNC (automatic resizing is the main reason).

I'm heading out of town this afternoon and will be gone through Monday, so I probably won't be posting any updates until Tuesday at the earliest. I've been trying to wrap things up at work prior to this little getaway (to a very rural area of eastern Kentucky, aka home), and likely won't be online much this weekend. I'll be sharing plenty of photos next week though; promise.


----------



## tycoonbob

So I finally got all 3 of my R610's set up, 2 are running in a freshly built Proxmox cluster, and the other is still my Hyper-V host.

I've spent the past week playing with Proxmox and I think I finally have things figured out, and I really like it. I just rebuilt everything and set up a 2-node cluster with an NFS share coming from my storage box (running Windows, and I'll say creating an NFS share from Windows Server 2012 was super easy, and works well). Eventually I'll either create a RAID 10 SSD array and use NFS or iSCSI for VM storage, or use local SSDs in each node and use Ceph for VM storage. I will be adding the third R610 into my cluster once all my VMs are migrated, or rebuilt on Proxmox.

Rebuilding the cluster and creating a CentOS 6.5 template was my highlight for tonight, but I should make some real progress on this tomorrow, and hopefully will be sharing plenty of screenshots. In the meantime...here are some hardware pics...(*WARNING -- dusty and not very good cable management; won't likely change until I move):


----------



## cones

Curious what the little box on the right of the battery backup is? Also, I know someone who killed a computer from the static of walking on the carpet, along with the computer being on the carpet.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Curious what the little box on the right of the battery backup is? Also, I know someone who killed a computer from the static of walking on the carpet, along with the computer being on the carpet.


The box on the right is a PDU in a 1U rack-mount bracket. The bracket holds 2 of those PDUs (each with 7 plugs), and I actually have 2 of the PDUs. One of the PDUs isn't in the bracket, and is used to power my two monitors.




Unless you are talking about that PC, which is a HP 8200 Elite USFF, that hasn't been used in 12+ months.


----------



## cones

Didn't notice those things, actually. Yes, I was talking about the HP; wasn't sure what was going on with it from that side, although my first thought was a random PC.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Didn't notice those things, actually. Yes, I was talking about the HP; wasn't sure what was going on with it from that side, although my first thought was a random PC.


Yeah, it's a random PC. Like a Core i3, 4GB RAM, and a 120GB SSD. I just have no need for it, or else I'd use it for something. I'm actually looking to make my primary workstations virtual machines, i.e. a Win8.1 VM and a Fedora 20 VM, and use that HP 8200 or my DE5100 as a Linux-based thin client with a SPICE client, if I could find some sort of thin client OS that gives me that out of the box. Don't really feel like rolling my own.

Also, I was concerned about static since my servers are on carpet, but it's been like this for 2.5 years. The bottom device is actually just a 2U chassis with nothing in it, so I figure that might isolate things a little better.


----------



## tycoonbob

So here are some random screenshots of my current Proxmox setup. So far I have 4 templates (CentOS 6.5, CentOS 7, Win8.1 Pro, and Server 2012R2), along with 2 built VMs, and 1 VM that has been cloned over and hasn't been configured yet. You can see my NFS share, which is on my storage box (on a ~9TB RAID 6 array), and which I haven't set any quotas on yet.

If you have any questions about these screenshots, let me know!


----------



## finish06

Proxmox is very nice! For anyone looking for a full-featured solution that doesn't have the limitations of ESXi or Xen, I would recommend Proxmox. I have also recently started using it after I saw your endorsements, TycoonBob!


----------



## mansbigbrother

This is my favorite thread on OCN right now. SO MANY THINGS TO TRY.


----------



## tycoonbob

Quote:


> Originally Posted by *mansbigbrother*
> 
> This is my favorite thread on OCN right now. SO MANY THINGS TO TRY.


Thanks man; I really appreciate hearing comments like this!

So I stayed up way too late tonight, BUT...I got all of my Hyper-V servers converted and migrated over to my 2-node Proxmox cluster. Long story short, now that my Hyper-V box is free, it will become my third Proxmox node, for a 3-node cluster. I am using an NFS share from my storage box until I can get a couple more 512GB SSDs to make a Ceph array for VM storage. Currently, DC01 is stored locally on one node, while DC02 is stored locally on the second node (these each run AD DS, DNS, and DHCP -- mission critical, and I can't lose them if the NFS share goes down [i.e., storage box fails/reboots], so storing them locally is working great).

Here are two new screenshots: one showing all of my VMs and nodes, and the other showing one node's specs and average resource utilization over the past hour. I'll provide more like this second screenshot over the coming week, showing hourly, daily, and weekly stats, and hopefully get that third node added in the next day or two!

Enjoy!!


----------



## tompsonn

I was looking at moving my Hyper-V over to Proxmox but you just can't beat Veeam for backup...


----------



## cones

How many different VMs do you run at a time? I see you have quite a few there.


----------



## tycoonbob

Quote:


> Originally Posted by *tompsonn*
> 
> I was looking at moving my Hyper-V over to Proxmox but you just can't beat Veeam for backup...


I've never used the free version of Veeam on Hyper-V, but I used the paid version about 1.5 years ago (I'm sure things have changed). IIRC, the free Veeam lets you back up your Hyper-V VMs online, but you have to do each one manually? I used a PowerShell script to back up my VMs online, but it only ran once a week for each VM. Honestly, I didn't care too much about backing up all my VMs.

Proxmox has integrated backups, which (on paper) is really nice. From one web UI, I can manage all my VMs, nodes, Ceph storage, and backups -- sounds great. Whether or not it is actually great, I don't know yet. Until I can afford 2 more 512GB MX100 SSDs, I can't build my Ceph array. I'm sure I'll be looking into the backups sometime this week, but it's not a priority right now (this is just a home environment, and I still have all my VMs available in Hyper-V, just powered off). Once I'm more familiar with the built-in Proxmox backups, I'm sure I'll share my opinion!

In case you haven't read anything about Proxmox's backup, take a gander at this:
https://pve.proxmox.com/wiki/Backup_and_Restore

Looks to be pretty good, on paper.

Quote:


> Originally Posted by *cones*
> 
> How many different VMs do you run at a time? I see you have quite a few there.


I basically run all my VMs 24/7. Right now the only things that are powered off are my 4 templates (which technically can't be powered on) and one VM (media-new). media-new is going to replace my media VM, once I get around to it. My current media VM runs CentOS 6.5, with MadSonic v5.0b3880, along with the latest version of bliss. media-new (once built, it will just be named media) will run CentOS 7, Madsonic v5.1 (once 5.1 has a stable release -- thought to be in the next few weeks), and the latest version of bliss. I'm working to rebuild all my CentOS 6.x VMs with CentOS 7.

So right now I have 14 VMs, of which 13 are currently running. Prior to moving them from Hyper-V last night, 11 of those VMs had an uptime (on Hyper-V) of over 80 days. Not too crazy, but still pretty good for a homelab. I'm sure I will build plenty more, in time.
Right now I only have those two nodes running, and they are at about 30% capacity (using ~7GB out of 24GB RAM), so I have plenty of room to grow. Once I add the third node and move some VMs to it, I suspect each node will only be ~20% utilized. I plan to run each node to a max of around 66% utilized (I base my utilization solely on RAM -- I know I'm not overcommitting these CPUs, and in the next few months I will be running a Ceph array with 512GB SSDs, so I will have plenty of IOPS). Running each at a max of 66% utilization, I will retain n+1 failover (HA is enabled on all VMs, except for my DCs, which are on local storage for now).

With 3 nodes, I suspect I can run at least 50 VMs without compromising my n+1, maybe more!
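That 66% ceiling is just the n+1 arithmetic: with N nodes, one node's load has to fit on the N-1 survivors, so each node can safely carry at most (N-1)/N of its capacity. A quick sketch (the node count is just an example):

```shell
# n+1 headroom: with N nodes, a single failure spreads one node's load
# over the remaining N-1, so safe per-node utilization is (N-1)/N.
nodes=3
awk -v n="$nodes" 'BEGIN { printf "max safe per-node utilization: %.0f%%\n", (n - 1) / n * 100 }'
```

For 3 nodes that works out to ~67%, which is where the "max of around 66%" figure comes from; with 4 nodes you could safely run each up to 75%.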


----------



## tycoonbob

So out of curiosity, I just tested out the backup feature of Proxmox. I really like that it's built into the web UI, which is quite convenient, and it's very easy to schedule a backup task. I set a task to do a gzip snapshot backup, and it worked well, creating the backup in 7 minutes, 47 seconds. The VM I backed up was a CentOS VM running Nginx with an internal website (my intranet VM), which has a 10GB drive. The backup was a full backup with config files, and consumed 1.34GB of space, with no downtime. Pretty good.

I think what I'm going to do, instead of snapshot backups, is have Proxmox power down VMs and back them up stopped. There is no reason I can't shut down each VM in the middle of the night (between 2-5AM) once a week, for a good clean backup. It also will give my VMs a chance to reboot, which is always a good thing. I will have to do some testing, but I'll provide more info on this as I get more familiar with it.
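For the curious, the scheduled task is just vzdump under the hood; a weekly stop-mode job along these lines would look something like this (the VM ID and storage name are made up for illustration):

```
# one-off from the CLI: stop the guest, back it up, restart it
vzdump 101 --mode stop --compress gzip --storage backups

# the web UI writes scheduled jobs to /etc/pve/vzdump.cron, e.g.
# m  h dow   command
  0  3 sun   vzdump 101 --quiet 1 --mode stop --compress gzip --storage backups
```

Swapping --mode stop for --mode snapshot is what gives the zero-downtime behavior from the test above.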


----------



## tompsonn

Quote:


> Originally Posted by *tycoonbob*
> 
> I've never used the free version of Veeam on Hyper-V, but have used the paid version about 1.5 years ago (I'm sure things have changed). IIRC, the free Veeam lets you back up your Hyper-V VMs online, but you have to do each one manually? I used a PowerShell script to backup my VMs online, but it only ran once a week for each VM. I, honestly, didn't care too much for backing up all my VMs.
> 
> Proxmox has integrated backups, which (on paper) is really nice. From one WebUI, I can manage all my VMs, nodes, Ceph storage, and backups -- sounds great. Whether or not it is actually great, I don't know yet. Until I can afford 2 more 512GB MX100 SSDs, I can't build my Ceph array. I'm sure I'll be looking into the backups sometime this week, but it's not a priority right now (this is just a home environment, and I still have all my VM's available in Hyper-V, just powered off. Once I'm more familiar with the builtin Proxmox backups, I'm sure I'll share my opinion!
> 
> In case you haven't read anything about Proxmox's backup, take a gander at this:
> https://pve.proxmox.com/wiki/Backup_and_Restore
> 
> Looks to be pretty good, on paper.


I dunno, never used the free version of Veeam...

Yeah, I read into the Proxmox backup stuff, but unless it supports things like VSS, it's a no-go for me.

If it had that and was able to do file level restore (and item level restore like Veeam can for AD, SQL Server and MS Exchange) I would be on it in a heartbeat.

But we all need something different, I guess... In any case, Proxmox still looks very, very good indeed.

I could see me using it for non-critical things like small servers at sites with a RODC, print server, Squid cache... etc.


----------



## tycoonbob

Quote:


> Originally Posted by *tompsonn*
> 
> I dunno, never used the free version of Veeam...
> 
> Yeah I read into the Proxmox back up stuff, but unless it supports things like VSS, its a no go for me.
> 
> If it had that and was able to do file level restore (and item level restore like Veeam can for AD, SQL Server and MS Exchange) I would be on it in a heartbeat.
> 
> But we all need something different, I guess... In any case, Proxmox still looks very, very good indeed.
> 
> I could see me using it for non-critical things like small servers at sites with a RODC, print server, Squid cache... etc.


Keep in mind that I'm running all this at home. For a production environment I would definitely need more as well. I personally think it's unlikely that we will ever see a backup product that supports KVM virtualization and can do item-level restores for AD/SQL/Exchange. It seems people who run KVM in production use the built-in backup for full VM backups, and use a different product inside the VM for item-level backup (i.e., Bacula, Amanda, etc.). Not the most elegant, but it is what it is.

For my needs, Proxmox with the builtin backup will do me just fine.


----------



## cones

Now that I've looked at that on a bigger screen, are you running something like OpenELEC for your "xbmcdb" or is it just hosting a MySQL server?


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Now that i've looked at that on a bigger screen, are you running something like openelec for your "xbmcdb" or is it just hosting a mysql server?


I am running OpenELEC on my HTPC, and xbmcdb is just a MySQL VM that XBMC on my HTPC uses.


----------



## cones

Quote:


> Originally Posted by *tycoonbob*
> 
> I am running OpenELEC on my HTPC, and xbmcdb is just a MySQL VM that XBMC on my HTPC uses.


That is what I figured it was; I still wish they would make a "backend" for XBMC.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> That is what i figured it was, still wish they would make a "backend" for XBMC.


I held that same stance for a long time (make a backend!), but then it would basically become Plex. I've only got 2 devices running XBMC, but my setup has been working great for a long time now. I really can't complain.


----------



## cones

Quote:


> Originally Posted by *tycoonbob*
> 
> I held that same stance for a long time (make a backend!), but then it would basically become Plex. I've only got 2 devices running XBMC, but my setup has been working great for a long time now. I really can't complain.


I don't want it to be anything like Plex. All I want is a service that scans my library at set intervals and updates the MySQL database. So, XBMC without the GUI, in a sense.


----------



## parityboy

Quote:


> Originally Posted by *tycoonbob*
> 
> It seems people who run KVM in production use the builtin backup for full VM backups, and use a different product inside the VM for item-level backup (i.e., Bacula, Amanda, etc). Not the most elegant, but it is what it is.


In a Linux/KVM environment, it would likely be possible to mount the VM read-only using the loopback device and use some scripting to back up the items in question. Having said that though, with things like MySQL it's probably better to pull the data out as SQL rather than simply backing up the on-disk binary files, so with _that_ in mind, an in-VM agent is probably the better all-round solution anyway.

That's just some "on paper" guesswork though. I don't get to play with this stuff in a production environment (yet).


----------



## tompsonn

Quote:


> Originally Posted by *parityboy*
> 
> In a Linux/KVM environment, it would likely be possible to mount the VM read-only using the loopback device and use some scripting to back up the items in question. Having said that though, with things like MySQL it's probably better to pull the data out as SQL rather than simply backing up the on-disk binary files, so with _that_ in mind, an in-VM agent is probably the better all-round solution anyway.
> 
> That's just some "on paper" guesswork though. I don't get to play with this stuff in a production environment (yet).


Yes, this is best practice. Even with VSS on MSSQL servers, I still do backups using the built-in SQL agent (and the same with MySQL).


----------



## tycoonbob

So just to provide an update, I finally received my iDRAC6 Enterprise card for my other R610, so all three of them now have one. I am starting to rebuild my Proxmox setup, now that I'm much more familiar with Proxmox and how to properly and cleanly set it up. Also, Proxmox VE 3.3 was released about a week ago, so I will be upgrading to that version.

Since all my VMs are stored on an NFS share, I will be building HV01 with Proxmox VE 3.3, attaching the NFS share, creating new VMs on HV01, powering down VMs from the existing cluster (hosts HV02 & HV03), and attaching the disks to the new VMs. Should be a pretty simple migration, with downtime, but that's okay. Once I have all VMs live on HV01, I will proceed to rebuild HV02 and HV03, and join them to a newly created cluster with HV01.

Should be pretty simple, if maybe time-consuming. I'll also be converting all my VMs from RAW disks to qcow2 disks. When migrating from Hyper-V (VHDX), I converted everything to RAW, but I didn't realize that RAW disks don't support snapshots under Proxmox. I don't keep snapshots in production, but I do use snapshots when building out a new VM. I need my snapshots.

I'll post another update once the new cluster is up and running!
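The actual conversion is just qemu-img, run once per disk while the VM is powered off (the paths and VM ID here are illustrative, not my real ones):

```shell
# convert a stopped VM's disk from raw to qcow2 (snapshot-capable)
qemu-img convert -f raw -O qcow2 \
    /var/lib/vz/images/100/vm-100-disk-1.raw \
    /var/lib/vz/images/100/vm-100-disk-1.qcow2

# sanity-check the new image before touching the VM config
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
```

The VM's config (e.g. /etc/pve/qemu-server/100.conf) then has to be pointed at the qcow2 file before deleting the RAW original.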


----------



## The_Rocker

Quote:


> Originally Posted by *tycoonbob*
> 
> So just to provide an update, I finally received my iDRAC6 Enterprise card for my other R610, so they now all 3 have them. I am starting to rebuild my Proxmox setup, now that I'm much more familiar with Proxmox and how to properly and cleanly set it up. Also, Proxmox VE 3.3 was released about a week ago, so I will be upgrading to that version.
> 
> Since all my VMs are stored on a NFS share, I will be building HV01 with Proxmox VE 3.3, attaching the NFS share, creating new VM's on HV01, powering down VM's from the existing cluster (hosts HV02 & HV03), and attaching the disk to the new VM's. Should be a pretty simple migration, with downtime, but that's okay. Once I have all VM's live on HV01, I will proceed to rebuild HV02 and HV03, and join them to a newly created cluster with HV01.
> 
> Should be pretty simple, and maybe time consuming. I'll also be converting all my VM's from using RAW disks to using qcow2 disks. When migrating from Hyper-V (VHDX), I converted everything to RAW, but I didn't realize that RAW disks don't support snapshots under Proxmox. I don't keep snapshots in production, but do use snapshots when building out a new VM. I need my snapshots.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'll post another update once the new cluster is up and running!


How did you get on with fencing in Proxmox 3.3?

I have a Dell M-series blade chassis loaded up and ready for me to attack with an open source virtualisation platform at work. From memory, in older versions of Proxmox the fencing was still a bit cumbersome. Also the Open vSwitch support was on the pants side as well; any improvement?


----------



## tycoonbob

Quote:


> Originally Posted by *The_Rocker*
> 
> How did you get on with fencing in Proxmox 3.3?
> 
> I have a dell M series blade chassis loaded up and ready for me to attack with an open source virtualisation platform at work. From memory in older versions of Proxmox, the fencing was still a bit cumbersome. Also the OpenvSwitch support was on the pants side as well, any improvement?


So I've made some great progress today (installed VE 3.3 on HV01, moved all VMs to HV01, built HV02 with VE 3.3, created a cluster and added HV02). I still have to set up my third node (HV03), but that will happen tonight or tomorrow.

Coincidentally, I literally just configured my fencing on these two nodes. I am using the iDRAC6 for fencing, and basically followed this post:
https://pve.proxmox.com/wiki/Fencing

Specifically, this section:


Code:


For Dell iDRAC6 Cards you can basically use the same config as for DRAC5, but you need to change the lines
  <fencedevices>
    <fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node1-drac" passwd="XXXX" secure="1"/>
    <fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node2-drac" passwd="XXXX" secure="1"/>
    <fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node3-drac" passwd="XXXX" secure="1"/>
  </fencedevices>
to
  <fencedevices>
    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node1-drac" passwd="XXXX" secure="1"/>
    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node2-drac" passwd="XXXX" secure="1"/>
    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node3-drac" passwd="XXXX" secure="1"/>
  </fencedevices>





My full *current* cluster.conf is below.

This wiki post also has a section for the PE M1000e CMC, but I have no idea if it will work for you (no way for me to test). I just tested fencing on HV02 with:

Code:


fence_node HV02 -vv

And it did shut down HV02, so I assume fencing is working.

In regards to OVS (Open vSwitch), I had numerous problems when configuring it on Proxmox VE 3.2, but on 3.3 it just seems to work. Basically, I have an OVS bond configured (balance-slb mode) with 2 NICs (eth0, eth1), and an OVS bridge attached to that bond.

For both HV01 and HV02 it worked perfectly, first time. I was super impressed, and am feeling very optimistic about all this.
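If anyone wants to sanity-check a similar bond, this is a hedged sketch of the verification commands I'd reach for (they only run if the OVS tools are present on the node, and `bond0` is the bond name from my config below):

```shell
#!/bin/sh
# Sketch: verify an OVS balance-slb bond on a Proxmox node.
if command -v ovs-vsctl >/dev/null 2>&1; then
    ovs-vsctl show               # bridge/port/bond topology
    ovs-appctl bond/show bond0   # per-slave state and SLB rebalancing
else
    echo "Open vSwitch tools not installed on this machine"
fi
OVS_CHECKED=1
```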


Spoiler: Interfaces config with OVS



Code:


root@hv01:/etc/network# cat interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

allow-vmbr1 bond0
iface bond0 inet manual
        ovs_bonds eth0 eth1
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options bond_mode=balance-slb

auto vmbr1
iface vmbr1 inet static
        address  172.16.1.201
        netmask  255.255.255.0
        gateway  172.16.1.254
        ovs_type OVSBridge
        ovs_ports bond0







Spoiler: Cluster.conf



Code:


root@hv01:/etc/pve# cat cluster.conf
<?xml version="1.0"?>
<cluster name="deveng" config_version="3">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>

  <fencedevices>
    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="172.16.1.211" login="root" name="hv01-drac" passwd="xxx" secure="1"/>
    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="172.16.1.212" login="root" name="hv02-drac" passwd="xxx" secure="1"/>
  </fencedevices>

  <clusternodes>

    <clusternode name="hv01" votes="1" nodeid="1">
      <fence>
        <method name="1">
          <device name="hv01-drac"/>
        </method>
      </fence>
    </clusternode>

    <clusternode name="hv02" votes="1" nodeid="2">
      <fence>
        <method name="1">
          <device name="hv02-drac"/>
        </method>
      </fence>
    </clusternode>

  </clusternodes>

</cluster>


----------



## cones

What made you decide to use QEMU vs something like KVM?


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> What made you decide to use QEMU vs something like KVM?


I must first apologize and say that I don't fully understand the difference between KVM and QEMU. To my knowledge, KVM was a fork of QEMU, and is now a part of QEMU (or something like that). The way I understand it, QEMU is what you run, but if you are running an x86 guest on an x86 CPU, it can use KVM to provide hardware-accelerated virtualization of CPU and RAM, while using QEMU to virtualize other I/O devices (NIC, storage controller, floppy, CD-ROM, etc).

To my knowledge, I am using KVM with Proxmox. Where does your question come from?


----------



## cones

I don't know the exact details of the differences. What I remember reading was that KVM tries to run on the hardware more directly instead of through a virtual version (hope that makes sense). I also remember reading that if you have a Linux host and guest with KVM, they can both share the kernel, kinda like BSD jails. Again, I don't know too much about virtualization; sounds like both of us could learn more.

I was looking back at some of those screenshots you posted and saw some references to QEMU so it made me curious.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> I don't know the exact details of the differences, what I remember reading was KVM tried to run hardware more directly instead of a virtual version (hope that makes sense). I also remember reading that if you have a Linux host and guest with KVM they can both share the kernel, kinda like BSD jails. Again I don't know to much about virtualization, sounds like both of us could learn more.
> 
> I was looking back at some of those screenshots you posted and saw some references to QEMU so it made me curious.


The only reference I've made previously is probably related to the disk format, that being qcow2 (aka the QEMU image format). You are right that KVM passes the host's CPU through to the guest, instead of emulating it like QEMU does. Even with KVM, QEMU is still used to virtualize things like the CD-ROM, NIC, etc.
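For what it's worth, you can check from the host whether KVM acceleration is even possible; a hedged sketch (a flag count of zero just means QEMU would fall back to full software emulation):

```shell
#!/bin/sh
# Sketch: check for hardware virtualisation support and a loaded KVM module.
FLAGS=$(grep -Ec 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
echo "CPU virt flags (vmx/svm): $FLAGS"
# kvm_intel or kvm_amd should appear if the hypervisor module is loaded
lsmod 2>/dev/null | grep -E '^kvm' || echo "no kvm module loaded (or lsmod unavailable)"
```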

What you're referring to with the Linux host and guest sharing a kernel is actually a different technology, called OpenVZ (aka Linux containers). Proxmox combines KVM/QEMU and OpenVZ to provide both options. Since I am using identical hardware in my cluster, KVM is what gets used when I need a full machine image. I actually have not created any OpenVZ containers...yet.


----------



## cones

Ok, so maybe it wasn't KVM specifically that does that, but I knew something did. I do need to read more about the different virtualization technologies; lots to learn.


----------



## tycoonbob

Finally got HV03 rebuilt and added to my cluster.

Things seem to be working much better this time versus last time. I'm not sure if this is related to using v3.3, or if it's because I knew what I was doing this time.



FYI, I know those screenshots say "Type: qemu", but here is another screenshot that shows "KVM hardware virtualization" is enabled.


----------



## The_Rocker

This all looks very promising. I think I may have to pencil in a project for early 2015 at work to consider building a Proxmox cluster again, with a view to it replacing the current ESXi cluster that serves as our 'cloud'. As amazing as ESXi is, and even though I have specialised in it for years, open source and better-priced alternatives are just getting better and better.

I am starting to find it hard to justify the cost of a vSphere Enterprise Plus license per CPU per node. Even the foundation kit is becoming a hard sell, since on the foundation licenses ESXi does next to nothing more than the open source products.

RE fencing...

I read the Proxmox article a while back and it looks straightforward enough. I'm glad to see that it can talk to the CMC directly, since the iDRACs in the blades work on an internal network through the chassis backplane.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> I must first apologize while I say that I don't fully understand the difference between KVM and QEMU. To my knowledge, KVM was a fork if QEMU, and is now a part of QEMU (or something like that). The way I understand it, QEMU is what you run, but if you are running an x86 guest on an x86 CPU, it can use KVM to provide better hardware support for CPU and RAM, while using QEMU to virtualize other I/O devices (NIC, storage controller, floppy, cd-rom, etc).
> 
> To my knowledge, I am using KVM with Proxmox. Where does your question come from?


KVM and QEMU are different projects. KVM is just a kernel hypervisor for Linux, but it can't do any hardware emulation (i.e. KVM can (para)virtualise existing hardware, but it cannot virtualise hardware that isn't installed on the host OS). This is where QEMU comes into play.
Quote:


> Originally Posted by *cones*
> 
> I don't know the exact details of the differences, what I remember reading was KVM tried to run hardware more directly instead of a virtual version (hope that makes sense). I also remember reading that if you have a Linux host and guest with KVM they can both share the kernel, kinda like BSD jails. Again I don't know to much about virtualization, sounds like both of us could learn more.
> 
> I was looking back at some of those screenshots you posted and saw some references to QEMU so it made me curious.


KVM doesn't work like jails. What you're thinking of is OpenVZ (in Proxmox) or LXC (native in newer versions of the Linux kernel, and what's used by Docker). Those are what's called "OS containers" - which are awesome stuff in my personal opinion. I'd take them over full virtualisation any day of the week.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> KVM and QEMU are different projects. KVM is just a kernel hypervisor for Linux, but it can't do any hardware emulation (ie KVM can (para)virtualise existing hardware, but it cannot virtualise hardware that isn't installed on the host OS. This is where QEMU comes into play.


Thanks for the info. Things made a little more sense after reading the KVM wiki page (http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine) over the week.

To provide an update on this project: I've migrated all my VMs from RAW to qcow2, which took a few hours. I've also created a new VM running Zen Load Balancer Community Edition, which I'm currently using to load balance the Proxmox web UI. I don't really need an NLB, but I wanted one, so...


----------



## parityboy

*@The_Rocker*

What's the difference between ESXi and vSphere? Is ESXi the basis of vSphere? I notice a lot of third party products are integrated into vSphere at varying levels (e.g. that Dell VRTX server), so is vSphere more of the management layer?


----------



## The_Rocker

Quote:


> Originally Posted by *parityboy*
> 
> *@The_Rocker*
> 
> What's the difference between ESXi and vSphere? Is ESXi the basis of vSphere? I notice a lot of third party products are integrated into vSphere at varying levels (e.g. that Dell VRTX server), so is vSphere more of the management layer?


vSphere is the product line name that encompasses the ESXi hypervisor.

vSphere is the name given to the management tools and UI that allow access to the underlying ESXi hypervisor, as well as allowing integration into the framework. E.g. a vSphere farm is a collection of servers running the ESXi hypervisor and usually a management layer such as vCenter server or vCloud Director.


----------



## parityboy

*@The_Rocker*

Many thanks for that.


I think you're right that for the majority of situations, products like Proxmox will satisfy the needs of the target environment. I think the fanciest feature that most shops need from a VM platform is live migration, and all of the FOSS platforms support that as far as I know.

I think the most critical feature would be backups along the lines of Veeam; there are alternatives which might require more management but are just as effective. I know that for my own small enterprise situation, I'll likely choose between Proxmox and ConVirt.


----------



## Plan9

Proxmox can manage backups from the host, though I only use Proxmox for OS containers (OpenVZ), so I can't comment on how good its backup solution is for VMs.


----------



## tycoonbob

Proxmox uses vzdump for backups, which I have been running on each of my VM's (a weekly backup, storing 3 backups per VM). Backups seem to be created correctly, but I haven't had to restore one yet, so I'm not sure how well that works. The backups appear to be full system backups, and item-level restore is obviously not a possibility.
Configuring backups in Proxmox is quite easy, and you have a few different methods: Snapshot, Suspend, or Stop. I'm using Snapshot to ensure 100% uptime, but have briefly tried the Suspend method. You also have the option to compress the backups with either LZO or GZIP (I'm using GZIP). Compression seems pretty good, I'd say: my Guacamole server (which has a 10GB vHD) backs up to only 714MB, while my Server 2012R2 domain controller (20GB vHD) takes up 3.95GB once compressed. Snapshot-based backups take between 8 and 12 minutes for me.
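For anyone curious, a backup job like that ends up as a line in `/etc/pve/vzdump.cron`; a hedged sketch (the VM IDs and storage name here are made up):

```
# /etc/pve/vzdump.cron -- weekly snapshot-mode backup, gzip, keep 3 per VM
0 2 * * 6  root vzdump 100 101 102 --mode snapshot --compress gzip --storage backup-nfs --maxfiles 3 --quiet 1
```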

With 15 VM's (currently), each doing a weekly backup and storing 3 backups, averaging 2GB per backup, full backups are only consuming ~90GB of space on my 11TB array.

It's also worth mentioning that one of the coolest features of Proxmox 3.3 is the built-in firewall. You can set firewall rules on the virtual interfaces of any specific VM to limit traffic, or on the physical/logical interfaces of the Proxmox cluster nodes. Pretty sweet stuff.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> Proxmox uses vzdump for backups, which I have been running on each of my VM's (a once weekly backup, with storing 3 backups per VM). Backups seem to be creating correctly, but I haven't had to restore one so I'm not sure how well it works. The backups appear to be full system backups, and item-level restore is obviously not a possibility.
> Configuring backups in Proxmox is quite easy, and you have a few different methods...Snapshot, Suspend, or Pause. I'm using Snapshot to ensure 100% uptime, but have briefly tried the Suspend method. You also have the option to use compression on the backups, either LZO or GZIP (I'm using GZIP). Compression seems pretty well, I'd say. My Guacamole server (which has a 10GB vHD) backs up for only 714MB, however my Server 2012R2 Domain Controller (20GB vHD) takes up 3.95GB of space, once compressed. Snapshot based backups seem to take between 8 and 12 minutes for me.
> 
> With 15 (currently) VM's, each doing a weekly backup and storing 3 backups, and averaging 2GB per backup, Full backups are only consuming 90GB of space on my 11TB array.


That doesn't sound much different from OpenVZ backups. (I have restored snapshots, but as secondary guests, since I didn't want to overwrite the live environment. The whole process was painless.) The nice thing about containers rather than virtual machines is that you can easily back up subsets rather than the whole environment. But that's not a feature I recall seeing exposed in Proxmox (maybe it is and I overlooked it?)


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> That doesn't sound much different from OpenVZ backups (I have restored snapshots but as secondary guests as I didn't want to overwrite the live environment. The whole process was painless). The nice thing about containers rather than virtual machines is that you can easily backup subsets rather than the whole environment. But that's not a feature I recall seeing exposed in Proxmox (maybe it is and I overlooked it?)


I'm honestly not sure since I have not created any containers in Proxmox yet. I'm building out a couple testing VM's at the moment, but I will create a container sometime today and maybe do some kind of testing with backups.

I will say that configuring backups has been painless, thus far.


----------



## NickF

I am considering doing something similar to this in my home network.
Currently I have two dedicated "servers" in my network; one is for pfSense, running an old Core2Duo and 2 gigs of RAM.

The other is a Celeron G1820 w/ 8 gigs of RAM and 3x 3TB WD Reds running FreeNAS. The FreeNAS box runs a Plex jail currently, but I would like to run more jails: Sick Beard, CouchPotato, and BitTorrent Sync/CrashPlan. I have 3 Intel GigE adapters in here.

But I also have a Dell Inspiron 15R SE laptop sitting around that I'm not using anymore. It has a Core i7 3612QM and 12GB of RAM, with a single GigE ethernet port. Would it be better to run Proxmox on that, and have it run a couple of Windows or Ubuntu VMs, one for Plex and the other for Sick Beard, CouchPotato, and BitTorrent Sync/CrashPlan?

Thanks for the input.


----------



## Plan9

The i7 laptops I've had all suffered from overheating issues, so I wouldn't recommend running VMs off an i7 laptop - or at least not for 24/7 usage.


----------



## NickF

Quote:


> Originally Posted by *Plan9*
> 
> The i7 laptops I've had all suffered from overheating issues, so I wouldn't recommend running VMs off an i7 laptop - or at least not for 24/7 usage.


This laptop also suffers from overheating issues, which is largely why I don't use it much anymore.
But after remounting the CPU with CLU and throwing it on a decent laptop cooler, I was able to mitigate those issues. I'm fairly confident the laptop would be able to keep temps manageable when it's on the cooling mat.
I spent like $1100 on that laptop and really don't like not using it for anything, which is why I brought it up.
I don't think it will do more than run 1-3 Plex streams on the occasional rainy evening...spending most of its time idling or downloading TV shows...


----------



## Plan9

Quote:


> Originally Posted by *NickF*
> 
> This laptop also suffers from overheating issues-- which is largely why I don't use it much anymore.
> But, after remounting the CPU with CLU and throwing it on a decent laptop cooler, though, I was able to mitigate those issues. I'm fairly confident that the laptop would be able to keep temps manageable when its on the laptop cooler mat.
> I spent like $1100 on that laptop and really don't like not using it for anything, which is why i brought it up.
> I dont think it will do more than run 1-3 Plex streams on the occasional rainy evening...spending most of its time idling or downloading TV shows...


If you do, then you might want to take the battery out as well.


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> If you do, then you might want to take the battery out as well.


I've heard others say to leave it in as a built-in battery backup. I'm not sure how well it would like never discharging, though; then again, if you aren't ever going to use the battery, you aren't losing much by taking it out.


----------



## NickF

Quote:


> Originally Posted by *cones*
> 
> I've heard others say to leave it as a built in battery backup. I'm not sure how well it would like never discharging though but then again if you aren't ever going to use the battery when you take it out you aren't losing much.


I always thought this too; one less thing I have to plug into my UPS. Why is it a good idea to take the battery out?


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> I've heard others say to leave it as a built in battery backup. I'm not sure how well it would like never discharging though but then again if you aren't ever going to use the battery when you take it out you aren't losing much.


I've had laptop batteries catch fire before from overheating. So my advice is always to remove the battery and buy a proper UPS if you want a backup battery.


----------



## NickF

I will take out the battery I guess lol thanks


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> I've had laptop batteries catch fire before from overheating. So my advice is always to remove the battery and buy a proper UPS if you want a backup battery.


I wonder why that happened; I would have thought there would be a thermal cutoff, unless that overheated too. Guess it's better to take it out than have it catch on fire.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Wonder why that happened would have thought there would be a thermal cutoff unless that over heated. Guess it is better to take it out then have it catch on fire.


It was partly a fault with the battery and partly a fault with the laptop. But in any case, it's an easily avoided risk (and trust me, it's scary when those things do go up in flames!)


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> It was partly a fault with the battery and partly a fault with the laptop. But in any case, it's an easily avoided risk (and trust me, it's scary when those things do go up in flames!)


Yup, they swell up and then burst into flames; YouTube is a great place to "research" it.


----------



## The_Rocker

I've been reading into OpenStack for the past 3 days. I really want to try it out using Ceph as the storage backend.

I am going to give the Ubuntu OpenStack install a try, which uses their MAAS installer with Juju for deployment of the OpenStack services.

The only thing is, OpenStack seems to be extremely modular and therefore extremely confusing to set up at first... But I have a free account on an OpenStack cloud somewhere and have to say, it is awesome.


----------



## tycoonbob

Quote:


> Originally Posted by *The_Rocker*
> 
> Ive been reading into Openstack for the past 3 days. I really want to try it out using Ceph as the storage backend.
> 
> I am going to give the Ubuntu openstack install a try which uses their MAAS installer with Julu for deployment of the Openstack services.
> 
> Only thing is, Openstack seems to be extremely modular and therefore extremely confusing at first to set up.... But I have a free account on an Openstack cloud somewhere and have to say, it is awesome.


Yeah, OpenStack is definitely a great piece of technology, but it was way more than I need (considering the time to build the infrastructure and whatnot). I think OpenNebula is also a very interesting project (more so than OpenStack), but I'm very happy with what Proxmox has to offer.


----------



## The_Rocker

Quote:


> Originally Posted by *tycoonbob*
> 
> Yeah, Openstack is definitely a great piece of technology, but was way more than I need (considering the time to build the infrastructure and what not). I think OpenNebula is also a very interesting project (more so than Openstack), but I'm very happy with what Proxmox has to offer.


I have just stumbled across Mirantis OpenStack, which seems to be a streamlined OpenStack installer that uses Fuel to build it all up. It also sets up Ceph clusters, as well as deploying the specific services to the nodes you want to use in the OpenStack cluster.

Looks good; I am going to give it a whirl in VirtualBox tomorrow. It looks like it even lets you choose the networking model: Nova networking with a flat model or VLAN tenant isolation, or the preferred Neutron with GRE tenant isolation.

I shall keep you posted. Proxmox is awesome, but I'm dying to set up a proper cloud platform with a clustered storage backend (that's not a SAN) and full software-defined networking. Each tenant gets to define their own private networks that are isolated by GRE and connected to a virtual router, which the tenant can connect to the external network. My public IPv4 range becomes a pool of floating addresses which tenants can attach to VM instances via NAT on the virtual router. They can also attach volumes to their instances for extra persistent storage, which sits on the Ceph or Swift backend.

Also, I have a /48 of IPv6 I need to start using.


----------



## cones

Since you have been using it for a while now, you might know something about this. I'm thinking about switching some things around, and I would end up with a server that is capable of hardware passthrough. Do you know how well Proxmox would run if I used Debian as a base, so I could still run a GUI (XBMC) on the host and use it for local storage? Basically, I want to install Proxmox on top of Debian rather than use their ISO; does it work well when using local storage? In the end I would run pfSense in a VM with a NIC passed through to it, along with a couple of other VMs.


----------



## tycoonbob

Just ordered 3 Crucial MX100 512GB SSDs to put in my Proxmox servers for Ceph storage. Should have the drives in and installed by next Friday, and should then have ~768GB of SSD-backed Ceph block storage for VM's!

Yay.


----------



## finish06

Quote:


> Originally Posted by *tycoonbob*
> 
> Just ordered 3 Crucial MX100 512GB to put in my Proxmox servers for Ceph storage. Should have the drives and installing it by next Friday, and should have ~768GB of SSD block Ceph storage for VM's!
> 
> Yay.


So what is the verdict, Tycoon: is the speed crazy awesome, or is it just normal SSD speed? I would suspect that in a smaller environment (i.e. 3 drives in your case) it will just be normal SSD speed (which, don't get me wrong, for a home lab is awesome). I recently ordered two more Proxmox boxes to bring my count up to three, just like you! That way I can set up a cluster with fencing, failover, etc. I also have three 512GB Samsung 840 Pros coming to set up Ceph storage... Any advice?


----------



## tycoonbob

Quote:


> Originally Posted by *finish06*
> 
> So what is the verdict Tycoon, is the speed crazy awesome or is it just normal SSD speed? I would suspect in a smaller environment, i.e. 3 drives in your case, it will just be normal SSD speed (which don't get me wrong, for a home lab is awesome). I recently ordered two more Proxmox boxes to bring my count up to three, just like you! That way I can set up a cluster with fencing, failover, etc. Also, got three 512GB Samsung 840 Pros coming to set up ceph storage... Any advice?


Proxmox is great...when it works. I've been fighting with getting Ceph set up for the past week or so, and am not having much luck. I suspect the speeds will seem like those of a single SSD, since SSD's are quite fast and telling the difference between 1 drive and 3 drives is going to be difficult.

I will say that once I get it working, I've decided to do a replication factor of 3 instead of 2, meaning I will only have 512GB of total available storage (which is more than double what I need for all my VM's right now) for better redundancy. I can add up to 3 more 512GB SSD's per server if I wanted. That would put me at 2TB usable storage on an all-SSD Ceph array with 12 OSD's, which is pretty freaking awesome. I/O or speeds should never be a problem, but I would likely need to upgrade to a 10GbE network for Ceph.


Regardless, these three drives will be a huge upgrade over my current NFS share, which is doing just fine with ~15 VM's. However, if I power on my Splunk VM, I start to notice a slowdown because of all the writes that Splunk server generates. That won't be a problem once I get this Ceph stuff worked out. In case you're interested, you can check out my post on the Proxmox forum where I'm trying to get my Ceph setup working correctly:
http://forum.proxmox.com/threads/20334-Trying-to-get-Ceph-working?p=103845#post103845

Advice? I don't really have much at this time. You have (or will have) the same setup as me (3 nodes, SSD's in a Ceph array, clustering, etc), and I'm quite happy with my setup (well, once I get Ceph going). I will say this: before you start using your cluster, make sure you test your failover and set up your fencing correctly. I'm doing my fencing through the iDRAC Enterprise cards in my servers, and it works, but it took a little while to get going. Oh, and install this package!!
http://ayufan.eu/projects/proxmox-ve-differential-backups/

That will give you differential backups, which is quite nice. I wish they would merge that into Proxmox, but the devs think it makes backups too complicated. Whatever.
I do a weekly full backup and a nightly differential backup (which uses the previous full backup as the parent), and I keep 30 days of backups for each VM. A typical Linux VM (with a 10GB drive) consumes about 2GB for the full backup and about 200MB for each diff. So that's about 3.2GB for a full week of backups, or about 13GB for a full month. Doubles my storage, but plenty to restore back to. I may reduce it to a weekly full and diffs every other night, and only keep them for two weeks...but I haven't decided yet.
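As a quick sanity check of those numbers (sizes in MB, using the rough per-VM figures above):

```shell
#!/bin/sh
# Back-of-envelope check of the per-VM backup footprint quoted above (MB).
FULL=2048   # weekly full backup, ~2GB
DIFF=200    # nightly differential, ~200MB
WEEK=$((FULL + 6 * DIFF))           # one full + six diffs
MONTH=$((4 * FULL + 26 * DIFF))     # ~4 fulls + ~26 diffs over 30 days
echo "per-VM week:  ${WEEK} MB"     # ~3.2GB
echo "per-VM month: ${MONTH} MB"    # ~13GB
```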

Let me know if you have any questions about it though, as I've gotten quite comfortable with it. I plan to write my own blog about setting up Ceph on Proxmox, if I can ever figure out what I did wrong (since I think their wiki is...lacking).


----------



## tycoonbob

Well, I finally got my Ceph volume working, but the results are disappointing compared to my current NFS share.



So I've got 2 identical VM's (CentOS7-NFS and CentOS7-CEPH): 512MB RAM, 2 vCPU, a 10GB volume, VirtIO wherever possible, and CentOS 7.

My Ceph volume is 3 Crucial MX100 512GB SSD's, one in each PVE host. My NFS share is over a gigabit link to a server with 7 2TB 7200RPM drives in a hardware RAID 6 (LSI MegaRAID 9261-8i controller). The Ceph volume is completely empty and idle, other than this one VM. The NFS share has ~15 other active VM's (each with a lowish load, but still active), and it lives on a RAID 6 array alongside other storage (mostly media), which I was streaming during these tests. I figured Ceph would surely do much better than NFS, but that doesn't appear to be the case.

The command I ran to test with was:
bonnie++ -d /tmp -r 512 -s 2048 -n 512 -u root -x 5 | bon_csv2html

Quite disappointing, and it's making me reconsider this. Maybe I should think more about going back to iSCSI, but that iSCSI server would be a single point of failure. I'd like to do some sort of distributed storage model with two nodes (active/active preferred), but I don't know of any free/open source solutions that aren't build-your-own. Hmm...
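Before giving up on it, one thing worth trying is benchmarking the Ceph pool directly, bypassing the guest and its filesystem entirely; a hedged sketch (the pool name `rbd` is illustrative, and the commands only run if the Ceph CLI is present):

```shell
#!/bin/sh
# Sketch: raw Ceph pool throughput, independent of any VM or filesystem.
if command -v rados >/dev/null 2>&1; then
    rados bench -p rbd 30 write --no-cleanup   # 30s write benchmark
    rados bench -p rbd 30 seq                  # sequential read of that data
    rados -p rbd cleanup                       # remove the benchmark objects
else
    echo "rados CLI not installed on this machine"
fi
BENCH_DONE=1
```

If the pool itself benchmarks well, the problem is in the guest I/O path rather than Ceph.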


----------



## parityboy

*@tycoonbob*

Your hardware RAID 6 controller has a memory cache on it. Are the Ceph SSDs cached in any way?


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> Your hardware RAID 6 controller has a memory cache on it. Are the Ceph SSDs cached in any way?


My controller has a 512MB cache, and the tests I was doing used a 2GB file size, which is four times the size of the cache. Not to mention the controller and cache are on another box, and the cache should have been in use by other things as well. I don't think the cache is the reason for my results.

No caching on the Ceph array, but it seems that these are expected results when I only have 3 drives in the array. Super disappointing, and I don't want to have to scale to 9-12 512GB SSDs before I start seeing the performance benefit of using SSDs.
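A back-of-envelope sketch of why the numbers might land so low, assuming a gigabit interconnect, Ceph's default 3-way replication, and the filestore journal double-write (these figures are illustrative assumptions, not measurements):

```shell
# Rough ceiling on a single client write stream over 1GbE with 3x replication.
# All numbers here are assumptions for illustration, not measurements.
gbe_mbs=110         # practical 1GbE throughput in MB/s
replicas=3          # default Ceph pool size: every write lands on all 3 SSDs
journal_factor=2    # filestore writes each object twice (journal, then store)
# The primary OSD has to push (replicas - 1) copies over the same gigabit
# link, and each OSD double-writes, so a single stream tops out far below
# what any one SSD can do locally.
echo "~$(( gbe_mbs / (replicas - 1) / journal_factor )) MB/s per write stream"
```

Under those assumptions a single write stream caps out somewhere around the low tens of MB/s, nowhere near SSD speed, which would match disappointing bonnie++ numbers no matter how fast the MX100s are.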


----------



## parityboy

*@tycoonbob*

hmmmm, technically a single SSD should easily keep up with your array, so yeah, very disappointing, and it points to Ceph as the cause. I'm wondering if it's Ceph's distributed nature or CephFS itself? Or something in the config/setup? Would you be prepared to try something like Lustre or Gluster and see if they make a difference?

Taken from this:
Quote:


> Use caution. Acceptable IOPS are not enough when selecting an SSD for use with Ceph. There are a few important performance considerations for journals and SSDs:
> 
> 
> *Write-intensive semantics*: Journaling involves write-intensive semantics, so you should ensure that the SSD you choose to deploy will perform equal to or better than a hard disk drive when writing data. Inexpensive SSDs may introduce write latency even as they accelerate access time, because sometimes high performance hard drives can write as fast or faster than some of the more economical SSDs available on the market!
> *Sequential Writes*: When you store multiple journals on an SSD you must consider the sequential write limitations of the SSD too, since they may be handling requests to write to multiple OSD journals simultaneously.
> *Partition Alignment*: A common problem with SSD performance is that people like to partition drives as a best practice, but they often overlook proper partition alignment with SSDs, which can cause SSDs to transfer data much more slowly. Ensure that SSD partitions are properly aligned.
> While SSDs are cost prohibitive for object storage, OSDs may see a significant performance improvement by storing an OSD's journal on an SSD and the OSD's object data on a separate hard disk drive. The osd journal configuration setting defaults to /var/lib/ceph/osd/$cluster-$id/journal. You can mount this path to an SSD or to an SSD partition so that it is not merely a file on the same disk as the object data.
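
Following that last point, this is roughly what pointing an OSD's journal at an SSD partition looks like in ceph.conf (the OSD ID, partition path, and size here are placeholders to illustrate the setting the docs mention, not values from this thread):

```ini
# Hypothetical ceph.conf fragment: journal on a dedicated SSD partition
# (the OSD ID and partition path are placeholders, not from this thread)
[osd.0]
    osd journal = /dev/disk/by-partlabel/ceph-journal-0
    osd journal size = 5120   ; journal size in MB
```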


----------



## tycoonbob

Quote:


> Originally Posted by *parityboy*
> 
> *@tycoonbob*
> 
> hmmmm, technically a single SSD should easily keep up with your array, so yeah, very disappointing, and it points to Ceph as the cause. I'm wondering if it's Ceph's distributed nature or CephFS itself? Or something in the config/setup? Would you be prepared to try something like Lustre or Gluster and see if they make a difference?


I have a feeling that Lustre and Gluster will both suffer from the same issue, as I believe this is just the cost of using a distributed storage system. Something else worth mentioning here is that Proxmox can natively connect to RBD, whereas if I were to use Lustre or Gluster, I would have to set that up manually on my Proxmox servers and then deliver that storage via NFS to Proxmox, which adds another layer to the mix. Not a big deal, but like I said, it's worth mentioning.

I have a few tips from the guys on the Proxmox forum to possibly tweak performance, which I should get to try today. I'll run more Bonnie++ tests and share those results, of course.


----------



## tycoonbob

Well, I believe I'm going to give up on the idea of Ceph. It seems I would need at least 6 drives (more is better) to get decent I/O, and my idea of 3 SSDs just isn't going to cut it. This was one of the biggest selling points of Proxmox for me, and now that it won't happen, I'm going to continue exploring other hypervisors/virtualization managers. The next one on my list is oVirt, which seems to be a more polished version of Proxmox, and it can run on CentOS, which is my preferred distro. Plenty more research to come, on my part, but a 3-node oVirt cluster could be in my future.

With the overhaul I'm doing on my storage box, I'm considering getting a 4th Crucial MX100 512GB SSD and running a RAID 10 with those drives, giving me 1TB of amazing I/O, and delivering that via iSCSI to whatever hypervisor I choose. I'm just not sure how else to build an HA virtualization platform at home without spending thousands of dollars on software licensing.
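The RAID 10 capacity math, for reference (drive count and size taken from above; striped mirrors keep half the raw total):

```shell
# RAID 10 usable capacity is half the raw total (4 x 512GB MX100s from above)
drives=4; size_gb=512
usable_gb=$(( drives * size_gb / 2 ))
echo "${usable_gb}GB usable"
```

That's 1024GB usable, i.e. the ~1TB mentioned above, while surviving one drive failure per mirror pair.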


----------



## snazy2000

I'm having the same issue. I'm trying to set up an HA cluster with Hyper-V on local storage without spending thousands (StarWind); the free version only allows 120GB for HA storage, which sucks. I only really want to use Hyper-V; I've used loads of others but I've always gone back.


----------



## tycoonbob

Quote:


> Originally Posted by *snazy2000*
> 
> I'm having the same issue. I'm trying to set up an HA cluster with Hyper-V on local storage without spending thousands (StarWind); the free version only allows 120GB for HA storage, which sucks. I only really want to use Hyper-V; I've used loads of others but I've always gone back.


I completely understand. Hyper-V has always been my go-to hypervisor, but since I barely run any Windows workloads anymore, I thought I would branch out to something else. I remember trying that StarWind HA storage for Hyper-V when it came out, and I don't think the free version had that 120GB limit back then. That sucks, though.


----------



## snazy2000

I'd love to run SolusVM at home as I love their web UI, but I wouldn't want the monthly cost :/ I've been looking into VMware and their VSAN, but that's another cost :/


----------



## snazy2000

Have just found out about this cool-looking panel: https://www.virtkick.io. Not sure if it will have HA features, but it looks very promising! And it's open source (currently in alpha, soon to be beta I believe).


----------

