I’ve been feeling less than thrilled with the open source virtualization alternatives available. It’s not that I’m ungrateful for all the hard work people have put into these systems; I just increasingly find the platforms to be buggy. I’m trying to decide on the best setup moving forward, so I thought I would jot down some of my recent notes.
Xen
Since I prefer Debian and Debian-based distros, using Xen means using Debian. However, the Xen 4.0.1 packages in Debian Squeeze have seemed buggy to me, particularly when allocating large amounts of memory to a DomU. In addition, for the purpose of running a Windows server there are no signed paravirtualization drivers available, which leaves people choosing between lower performance and the hassle of unsigned drivers. Those drivers can be found at http://www.meadowcourt.org/downloads/. The Xen team has had some recent success getting parts of the dom0 kernel code merged upstream; if the entire dom0 code set eventually lands in the mainline kernel, the packaging situation in distros should improve.
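For reference, this is a minimal sketch of the Python-syntax DomU config file that Xen 4.0.1 on Squeeze expects under /etc/xen/ (the name, kernel version, and disk path here are illustrative, not a tested config):

```
# /etc/xen/example.cfg -- hypothetical PV DomU definition
name    = "example"
kernel  = "/boot/vmlinuz-2.6.32-5-xen-amd64"     # Squeeze Xen kernel
ramdisk = "/boot/initrd.img-2.6.32-5-xen-amd64"
memory  = 4096          # large allocations like this are where I hit trouble
vcpus   = 2
vif     = [ "bridge=eth0" ]
disk    = [ "phy:/dev/vg0/example-disk,xvda1,w" ]
root    = "/dev/xvda1 ro"
```

A guest defined this way is started with `xm create /etc/xen/example.cfg`.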
KVM
I have generally believed that KVM would overtake Xen, given the level of development activity. Yet in my dealings with KVM I still find it has a long way to go: I run into bugs with virtual machines using SMP (performance problems, lockups on AMD processors) and even with keeping a correct clock in Windows guests. There are signed Windows drivers, but the virtio net driver is too buggy for production use (as of version 1.1.6). You can get those drivers here: http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/. On the plus side, KVM is so well supported that there are many OS options: Debian, Ubuntu, Proxmox.
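One workaround worth considering, sketched below with illustrative paths and sizes, is to keep the stable virtio block driver but fall back to an emulated e1000 NIC, and to pin the RTC to local time so the Windows clock stays sane:

```
# Hypothetical qemu-kvm invocation for a Windows guest.
# driftfix=slew compensates for lost RTC ticks, one of the usual
# causes of clock drift in Windows guests.
qemu-kvm \
  -m 2048 -smp 2 \
  -rtc base=localtime,driftfix=slew \
  -drive file=/var/lib/kvm/win2008.img,if=virtio \
  -net nic,model=e1000 -net tap \
  -vnc :1
```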
OpenVZ / LXC
I used Linux-VServer for quite some time before switching to Xen. Container-based virtualization looks better to me now: performance should be close to native, particularly with regard to latency. I also think I overlooked how much container-based virtualization simplifies management tasks such as backup and migration. I haven’t used OpenVZ or LXC yet. Using Proxmox to administer OpenVZ seems like it might be a pretty easy way to go, and OpenVZ can also be used from Debian.
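From what I have read, day-to-day OpenVZ management looks roughly like the sketch below (the container ID, template name, and addresses are illustrative):

```
# Create, configure, and start a container
vzctl create 101 --ostemplate debian-6.0-x86_64
vzctl set 101 --hostname web1 --ipadd 192.168.1.101 --save
vzctl start 101

# A container is just a directory tree on the host, so backup can be an
# rsync of /var/lib/vz/private/101; live migration is a single command:
vzmigrate --online otherhost 101
```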
- OS
  - Debian 6 – Xen, KVM, OpenVZ
  - Ubuntu 10.04 – KVM
  - Proxmox – KVM, OpenVZ
- Xen
  - – unsigned paravirtualization drivers
  - – buggy with large memory
- KVM
  - + signed paravirtualization drivers
  - – buggy paravirtualization NIC driver
  - – clock issues
  - – SMP issues
  - + ksmd may offer more memory for virtual servers (see the sketch after this list)
- OpenVZ
  - untested
- LXC
  - untested
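On the ksmd point above: KSM (kernel samepage merging) has been in mainline since 2.6.32, and on a KVM host its effect is visible through sysfs. A quick check, assuming a standard kernel:

```
echo 1 > /sys/kernel/mm/ksm/run        # enable samepage merging
cat /sys/kernel/mm/ksm/pages_sharing   # pages currently deduplicated across VMs
```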
I was researching the same a year ago and came to the conclusion that the “free” VMware ESXi just can’t be beat if you only need to host VMs on one physical machine. Installation is almost too easy, the vSphere client is unmatched in the open source world, and ESXi is unbelievably stable.
I use VMware ESXi 4.1 on a Dell Precision 390 workstation with 8GB of RAM, and it chugs away with 2 Linux VMs (Ubuntu and the Elastix distro of CentOS), 3 Server 2003 VMs for lab purposes, and 2 Server 2008 R2 VMs for preparing for my MCITP exams.
Just my thoughts.
@Aaron, similar to my situation.
I needed a lab machine with simple resource management: no-brainer network administration, storage management and monitoring, as well as CPU resource control. VMware ESXi has it all down close to an art form, and that *free* (as in beer) product is *polished*.
Sure, it can’t run on the wide range of hardware Linux supports, and it can be a bit of a pain to get all the required components together and working (motherboard, NIC, SATA HBAs, and suitable disks, plus the hypervisor software). However, once these four hardware pieces have been identified, setting up and administering an ESXi environment is a breeze.
I’ve evaluated qemu/KVM before, but that offering really felt like a science project, requiring too much fiddling around and never quite working for the cases I needed VMs for. VMware Server and VirtualBox are not options, as they lack key components such as hardware-assisted virtualization and stable drivers allowing close to bare-metal performance of the VM.
As for container virtualization, leveraging those capabilities from an ESXi VM that provides those mechanisms (Solaris, BSD, or Linux) works perfectly and gives me all the benefits of container-based virtualization, while still offering a simple option for full and paravirtualization from the ESXi layer itself.
I’ll wait another three years, I think, before once again checking the “state of linux virtualization”; judging by the historical rate of progress, I don’t foresee much will have changed.