Vmsrv lxc containers

If you want to run an x86_64 or x86 Linux guest on vmsrv, you should consider the benefits of running it as a Linux Container (LXC). Containers are a newer implementation of OS-level virtualization that is supported upstream.

The main vmsrv kernel (version 2.6.32) directly runs your processes (starting with /sbin/init!) in an independent process space and gives you your own network stack (interfaces, routing tables, iptables) to work with.
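
To see what this means in practice, here is a rough sketch of a few commands you could run from a shell inside a container (the exact kernel string and interface names are illustrative and will vary):

  uname -r         # reports the shared host kernel, something like 2.6.32-5-amd64
  ip addr show     # lists only your container's own interfaces
  iptables -L -n   # shows your own netfilter rules, independent of the host's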

Kick-ass performance for your kick-ass userland

Beyond that, leave the kernel to us and focus on rocking your userland! Pretty much any GNU/Linux distro can be booted this way, though some need a little tweaking.

Avoiding the overhead of full-on virtualization and the kernel-hypervisor relationship is an obvious advantage, but even more important is that you won't have to pre-define and hog a fixed amount of memory for your container as you would with a full virtual machine. When your processes are busy they can enjoy bursts of RAM as allocated by the host kernel; when they're idle they can be individually swapped out.
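
As a rough illustration (assuming the cgroup memory controller and the LXC 0.7-era tools that ship with Debian 6.0), there is no fixed RAM allocation to configure; if needed, the host can cap a container with a cgroup limit at runtime instead. The container name and limit below are just examples:

  free -m                                                  # inside a container: memory looks like the host's, nothing is pre-allocated
  lxc-cgroup -n yourcontainer memory.limit_in_bytes 512M   # on the host: optionally cap a container, adjustable on the fly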

Get your container today

To get your own container, contact Mark Jenkins <mark@parit.ca>. A fresh container with a minimal install can be built and handed over or an existing file system converted.
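
For reference, building and starting a minimal container with the LXC userland tools looks roughly like the sketch below. This is done by the administrator on vmsrv, and the container name and config path are just examples:

  lxc-create -n yourcontainer -t debian -f /etc/lxc/yourcontainer.conf
  lxc-start -n yourcontainer -d    # boots the container's /sbin/init in the background
  lxc-console -n yourcontainer     # attaches to its console (Ctrl-a q to detach)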

You can also enjoy the benefits of Linux containers without having to administer your own by signing up for an account on mumd -- a cluster of Linux containers with common LDAP login hosted on vmsrv. (Read more on the mumd page)

Downsides

There are a few downsides to having a Linux container instead of a qemu-kvm virtual machine:

  • We don't have a management panel to help us manage things like virtual NICs, startup, shutdown/reboot, installation and console access. These tasks have to be performed by the administrator, Mark Jenkins <mark@parit.ca>. There is support in libvirt for managing Linux containers, but we really need virt-manager support as well.
  • The reboot command unfortunately leaves your container in a broken state; you'll have to email the administrator to have it fixed. Because the kernel you're running isn't yours but the host OS kernel, you should never need a reboot unless you're trying to upgrade to a new version of /sbin/init. You should be able to do everything you need in your container without a reboot by staying in your default runlevel and just starting and stopping daemons (see the examples after this list). Unlike the Windows world, upgrading your userland (aside from /sbin/init) doesn't require a reboot.
  • You have root access in your container and are in direct communication with the host OS kernel, not your own. In theory the host kernel is "containing" you and the things you do within a separate process and network space, but with LXC being relatively new and the Debian 6.0 implementation we're using being on the old side of that, perfect isolation doesn't seem to be guaranteed. We already discovered this the hard way when one of our users tried to create software bridges in a container -- the commands for this crashed the whole host OS! So, as a precaution, we ask our LXC users not to touch software bridges (brctl) or anything else that's close to the hardware. If you're unsure, move to a qemu-kvm VM instead. If you experience a big crash, check whether the host OS is still up and running.
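
As mentioned in the reboot note above, day-to-day changes can be made by starting and stopping daemons rather than rebooting. A few examples on a Debian 6.0 userland (the daemon names are just examples):

  /etc/init.d/ssh restart          # restart a single daemon
  invoke-rc.d apache2 stop         # stop a daemon the Debian way
  update-rc.d -f apache2 remove    # keep a daemon from starting in your default runlevel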