Vmsrv
From SkullSpace Wiki
Revision as of 15:16, 9 September 2012 (edit summary: goodbye virtualbox, hello qemu-kvm)

==Philosophy==

The SkullSpace virtual machine service (vmsrv) is offered to members as a means to share the benefits of best-available hardware.

"Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total."

We focus our virtual machine service on two styles of computing:

* Interactive computing -- temporary bursts of high resource use (IO/CPU/memory) by a single user for the purpose of "figuring stuff out", "getting stuff done", "hacking", etc., with the ethic of ensuring resources are freed when not in use. "Always yield to the Hands-On Imperative!"
* General service computing -- always-up-and-running services with reasonable IO, CPU, and memory use that don't impair the above.

(Services with intense all-the-time resource requirements should be operated on dedicated servers.)

==System==
* [http://www.amd.com/us/products/desktop/processors/phenom-ii/Pages/phenom-ii-model-number-comparison.aspx AMD Phenom II X6 1055T], which has 6 cores, 512 KB of L2 cache per core, a shared 6 MB L3 cache, and AMD's virtualization extensions
* [http://ca.asus.com/en/Motherboards/AMD_AM3Plus/M5A88V_EVO/#specifications Asus M5A88-V EVO motherboard]
* 4x4G (16G total) of DDR3 RAM in unganged mode, 1333.33 MT/s configuration
* 4x1TB SATA hard drives in RAID 10 configuration, [[wikipedia:Logical Volume Manager (Linux)|LVM]] block layer
 
* Debian GNU/Linux 6.0 amd64 host operating system
* 1Gbit internal NIC on the SkullSpace LAN (on host Linux bridge skspprivbr), 192.168.1.26
* 100Mbit PCI NIC on VOI public IP switch (on host Linux bridge skspvoipubbr)
* Power backed by UPS
* Two types of virtualization:
** qemu-kvm managed by libvirt (full machine virtualization), our recommended choice for most uses
** Linux Containers (LXC) ([[wikipedia:Operating_system-level_virtualization|OS-level virtualization]])
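The AMD virtualization extensions listed above (AMD-V, reported as the svm CPU flag) are what make hardware-accelerated qemu-kvm possible. A minimal sketch of how you'd check a Linux host for them (the flags string here is an abbreviated sample; on the real host you'd read /proc/cpuinfo):

```shell
# Check a CPU flags line for hardware virtualization support:
# "svm" means AMD-V, "vmx" means Intel VT-x.
# Abbreviated sample flags line; on a real host use:
#   grep -m1 '^flags' /proc/cpuinfo
flags="fpu vme de pse msr pae svm sse4a"

if echo "$flags" | grep -qw svm; then
  virt="AMD-V available"
elif echo "$flags" | grep -qw vmx; then
  virt="Intel VT-x available"
else
  virt="no hardware virtualization extensions"
fi
echo "$virt"
```

If neither flag appears, qemu-kvm falls back to much slower pure emulation.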
  
  
==Ask for Help! Free migrations available==

Don't be afraid to ask for help: email Mark Jenkins <mark@parit.ca>, or catch me in person on Tuesdays, at hackathons (third Saturdays), at special events, and by appointment.

Some free (but not unlimited) migration consulting and assistance is also available.
  
==Linux Containers (LXC)==

If you want to run an x86_64 or x86 Linux-based guest, you should consider the benefits of running it as a Linux Container (LXC).

The main vmsrv kernel (version 2.6.32) directly runs your processes (starting with /sbin/init!) in an independent process space and gives you your own network stack (interfaces, routing tables, iptables) to work with. There are performance upsides to using the host OS kernel directly.

There are also downsides; see the [[Vmsrv_lxc_containers]] page for more info. You probably want to use our primary virtualization offering, qemu-kvm (see next section).
  
==qemu-kvm with libvirt==

Users with accounts on the vmsrv machine are able to run [http://www.linux-kvm.org/page/Main_Page qemu-kvm] based virtual machines that are managed by [http://libvirt.org/ libvirt]. We use [http://virt-manager.org/ virt-manager] as a libvirt front-end.

Because a fully featured x86/x86_64 machine is emulated and virtualized, a large variety of [http://www.linux-kvm.org/page/Guest_Support_Status#UNIX_Family:_BSD guest OSs] are supported.

virt-manager exposes a large number of features of libvirt and qemu-kvm -- as a GUI app this makes it largely self-documenting. Experiment!

We welcome improvements to this documentation as well.
===Accounts===
Pick one of two ways to get an account:
* Ask the admin team (Mark Jenkins <mark@parit.ca>)
* Use the automated claimid process for [[mumd]] at http://192.168.1.28 . mumd accounts are made available to the vmsrv host system via the wonders (and downsides) of LDAP. Follow up with Mark Jenkins <mark@parit.ca> to have your account added to the libvirt group.
  
 
Accounts are for SkullSpace members only.
  
===How to login and start virt-manager===

The host vm machine is 192.168.1.26 on the SkullSpace LAN. Three ways to log in from the SkullSpace network:
 
* A [[wikipedia:Secure_Shell|SSH]] client (port 22); for graphics, use -X or port-forward a VNC session
* [[wikipedia:RDP|RDP]] client (port 3389)
* XDMCP, e.g. X -query 192.168.1.26, Xephyr -query 192.168.1.26, or Xnest -query 192.168.1.26

From outside the space, there are two options:

* SSH to vmsrv.markjenkins.ca (206.220.196.57, port 22)
* [[wikipedia:RDP|RDP]] client to vmsrv.markjenkins.ca (206.220.196.57, port 3389)
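As a sketch, the SSH options above look like the following ("member" is a placeholder username; 5901 is the conventional TCP port for VNC display :1):

```shell
# Hypothetical SSH invocations for reaching vmsrv from the SkullSpace LAN.
# "member" is a placeholder username.

# X11 forwarding: run graphical apps such as virt-manager over SSH.
ssh_x11="ssh -X member@192.168.1.26"

# Local port forward: tunnel local port 5901 to a VNC display on the host
# (VNC display :1 conventionally listens on TCP 5901).
ssh_vnc="ssh -L 5901:localhost:5901 member@192.168.1.26"

# Shown rather than executed, since these need the real host and an account:
echo "$ssh_x11"
echo "$ssh_vnc"
```

With the tunnel up, point a VNC viewer at localhost:5901.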
  
The default desktop environment is [[wikipedia:LXDE|LXDE]], which is fairly lightweight but still has a menu in the corner and a task bar. virt-manager can be found in the applications menu (bottom left corner) under the System Tools menu; the menu entry says "Virtual Machine Manager".

There's a button at the top left-hand side of virt-manager for creating a new virtual machine.
 
 
 
  
 
===Memory settings===

Your choice of memory setting is very important. Feel free to be more on the greedy side (3 gigabytes) if you're just starting your VM, doing your thing, and shutting it down when you're done (interactive use).

If you're planning on running all the time, you should use 1G at most, except by special request to the vm server administrator, Mark Jenkins <mark@parit.ca>.

Keep us in the loop as to how often you're using the VM service and what kind of RAM requirements you're hitting -- this will help us justify eventual fundraising for an even higher capacity machine.
===Network settings===

Join the skspprivbr bridge for the SkullSpace network, and the skspvoipubbr bridge if you have a VOI public IP address allocated to you [[Networking|on the networking page]].
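For guests managed outside virt-manager, a bridged NIC corresponds to a libvirt <interface> element roughly like the following sketch (the bridge names are the real ones from above; the MAC address is a made-up example):

```xml
<!-- Hypothetical fragment of a libvirt domain XML <devices> section. -->
<interface type='bridge'>
  <!-- skspprivbr for the SkullSpace LAN, skspvoipubbr for VOI public IPs -->
  <source bridge='skspprivbr'/>
  <mac address='52:54:00:12:34:56'/>
</interface>
```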
  
 
===Remote Access===

We recommend installing guest operating systems with remote access features that are either built in or installable, and enabling those features shortly after completing your install.

This will allow you to log in directly to your virtual machine.

If your guest operating system lacks a proper remote access facility, or if you're going to end up spending a lot of time doing console access for other reasons, you should look into the feature where a graphics card can be emulated as a VNC server that you can connect to directly. Also consider the remote access features built into the qemu-kvm serial port emulation, which can be used as a console on some OSs as well.
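In libvirt terms, the emulated-graphics-as-VNC-server and serial-console features mentioned above correspond to domain XML fragments roughly like this sketch (the port and listen address are example values):

```xml
<!-- Hypothetical <devices> fragments for console access to a guest. -->

<!-- Emulated graphics card exposed as a VNC server on the host. -->
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'/>

<!-- Emulated serial port, usable as a text console on guests that
     are configured to put a getty on ttyS0 (or equivalent). -->
<serial type='pty'>
  <target port='0'/>
</serial>
```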
  
===virtio===

qemu-kvm emulates traditional PC hardware, but to improve performance it also supports the [http://wiki.libvirt.org/page/Virtio virtio] standard for paravirtualized devices. If you're running a Linux or Windows based guest, we recommend installing the virtio network and disk drivers and selecting these options for network and disk in the virt-manager hardware manager so that we can all have better performance.
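As a sketch, selecting virtio in virt-manager amounts to domain XML like the following (the file path and bridge name are example values):

```xml
<!-- Hypothetical <devices> fragments using virtio drivers. -->

<!-- virtio block device: shows up as /dev/vda in a Linux guest. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/myguest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- virtio NIC on the SkullSpace LAN bridge. -->
<interface type='bridge'>
  <source bridge='skspprivbr'/>
  <model type='virtio'/>
</interface>
```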
 
===Always running VMs===

VMs created in virt-manager will not come up on system start-up by default; there's a checkbox you can enable if your VM needs to come up with the host. Please keep the vmsrv administrator (Mark Jenkins <mark@parit.ca>) in the loop as to which VMs you intend to keep up all the time.
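The autostart checkbox maps onto virsh commands; as a sketch ("myguest" is a placeholder VM name):

```shell
# Hypothetical virsh equivalents of virt-manager's autostart checkbox.
# "myguest" is a placeholder VM name. Shown rather than executed, since
# they need a running libvirt daemon.
cmd_autostart="virsh -c qemu:///system autostart myguest"
cmd_start="virsh -c qemu:///system start myguest"
echo "$cmd_autostart"
echo "$cmd_start"
```

Running virsh over SSH like this is also a quick way to start and stop a VM without opening virt-manager.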
  
===Courtesy===

If your virtual machine is for experimental/casual/interactive use and does not need to be on 24/7, please take care to turn it off when you're done. If you notice that allocated RAM is running short, let the server administrator know -- it's rude to just shut off someone else's virtual machine, since you can't tell just from looking whether it's being used or not, especially given the use of remote access.

==Services offered to members hosted on vmsrv==

The following services offered to members are hosted on vmsrv:

* [[MUMD]] (a group of Linux containers with shared LDAP login and a large install base of interactive software)

Coming soon: Skullhost, a shared web hosting service. (Not everyone needs to run their own dedicated web server!)
  
 
==Capital Campaign==

The vmsrv project is raising money for upgrades. Project goals, in order of priority, are:

* [[wikipedia:Intelligent_Platform_Management_Interface|IPMI]] card and remote serial project
* Upgrade to a new combination of motherboard/CPU/RAM (distant goal)
==Administrators==

* Mark Jenkins <mark@parit.ca>

==Thanks==
To Kenny for our current 2nd generation equipment, Stef for the first generation equipment, the members of SkullSpace for funding the RAM upgrades to the first and second generation servers, and Alex for getting the project started and providing an uninterruptible power supply (UPS).
 
[[Category:Projects]]
