Vmsrv

Philosophy

The Skullspace virtual machine service (vmsrv) is offered to members as a means to share the benefits of best-available hardware.

"Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total."

We focus our virtual machine service on two styles of computing:

  • Interactive computing -- temporary bursts of high resource use (IO/CPU/memory) by a single user for the purpose of "figuring stuff out", "getting stuff done", "hacking", etc., with the ethic of ensuring resources are freed when not in use. "Always yield to the Hands-On Imperative!"
  • General service computing -- always up and running services with reasonable IO, CPU, and memory use that doesn't impair the above. See our section on intense resource usage.

System

  • AMD Phenom II X6 1055T, which has 6 cores, 512 KB of L2 cache per core, a shared 6 MB L3 cache, and AMD's virtualization extensions
  • Asus M5A88-V EVO motherboard
  • 4x4 GB (16 GB total) of DDR3 RAM in unganged mode, 1333.33 MT/s configuration
  • 2x1 TB SATA hard drives in RAID 1 configuration, LVM block layer
  • Debian GNU/Linux 9 amd64 host operating system
  • 1 Gbit internal NIC on the SkullSpace LAN (on host Linux bridge skspprivbr), 172.30.6.40
  • 100 Mbit PCI NIC on the VOI public IP switch (on host Linux bridge skspvoipubbr), 206.220.196.57
  • Power backed by a UPS
  • Two types of virtualization:
    • Unprivileged Linux Containers (LXC) (OS-level virtualization), which offer some performance advantages over full-machine virtualization for users running Linux guests and reduce RAM usage. Our recommended choice if you need to run a supported GNU/Linux distribution and your use case would work in an LXC container
    • qemu-kvm managed by libvirt (full machine virtualization), for everything else


Ask for Help! Free migrations available

Don't be afraid to ask for help: email Mark Jenkins <mark@parit.ca>, or catch me in person on Tuesdays, at hackathons (third Saturdays), at special events, and by appointment.

Some free (but not unlimited) migration consulting and assistance is also available.

Linux Containers (LXC)

If you want to run an x86_64 or x86 Linux-based guest, you should consider the benefits of running it as an unprivileged Linux Container (LXC).

The main vmsrv kernel directly runs your processes, all under your own user account (starting with /sbin/init!), in an independent process space and gives you your own network stack (interfaces, routing tables, iptables) to work with. You have root inside the container even though it is not root on the host system (achieved with user ID mapping). There are performance upsides to using the host OS kernel directly, and this reduces RAM usage overall.

Ask Mark Jenkins <mark@parit.ca> to set your account up for this.
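
Once that's set up, day-to-day container use happens with the standard LXC command line tools. A rough sketch, assuming the container is named mycontainer (the name is a placeholder):

  lxc-ls --fancy               # list your containers and their state
  lxc-start -n mycontainer     # boot the container
  lxc-attach -n mycontainer    # get a shell inside it (you are root in there)
  lxc-stop -n mycontainer      # shut it down when you're done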

qemu-kvm with libvirt

Users with accounts on the vmsrv machine are able to run qemu-kvm based virtual machines that are managed by libvirt. We use virt-manager as a libvirt front-end.

Because a fully featured x86/x86_64 machine is emulated and virtualized, a large variety of guest OSs are supported.

virt-manager exposes a large number of features of libvirt and qemu-kvm -- as a GUI app this makes it largely self-documenting. Experiment!
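
If you prefer a command line to the GUI, the same libvirt instance can also be driven with the virsh tool; a minimal sketch (myvm is a placeholder for your VM's name):

  virsh list --all      # show the VMs you have defined and whether they're running
  virsh dominfo myvm    # CPU, memory and state details for one VM
  virsh start myvm      # boot a defined VM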

We welcome improvements to this documentation as well.

Accounts

To get an account, contact Mark Jenkins <mark@parit.ca>

Accounts are for Skullspace members only.

How to login and start virt-manager

The host vm machine is 172.30.6.40 on the SkullSpace LAN. To log in from the Skullspace network (example commands appear after the lists below):

  • An SSH client (port 22); for graphics, use -X or port forward a VNC session


From outside the space:

  • SSH to vmsrv.skullspace.ca (208.81.6.230, port 22)
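
For example (your username is a placeholder, and the VNC forward assumes something is already listening on display :1, i.e. TCP port 5901):

  ssh -X yourusername@172.30.6.40                               # from inside the space, with X11 forwarding
  ssh -L 5901:localhost:5901 yourusername@vmsrv.skullspace.ca   # from outside, tunnelling a VNC session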

The default desktop environment is LXDE, which is fairly lightweight but still at least has a menu in the corner and a task bar. virt-manager can be found in the applications menu (bottom left corner) under the System Tools menu; the menu entry says "Virtual Machine Manager".

There's a button at the top left-hand side of virt-manager for creating a new virtual machine.

Memory settings

Your choice of memory setting is very important. Feel free to be more on the greedy side (3 gigabytes) if you're just starting your vm, doing your thing, and shutting it down when you're done (interactive use).

If you're planning on running all the time, then you should use 1G at most, except by special request to the vm server administrator Mark Jenkins <mark@parit.ca>.
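
You can change an existing VM's allocation in virt-manager, or, as a sketch with the virsh CLI (assumes a shut-off VM named myvm):

  virsh setmaxmem myvm 1G --config   # cap the VM at 1 GiB
  virsh setmem myvm 1G --config      # set the configured allocation to match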

Keep us in the loop as to how often you're using the VM service and what kind of RAM requirements you're hitting -- this will help us justify an eventual upgrade to an even higher capacity machine.

Network settings

Join the skspprivbr bridge for the Skullspace network, and the skspvoipubbr bridge if you have a VOI public IP address allocated to you on the networking page.
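
You normally pick the bridge in virt-manager's network selection when creating the VM; as a sketch, an extra interface can also be attached from the command line (myvm is a placeholder):

  virsh attach-interface myvm bridge skspprivbr --model virtio --config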

Remote Access

We recommend installing guest operating systems with remote access features that are either built in or installable, and enabling those features shortly after completing your install.

This will allow you to log in to your virtual machine directly.

If your guest operating system lacks a proper remote access facility, or if you're going to end up spending a lot of time doing console access for other reasons, you should look into the feature where the emulated graphics card is exposed as a VNC server you can connect to directly, and also consider the qemu-kvm serial port emulation, which can be used as a console on some OSs as well.
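
A sketch of both from a shell on vmsrv (myvm is a placeholder; the first assumes the VM was given VNC graphics, the second assumes the guest runs a getty on its serial port):

  virsh vncdisplay myvm   # prints the VNC display, e.g. :1 means TCP port 5901
  virsh console myvm      # attach to the emulated serial console (detach with Ctrl+])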

virtio

qemu-kvm emulates traditional PC hardware, but to improve performance it also supports the virtio standard for paravirtualized devices. If you're running a Linux or Windows based guest, we recommend installing the virtio network and disk drivers and selecting these options for network and disk in the virt-manager hardware manager so that we can all have better performance.
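
As a sketch, a VM created from the command line can request virtio disk and network from the start (the name, sizes and ISO path are placeholders):

  virt-install --name myvm --memory 1024 --vcpus 2 \
    --disk size=10,bus=virtio \
    --network bridge=skspprivbr,model=virtio \
    --cdrom /path/to/installer.iso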

Always running VMs

VMs created in virt-manager will not, by default, come up on system start-up. There's a checkbox you can check to ensure your VM does come up if required. Please keep the vmsrv administrator (Mark Jenkins <mark@parit.ca>) in the loop as to which VMs you intend to keep up all the time.
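
The same setting can be toggled from the command line; a sketch with a placeholder VM name:

  virsh autostart myvm             # start this VM whenever the host boots
  virsh autostart --disable myvm   # back to manual start-up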

Courtesy

If your virtual machine is for experimental/casual/interactive use and does not need to be on 24/7, please take care to turn it off when you're done. If you notice that allocated RAM is running short, let the server administrator know -- it's rude to just shut off someone else's virtual machine -- you can't tell just from looking if it's being used or not, especially given the use of remote access.
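
A quick sketch of both courtesies from a shell on the host (myvm is a placeholder):

  virsh shutdown myvm   # ask your own guest to power off cleanly when you're done
  free -h               # check how much host RAM is actually free before raising the alarm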

Services offered to members hosted on vmsrv

The following services offered to members are hosted on vmsrv:

  • shell.skull.space -- Newer shell account service
  • Skullhost, a shared web hosting service. (not everyone needs to run their own dedicated web server!)
  • outbound commercial vpn
  • whonix.skull.space, conveniently access a Whonix gateway via ssh
  • mail.skull.space, an inbound mail relay to assist you in running a home email server. (please don't use for state department business)
    • (currently used to inbound relay @markjenkins.ca)
  • MUMD -- Our old graphical shell account service, to be retired

Intense resource usage

As described in our philosophy section, our priority for the vm server is to support members' hacking and not ongoing, high volume "serious business". The activities of hackers are generally high intensity bursts that are monitored and terminated upon completion, or ongoing low resource services that have minimal impact.

Please respect our suggested memory limits for qemu-kvm/libvirt dedicated VMs. For temporary higher memory use that exceeds these guidelines, we would prefer that you run your processes directly on the host operating system, under your own linux container, or under one of our linux container hosted services (MUMD, Skullhost), as memory is effectively allocated (and swapped out) by the host OS kernel for these, whereas dedicated VMs hog whatever memory they're set to use.

You can also get better access to the CPU by running processes on the host OS, your own linux container, or one of our linux container hosted services (MUMD, Skullhost) -- in fact, you're welcome to use all 6 cores. But you should also be "nice" and use the nice command on your intensive processes (see the example after the list):

  • "nice -n 1" if your intensive processes is highly interactive (such as raster editor running a filter) and could use your near immediate feedback
  • "nice -n 2" if your're looking for your process to finish ASAP, but its the kind of thing where you sit back or take a break while it runs, e.g. http://xkcd.com/303/
  • "nice -n 15" if it's the kind of thing that runs so long you're end up working on other things until it's done

As an exception to our focus on "short run intensive, long run unintensive", we do permit our users to operate longer running processes that are only CPU intensive (not memory or disk intensive) as long as they're run on the host OS or linux containers, as the kernel can effectively schedule these to be out of the way of everything else with minimal task switching costs. Thanks to modern CPU design, these kinds of processes do raise our electrical bills, so we ask that the number of cores be limited if run times are expected to be longer than one day. Our nice level and number of cores expectations are:

  • "nice -n 16" and limited to 6 cores if run time less than 2 days
  • "nice -n 17" and limited to 3 cores if run time less than 5 days
  • "nice -n 18" and limited to 1 core if run time expected is less than 30 days
  • "nice -n 19" and limited to 1 core if run time expected to exceed 30 days

Many intensive multi-core programs come with options to control the number of cores in use. If this isn't available, you can use the taskset command, e.g.

  • "taskset -c 0 nice -n 19 intensive_monster.py" runs on CPU 0 with nice 19
  • "tasket -c 0,1,2 nice -n 17" runs on CPUs 0, 1, and 2 with nice 17


Administrators

  • Mark Jenkins <mark@parit.ca>
  • Alex Weber <alexwebr@gmail.com> (I'm new still)

Equipment Donation Thanks

  • Stef for the first motherboard, case, power supply and hard drives (1 of these drives still in use)
  • Kenny for our current (2nd) motherboard and paired power supply (which died, rest in peace)
  • Whoever abandoned a rack mountable case at Skullspace (came from a closed business I think)
  • Mark J and Thor for funding our first replacement hard drives
  • The members of Skullspace for RAM upgrades on our first and second motherboards and current replacement power supply
  • Alex for getting the project started and providing an uninterruptible power supply (UPS).

SSH host keys

Signed by Mark Jenkins (http://markjenkins.ca/gpg/)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

MD5:59:ed:95:bc:b8:2c:5c:2e:12:be:2b:01:7d:ba:1a:f1 (RSA)
SHA256:srpC2U3qbLdTOwTv+VH6XjJ/QerY07BEG4mZsLbLntY (RSA)
MD5:af:e7:cc:2d:84:d9:c2:68:fd:f2:86:0e:c8:7a:a5:13 (ECDSA)
SHA256:voapDaz4aJlGMGgPa8kQNKbs2bmWEAoDcwugwL357Dc (ECDSA)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJc4/jhAAoJEKj4ZJOqTbH7hdQIAJ3akVuGxuFVNtHpoLuLA+bE
ZHnM+noI5+oqBAGYdaAj66hUrLPSvWb+LwVT82qZimOqlrekfXrUsxZc9lLQaI0s
4BLeY2q6tRngY679FfYg416fX/iwWoo56DOh63vEw+TAbZepX9b5m88r7w/jkb2R
oyzx82DwdWKWqghB1dPFUJKOXQRHoZPkqFug/rhXBLLezmPb7FyZnONaLAVm50B+
PLyY5AuN0l9E3NlA1tcZ0tEuJAG+GXJywzaphHjER988Zo1yzsGr1wMWXSGwqcJV
voyWiPF+Yn4UZDSLzcRGs+LrM5y1BPSRI/gPEfJ+COARX2SP5h04/3daNWaWwd8=
=r1fO
-----END PGP SIGNATURE-----
