KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM is included in mainline Linux, as of 2.6.20.
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
And update grub:
update-grub
Now, to reduce data copies and bus traffic when you're using LVM partitions, disable the cache and use the virtio drivers, which are the fastest:
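For example, with libvirt this is set in the disk section of the VM's XML. This is only a sketch; the LVM device path is an example and should match your own volume:

<disk type='block' device='disk'>
  <!-- cache='none' disables the host page cache for this disk -->
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/vm-disk'/>
  <!-- bus='virtio' uses the paravirtualized disk driver -->
  <target dev='vda' bus='virtio'/>
</disk>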
Then, we will enable KSM. Kernel Samepage Merging (KSM) is a feature of the Linux kernel introduced in the 2.6.32 kernel. KSM allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. For KVM, the KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.
To enable it, add this line to /etc/rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# KSM
echo 1 > /sys/kernel/mm/ksm/run

exit 0
You can check the status of KSM at any time with:
for i in /sys/kernel/mm/ksm/*; do echo -n "$i: "; cat $i; done
In addition, we will reduce the swappiness to avoid having the host swap too much. Add these lines to sysctl:
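A sketch of the setting, assuming /etc/sysctl.conf is used; the value 0 tells the kernel to avoid swapping as much as possible. Apply it with "sysctl -p":

# Keep the KVM host from swapping guest memory (example value)
vm.swappiness = 0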
The installation of kvm created a new system group named kvm in /etc/group. You need to add the user accounts that will run kvm to this group (replace username with the user account name to add):
adduser username kvm
For those who would like to use libvirt (recommended), add your user to this group too:
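For example (the group may be named libvirt or libvirtd depending on your Debian release):

adduser username libvirt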
Create a virtual disk image (10 gigabytes in the example, but it is a sparse file and will only take as much space as is actually used, which is 0 at first, as can be seen with the du command: du vdisk.qcow, while ls -l vdisk.qcow shows the apparent file size):
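For example, with qemu-img (the file name and size follow the text above):

qemu-img create -f qcow2 vdisk.qcow 10G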
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
auto eth0
iface eth0 inet manual
auto eth0.110
iface eth0.110 inet manual
vlan_raw_device eth0
# The bridged interface
auto vmbr0
iface vmbr0 inet static
address 192.168.100.1
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255
gateway 192.168.100.254
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.100.254
dns-search deimos.fr
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
auto vmbr0.110
iface vmbr0.110 inet static
address 192.168.110.1
netmask 255.255.255.0
bridge_ports eth0.190
bridge_stp off
bridge_maxwait 0
bridge_fd 0
You may need to configure iptables, for example if you're on a dedicated box where the provider doesn't allow bridge configuration. Here is a working iptables configuration to permit incoming connections to NATed guests:
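A sketch of such rules: the guest subnet matches the bridge configured above, while the public interface, forwarded port, and guest IP are only examples to adapt to your setup:

# Allow traffic to and from the NATed guests
iptables -A FORWARD -d 192.168.100.0/24 -o vmbr0 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 192.168.100.0/24 -i vmbr0 -j ACCEPT
# Masquerade outgoing guest traffic on the public interface
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
# Forward an incoming port on the host to a guest (example: SSH to 192.168.100.10)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.100.10:22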
To make a clean installation of a KVM guest, you can create a script for each VM you want to create. Here is an example script that launches a KVM using VNC for display instead of an X11 display:
#!/bin/sh
clear
# Var Definition
# Name of your KVM
HOSTNAME="Client 1"
# Path to your virtual hard drive image
HDD="-hda /mnt/vms/lenny/disk0.qcow2"
# Path to your CD-Rom
# note: you can use an iso
CDROM="-cdrom /dev/cdrom"
# Boot Sequence
# note: "c": HDD, "d": CD-Rom, "a": Floppy
BOOT="-boot c"
# TAP Device Creation
# note: don't forget to change "ifname" if you are using multiple KVMs!
# This is for "bridged mode". If you want to use "user mode", remove the "TAP" variable
TAP="-net tap,vlan=0,ifname=tap0,script=/etc/kvm/kvm-ifup"
# Virtual Network Card parameters
# note: default model is a "ne2k_pci" (rtl8029) and works on Windows XP and Vista
# "rtl8139" has better performance and is detected as a 100Mb adapter
# "pcnet" or "ai82551" are better for BSDs
NIC="-net nic,model=rtl8139,vlan=0"
# Amount of memory (in megabytes)
MEM="-m 384"
# Miscellaneous options
# note: "-k fr": if using VNC, it corrects the keyboard problem
# "-usbdevice tablet": corrects the problem of mouse desynchronisation
# "-no-acpi": if you are installing a Windows based guest or a BSD
MISC="-k fr -localtime -no-acpi -usbdevice tablet"
# VNC Mode
# note: "-vnc <ip>:<display>": allows clients to connect to the specified display only on (not "from") the specified IP address
VNC="-vnc 192.168.0.80:1"

# Starting the KVM
# Cheapy Design by Hostin!! ;-p
echo -e "\n\n################################"
echo "##### Starting KVM with... #####"
echo "################################"
echo -e "\n Hard Disk: \"$HDD\" "
echo " CD-Rom: \"$CDROM\" "
echo " Boot Sequence: \"$BOOT\" "
echo " TAP Device: \"$TAP\" "
echo " Virtual Network Card: \"$NIC\" "
echo " Memory size: \"$MEM\" "
echo " Miscellaneous: \"$MISC\" "
echo " VNC Mode: \"$VNC\" "
echo -e "\n################################"
echo -e "\n\n######################################################################"
echo " Kernel-based Virtual Machine: $HOSTNAME - Running"
echo "######################################################################"
echo -e "\n\nLoading kvm-intel kernel module..."
modprobe kvm-intel
exec kvm $HDD $CDROM $BOOT $TAP $NIC $MEM $MISC $VNC
<snapshot_name>: the name of the snapshot (not a file name, but the name as it will be displayed by virsh)
<snapshot_description>: a description of that snapshot
vda: the name of the VM disk device to back up
<snapshot_name>: the full path of the snapshot file (where it should be stored)
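These parameters belong to the external snapshot command. A sketch of it with virsh, where the VM name and file path are placeholders to fill in:

virsh snapshot-create-as <vm_name> <snapshot_name> "<snapshot_description>" \
  --disk-only --diskspec vda,file=<snapshot_file_path> --atomic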
Once that command has been launched, the base VM disk (not the snapshot) becomes read-only and the snapshot is read/write. You can copy the base for the backup if you want.
Blockcommit is my favorite way to create backups. The problem is that on Debian 7 it is not available, since it requires virsh version 0.10.2 or higher, which for the moment is only in Debian unstable. Anyway, if you have that version, here is how to do it.
Now we’ve got something like that:
[base(1)]--->[snapshot(2)]
If I now want to merge the snapshot into the base and end up with only one disk file:
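A sketch of the corresponding command, assuming the disk is vda as above; --active commits the running snapshot into the base and --pivot switches the VM back to the base image:

virsh blockcommit <vmname> vda --active --verbose --pivot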
> grep -e processor -e core /proc/cpuinfo | sed 's/processor/\nprocessor/'
processor : 0
core id   : 0
cpu cores : 4
processor : 1
core id   : 1
cpu cores : 4
processor : 2
core id   : 2
cpu cores : 4
processor : 3
core id   : 3
cpu cores : 4
processor : 4
core id   : 0
cpu cores : 4
processor : 5
core id   : 1
cpu cores : 4
processor : 6
core id   : 2
cpu cores : 4
processor : 7
core id   : 3
cpu cores : 4
You can see there are 8 logical processors (called "processor"). In fact, this CPU has 4 cores with 2 threads each. That's why there are 4 core ids and 8 detected processors.
So here is the list of core ids with their attached processors:
core id 0: processors 0 and 4
core id 1: processors 1 and 5
core id 2: processors 2 and 6
core id 3: processors 3 and 7
Now, if I want to give a VM a dedicated core together with its additional thread, I would rather create 2 virtual CPUs (vcpus) and bind the right processors to them. So first, look at the current configuration:
> virsh vcpuinfo vmname
VCPU:           0
CPU:            6
State:          running
CPU time: 7,5s
CPU Affinity: yyyyyyyy
You can see there is only 1 vcpu, and all the cores of the CPU may be used (count the number of 'y' in CPU Affinity, here 8). If we want the best performance, we need to add as many vcpus as the number of cores we want to give the VM; you will see the advantage later. So let's add some vcpus:
virsh setvcpus <vmname> <number_of_vcpus>
So here, for example, we set 4 vcpus. That means the VM will see 4 cores! Now, we're going to bind host processors to these vcpus. Why? Because if an application doesn't know how to multithread, the guest can still spread its processes across the dedicated cores, and if an application knows how to use multiple cores, it will use them too. So in any case, you will get good performance :-).
So now I have added 4 virtual CPUs and bound them to 2 cores (2 and 3) together with their associated threads (6 and 7), as sketched below.
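A sketch of the pinning commands with virsh vcpupin (the vcpu number comes first, then the host processor; the mapping follows the core/thread pairs listed above):

virsh vcpupin <vmname> 0 2
virsh vcpupin <vmname> 1 3
virsh vcpupin <vmname> 2 6
virsh vcpupin <vmname> 3 7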
On Debian 6, this is applied on the fly but is not stored permanently in the configuration. That's why you'll need to add those parameters (cpuset) to the XML of your VM:
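A sketch of the corresponding XML element, assuming the same 4 vcpus pinned to processors 2, 3, 6 and 7 as above:

<!-- 4 vcpus, restricted to host processors 2, 3, 6 and 7 -->
<vcpu cpuset='2-3,6-7'>4</vcpu>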
You may have a couple of VMs based on disk images like qcow2 and may want to convert them into LVM partitions. Fortunately, there is a solution! First, convert your qcow2 image into raw format:
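A sketch of the conversion with qemu-img; the file, volume group, and volume names are examples, and the LVM volume must be at least as large as the virtual disk:

# Convert the qcow2 image to a raw image
qemu-img convert -f qcow2 -O raw disk0.qcow2 disk0.raw
# Create the target LVM volume and copy the raw image into it
lvcreate -L 10G -n vm-disk vg0
dd if=disk0.raw of=/dev/vg0/vm-disk bs=4M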
If you need to transfer an LVM-based VM from one server to another, there is an easy solution. You first need to stop the virtual machine to get consistent data, then you can transfer it:
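A sketch of the transfer with dd piped over SSH; the volume names and destination host are placeholders, and the target logical volume must already exist on the destination:

dd if=/dev/vg0/vm-disk bs=4M | ssh root@destination "dd of=/dev/vg0/vm-disk bs=4M"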
Do not forget to transfer the XML configuration file of the VM and adapt the LVM disk names if needed. Then run "virsh define" on the new XML file.
Graphically access VMs without Virt Manager
If you want to access your VMs without installing any manager, you can. First, make sure that you passed the -vnc option when you created your VM, or that you use this option when you launch it.
If if it’s not hte case and you’re using libvirt, please add it to your wished VM:
Now that this is done, you need to change the default VNC listening address in libvirt. By default, it listens on 127.0.0.1, which is the most secure choice. However, you may have a secured LAN and wish to open it to other hosts. So open qemu.conf and modify it to bind to your server's IP address:
vnc_listen="192.168.0.1"
If you also need secure VNC connections, activate TLS in the same config file.
If your desktop hosts several VMs, it can be useful to auto-suspend them when you restart your computer, for example. There is a service that makes this easy. Simply edit the libvirt-guests configuration file:
# URIs to check for running guests
# example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
URIS=qemu:///system

# action taken on host boot
# - start   all guests which were running on shutdown are started on boot
#           regardless on their autostart settings
# - ignore  libvirt-guests init script won't start any guest on boot, however,
#           guests marked as autostart will still be automatically started by
#           libvirtd
ON_BOOT=ignore

# Number of seconds to wait between each guest start. Set to 0 to allow
# parallel startup.
START_DELAY=0

# action taken on host shutdown
# - suspend   all running guests are suspended using virsh managedsave
# - shutdown  all running guests are asked to shutdown. Please be careful with
#             this settings since there is no way to distinguish between a
#             guest which is stuck or ignores shutdown requests and a guest
#             which just needs a long time to shutdown. When setting
#             ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a
#             value suitable for your guests.
ON_SHUTDOWN=suspend

# If set to non-zero, shutdown will suspend guests concurrently. Number of
# guests on shutdown at any time will not exceed number set in this variable.
PARALLEL_SHUTDOWN=3

# Number of seconds we're willing to wait for a guest to shut down. If parallel
# shutdown is enabled, this timeout applies as a timeout for shutting down all
# guests on a single URI defined in the variable URIS. If this is 0, then there
# is no time out (use with caution, as guests might not respond to a shutdown
# request). The default value is 300 seconds (5 minutes).
SHUTDOWN_TIMEOUT=600

# If non-zero, try to bypass the file system cache when saving and
# restoring guests, even though this may give slower operation for
# some file systems.
#BYPASS_CACHE=0
On completion and reboot, the VM will perpetually reboot. “Stop” the VM.
Start it up again, immediately open a VNC console, and select Safe Boot from the options screen.
When prompted whether you want to try and recover the boot block, say yes.
You should now have a Bourne terminal with your existing filesystem mounted on /a
Run /a/usr/bin/bash (my preferred shell)
export TERM=xterm
vi /a/boot/grub/menu.lst (editing the bootloader on your mounted filesystem), to add "kernel/unix" to the kernel options for the non-safe-mode boot. Ex:
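A hedged sketch of what the edited entry might look like on a Solaris 10 guest; the title, root device, and exact paths depend on your release and installation:

title Solaris 10
root (hd0,0,a)
kernel /platform/i86pc/multiboot kernel/unix
module /platform/i86pc/boot_archive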