Fri, 16 Feb 2007
First, the architectural differences: KVM is a full virtualization
system - it requires a fairly new CPU with virtualization support
(Intel VT or AMD-V). With this support, it allows the computer
to be fully virtualized (yes, I even tried to install The Other OS
Which Should Not Be Named under KVM, and it worked). On the other hand,
lguest is a pretty minimalistic paravirtualization
system, which requires slight modifications of both the host and guest
kernels - so it is Linux-under-Linux only.
KVM virtualizes the CPU, but to run a complete guest OS, the whole
computer needs to be virtualized. KVM uses a patched version of Qemu,
which emulates a PC with an NE2k-compatible NIC, a Cirrus Logic VGA card,
an IDE drive (with a plain file in the host OS as a backing store), etc.
Qemu itself can work as a full PC (and PowerPC and Sparc too) emulator,
but with KVM, it just provides the necessary virtual components to
the already-virtualized CPU. Qemu supports many disk image
formats (including Xen, User Mode Linux, VMDK, and others).
It also has a nice feature: when started with the appropriate
option, it connects its terminal to the virtual serial port of the
guest system, providing a seamless serial console!
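For concreteness, a KVM+Qemu session of that era might look roughly like this. This is only a sketch - the binary may be installed as qemu, kvm, or qemu-kvm depending on packaging, and the exact options should be checked against your version's documentation:

```shell
# Create a plain-file disk image for the guest (qemu-img ships with Qemu).
qemu-img create hda.img 4G

# Boot the guest; -nographic hooks the terminal to the guest's
# virtual serial port, giving the seamless serial console mentioned above.
qemu -hda hda.img -cdrom install.iso -boot d -m 256 -nographic
```

The same disk image can later be converted between formats with qemu-img, which is how Qemu interoperates with images from other virtualization systems.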
As for architecture, KVM+Qemu runs on IA32 and AMD64, while the
guest system can be 32-bit or 64-bit. The guest system is single-CPU only.
Networking can be done using a shared file (so even an ordinary user
can run his own set of virtual machines, and connect them together),
or, with a TUN/TAP interface, it can communicate with the host OS as well.
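The two networking modes can be sketched as follows (option names as in the Qemu of that period; treat them as an assumption and verify against your version):

```shell
# Unprivileged: user-mode networking, no root needed;
# the guest gets NAT-ed access through the host.
qemu -hda disk.img -net nic -net user

# Privileged: attach the guest NIC to a TUN/TAP interface
# (needs root or a pre-configured tap0), so the guest can
# talk to the host and, via routing or bridging, beyond it.
qemu -hda disk.img -net nic -net tap,ifname=tap0
```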
lguest is still proof-of-concept code, but it has some nice
features: it works on older CPUs as well (AMD64 support is being
developed, but for now it is 32-bit only). It can use memory-to-memory
communication between host and guest (so it does not have to emulate
the NE2k card, for example). Subjectively,
lguest felt faster and more
responsive than KVM. Here are its drawbacks:
its virtual disk is a raw file only, and it does not even support
partitioning, so existing Qemu images did not work. It does not "boot"
a block device per se; instead it starts an external kernel (in the form
of a vmlinux file), which then mounts its root.
I will measure the performance penalty of both virtualization systems and post the results here later.