2007-02-22


The February 2007 meeting was presented by Dan Shearer (http://shearer.org) on Virtualisation


Virtualisation – Why Use It?

Dan Shearer http://shearer.org

Introduction

Going back a bit, no virtualisation talk was any good without a demo: "Oooh! Ahh! It's a computer inside a computer!" Sadly those days are gone and smoke and mirrors won't work any more (download Xenoppix, see the links at the end, if you want a quick DIY intro). We can't get away from virtualisation now, not just because of marketing and the Linux distros, but because even low-end hardware will come with virtualisation support within a year or so, thanks to the Intel VT and AMD SVM architectures.

This evening I will concentrate on why people should use virtualisation, and why people generally think they use it. And from now on I'm going to use 'v12n' (v, followed by 12 letters, then n) instead of 'virtualisation'.

Free Software and Virtualisation

Linux is driving v12n. Nearly all modern developments in v12n -- no matter what OS or hardware manufacturer they are associated with -- are implemented by Linux, require Linux to work at all, originally came from Linux, are principally run and maintained on Linux, or are mostly used with Linux.

There are maybe twenty main solutions, and at least fifty functional ones. Many of these are to do with virtualising the x86 architecture and its variants (KVM, Xen, ...), but many are not (Hercules, PearPC, SimH, MAME, ...) and some cover multiple unrelated architectures (QEMU, OpenVZ, ...).

At a quick glance on the screen, many of these systems look the same. Especially if it is Linux both underneath and in the virtual machines. So it helps to see where a particular v12n solution fits within the scope of what is possible.

Scope of Virtualisation

Here are the limits of v12n, in all directions. Any given solution implements features somewhere within these limits.

Full machine, fully faithful

A software implementation of hardware: peripherals, busses, chipsets. It is possible to do this to such a degree of fidelity that all system software and firmware can run without being aware it is not on real hardware. This implies that the underlying hardware need not be the same as the target architecture. It used to be that this implied slow execution speed, but this need not be a practical consideration in 2007.
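
For example (a minimal sketch -- the disk image name is hypothetical and the exact flags depend on your QEMU version), QEMU can boot a PowerPC machine on an ordinary x86 box:

  qemu-system-ppc -m 256 -hda debian-ppc.img

The firmware and OS inside see emulated PowerPC hardware and need know nothing about the x86 machine underneath.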

Many architectures

Support for many unrelated target architectures and host architectures, what I call 'any-on-any', each target architecture implemented as above. Supporting many host architectures requires that the target architecture is decoupled from the host architecture (unlike VMware or PearPC, for example) but does not necessarily imply that multiple target architectures are supported (eg Hercules.)
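
A rough way to see 'any-on-any' on a machine with QEMU installed (paths and the exact list of targets vary by distribution and version, so treat this as indicative):

  ls /usr/bin/qemu-system-*
  # typically qemu-system-ppc, qemu-system-sparc, qemu-system-mips,
  # qemu-system-x86_64 and friends -- each a separate full-machine target,
  # while QEMU itself builds and runs on several different host architectures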

Networks of the above

A faithful implementation of hardware in software includes networking. Many v12n solutions include a full network environment as well: switches, hubs, routers and different kinds of network media (sometimes including wireless!). You can achieve the equivalent with Linux -- think of the tunctl and brctl utilities -- but these networks are part of the host system rather than the target systems, so, for example, freezing a running target system does not freeze the packets that were in flight at that moment.
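
A rough sketch of that host-side approach (the interface and image names are hypothetical, and the commands need root):

  brctl addbr br0              # create a software bridge in the host kernel
  brctl addif br0 eth0         # attach the real NIC to it
  tunctl -t tap0 -u dan        # create a tap device an ordinary user may open
  brctl addif br0 tap0         # attach the tap to the same bridge
  ifconfig br0 up
  ifconfig tap0 up
  qemu -hda guest.img -net nic -net tap,ifname=tap0

The bridge and tap live in the host kernel, which is exactly the point above: pause or snapshot the guest and br0 carries on switching packets regardless.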

Time dimension

The time dimension is very important and not widely supported. Linux itself (when used as a hypervisor layer as in KVM) is only just beginning to support the concept of independent time domains in VMs. It is possible to stretch, compress, halt and reverse time in VMs.
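
As a small taste of what halting and rewinding time can mean in practice, here is a sketch using the QEMU monitor (the snapshot name is arbitrary, and savevm needs a qcow2 disk image):

  (qemu) stop                  # the guest's execution and clock freeze
  (qemu) savevm before-test    # snapshot RAM, device and disk state
  (qemu) cont                  # time resumes inside the guest
  (qemu) loadvm before-test    # 'rewind' the guest to the earlier moment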

Four Problems to Solve

V12n has to address four problems:

  • Efficiency for hardware and humans -- cheaper, faster, more manageable, better use of resources, etc.
  • Hardware isolation (point in time) -- make the target VM hardware not dependent on a particular host's hardware. Eg Xen has the xm migrate command for moving a running VM instance from one host to another (see the sketch after this list).
  • Hardware isolation (over time) -- make the target VM hardware independent of the nature of the host hardware. The Xen example above does not go this far: migration requires that every host's hardware matches the instruction set of the hardware in the VMs.
  • Control -- a virtualisation solution is of value to the extent that it provides control to users over the target systems. Control over resource usage, over time velocity, over peripherals installed, over the virtual networking environment etc. Everything can be controlled in a good v12n system, and a brilliant v12n system exposes that control to users.
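
For instance, a sketch of the Xen migration mentioned above (the domain and host names are hypothetical, and live migration assumes shared storage and compatible CPUs on both hosts):

  xm migrate --live webserver host2    # move the running domain 'webserver' to host2
  xm list                              # 'webserver' should no longer appear on this host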

Software Longevity

It is common for software applications to have a 15-year working life today, and that is only getting longer. Hardware life is fixed at around three years, so an application lives through roughly five hardware generations, and the 1st and 5th generations will be very incompatible. So there will be software infrastructure disruption over the lifetime of that application unless v12n is used with hardware isolation over time.

This is one of today's biggest computer science problems in the area of business applications. Over time, lots of time, the corollaries are clear: you'll end up virtualising the virtual hardware, and mass-deployed applications are going to get longer and longer-lived. It is no longer just COBOL applications in banks that outlive their programmers.

Nonexistent Problems

There are some commonly-cited worries about v12n that aren't problems at all if you consider them over a long timescale, such as 15 years. These are:

  • Speed. Increases in hardware speed mean that unless you've got a problem growing very quickly (eg some kinds of online businesses) and are already using close to 100% of hardware resources, the inefficiencies of v12n are insignificant.
  • Cost. Over time the costs of v12n vanish compared to the costs of continually migrating platforms to keep applications running, especially if you factor in the risks and knock-on effects of migration.
  • Stackability. Many observe that stacking v12n solutions usually involves an O(n^2) or worse degradation in performance. Better software design has reduced this greatly, hardware support for virtualisation has reduced it further, and hardware speed increases help again.

Xen, chroot (partitioning)

They remind me of the closing days of steam: getting more efficient and functional, but not solving the fundamental problems people need to solve. (Not an exact analogy; see the Red Devil and 21st Century UK Steam.)

Solaris containers, VMware, Xen and others like them are getting more efficient, but they don't have a way of addressing the software longevity issue. V12n solutions that deliver server consolidation with a bit of clever VM control can save money and reduce risk, but long-term it is a losing strategy.

QEMU (full simulation)

QEMU reminds me of oil. It is just everywhere, including in the plastic you use to make green energy generating technology.

QEMU addresses the software longevity issue. It is also incredibly flexible. You'll find it in Xen (without it HVM mode wouldn't work, and that is Xen's future), in VirtualBox, in KVM (it is nearly all of KVM!), in ScratchBox and other projects.
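
The ScratchBox case, for example, uses QEMU's user-mode emulation, which runs a single foreign-architecture binary directly on the host. A minimal sketch, assuming an ARM cross-compiler is installed (the toolchain name varies):

  arm-linux-gcc -static -o hello-arm hello.c   # cross-compile a test program
  qemu-arm ./hello-arm                         # run the ARM binary on an x86 host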



Updated Xenoppix Links

5.1 CD Version

5.1 DVD Version

LUG Discussion

Dan also provided some excellent insight into the current state of Scottish LUGs and his experience of similar groups in Australia. He suggested that the various Linux groups within Scotland should start to communicate, possibly generating more interest and eventually a critical mass for larger events.