Most organizations operate multiple platforms within their IT infrastructures. If anything, heterogeneous infrastructures
have been bolstered as vendor support has increased for virtualized server and workstation solutions. VMware currently dominates the virtualization marketplace with its nearly unchallenged line of ESX Server and Workstation products (not to mention the free VMware Player), but XenSource has recently stepped forward as a challenger. Its flagship virtualization product, Xen 3.0, has been heavily touted as the big alternative to VMware, particularly for Linux, and even comes integrated with commercial SuSE and Red Hat Enterprise solutions.
This article briefly examines the current state of affairs for multiple server deployment, with an emphasis on modern advancements in virtual servers and workstations. The most common configurations use three primary techniques: Partition, Emulate and Virtualize.
Multiple partition schemes
Sharing physical resources is easy and quite common, even without virtualization. Partitioning physical media into separate volumes that each house an operating system requires minimal technical skill, but a fair amount of experience with, and understanding of, file system layouts and formats. Any number of drives can house multiple operating systems that boot and run individually.
The upside to using multiple partitions is that setup is relatively straightforward, provided you have prior experience in deploying the operating systems in use. Also, this type of configuration makes full use of the hardware on which it runs, so no performance is sacrificed.
The downsides to this approach are that a) a partitioning and multiple-boot-loader scheme is unintuitive for inexperienced system integrators, and b) the different operating systems on a machine must run in complete isolation from one another. Run-time context cannot be saved, paused and restarted later, as in a virtual container, nor stacked as in an emulation layer. Instead, one host must be shut down completely and another initialized and booted, which translates into lost time and productivity.
Kernel emulation

Kernel emulation is an old-school method that involves overlaying a guest operating system atop an original host image. The guest operating system can then negotiate software applications' requests for underlying hardware or other system resources. Products such as Bochs, QEMU, PearPC and Virtual PC represent the current state of emulation technology, and all of them support shared use of a wide variety of common hardware components. Emulation amounts to reproducing raw hardware in software: the base operating system mediates all resource requests from an overlaid (and virtual) guest operating system.
The upside is that one operating system (Mac OS X, say) can host another (Windows) in such a way that compliant applications are unaware of the real-virtual distinction. By its very nature, emulation provides a way to execute programs built for one processor architecture on a different one (such as running x86/Windows applications on PPC/Mac OS X computers).
The downside to emulation is that not all hardware is fully supported and some components may not work very well or at all. Also, an emulation approach assumes the overhead of running one operating system inside another. Underpowered or under-resourced setups can experience dramatic performance penalties under heavy loads. Conventional wisdom therefore dictates using the most powerful processors possible and loading machines up with large amounts of RAM to achieve the best results when using emulated environments.
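At its core, an emulator runs a fetch-decode-dispatch loop: it reads each guest instruction and carries out its effect with host code, which is where the per-instruction overhead described above comes from. The sketch below illustrates the idea with an invented toy instruction set; real emulators such as Bochs or QEMU decode actual machine code for the guest architecture.

```python
# A minimal sketch of an emulator's fetch-decode-dispatch loop.
# The opcodes and the single-register machine are invented for illustration.

def emulate(program):
    """Run a list of (opcode, operand) 'guest' instructions on the host."""
    regs = {"acc": 0}   # the guest's register file, held in host memory
    pc = 0              # guest program counter
    while pc < len(program):
        opcode, operand = program[pc]     # fetch and decode
        if opcode == "LOAD":              # dispatch to a host-side handler
            regs["acc"] = operand
        elif opcode == "ADD":
            regs["acc"] += operand
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"illegal guest instruction: {opcode}")
        pc += 1
    return regs["acc"]

print(emulate([("LOAD", 40), ("ADD", 2), ("HALT", None)]))  # 42
```

Every guest instruction costs several host instructions in this loop, which is why emulation pays the performance penalty the paragraph above describes.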
Virtualization

Virtualization may be divided into two high-level categories: hardware and software solutions. On the hardware side, in-processor support provides acceleration to tightly integrated host software that takes advantage of AMD Virtualization (AMD-V) or Intel Virtualization Technology (VT). On the software side, a virtual machine monitor hosts one or more guest operating systems and their runtime environments. Both methods exercise a single common principle: partition available resources into separate, mutually independent domains for guest operating systems, such that they cannot intrude upon the host operating system. For example, VMware Workstation and Player can create virtual environments that operate freely within Windows or Linux desktops without modifying the underlying kernels.
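On Linux, host software typically discovers whether the hardware side is available by inspecting the CPU feature flags that the kernel exposes in /proc/cpuinfo: "vmx" marks Intel VT, "svm" marks AMD-V. A small sketch of that check, written to take the file's text as input so it can be tried on any sample:

```python
# Sketch: detect hardware-virtualization support from /proc/cpuinfo text.
# "vmx" = Intel VT, "svm" = AMD-V; the sample line below is illustrative.

def hw_virt_flags(cpuinfo_text):
    """Return the hardware-virtualization flags found in cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            features = line.split(":", 1)[1].split()
            found |= {"vmx", "svm"} & set(features)
    return found

sample = "flags\t\t: fpu vme pae vmx sse2"
print(hw_virt_flags(sample))   # {'vmx'} -> Intel VT present
```

On a real system you would pass in `open("/proc/cpuinfo").read()`; an empty result means the software-only techniques described below must carry the full load.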
Virtual machine monitors like VMware and Xen rely on a hypervisor context that permits multiple operating systems to proceed independently. The hypervisor modifies guest operating system behavior by rewiring certain critical system operations to become hypercalls. In turn, these hypercalls represent traps (or faults) into the hypervisor context, which supervises access to all virtual-environment resources and allows guest operating systems to share them safely. On standard x86 architectures, the method for translating guest instructions into host instructions is called dynamic recompilation, which essentially rebuilds the guest's operations on the host at run time.
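The hypercall mechanism can be pictured as a toy model: a sensitive guest operation traps into the hypervisor, which checks the request against that guest's resource partition before carrying it out. All class and method names below are invented for illustration; they are not Xen's actual interfaces.

```python
# Toy model of mediation via hypercalls: the hypervisor owns the map of
# which "physical pages" belong to which guest, and every mapping request
# traps into it for a permission check. Names are hypothetical.

class Hypervisor:
    def __init__(self, partitions):
        self.partitions = partitions    # guest name -> set of pages it owns

    def hypercall_map_page(self, guest, page):
        """Trap handler: mediate a guest's request to map a physical page."""
        if page not in self.partitions[guest]:
            raise PermissionError(f"{guest} may not touch page {page}")
        return f"page {page} mapped for {guest}"

hv = Hypervisor({"guest0": {0, 1}, "guest1": {2, 3}})
print(hv.hypercall_map_page("guest0", 1))   # allowed: page is in its partition
try:
    hv.hypercall_map_page("guest0", 2)      # page belongs to guest1
except PermissionError as err:
    print("denied:", err)
```

The essential point is that no guest touches a shared resource directly; every sensitive operation funnels through the hypervisor's check, which is what keeps the domains mutually independent.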
The upside is that VMware enables rapid deployment of easily reproducible virtual images that can be transported from one host machine to another, which saves a great deal of setup and production time. Xen partitions available resources logically, so that each guest receives its own subdivided share of host resources. Creating several instances of guest operating systems on a single host machine (or several) makes virtual clustering quick and easy, not to mention quicker and more economical than deploying dedicated workstations or servers. Furthermore, a virtualized environment using VMware or Xen can make full use of on-board hardware, unlike the hit-or-miss emulation method.
The downside to virtualization of this kind is that privileged-mode code must be dynamically recompiled at run time. That's necessary because the standard x86 architecture on which AMD and Intel processors are based lacks trap conditions for certain sensitive machine instructions. As a result, a slight performance penalty is incurred on processors that lack the hardware virtualization support built into newer x86 parts.
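Conceptually, dynamic recompilation means scanning each block of guest code before it runs and rewriting the sensitive instructions, which would otherwise execute silently without trapping, into explicit calls back into the monitor. The sketch below uses invented mnemonic strings rather than real x86 machine code to show the shape of that rewrite.

```python
# Toy illustration of dynamic recompilation (binary translation):
# sensitive guest instructions are rewritten to route through the monitor,
# while innocuous ones pass through untouched. Mnemonics are illustrative.

SENSITIVE = {"CLI", "STI", "HLT"}   # instructions that would not trap on x86

def recompile(guest_block):
    """Rewrite a block of guest instructions into a safe host-side block."""
    translated = []
    for insn in guest_block:
        if insn in SENSITIVE:
            translated.append(f"VMCALL({insn})")   # detour into the monitor
        else:
            translated.append(insn)                # innocuous: run as-is
    return translated

print(recompile(["MOV", "CLI", "ADD", "HLT"]))
# ['MOV', 'VMCALL(CLI)', 'ADD', 'VMCALL(HLT)']
```

The scan-and-rewrite pass is the source of the run-time cost mentioned above, and it is exactly the work that AMD-V and Intel VT eliminate by making sensitive instructions trap in hardware.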
When it comes to timely delivery of development products on the enterprise scale, nothing beats the turnaround time that virtual server contexts can provide. Total cost of ownership drops as well, through consolidated per-seat/per-server licensing schemes and lower overall hardware costs.
Ed Tittel is a full-time freelance writer and trainer based in Austin, TX, who specializes in markup languages, information security and IT certifications. Justin Korelc is a long-time Linux hacker who works with Ed and concentrates on hardware and software security topics. Both contributed to a recent book on Home Theater PCs; right now, both are busy at work on a book about the Linux-based MythTV environment.