Does server virtualization leave you scratching your head? If so, authors Wade Reynolds and David Marshall are here to the rescue! Drawing from their own experiences with server virtualization, Reynolds and Marshall, authors of Advanced Server Virtualization: VMware and Microsoft Platforms in the Virtual Data Center, share their best picks for deploying and managing virtualized machines. They also offer pointers on how to assess the scalability of virtualization products, where best to use them and how to secure them within a network.
When is virtualization the best option for server consolidation?
Wade Reynolds: We believe that software-based virtualization technology is the most cost-effective and flexible option available for server consolidation. As the technology matures and more standards are adopted, especially in the areas of APIs and virtual disk formats, barriers between the different virtualization platforms are being dismantled. Virtualized servers are also less susceptible to vendor lock-in than blade or physical-partitioning solutions.
Through encapsulation, virtual machines (VMs) can be easily moved from host to host as needed, allowing you to take advantage of new server hardware without disrupting services. The best candidates for software-based virtualization server consolidation are under-utilized physical servers whose performance will not be impacted by running in a VM.
David Marshall: Consolidation was the early war cry from those of us in the virtualization business. But virtualization offers so much more than just server consolidation. Companies are only now realizing all the problems virtualization can solve. Areas such as business continuity (disaster recovery and high availability) and on-demand servers (for development and testing) are starting to get more attention in the marketplace, from end users as well as third-party software manufacturers. These solutions are going to continue to get more and more inventive.
Are there any areas of an IT environment in which you would hesitate to use virtualization?
Marshall: Graphics-intensive applications are not well suited to today's virtualization environment. Current virtualized video cards cannot match the capabilities of a high-performance graphics adapter, so games, CAD packages and other applications that require three-dimensional graphics remain poor candidates for virtualization.
We've seen mixed results when virtualizing high-performance applications such as databases and business intelligence software. These applications often demand more memory and processing power than today's typical per-VM limits of 3.6 GB of RAM and two virtual SMP processors. Although databases can be successfully run in virtual machines, serious scalability considerations remain around the raw computing power available to a VM. As the technologies mature, virtual machines will probably become capable of handling larger workloads.
Server applications that require access to hardware, like PCI cards and USB devices, are difficult, if not impossible, to virtualize right now. Unsupported operating systems will also affect decisions about shifting resources from physical hardware to virtual machines.
What advice would you give IT managers as to how to secure their virtual environment?
Marshall: The same security risks and concerns that you face in a physical server or physical data center extend into a virtualized world. A common misconception about a virtual machine is that it is somehow immune to these problems or that the host server will somehow protect it. Virtual machines need to have the same networking concerns addressed and the same virus problems attended to as physical machines. A defensive posture must be taken to prevent spyware and malware from taking over the machine.
One simple piece of advice for locking down your host server: Let your host server be your host server. In other words, don't install any unnecessary applications or operating system components that aren't needed to create a virtualization host server. It will not only be safer but will work and perform at its best when its sole purpose in life is to provide a virtualization platform and nothing else.
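The "let your host server be your host server" advice amounts to keeping a minimal allowlist of what should be running and flagging everything else. A small sketch of that audit idea in Python (the service names and the allowlist are hypothetical examples, not a recommendation for any particular platform):

```python
# Sketch: flag services running on a virtualization host that are not
# on a minimal allowlist. The service names below are hypothetical
# examples, not tied to any particular operating system.
HOST_ALLOWLIST = {"sshd", "ntpd", "syslogd", "vmware-hostd"}

def unexpected_services(running: set) -> set:
    """Return the services that a dedicated virtualization host
    arguably should not be running at all."""
    return set(running) - HOST_ALLOWLIST
```

Anything the check returns, such as a web or mail server that crept onto the host, is a candidate for removal before the host goes into production.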
Reynolds: More emphasis has to be placed on virtualization host servers from a risk perspective, because a compromise there bears directly on both availability and security. If a virtualization host server is compromised, the potential damage is much greater: the many virtual machines providing production-level services on that host may be compromised, taken offline or, worse, destroyed.
Techniques such as physically separating the networks used by virtualization host servers and virtual machines, and taking a pessimistic view when configuring perimeter security (firewalls and intrusion detection), will have to be implemented to further secure the host servers. Well-known but often neglected security best practices, such as strong, well-guarded passwords and running with least privilege, are just as important for virtualization host server security.
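The "pessimistic view" Reynolds describes is a default-deny posture: nothing reaches the host's management interface unless an explicit rule allows it. A minimal sketch of that logic in Python (the networks and ports are hypothetical examples, not tied to any real firewall product):

```python
import ipaddress

# Default-deny packet filter sketch: traffic to the virtualization host
# is rejected unless it matches an explicit allow rule.
ALLOW_RULES = {
    # (source network, destination port) -- hypothetical examples
    ("10.0.100.0/24", 22),    # SSH, from the management VLAN only
    ("10.0.100.0/24", 443),   # management console, same VLAN
}

def permitted(source_ip: str, dest_port: int) -> bool:
    """Pessimistic check: deny unless an allow rule matches."""
    src = ipaddress.ip_address(source_ip)
    return any(
        src in ipaddress.ip_network(net) and port == dest_port
        for net, port in ALLOW_RULES
    )
```

With this posture, a connection attempt from outside the management VLAN, or to any unlisted port, is simply dropped rather than needing an explicit block rule.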
Aren't there gaps in management tools for virtualized environments?
Reynolds: There are gaps in the existing APIs, features and tools used in virtualization platforms. Third-party organizations are filling in some of those gaps now, but with a painful lack of consistency across platforms. The industry needs standards across APIs and features. VMware is currently spearheading the creation of standards. But they may have to release control of the standards and allow them to stand on their own in order for mass adoption to occur from both third parties and platform providers.
What key points should one look for in determining scalability of a virtualization implementation?
Reynolds: Unfortunately, no simple formula can definitively calculate that. Profiling the performance of both the virtual machines and the virtualization host servers can be very useful. Ultimately, the scalability success criteria may vary greatly depending on the context of the virtualized environment.
From a traditional server point of view, a virtual machine may be scaled up by increasing the amount of resources (e.g., CPU, RAM and disk I/O) it is allowed to consume. Of course, those limits are reached sooner than in a physical server environment. VM-to-host densities may also be adjusted to scale up a VM's CPU performance.
On the other hand, an application running on either a physical or a virtual server may be scaled out more rapidly by deploying VMs on demand to help absorb the application's processing load. Additional computing resources spread across physical servers can be leveraged by firing up VMs as needed and taking them down when the workload returns to normal. VMs are better at scaling out than scaling up.
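The scale-out pattern Reynolds describes boils down to a simple control loop: provision enough VMs to absorb the current load, and retire the extras when load subsides. A sketch in Python (the function names, per-VM capacity and request-rate units are illustrative assumptions, not part of any real virtualization API):

```python
import math

def vms_needed(requests_per_sec: float,
               capacity_per_vm: float = 100.0,
               min_vms: int = 1) -> int:
    """VMs required to absorb the current workload, assuming each
    VM handles roughly capacity_per_vm requests per second."""
    return max(min_vms, math.ceil(requests_per_sec / capacity_per_vm))

def reconcile(running_vms: int, requests_per_sec: float) -> int:
    """How many VMs to fire up (+) or take down (-) for this load."""
    return vms_needed(requests_per_sec) - running_vms
```

For example, at 250 requests per second the loop would keep three VMs running; when load falls back to 100 requests per second, two of them can be taken down.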
What are your favorite applications and tools for designing, deploying and managing virtualized systems?
Marshall: Products like PlateSpin's PowerRecon greatly assist in the design process. By taking a snapshot of the server details through a full hardware and software inventory, the product provides the admin or architect with a complete server resource picture of the data center. It also probes server workloads to help design server consolidation and optimization planning, basically taking the guesswork out of the design phase.
Once you've installed and configured your virtual infrastructure, you quickly realize that you need to create virtual machines. The problem is that you probably already have a data center full of physical servers doing their jobs, so why reinvent the wheel? Early on in server consolidation, this was a huge problem. We experimented with a number of homegrown solutions based on what we knew from the physical world, such as using an imaging tool like Symantec Ghost. It never quite worked out the way we planned. (Translation: blue screen of death.)
So companies started appearing with physical to virtual (P2V) products such as PlateSpin, Leostream and VMware. These products have been refined over time, and most of them work quite well now. Every IT administrator should have a good P2V solution in his or her virtualization arsenal.
Early on, third parties created and sold heterogeneous management solutions capable of managing VMware GSX Server, VMware ESX Server and Connectix Virtual Server. Most of those companies have since left the management space in search of better niche markets. VMware filled the gap for its own server virtualization products with VMware VirtualCenter, which does a very good job of managing a VMware virtual farm.
Wade A. Reynolds is an architect at Surgient, Inc., a leading provider of on-demand applications based in Austin, Tex. Reynolds specializes in server virtualization, enterprise software development and database systems.
David Marshall is a senior member of the reference architect team at Surgient, Inc., and he specializes in server virtualization, virtualization applications and Windows administration. He also runs the InfoWorld Virtualization Report as well as a virtualization news blog.
Reynolds and Marshall are the co-authors of Advanced Server Virtualization: VMware and Microsoft Platforms in the Virtual Data Center, a book that details their years of hands-on experience using and implementing server virtualization solutions.