When determining the scalability of a virtualization implementation, focus on simplicity. Assess each part of the
data center individually, say Ron Oglesby and Scott M. Herold, authors of "VMware ESX Server: Advanced Technical Design Guide."
In this interview, the authors name the management tools that are integral to a virtualized environment, explain when IT pros should be wary of using virtualization, and describe how security should strike a balance between overdoing it, which slows down the virtual environment, and being too lax.
SearchOpenSource.com: What sorts of management tools are there for virtualized environments? How are they lacking? What's needed?
Ron Oglesby: Right now there are many companies paying lip service to virtualization. In some cases, software companies are just jumping on the bandwagon and saying 'We support XYZ technology' just to boost sales. Talk to their staff and they know very little about virtualization concepts or technology specifics.
In the VMware space, VirtualCenter is the management tool of choice for ESX Server. Other products, like Hewlett-Packard's Virtual Machine Management or IBM's Director modules, are adding functionality to deal with virtual machine [VM] environments. The problem is that most of these snap-in tools lack much of the basic functionality you get in VirtualCenter. Most companies will end up buying both VirtualCenter and the vendor's tool and using both, depending on what they are doing.
Personally, I would like to see a single tool that could act as a management interface for multiple virtual platforms without losing the functionality of the basic tools you get with the products. The tool should handle basics like snapshots, templates and deployments, display performance data, and handle alerting or integrate into existing monitoring systems. I think this is kind of like saying I want an SUV that goes 0-60 in 4 seconds and gets 39 miles to the gallon, but you have to have a dream.
Scott M. Herold: The vendor tools are often limited to the capabilities of VirtualCenter itself. In fact, neither will function without the Web API provided by VMware's VirtualCenter. The best thing VMware did was create its Community Source program, which gives ISVs direct input on what the APIs from VMware's management interface can provide, as well as the ability to create and contribute their own code that ties directly into the core of ESX and VirtualCenter. With the release of VMware 3.0, we should start seeing some very creative things from the ISVs that are the real players in the virtualization market.
In what areas of an IT environment would you hesitate to use virtualization?
Oglesby: Right now, I would simply shy away from large amounts of processing when doing consolidation. If you are doing virtualization for other reasons, like workload management, then you can get nearly anything to run virtualized if you are willing to change some of the things you do. However, if you are looking for maximum consolidation ratios and high ROIs, stay away from the quad boxes that are already running at 50% utilization.
What sort of security tips can you suggest for IT managers who are working in a virtualized environment? What are some common mistakes made when implementing virtualization (in the area of security) and how can they be avoided?
Oglesby: I think the biggest mistake is not securing the ESX box at all. Most ESX Servers are implemented in Windows environments. This leads to Windows administrators -- with maybe a little knowledge of Linux -- simply using root for everything. We recommend some standard minimum security at least: Disable remote root access, use sudo when needed and, for Windows shops, configure the Active Directory [AD] PAM modules.
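The minimum hardening Oglesby describes can be sketched as a few service console steps. This is a hedged sketch, not a procedure from the book: it assumes a RHEL-style ESX service console, and the admin account name `esxadmin` is a hypothetical placeholder.

```shell
# Sketch of the minimum hardening steps above (assumes a RHEL-style
# ESX service console; paths and service names may differ by release).

# 1. Disable direct root logins over SSH, so admins log in as
#    themselves and escalate:
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
service sshd restart

# 2. Grant a named admin (hypothetical account "esxadmin") sudo rights
#    instead of sharing the root password:
echo 'esxadmin ALL=(ALL) ALL' >> /etc/sudoers

# 3. For Windows shops, point PAM at Active Directory so console logins
#    use existing AD credentials. One common approach is pam_krb5;
#    /etc/pam.d/system-auth would gain lines such as:
#      auth     sufficient  pam_krb5.so use_first_pass
#      account  sufficient  pam_krb5.so
```

The point is not these exact lines but the principle: no shared root, named accounts with sudo, and authentication tied to the directory the shop already manages.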
Herold: I have also seen some organizations apply too much surrounding security and end up making their environment slower as well as more difficult and expensive to manage. When dealing with the VMs, all of the standard procedures should be followed. The host systems themselves should often be considered appliances, and organizations should limit the number of customized agents and security hacks performed on these systems.
How does security protocol differ between a virtual environment and the physical environment?
Oglesby: I don't think it should differ at all. I don't think you should go overboard with ESX hosts, since they are basically appliances serving up computing resources and should be treated as such. Nevertheless, taking a common sense approach to security on the servers is the best bet.
The most common mistakes made with virtual security stem from ignorance: lack of knowledge of the Linux console, failure to understand how the virtual switch architecture works, and not realizing that the host does not directly see the data inside the VM disk files. If you don't understand how it works, you tend to think in traditional mindsets and make bad decisions based on incorrect information.
Herold: The same practices that are performed to secure a physical environment can, and should, be used in a virtual environment as well. Everything from proper VLAN/firewall organization to host-based intrusion detection should be leveraged to keep the environment secure.
How does one determine the scalability of a virtualization implementation? What key points should one look for?
Oglesby: Simplicity. The more complicated the design and infrastructure, the less scalable it will be. For example, a common mistake in large organizations is assuming they cannot create a simple solution because they are big. I argue that they should make the VMware solution or design as simple as possible so that it scales for the size of their organization and their largest client base. Don't design the entire solution around the one-offs.
Herold: When designing a virtual infrastructure, you should never look at the environment and try to plan one large infrastructure for the entire virtualization project. You would never get any traction. Instead, the overall environment should be organized into smaller groupings of servers and addressed individually. Approached this way, at the end of the project you will realize that you have a very scalable deployment methodology that applies the same principles to a manageable number of servers in each phase of the project.
Do you think virtualization is the best option today for server consolidation? Or, in what situations is it the best option?
Oglesby: I think it's the easiest and most cost-effective option. Would consolidating a number of database servers into a single large database cluster be more 'elegant'? Possibly. But the time and money involved in migrating the databases, and in testing and managing that process, make it very attractive just to carve out logical VMs and move them right now. Most organizations we work with have power and data center capacity issues; hence, they need to address those issues right now.
Herold: It depends on the organization's reasons for the consolidation. If the organization wants to consolidate under-utilized Intel servers onto a smaller number of more powerful machines, virtualization is much better than using blades. If an organization wants to take several medium-powered servers and create larger, redundant systems such as database or Web clusters, virtualization may not be the way to go. Organizations often use a mix of both physical and virtual consolidation to optimize the financial and operational benefits of a consolidation project.
What do you make of VMware distancing itself from ESX Server?
Oglesby: I don't know that they are.
Herold: I agree with Oglesby. On the outside, what appears to be VMware moving away from ESX by pushing the free VMware Player and VMware Server applications is really the company extending its name further into the market. Ultimately, VMware is drawing people toward ESX Server and enterprise-level virtualization.
What sort of design alternatives for hardware are there for setting up a VM?
Oglesby: Any type you can imagine in the x86 space. For VMware you need to stick with its hardware compatibility list [HCL], which is composed mostly of tier 1 server vendors like HP, IBM and Dell. Right now, people are deploying VMware on everything from blades to eight-ways.
The quad-processor x86 box is the strike zone, in most instances, for ESX. Quads have the right mix of power and size, with available PCI expansion to support multiple network cards and host bus adapters [HBAs]. Blades are often limited in expandability, and eight-ways are usually too big for an environment. Duals like the DL385 can be used in a smaller environment where few NICs or HBAs are needed. But, bottom line, I think the decision should be made at a cost-per-VM level.
Decide what you need to support (HBA and network-wise) and find the servers that fill that requirement. Then, take all the components into account, estimate the number of VMs you will get in each solution and do the math. Quads often come out on top, but every now and then a dual will pop up.
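The cost-per-VM math Oglesby recommends is simple division. The prices and VM counts below are made-up placeholders, not figures from the interview; plug in your own quotes and consolidation estimates.

```shell
# Hypothetical cost-per-VM comparison (all numbers are placeholders).
quad_cost=25000   # quad-processor server, fully configured, in dollars
quad_vms=16       # estimated VMs it will comfortably host
dual_cost=9000    # dual-processor server, fully configured, in dollars
dual_vms=6        # estimated VMs it will comfortably host

echo "quad: \$$((quad_cost / quad_vms)) per VM"   # → quad: $1562 per VM
echo "dual: \$$((dual_cost / dual_vms)) per VM"   # → dual: $1500 per VM
```

With these invented numbers the dual narrowly wins, illustrating Oglesby's point that quads often come out on top but a dual will occasionally pop up once you count all the components.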
Herold: As virtualization increases in popularity, nearly every hardware vendor is making sure its systems are capable of supporting VMware ESX Server. This provides a lot of flexibility for end users to use their preferred vendors. People are using everything from two-way blades to 16-way interconnected systems for virtualization. I would stay away from both extremes and stick to the four-way and, in limited cases, eight-way servers for virtualization.
Find a system from your manufacturer of choice that provides room for expansion in PCI slots and memory. This will ensure you can scale up your architecture as well as scale out.