In my LinuxWorld 2007 preview, I made predictions about the virtualization news and views that would be presented there. I was successful in a few predictions and missed on a couple of others.
Here's my take on the hits and misses, along with my analysis of the session highlights. I'll conclude with a few words on what is for many people the real LinuxWorld, the exhibition hall.
Xen: Alive and kicking
Simon Crosby of XenSource kicked off the LinuxWorld virtualization track by comparing Xen virtualization to a car engine, making the point that customers don't buy engines, they buy complete motorcars. His message is that Xen is an open source project, not a product, and requires a vendor to step in and convert the project to a product. As he notes, that is being done by a number of companies, including Novell, Red Hat, and even his company, XenSource.
Crosby claims that open source virtualization is ahead of proprietary virtualization (i.e., VMware) due to its rapid innovation, community of contributors and open ecosystem of partners. On the hardware support side, hardware vendors themselves are driving that innovation, because they want to get their latest and greatest hardware supported by virtualization software.
Of course, Xen is not without its problems. One is ambiguity over what it takes to claim being Xen-based: is it enough to use some of the code, or do you have to take Xen as delivered? In other words, as a user, can you depend on consistent functionality in something called Xen?
Crosby was also concerned about the proliferation of Linux virtualization technologies -- chiefly KVM, although there are still further Linux virtualization initiatives. His plaintive cry is that this fragmentation of effort might allow Microsoft to win the virtualization race; the race, that is, to be the replacement technology for VMware. While his concern is understandable, I'm not sure there's any real way to solve it, particularly as a couple of the alternative technologies -- including KVM -- emanate from commercial companies that, presumably, have deep enough pockets to keep the technologies going for the foreseeable future.
I predict that the various Linux virtualization technologies will continue going forward, that Xen will come to be the dominant technology, and that that dominance will be achieved in typical open source fashion, via a shambling, anarchistic process that still will deliver great results.
Microsoft: No threat?
The most refreshing virtualization session of all was given by Sam Ramji of Microsoft. The title of his presentation was "Linux and Windows Interoperability: On the Metal and On the Wire."
I had a sense of trepidation going into the presentation, as "interoperability" has been used recently by Microsoft in the context of explaining why Linux vendors and users should pay Microsoft licensing fees to gain access to its patents. Thankfully, there was none of that in the presentation, but there was a lot of really interesting information about the virtualization work that Microsoft is doing and its partnership with Novell.
As you may know, Microsoft and Novell are cooperatively developing technology to enable Xen virtualization and the upcoming Windows Server virtualization to interoperate.
The big news: Microsoft is publishing a public ABI that will let Xen-enabled Windows Server drivers, and Windows Server itself, interoperate with the Xen hypervisor. I gleaned that info from the presentation and from a conversation afterwards with Tom Hanrahan, who runs the Microsoft Open Source Lab -- bet you never thought you'd see those words in a single sentence, eh?
Hanrahan, who used to run engineering at the Open Source Development Lab (OSDL), also shared that he believes that Novell is creating these glue layers as GPL software, which is what I predicted in my pre-LinuxWorld piece last week.
The only drawback to this software collaboration is that it is targeted at Windows Server 2008. Microsoft and Novell are collaborating to support Windows Server 2000 and 2003, but those pieces of software will remain proprietary because Microsoft cannot publish a public ABI for them. I gleaned this from my conversation with Hanrahan.
I'm not sure I understand why the Windows 2000 and 2003 interfaces must remain proprietary. I understand that Microsoft created those products long before it considered publishing a public ABI, but couldn't a shim layer be created that would enable publishing a public ABI for them? Certainly, Windows 2000 and 2003 systems make up by far the preponderance of candidates for virtualization; why not work toward making them virtualizable instead of focusing on Windows Server 2008, which will not be an interesting virtualization target for quite a while?
I must commend Ramji for not mentioning "building a bridge" during his presentation. "Building a bridge" is Microsoft-speak for interoperability between Microsoft and open source software and the need for open source vendors and users to pay licensing fees for the privilege of interoperating with Microsoft software. This interpretation of interoperability strikes me as misguided and customer-unfriendly. The term itself is reminiscent of Orwell's newspeak, in which terms are used to imply one thing while actually describing the complete opposite, and it was a pleasure that Ramji's discussion of interoperability focused on getting products to work together better, rather than on extracting additional revenues.
Where's the hardware?
I reiterate my disappointment at the lack of a hardware session in the virtualization track. There are many exciting hardware developments focused on virtualization that will make it easier and higher-performing in the near future. A session on the topic would be a great service to track attendees.
Security: Could virtualization be dangerous?
I was looking forward to the session on virtualization and security, as I wanted to understand the potential vulnerabilities of virtualization.
Well, there may be significant concerns about virtualization security, but the session on the topic at LinuxWorld did not limn them. The speaker primarily and repeatedly noted the old news that there could be security vulnerabilities with virtualization -- that virtualization offers new "attack surfaces." For instance, if a security attack took over a hypervisor, it could then attack all the virtual machines on the physical server.
By the way, all the hysteria about virtualization security seems to stem from a single vulnerability identified last year at Defcon.
I did learn that policy audits drive more security spending than actual security break-ins -- which no doubt accounts for the frustration of security experts -- and I picked up the cool term "attack surface," meaning the software exposed to potential attack.
Virtualization management could be challenging
I expected analyst Tony Iams' session on virtualization management to be depressingly similar to the security session: virtualization sounds good, but it raises so many issues that you'd probably better go back to bed and get up after you've forgotten about your virtualization fantasy.
Thankfully, however, the session was short on hand-wringing and long on interesting info. Iams interviewed 50 real-world virtualization users to understand how they're using virtualization and how they manage their virtualized infrastructure.
The key findings he presented:
- The primary driver for x86 virtualization is consolidation, while Unix virtualization is driven by the desire to respond to variable workloads
- Most users had developed rigorous policies about what to virtualize and how to virtualize. Hard cases -- i.e., workloads needing more than two cores of processing power or more than 1 GB of memory -- tend to stay physical. Operating systems stay in silos, meaning a physical server hosts either all Windows virtual machines or all Linux virtual machines, but not mixed workloads.
- Consolidation ratios are growing -- up to about eight to 20 virtual machines per server -- which makes the economics of virtualization even more attractive. The break-even point for VMware ESX Server-based virtualization is around six virtual machines per server.
- Virtualization users appear to have an appetite for more complex use of virtualization, pointing toward using virtual machine migration and high availability.
- This pool of users is not moving to uber-management software; rather, they tend to add virtualization-specific management software to the existing portfolio of three or four system management software products they already use.
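Iams' break-even figure is easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below is mine, not his: the server and license costs are hypothetical round numbers I chose for illustration, picked so the math lands near his six-VMs-per-server figure.

```python
# Back-of-the-envelope consolidation break-even sketch. All dollar figures
# below are illustrative assumptions, not numbers from Iams' survey.

PHYSICAL_SERVER_COST = 3000   # assumed cost of one dedicated server
VIRT_HOST_COST = 8000         # assumed cost of a beefier virtualization host
HYPERVISOR_LICENSE = 10000    # assumed flat ESX-class license cost

def consolidation_savings(vms_per_host):
    """Savings from running `vms_per_host` workloads on one virtualized
    host instead of on that many dedicated physical servers."""
    dedicated = vms_per_host * PHYSICAL_SERVER_COST
    virtualized = VIRT_HOST_COST + HYPERVISOR_LICENSE
    return dedicated - virtualized

# Smallest consolidation ratio at which virtualization stops losing money.
break_even = next(n for n in range(1, 100) if consolidation_savings(n) >= 0)
print(break_even)  # 6 with these assumed costs
```

The real calculation would also fold in power, cooling, rack space, and administration overhead, all of which tip the scales further toward consolidation -- which is consistent with users pushing ratios well past the break-even point.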
One thing that was pretty interesting is that most of these users are not moving to true storage virtualization. Oh, they use SANs, all right -- after all, you have to move away from DAS if you're going to use VMware -- but they haven't invested in truly virtualized storage that would let them dynamically change storage allocations or even swap storage arrays.
All in all, I thought Iams' session was really interesting: information-heavy and anxiety-light.
LinuxWorld exhibition
Several people mentioned to me that LinuxWorld seemed almost obsolete, in that Linux is now so well-accepted that there may not be a need for a conference that focuses on it. I don't know whether that's true, but the fact that the conference has become the de facto open source technology conference implies an ongoing purpose for it.
Something I missed at this year's show, a highlight of previous years, was the heavy focus on exciting hardware. Previous shows had lots of emphasis and excitement around Opteron-based systems, especially dual-core ones. That excitement was missing this year, so let's hope next year's LinuxWorld will be more hardware-heavy.