OpenMosix Project founder Moshe Bar wants to see IT initiatives such as utility, grid and on-demand computing moving forward; but he doesn't want misleading and inaccurate hype to propel those technologies. In this interview, he explains why users are confused about utility and grid computing and provides clarification about those technologies. He also discusses his new role as chair of the OASIS DCML (Data Center Markup Language) Server Technical Committee. Bar is also CTO of Palo Alto-based Qlusters, a management software vendor.
As I understand it, DCML is the only open, vendor-neutral data center specification for describing an environment and policies governing its management, descriptions needed for implementing utility computing. Why is the expansion of DCML as a standard important to enterprises today?
Bar: Whenever a new paradigm comes up, especially in the IT industry, it's important for customers to keep up with innovation, but to do it while keeping three things in mind:
- Protect their investments in new technologies;
- Guarantee that what they implement today will be forward compatible; and
- Make sure that various components interact well.
Standards assure that these three goals are met, and make it easier to move into new paradigms.
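To make the idea of a vendor-neutral data center description concrete, here is a rough, purely illustrative sketch of the kind of information such a specification is meant to capture. The element and attribute names below are invented for illustration; they are not actual DCML syntax.

```xml
<!-- Hypothetical sketch only: element names are invented to suggest
     what a vendor-neutral description of an environment and its
     management policies might look like. Not real DCML markup. -->
<datacenter name="example-dc">
  <server id="web-01" os="linux" cpus="2" memoryMB="4096">
    <role>web-frontend</role>
  </server>
  <policy appliesTo="web-01">
    <rule>provision an additional server when CPU utilization
          exceeds 80% for more than five minutes</rule>
  </policy>
</datacenter>
```

Because such a description is machine-readable and vendor-neutral, tools from different vendors can act on the same environment and policy data, which is what makes automated, utility-style management feasible.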
What is utility computing?
Bar: Utility computing is a new paradigm, similar in scale to the client/server shift of the '80s and the paradigm shifts of the '90s.
Utility computing is about increasing the flexibility of IT for the benefit of the business while simultaneously lowering the overall cost of IT through automation, increased server utilization via virtualization, and the availability of services and assets at all times. Utility computing is today's only way to truly tackle the rising costs of IT in the modern enterprise.
Some IT managers are wary of utility computing because they feel that they will be giving up too much control of their IT systems. What is right or wrong about that assumption?
Bar: Whenever I hear that feedback, it's usually a good indication that the first vendors that this particular IT manager talked to about utility computing were either IBM or Sun. These two companies interpret utility computing as a financial agreement for the outsourcing of the data center with payment based on actual resource consumption. So, when your company needs faster response times, the IT manager has to call IBM or Sun and ask for additional computing resources to be allocated. This is exactly the same concept as time-sharing computers from the 50s and 60s and hardly an innovation.
That is the wrong way to approach utility computing. True utility computing leaves control of the data center firmly in the hands of the data center manager. In fact, it can strengthen that control by giving the manager granular control over resource consumption and provisioning.
How does this happen?
Bar: Utility computing adds three new concepts to the technologies existing in the data center.
- One is to automate the control and management of the data center wherever you currently rely on human intervention.
- Two is to increase asset utilization by aggregating idle or low-usage workloads through virtualization technologies.
- Three is to make sure that the assets you have already bought and are operating in the data center are kept continuously available through advanced availability solutions.
If you do these things, you operate at a lower cost, which is always a concern for enterprises, and you return control to the hands of the data center manager.
In summary, automating the management and use of assets, increasing the utilization of those assets, and keeping them available is the very essence of utility computing. You can and should put utility computing into your shop and retain control.
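The second of the three steps above, aggregating idle or low-usage workloads to raise asset utilization, can be sketched as a simple packing problem. The following toy example (my own illustration, not anything from the interview; workload figures are invented) packs fractional server utilizations onto as few machines as a first-fit-decreasing heuristic allows:

```python
# Toy sketch of workload consolidation: pack low-usage workloads onto
# fewer servers, the kind of aggregation virtualization makes possible.
# Workload utilization figures below are invented for illustration.

def consolidate(loads, capacity=1.0):
    """Pack workload utilizations onto servers using first-fit decreasing."""
    servers = []  # each entry is the total utilization placed on a server
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load  # fits on an existing server
                break
        else:
            servers.append(load)  # open a new server
    return servers

# Eight servers, each running a single low-usage workload today:
workloads = [0.15, 0.20, 0.10, 0.30, 0.25, 0.05, 0.40, 0.35]
packed = consolidate(workloads)
print(len(packed))  # servers needed after consolidation
```

After consolidation the same total load runs on far fewer machines, and average utilization per machine rises accordingly, which is exactly the cost lever Bar describes.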
How does utility computing differ from grid computing?
Bar: Grid computing is just like clustering. It is somewhat related to utility computing, but it is not the same.
Grid computing tends to unify resources across different geographical locations and make them available for the purpose of one task and during a short time.
So, let's say a researcher needs to bring all available resources to bear on one job. Grids are an excellent tool to unify resources all over the world to get that job done as quickly as possible. But grids are usually good for doing things quickly and not very good at doing things securely, efficiently or as cheaply as possible, and those qualities don't fit enterprises' needs. Also, the software used in research and academia is usually pretty simple, but the algorithms are very complex. It's usually the opposite for software used in business applications. That is why you see grids being more useful in academic and research environments.
I think utility computing is more useful in an enterprise environment. Utility computing requires a process of standardization and virtualization before approaching the three steps I mentioned earlier.
What's the current status of utility computing products?
Bar: There are so many vendors out there right now preaching utility computing, but many provide only configuration management. Others preach utility computing but focus on clustering software. Still others offer grid technologies but position them as utility computing.
So, what happens is that business customers are really confused about what utility computing is. They're not sure about its cost viability, what other solutions could be used instead, or where to buy true utility computing.
Only a handful of companies have a vision of utility computing and the matching products to go with that vision. Very few have customers to validate the vision and the product. Most likely, there will be a lot more consolidation in this market. In two or three years, there will be four or five top IT companies offering products around utility computing.
What role does Linux play in utility computing?
Bar: Linux has a very important role. First of all, Linux has become a standard, supporting the process of standardization and virtualization I described earlier, and all of the major vendors have switched to Linux.
Secondly, Linux delivers strong performance at a very good price-performance ratio. Linux allows cost savings by enabling the use of standardized hardware and software, and it makes the operating system a commodity component as well. Linux is reliable, too. It becomes a component that you don't have to worry about anymore.