CEO: Labor costs to plummet with utility computing

The CEO of data center management software vendor Qlusters Inc. explains how Linux will help shape the future of utility computing.

Utility computing presents an IT paradigm shift even more profound than businesses' adoption of Linux servers in their data centers, according to David Martin, CEO of Qlusters Inc., a provider of data center management software. In fact, he said, the automated processes enabled by utility computing will sharply cut the need for manual administration, giving IT personnel more time for other tasks.

In this interview, Martin discusses how to bring utility computing into an enterprise, the changes it will spur there and how Linux and clustering fit into the picture. Next week, at the LinuxWorld Conference & Expo 2005, Qlusters will demonstrate SEMPRE, its data center management platform.

Why is there so much confusion today about utility computing?

David Martin: There is confusion because virtually every major vendor and a slew of new companies have different definitions of it.

The widespread movement of Linux servers into the data centers of large-scale enterprises is one of the most significant IT industry trends of the new century. It has huge ramifications for customers, vendors and businesses. I believe that the next paradigm shift will be to utility computing. Actually, it's much more of a paradigm shift than the adoption of Linux. It is clearly a different approach to providing compute resources to applications and, therefore, users.

What advice do you have for companies that are considering the utility computing model?

Martin: You have to think about standardization. That immediately pushes you towards Linux. Linux is the macro standard that all other vendors and customers can rally around because it is an open operating system. If you want to follow Microsoft, you can go down the utility computing path, but that standard will lock you into a proprietary world.

The second step is investing in virtualization. When major customers move mission-critical applications from one environment to another, the move requires less investment if a virtualization framework is already in place.

Finally, after choosing the Linux platform and virtualization, a company should create and maintain a really robust policy base that enforces strong service level agreements.

With these things in place, companies should feel very, very comfortable moving applications to utility computing. When they move, they get equal or better availability from the system.
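
As a rough illustration of what such a policy base might contain, here is a minimal sketch in Python. The application names, fields and thresholds are assumptions for illustration only, not Qlusters' actual policy format.

```python
from dataclasses import dataclass

# Hypothetical policy base backing service level agreements;
# every field and value here is an assumption for illustration.
@dataclass
class ServicePolicy:
    application: str
    max_response_time_s: float   # service-level target for user requests
    min_nodes: int               # capacity floor the automation must hold
    failover_budget_s: float     # how quickly recovery must complete

policies = [
    ServicePolicy("order-entry", max_response_time_s=3.0,
                  min_nodes=4, failover_budget_s=5.0),
    ServicePolicy("reporting", max_response_time_s=30.0,
                  min_nodes=1, failover_budget_s=60.0),
]

for p in policies:
    print(f"{p.application}: respond in {p.max_response_time_s}s, "
          f"keep >= {p.min_nodes} nodes, fail over in {p.failover_budget_s}s")
```

The point of expressing service levels as machine-readable policy, rather than runbooks, is that the management layer can enforce them without an administrator in the loop.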

Where is clustering going in commercial environments, outside of scientific high-performance computing centers?

Martin: In the commercial environment, clustering isn't so much about high-performance computing. The whole idea of clustering goes back to the need for high availability; it was not about chaining many computers together to make a supercomputer. In the beginning, clustering was aimed at mission-critical environments. So, today, clustering is going right back to its roots with a focus on high availability.

Now, grid cluster systems do have high-performance characteristics, and those are moving into the commercial environment. But there are some real barriers there relating to application characteristics. With an HPC cluster, it is easy to scale cost-effectively, with high scaling ratios and a large percentage increase in performance for each node you add. With some applications, though, it is very difficult to apply HPC clustering in a commercial environment.
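
One standard way to quantify that barrier, not cited by Martin but widely used, is Amdahl's law: speedup on n nodes is capped by the fraction of the workload that can run in parallel. The sketch below uses assumed parallel fractions (0.99 for an HPC-friendly code, 0.80 for a typical commercial application) to show how the per-node gain collapses.

```python
# Illustrative only: Amdahl's law, speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the workload.
def speedup(p: float, nodes: int) -> float:
    """Ideal speedup on `nodes` nodes for parallel fraction `p`."""
    return 1.0 / ((1.0 - p) + p / nodes)

# Assumed workload profiles: HPC-style vs. typical commercial application.
for p in (0.99, 0.80):
    print(f"parallel fraction {p:.2f}:")
    for n in (2, 8, 32, 128):
        print(f"  {n:4d} nodes -> {speedup(p, n):6.1f}x speedup")
```

At p = 0.80, 128 nodes deliver under a 5x speedup, which is why adding nodes to such an application is rarely cost-effective.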

That said, the kind of clustering aimed at providing high availability has from day one been focused on mission-critical enterprise applications, not primarily on HPC. The specific high-availability elements we provide to customers include instant application failover: failing applications over so rapidly, in just seconds, that you don't have to restart and the customer or user doesn't even think about restarting.
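
The heartbeat pattern behind this kind of seconds-scale failover can be sketched roughly as follows. The probe, timings and promotion step are illustrative placeholders, not Qlusters' implementation.

```python
import random
import time

# Minimal sketch of heartbeat-driven failover: detect a dead primary
# within seconds and promote a standby, so clients never see a restart.
HEARTBEAT_INTERVAL = 1.0   # seconds between health probes
MISSED_LIMIT = 2           # declare the primary dead after 2 missed beats

def probe(node: str) -> bool:
    """Stand-in health check; a real probe would ping the node or read a
    shared heartbeat timestamp. Here it fails randomly to demo failover."""
    return random.random() > 0.5

def promote(node: str) -> None:
    """Stand-in for repointing the service address at the standby."""
    print(f"failing over: {node} is now serving the application")

def monitor(primary: str, standby: str) -> None:
    missed = 0
    while True:
        if probe(primary):
            missed = 0
        else:
            missed += 1
            print(f"missed heartbeat {missed}/{MISSED_LIMIT} from {primary}")
            if missed >= MISSED_LIMIT:
                promote(standby)   # within ~2 seconds, no restart needed
                return
        time.sleep(HEARTBEAT_INTERVAL)

monitor("node-a", "node-b")
```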

What are some of the management issues for utility computing?

Martin: During the transition, there is a frenzy of activity when you make a wholesale change from one environment to another. Everyone, from customers to senior IT executives and their technical teams, is actively involved in making the transition.

After the transition, everyone's job instantly becomes easier because of rigorous, policy-based automation. With automation, the system can say, 'I will keep my most mission-critical application running all of the time with a nominal three-second response time. Whatever the software has to do to keep that service level, I will do, per the policy and per the service level agreement with the user.' All of that is automated, so it takes a significant manual administration burden off the IT operations staff.
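
A policy-enforcement loop of the kind Martin describes might look roughly like this. The metric source and the remediation action are placeholders for whatever the management platform actually does, not a real product API.

```python
import random

TARGET_S = 3.0   # the nominal three-second response time from the SLA

def measure_response_time(app: str) -> float:
    """Placeholder; a real system would sample live request latencies."""
    return random.uniform(1.0, 5.0)

def add_capacity(app: str) -> None:
    """Placeholder remediation: provision another node per the policy."""
    print(f"policy action: adding a node to {app}")

def enforce(app: str, samples: int = 5) -> None:
    # Watch the service level and act automatically, with no
    # administrator in the loop.
    for _ in range(samples):
        observed = measure_response_time(app)
        if observed > TARGET_S:
            add_capacity(app)
        else:
            print(f"{app}: {observed:.1f}s, within target")

enforce("order-entry")
```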

I'm not speaking tongue-in-cheek when I say that once utility computing is in place, some IT people will need to look for another job.
