Linux's upsurge in popularity as a data center platform is a natural offshoot of its proven performance in clusters, says Dean Hutchings, chief operating officer of Linux Networx Inc., in Bluffdale, Utah. His company is building one of the world's largest Linux clusters, named Lightning, for Los Alamos National Laboratory. In this interview, he explains how clusters increase application performance and why Linux and clusters are a good match.
Why is Linux a good platform for clustering?
Dean Hutchings: There are a lot of reasons, and they stem from its open-source background. It's being developed by many of the brightest developers in the world, and there's a large community working on the Linux operating system. Linux doesn't carry a lot of the overhead that Microsoft's or other operating systems might have -- overhead that doesn't add value when you're doing high-performance computing. You're able to devote the system to the work you want done, and that's where the power goes. Bottom line: Price and performance are why Linux is used.
How does Linux compare with other platforms in the area of clustering?
Hutchings: It compares quite favorably -- again, because of its price/performance. The TCO of Linux is much better than that of any other OS out there. That's why most clusters deployed today are deployed on Linux.
Where are clusters being implemented? How are they used?
Hutchings: They're being implemented in many settings. Linux clusters got their start in scientific settings and are still most widely used there. For instance, we still sell clusters to the government for doing scientific research. Commercial companies use them for engineering applications. Boeing, for example, buys clusters from us to do computational fluid dynamics. That may include anything from how air flows across a rocket or an airplane to how molten aluminum flows into a mold to form a part. In the life sciences arena, systems are bought for things such as searching DNA and protein sequences with BLAST against databases held by the National Institutes of Health, or simulating tests in order to get FDA approval on new drugs for curing disease.
In oil and gas, they use Linux clusters for seismic mapping (to find oil). They'll shoot off a bunch of explosions and take seismic readings and run them through these supercomputers and come up with a map of oil fields under the ground: How much oil is down there? How deep is the oil? What's between the ground level and the oil? Things like that.
Linux clusters are moving into other settings. They are now being used in entertainment to make the latest and greatest movies for film rendering. They're used for financial modeling in the capital markets. In general, they're used for a variety of tasks involving simulation and analysis.
When is a cluster not a good idea?
Hutchings: A cluster isn't a good idea when the code you're running doesn't take advantage of parallel processing. There are certain scientific codes that were developed for shared-memory, symmetric multiprocessing (SMP) systems. So a cluster isn't a good idea for any code that hasn't been ported over to take advantage of that parallel processing capability. But, every day, more and more application code from the simulation and analysis world -- the HPC (high-performance computing) world -- is being ported over. Therefore, a Linux cluster becomes a very good thing for more and more code every day.
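The distinction Hutchings draws can be sketched in a few lines. The point is that only work that splits into independent pieces benefits from a cluster; the function name and toy calculation below are invented for illustration, not taken from any real cluster code:

```python
from multiprocessing import Pool

def simulate_chunk(x):
    """Stand-in for one independent piece of a scientific computation."""
    return x * x  # placeholder calculation

if __name__ == "__main__":
    workload = list(range(8))

    # Serial: one step after another, like a single processor.
    serial = [simulate_chunk(p) for p in workload]

    # Parallel: the same independent chunks farmed out to workers,
    # the way a cluster scheduler farms work out to nodes.
    with Pool(processes=4) as pool:
        parallel = pool.map(simulate_chunk, workload)

    assert serial == parallel
```

Code whose pieces share state through memory (the SMP case) has no such clean split, which is why it must be restructured -- "ported over" -- before a cluster helps.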
What are security issues with clusters? Wouldn't it be easier to hack a Linux cluster than a mainframe, for example?
Hutchings: Most of the systems that we deliver are closed systems, and Linux has a significant amount of security built into it. The Kerberos authentication system runs on top of Linux, providing protection against hacking.
Many of the clusters we have [such as Lightning] are closed systems, so the security issues are different than what you would have for an open system -- there's no availability to the outside world.
How does clustering increase the performance of applications?
Hutchings: Again, it's the parallelization. An application -- especially in high-performance computing -- has a whole lot of algorithms or a whole lot of calculations. These are running in parallel when they're in a cluster. So I can take bits and pieces of the algorithms that need to be performed, and I can drive that across an entire cluster.
For example, with Lightning, I can drive [an application] across those 2,800 processors, so they're all processing a component of the application at the same time; then they come back and they answer a certain piece of information. All that information comes back to a central point, then it goes out and it does some more calculations, and then it comes back and does more calculation.
So what you're getting when you have a cluster is, again, parallel or simultaneous processing of a whole bunch of pieces of that code. Whereas if I'm using a single processor, it's done step by step -- I do one thing, and then I do the next, and then I do the next. That's why clusters are so much more efficient.
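The scatter/gather cycle described above (pieces of work go out to every node, partial answers come back to a central point, and the cycle repeats) can be sketched in Python. The function names, the number of rounds, and the toy "calculation" are assumptions for illustration; Lightning's actual codes would use something like MPI across real nodes:

```python
from multiprocessing import Pool

def partial_result(piece):
    """Each worker processes one component of the application."""
    return sum(piece)  # placeholder calculation

def scatter_gather(data, workers, rounds=2):
    """Repeatedly scatter work across workers and gather at a central point."""
    with Pool(processes=workers) as pool:
        for _ in range(rounds):
            # Scatter: split the data into pieces, roughly one per worker.
            size = max(1, len(data) // workers)
            pieces = [data[i:i + size] for i in range(0, len(data), size)]
            # Gather: partial answers return to a central point...
            partials = pool.map(partial_result, pieces)
            # ...which combines them and sends out the next round of work.
            data = partials
    return data

if __name__ == "__main__":
    print(scatter_gather(list(range(16)), workers=4))
```

With a single processor, the same work would be one long serial loop; the cluster's gain comes from every worker computing its piece at the same time.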
How should a company select a microprocessor? When and why should companies use AMD, for example, instead of Intel?
Hutchings: It all depends on the code that they're running. Opteron, as you may know, can run both 32- and 64-bit code. Intel's Xeon, by contrast, is 32-bit, and its Itanium is 64-bit. The reason LANL chose Opteron was that cross-processing capability: They were running both 32- and 64-bit code, and they needed both those capabilities.