Linux clusters give HPC price-performance

Find out which types of enterprise IT shops are checking out HPC Linux clusters and what's ahead for the technology.

High-performance computing (HPC) users are switching from supercomputers to Linux clusters, and enterprise IT shops are getting keen on that idea, too. Linux clusters cut job run times and offer stellar price/performance for applications running across hundreds of nodes, according to Alex Rublowsky, product marketing director for PathScale Inc. of Sunnyvale, Calif. In this interview, he explains why HPC on Linux makes sense.

Why use Linux clusters for HPC and not supercomputers?

Rublowsky: Price/performance: Linux clusters run on commodity servers. When you start adding up the cost of putting 32 commodity servers in a rack and scaling out from there, the total is a fraction of the price of a supercomputer.

Who do you think in the business sector will be an early adopter of the Linux clusters?

Rublowsky: It has got to be the people who run high-end, database-centric applications. Anywhere 64-bit is in play right now is a candidate for a Linux cluster. After all, 64-bit is available in commodity hardware today, and the folks using it are looking at the performance gains they can get. Why not get those gains from a cluster? The early adopters are financial services organizations and pharmaceutical companies; they understand the value of new information technology, such as Linux clusters, and are not afraid to capitalize on it.

Could you compare ease of management for a Linux cluster versus managing a supercomputer?

Rublowsky: If a commodity server burns out, you just plug another one in. They are cheap enough to do that, and that is what we see people doing today. A supercomputer is a very expensive item, and it requires a certain amount of love and care and maintenance. These commodity servers are practically disposable. There are dozens of companies and organizations developing innovative management tools for clusters, such as the ROCKS and OSCAR efforts. This field is flush with innovation that is applicable to both HPC and enterprise clusters. Only Cray and SGI are developing tools for supercomputers.

What's needed for HPC clusters to move out of scientific circles and into other types of businesses?

Rublowsky: Today, the main users are universities, bioscience groups, oil and gas companies, and government research labs. But what you can feel is that enterprise customers are looking at clustering and trying to see if it's a fit for them. They're asking, 'What would I need to do to take our SMP-based applications and move them onto clusters?'

To answer that question, we at PathScale are focusing on the three elements that we see as the big bottlenecks in clusters. The first is compiling the 64-bit software and making applications run as fast as they can. The second is cluster interconnect technology with very low latency, which lets applications scale more easily to dozens or hundreds of nodes; latency is the number one issue preventing SMP-based applications from moving to clusters. The third is MPI-focused performance tools, because in HPC there is clearly a lot of MPI code.
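To make the latency point concrete, a ping-pong microbenchmark is the standard way HPC shops measure one-way interconnect latency between two nodes. The following is a minimal sketch against the standard MPI C API; the iteration count and the mpicc/mpirun commands in the comments are illustrative assumptions, not anything PathScale ships.

```c
/* Ping-pong latency microbenchmark between two MPI ranks.
   Hypothetical build/run commands:
     mpicc -O2 pingpong.c -o pingpong
     mpirun -np 2 ./pingpong              */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank, i;
    char byte = 0;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    start = MPI_Wtime();

    /* Bounce a one-byte message back and forth; with a payload this
       small, the time measured is almost pure interconnect latency. */
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    elapsed = MPI_Wtime() - start;
    if (rank == 0)   /* each iteration is a round trip: two one-way hops */
        printf("average one-way latency: %.2f microseconds\n",
               elapsed / (2.0 * ITERATIONS) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Running this on gigabit Ethernet versus a low-latency cluster interconnect shows exactly the gap Rublowsky is describing: the lower that one-way number, the further an SMP-style application can scale before communication dominates.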

There are tools today that show you where the bottlenecks are in parallel applications, but we want to show the bottlenecks and also recommend how to remove them and improve performance.
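As a rough sketch of what such a tool measures underneath, the hypothetical MPI program below uses MPI_Wtime to separate compute time from communication/wait time and reports the worst case across all ranks; the uneven usleep call is a stand-in for a real application's compute phase, chosen purely for illustration.

```c
/* Sketch: splitting compute time from communication/wait time per rank,
   then reducing the worst cases to rank 0 to locate the bottleneck. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    double t0, compute_time, comm_time, max_compute, max_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Stand-in for the application's compute phase: a deliberately
       uneven, hypothetical workload (rank N sleeps N+1 milliseconds). */
    t0 = MPI_Wtime();
    usleep(1000 * (rank + 1));
    compute_time = MPI_Wtime() - t0;

    /* Stand-in for the communication phase: faster ranks sit in the
       barrier waiting for the slowest one, so this captures wait time. */
    t0 = MPI_Wtime();
    MPI_Barrier(MPI_COMM_WORLD);
    comm_time = MPI_Wtime() - t0;

    /* The max across ranks shows where the job as a whole loses time. */
    MPI_Reduce(&compute_time, &max_compute, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);
    MPI_Reduce(&comm_time, &max_comm, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("max compute: %.3f s, max communication/wait: %.3f s\n",
               max_compute, max_comm);

    MPI_Finalize();
    return 0;
}
```

A high communication/wait figure relative to compute is the signature of exactly the load imbalance and latency problems Rublowsky says the performance tools should flag and help remove.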

When you tackle those three elements together, you are going to be able to multiply your performance. The combination of these elements is critical in HPC applications, but within a year or two, they will be required in enterprise clusters as well.
