Cray CTO: Supercomputers outshine Linux clusters in HPC, Part 1

In part one of this interview, Cray Canada chief technology officer Paul Terry evangelizes for supercomputers over Linux-based clusters -- and, somewhat ironically, for Cray's new Linux-based supercomputer.

Linux clusters cannot offer the same price/performance as supercomputers, according to Paul Terry, chief technology officer of Burnaby, British Columbia-based Cray Canada. In this interview, Terry explains that assertion and describes Cray's new Linux-based XD1 system, which will be priced competitively with other types of high-end Linux clusters.

What's the closest thing to a high-performance computing solution offered on Linux today? How does a Linux cluster differ from a supercomputer?

Paul Terry: I'll answer those questions in reverse.

Supercomputers and most Linux systems differ in their heritage and, as a result, in their architectures. Most Linux systems available today began life as computers for commercial applications -- databases, information management, Web servers. Architecturally they fall into two camps: Linux clusters, where processors are connected through I/O links, and shared memory machines, where processors exchange data and instructions through shared memory.

Cluster systems are designed to minimize cost, and their performance is severely limited by PCI (Peripheral Component Interconnect) bottlenecks. The shared memory approach does produce high-performing Linux systems. However, when multiple shared memory machines need to be clustered to meet application demands for more processing power, the resulting clusters suffer from the same performance problems as any other cluster.

On the other hand, supercomputers are purpose-built to handle HPC applications, which place enormous demands on both processing power and inter-processor communication. Their design includes high performance interconnects that provide high bandwidth, low-latency communications across the entire system, regardless of the number of processors required.

Another characteristic that differentiates supercomputers from Linux clusters is designed-in reliability, availability and manageability. A truly high-performance system must not only deliver high application efficiency, it must also be highly available. Cluster experts know that achieving this with clusters is very difficult and requires significant design and integration effort. In supercomputer systems, these features are built in, fully integrated into the design and operation of the system.

Where does Cray's new Linux-based supercomputer fit?

Terry: The Cray XD1 system, together with Cray's Red Storm platforms, will be the first Linux systems purpose-built to handle HPC workloads. The XD1 uses a new architecture that presents a real alternative to clusters while preserving the economics of commercial components. The Direct Connected Processor architecture breaks the communications bottleneck by embedding the interconnect, eliminating PCI and connecting processors directly to each other and to memory. The Cray Red Storm system, designed for Sandia, takes this same direct-connect approach.
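
To see why removing the PCI hop matters, consider a simple latency-plus-bandwidth model of moving one message between processors. This is a hypothetical sketch with invented figures -- the function transfer_time_us and all of its numbers are illustrative assumptions, not Cray's published specifications.

    # Hypothetical model: transfer time = latency + message size / bandwidth.
    # All figures are invented for illustration, not measured PCI or Cray numbers.

    def transfer_time_us(msg_bytes, latency_us, bandwidth_gb_s):
        """Microseconds to move one message across an interconnect."""
        bytes_per_us = bandwidth_gb_s * 1e3  # 1 GB/s = 1,000 bytes per microsecond
        return latency_us + msg_bytes / bytes_per_us

    msg = 8 * 1024  # an 8 KB halo-exchange message

    pci_attached = transfer_time_us(msg, latency_us=20.0, bandwidth_gb_s=1.0)
    direct_connect = transfer_time_us(msg, latency_us=2.0, bandwidth_gb_s=8.0)

    print("PCI-attached interconnect: %.1f us" % pci_attached)    # ~28.2 us
    print("Direct-connected:          %.1f us" % direct_connect)  # ~3.0 us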

The Top500.org ranks the efficiency of clusters versus traditional supercomputers and puts some Linux clusters ahead of supercomputers. What does that mean to you?

Terry: The Top500 list was never intended to rank the real world capabilities of systems. It's more valuable as a census of large systems than a ranking. It's based on Linpack, a single test that looks at only one attribute -- total processing power -- but doesn't measure sustained application performance or system efficiency. As a result, a highly efficient system at number 200 on the Top500 list can be more powerful in practice than a system in the top 10. That's why one of the co-publishers of the Top500 list, Jack Dongarra of the University of Tennessee, assembled a suite of tests last year that includes Linpack plus six other tests. His work was sponsored by the National Science Foundation, the Department of Energy and DARPA. The results of these new HPC Challenge benchmark tests are out now. I'm happy to report that based on results submitted by customers, purpose-built Cray systems rank higher on these more-realistic tests than any other systems.
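 
As a rough illustration of the efficiency point -- using invented numbers, not real Top500 entries -- Linpack reports a sustained rate (Rmax) against a theoretical peak (Rpeak), and a smaller system that sustains a large fraction of its peak can do more real work than a far bigger one that does not:

    # Hypothetical Linpack efficiency comparison (Rmax / Rpeak).
    # All figures are invented for illustration, not actual Top500 entries.

    def efficiency(rmax_gflops, rpeak_gflops):
        """Fraction of theoretical peak that the benchmark run actually sustained."""
        return rmax_gflops / rpeak_gflops

    big_cluster = efficiency(rmax_gflops=8000.0, rpeak_gflops=16000.0)   # 0.50
    purpose_built = efficiency(rmax_gflops=3600.0, rpeak_gflops=4000.0)  # 0.90

    print("Commodity cluster efficiency:    %.0f%%" % (big_cluster * 100))
    print("Purpose-built system efficiency: %.0f%%" % (purpose_built * 100))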

Is it true that some applications don't run well on cluster systems? Why not?

Terry: Yes, that's true for many HPC applications. Many of these applications, such as those used to simulate car crashes or weather systems, spend as much time exchanging data between processors as they do calculating. For these applications, it's all about the balance between the processor speed in a system and the bandwidth available to keep the processors busy. When that balance is off, the computer is inefficient, especially for challenging problems.
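
To put a number on that trade-off -- a hypothetical back-of-the-envelope model, not one Terry gives in the interview -- the fraction of time processors spend doing useful work falls quickly once per-step communication time rivals computation time:

    # Hypothetical model of parallel efficiency when communication time per
    # step is comparable to computation time. All numbers are invented.

    def useful_fraction(compute_s, comm_s):
        """Fraction of wall-clock time spent computing rather than communicating."""
        return compute_s / (compute_s + comm_s)

    # 10 ms of computation and 10 ms of data exchange per step: half the
    # machine's time goes to waiting on the interconnect.
    print(useful_fraction(compute_s=0.010, comm_s=0.010))  # 0.5

    # Cut communication to 1 ms per step (a faster interconnect) and the same
    # processors deliver nearly twice the useful work.
    print(useful_fraction(compute_s=0.010, comm_s=0.001))  # ~0.91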

Balance is expressed as the ratio of bytes of interconnect bandwidth to flops of processing power. A ratio of 1.0 or greater means you have a balanced architecture. Anything less than one and you have a system where the processors are in danger of being starved for data. Cray systems have a balance of at least 1.0 and as much as 2.0. These are balanced systems. Contrast those numbers against current clusters and SMP systems from the large U.S. vendors, which have ratios of less than 0.2. Even the upcoming IBM Blue Gene/L system has a ratio of less than 0.4.
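
For example -- with hypothetical hardware figures, not actual product specifications -- the ratio follows directly from a node's interconnect bandwidth and its peak floating-point rate:

    # Hypothetical bytes-per-flop balance calculation. The hardware figures
    # below are invented for illustration, not actual product specifications.

    def balance_ratio(interconnect_gb_s, peak_gflops):
        """Bytes of interconnect bandwidth available per floating-point operation."""
        return interconnect_gb_s / peak_gflops

    # 8 GB/s of bandwidth against 4 Gflops of peak compute: a balance of 2.0.
    print(balance_ratio(interconnect_gb_s=8.0, peak_gflops=4.0))  # 2.0

    # A PCI-attached node with roughly 1 GB/s of usable bandwidth against
    # 6 Gflops of peak compute has a ratio well under 0.2 -- it starves.
    print(balance_ratio(interconnect_gb_s=1.0, peak_gflops=6.0))  # ~0.17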

The commodity HPC systems out there are much worse. There is a category of HPC applications, referred to as embarrassingly parallel, that doesn't place such high demands on interprocessor communication. When processors don't need to exchange data so frequently, clusters can be just fine. But for really challenging problems, you just can't get there with clusters.
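
A minimal sketch of that category -- a hypothetical workload, using only the Python standard library -- shows why interconnect speed barely matters here: every task is independent, so the workers never exchange data.

    # Minimal illustration of an "embarrassingly parallel" workload: each task
    # is independent, so no interprocessor communication is needed at all.
    from multiprocessing import Pool

    def simulate_one_case(seed):
        """Stand-in for an independent simulation run (hypothetical workload)."""
        x = float(seed)
        for _ in range(100000):
            x = (x * 1.000001 + 1.0) % 1000003.0
        return x

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # Work items are farmed out with no data exchange between them.
            results = pool.map(simulate_one_case, range(16))
        print(results[:4])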

Are there any reasons besides price for a company to choose a Linux cluster-based HPC solution over a supercomputer-based solution?

Terry: Most customers are looking for more than price -- they want the best price/performance. There are some applications where a well-designed Linux cluster can deliver good price/performance: those embarrassingly parallel applications where processors spend little time exchanging data. But we're seeing two other factors that narrow even that advantage. First, customers view manageability and availability as part of the price/performance equation. Clusters are notoriously difficult to manage, requiring customers to purchase and integrate multiple software packages to manage a collection of separate computers. Compare that to attractively priced systems with sophisticated, designed-in system management capabilities.

To quote Sandia National Laboratories' Bill Camp, who will get the first Red Storm system from Cray, 'We expect to get substantially more real work done, at a lower overall cost, on a highly-balanced system like Red Storm than on a large-scale cluster.'

Second, customers are often looking for a system to run a collection of applications and want the best price/performance for the set. Unless embarrassingly parallel applications represent the majority of the system's workload, supercomputer-based systems are likely to win the price/performance battle.

Does a Linux cluster offer more flexibility and less costly scalability and upgradeability than a supercomputer?

Terry: Just the opposite. Clusters, at best, are a loose collection of unmanaged, individual microprocessor-based computers. Cluster users know it can take months to implement a cluster system. Scaling or upgrading these systems requires much more than simply ordering more parts; it opens up the whole integration exercise. From an application perspective, clusters limit application scaling. Bandwidth and latency restrictions significantly constrain performance as more processors are applied to a problem.

Systems like the Cray XD1, on the other hand, are designed for both application and system scalability. The high-speed interconnect ensures continued application performance gains as more processors are added, and management features automatically configure and initialize new system components.

FEEDBACK: Is Paul Terry off-base, or on-the-money?
Send your feedback to the SearchEnterpriseLinux.com news team.
