Is high-performance computing only used in scientific labs? In this interview, Cray Canada chief technology officer...
Paul Terry answers this question, offers advice on how to choose the right HPC system, and responds to questions from SearchEnterpriseLinux.com's readers.
The following questions were culled from e-mails and calls received in response to this site's previous Q&A with Terry, entitled "Linux clusters don't play in HPC".
Does the advent of high-performance, low-latency, standards-based interconnects put Linux clusters on a more equal footing with supercomputers?
Paul Terry: Commercial interconnect speeds are improving, but the fundamental cluster architecture and the PCI bottleneck still remain. To compete with supercomputers, systems need to be highly efficient on a broad spectrum of HPC applications, not just a narrow range of codes with minimal communications requirements. They need to employ an architecture that provides high bandwidth, low-latency connections between processors and memory, across the complete system.
Why does an enterprise need HPC? The enterprise is more starved for IO throughput than GFLOPS. What value does HPC bring?
Terry: It's not clear to me that the only thing enterprise applications are hungry for is IO. It is true that fast IO is one characteristic of a balanced HPC system, along with fast interconnect, fast processors and fast memory. We are seeing demand for balanced systems in many kinds of enterprises.
The automotive, aerospace, petroleum, chemical, pharmaceutical and weather/environmental industries rely heavily on HPC systems. These industrial HPC users need increasing performance to design and deliver safer products, sooner and at lower cost. Some of them need both to compute and to analyze huge volumes of data, for example, to accelerate drug discovery or find new energy sources.
Less obviously, typical commercial applications used by enterprises are running out of steam on their commercial servers. High-end users of financial analysis tools, databases and application servers are turning to HPC systems to meet their performance demands.
What is your reaction to the following statement from a SearchEnterpriseLinux.com reader? "The problem with clusters of any kind is that they aren't well-suited to most business needs. To provide real-time access to data, the database must sit on a single machine. And this is why mainframes are still used. Those who build databases on clusters will find that as the number of machines increases, so does the percentage of bandwidth consumed by consistency checks and cache updating. A database split among these servers would consume 3 times the normal bandwidth as the request is mirrored across the cluster. Obviously, a read-only database could be built to scale, but as the cluster grows, an updateable database rapidly reaches the point where internal consistency checks and management consume more of the cluster's resources than the requests themselves."
Terry: The bandwidth problem you are describing for databases is the problem that the HPC industry has been grappling with for many years. The reason that cluster systems offer poor application performance is that bandwidth and latency limitations severely restrict cluster efficiency. This is why an increasing number of enterprises that explored clusters as a way to reduce costs are now looking to purpose-built HPC system vendors to solve their problems. Clusters aren't able to solve them effectively or cost-efficiently.
Isn't the bottom line that HPC systems like Cray's and Linux clusters – even the HPC ones – are designed for different tasks? For example, one reader said: "For problems that have lots and lots of data interdependence, a power 'box' like the Cray is better. And for some types of extremely high interdependence problems, where even threads or child processes can't really be applied, just having the fastest CPU available is the ONLY solution. On the other hand, clusters can scale linearly for the types of problems that fit them."
Terry: It's true that HPC systems like Cray's and Linux clusters excel at different tasks. Linux clusters excel if you are running embarrassingly parallel applications, and if these represent the majority of the workload on your system. The problem is that many users have been acquiring clusters for mixed workloads with not-so-simple problems, drawn by the low entry pricing, because more capable systems haven't been available at attractive prices. The new Cray XD1 system is going to change that. These users will be able to purchase far more powerful systems within their budgets.
Our readers have asked us for tips about how to determine their companies' processing needs and make the correct purchase. Do you have any?
Terry: The most foolproof method, though not always the easiest, is to test the candidate systems on your own applications. The worst method is to look only at how much peak processing power -- the rated speed of a single processor times the number of processors -- you can buy with your budget. Organizations using this method often find out that their applications run at only 5-10% of this peak speed in practice (or even less).
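A back-of-the-envelope sketch of the gap Terry describes between peak and realized performance; the processor count, clock speed and flops-per-cycle figures below are illustrative assumptions, not the specs of any particular system:

```python
# Illustrative sketch: "peak" processing power versus what applications
# actually sustain. All figures here are hypothetical examples.

def peak_gflops(num_processors, ghz_per_processor, flops_per_cycle):
    """Peak rate = rated speed of a single processor x number of processors."""
    return num_processors * ghz_per_processor * flops_per_cycle

# Assumed example system: 64 processors at 2.0 GHz, 2 flops per cycle.
peak = peak_gflops(num_processors=64, ghz_per_processor=2.0, flops_per_cycle=2)
print(f"Rated peak: {peak:.0f} GFLOPS")

# Terry's 5-10% figure means the sustained rate is a small fraction of peak.
for efficiency in (0.05, 0.10):
    print(f"At {efficiency:.0%} efficiency: {peak * efficiency:.1f} GFLOPS sustained")
```

This is why buying on peak GFLOPS alone misleads: two systems with the same peak can differ several-fold in sustained performance, which is only revealed by running your own applications.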
The more challenging the application, the less efficiently it is likely to run on a cluster. Between these two methods, there are additional references you can use, including the HPC Challenge benchmark results on the Internet. These are relatively new tests, and the website's lists are continually updated with the performance of various HPC systems on seven main tests. It's relatively easy to see which tests most closely resemble your own application workload.