Truth and lies about Linux scalability

Jan Stafford

Suggest that Linux has been a slowpoke in scalability and Peter Honeyman will colorfully advise you to think again. Then he'll offer a choice word to those who are "paid" to dismiss Linux's scaling power. "For 32-bit architectures, Linux is at the head of the pack," said Honeyman, director of the Linux Scalability Project (LSP) and scientific director at the University of Michigan's Center for Information Technology Integration (CITI). Honeyman and his CITI colleagues have studied and helped build Linux's scalability prowess and robustness. Currently, LSP is working on the open-source Linux client and server for NFS version 4, which is in the new Linux 2.6 kernel. "We are now extending that into highly parallel access for massive clusters," he said.

In this interview, Honeyman offers his views on the current state of and the future of Linux scalability. He consulted on some of his responses with colleagues Chuck Lever and Trond Myklebust.

Has Linux really lagged behind in scalability? If so, how? If not, why has it gotten a bad rap for scalability?

Peter Honeyman: Yes, I have stopped beating my wife; I mean, no I haven't. I mean -- hey, wait a minute. Who says Linux lags in scalability? (I am aware that there are some who are paid to say so, and I offer a petard in their general direction!) You claim Linux is getting a bad rap in scalability, and ask me why? Perhaps some introspection is in order!

Seriously, though, SMP (symmetric multiprocessing) for Linux has lagged somewhat -- it's kind of a Red Queen scenario, where Linux developers have to run as fast as they can just to keep pace with SMP hardware opportunities. The Linux kernel thread model undoubtedly plays a role here, as does the reliance on spin locks. At the same time, it becomes increasingly difficult to develop applications and libraries that can take advantage of the ever-increasing scale in SMP hardware, so the challenge is broader than scalability in the kernel.
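To make the spin-lock point concrete, here is a minimal user-space sketch of the kind of lock Honeyman is referring to -- an illustration of the idea, not the kernel's actual implementation. Every CPU that fails to get the lock burns cycles polling the same memory location, so the waste grows with the number of contending processors:

    /* Minimal spin lock using C11 atomics -- illustrative only, not
     * the kernel's spinlock code.  Each waiter polls the flag,
     * bouncing its cache line between CPUs. */
    #include <stdatomic.h>

    typedef struct {
        atomic_flag locked;
    } spin_lock_t;

    #define SPIN_LOCK_INIT { ATOMIC_FLAG_INIT }

    static void spin_lock(spin_lock_t *l)
    {
        while (atomic_flag_test_and_set_explicit(&l->locked,
                                                 memory_order_acquire))
            ;  /* busy-wait: this CPU does no useful work */
    }

    static void spin_unlock(spin_lock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

Under light contention a spin lock is cheap, but as SMP machines grow, the time spent spinning -- and the cache-coherence traffic the polling generates -- scales with the number of waiting processors.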

How have Linux technologies evolved over the past five years in the area of scalability?

Honeyman: In many cases, developers have identified algorithms and data structures in the Linux kernel that were appropriate for certain assumptions about hardware and workload but were outpaced by external developments such as the plummeting cost of memory and disk.

For example, at CITI we studied the way Linux keeps track of page allocations of files. We discovered that the hash function used to find a page demonstrated superlinear behavior, studied several alternatives, and suggested a linear solution, which was adopted by the Linux kernel team. (You will now find the comment "Chuck Lever verified the effectiveness of this technique" in the kernel sources; see http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf.)
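The technique the kernel comment refers to is multiplicative hashing. As a rough sketch of the idea -- the function name, parameters, and exact constant here are illustrative, not the kernel's:

    /* Multiplicative hashing sketch.  The constant is floor(2^32 / phi),
     * the golden-ratio value that is a standard choice for spreading
     * consecutive keys across a table; the kernel uses a constant in the
     * same family.  Names here are illustrative. */
    #include <stdint.h>

    #define GOLDEN_RATIO_32 0x9e3779b9u

    static inline unsigned long
    page_hash(unsigned long inode, unsigned long offset, unsigned bits)
    {
        /* Multiply, then keep the high-order bits.  Unlike a simple
         * shift-and-add scheme, this keeps hash chains short even when
         * many pages of one file cluster at nearby offsets. */
        return ((uint32_t)(inode + offset) * GOLDEN_RATIO_32) >> (32 - bits);
    }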

We also studied scalable solutions to problems that arise when applications have to cope with huge numbers of active file descriptors (i.e., Web servers with many active connections), as well as some issues related to buffer management for I/O intensive workloads. CITI's research contributions have often built the case for kernel improvements that improve scalability for workloads that stress these resources.
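The file-descriptor problem is the classic select()/poll() bottleneck: both interfaces scan every registered descriptor on every call, so cost grows with connections rather than with activity. The epoll interface that landed in the 2.5 series is the scalable replacement. A minimal sketch, with error handling omitted and listen_fd assumed to be a listening socket set up elsewhere:

    /* Minimal epoll event loop.  epoll_wait returns only the ready
     * descriptors, so per-iteration cost tracks activity, not the
     * total number of open connections. */
    #include <sys/epoll.h>

    void event_loop(int listen_fd)
    {
        int epfd = epoll_create(1024);   /* argument is a size hint */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            struct epoll_event ready[64];
            int n = epoll_wait(epfd, ready, 64, -1);
            for (int i = 0; i < n; i++) {
                /* handle ready[i].data.fd: accept, or read/write */
            }
        }
    }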

What are some misconceptions about what scalability really means?

Honeyman: Scalability is not the same as performance. Fixing a bug that hurts performance does not necessarily make a system more scalable. Usually, when we talk about scalability, we are talking about the relationship of performance to resources over a range of workloads.

Here is one way to look at scalability: A system whose performance improves linearly with the addition of a bottleneck resource is scalable. For example, suppose a system is memory bound. If we double the physical memory and find that the system is still memory bound, we would expect throughput to double. If it doesn't, then the system does not scale appropriately with the size of physical memory.
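One way to make that test quantitative is to define a scaling efficiency (the notation here is mine, not a standard from the interview): grow the bottleneck resource R by a factor k and compare measured throughput T against the linear ideal,

    \[ E(k) = \frac{T(kR)}{k \, T(R)} \]

In Honeyman's example k = 2: if doubling the memory of a memory-bound system yields only 1.3 times the throughput, then E(2) = 0.65, and the system is not scaling with physical memory.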

There are other ways to look at scalability. For example, the success of Beowulf clusters has demonstrated that Linux is highly scalable for applications whose computing needs can be partitioned into 'shared nothing' parallel systems, e.g., finite-element analysis or password cracking. But not all applications can be partitioned this way. In those cases, the problem is not Linux; it is inherent in the application (or reflects insufficient innovation or understanding on the part of the developer).
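A shared-nothing workload is easy to sketch. In the password-cracking case, each node takes a disjoint slice of the key space and never talks to its peers; check_candidate() below is a hypothetical stand-in for the per-node work:

    /* Shared-nothing partitioning in miniature: node i of N scans its
     * own slice of the search space with no shared state. */
    #include <stdint.h>

    /* Hypothetical per-node work, stubbed for illustration. */
    static int check_candidate(uint64_t candidate)
    {
        return candidate == 123456789ull;   /* pretend target */
    }

    int search_slice(uint64_t space_size, int node_id, int num_nodes)
    {
        uint64_t begin = node_id * (space_size / num_nodes);
        uint64_t end   = (node_id + 1 == num_nodes)
                           ? space_size
                           : begin + space_size / num_nodes;

        for (uint64_t c = begin; c < end; c++)
            if (check_candidate(c))
                return 1;   /* found */
        return 0;
    }

Because there is no shared state to synchronize, adding a node adds nearly its full share of throughput -- which is exactly why Beowulf clusters scale so well for this class of problem.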

What is the state of Linux scalability now?

Honeyman: For 32-bit architectures, Linux is at the head of the pack: Memory and file sizes can now scale to the limits of 32-bit hardware. On to 64-bit!

In some cases, the massive scale of hardware affects overall system scalability, especially administerability. For example, a sufficiently massive storage subsystem stresses the ability to take backups -- if it takes 24 hours to copy the contents of a petabyte system to tape, then it is not possible to take daily backups without a fundamental change in approach. But these problems are inherent to the scale of hardware and are not specific to Linux.
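The arithmetic behind the petabyte example is worth spelling out (the bandwidth figure is a back-of-the-envelope estimate, not Honeyman's):

    \[ \frac{10^{15}\ \text{bytes}}{86{,}400\ \text{s}} \approx 11.6\ \text{GB/s} \]

Sustaining 11.6 GB/s of aggregate bandwidth to tape is far beyond a single drive of the era, so daily full backups give way to incremental or snapshot-based schemes -- a fundamental change in approach, as he says, whatever the operating system.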

How will the 2.6 kernel change the scalability picture for Linux?

Honeyman: Critical improvements to Linux scalability are reflected in both the 2.4 and 2.5 kernels, but some of the more far-reaching developments have been isolated to 2.5. NFSv4 is an example: a secure, scalable distributed file system designed for the global Internet. As 2.6 emerges, with all the improvements committed to 2.5, new features devoted to enhancements in scalability will become broadly available, standard on every desktop, server, and cluster.

What other technologies will be contributing to improvements in Linux scalability in the near future?

Honeyman: Ever-higher network bandwidth, e.g., 10 GbE and technologies such as RDMA (remote direct memory access), will challenge Linux developers. Massive SMP complexes demand advanced NUMA solutions that exploit processor affinities. Sixty-four-bit architectures also offer new challenges. History has shown that these engineering challenges serve as an inspiration to Linux developers, who relish the opportunity to prove their mettle and blaze a trail. This demonstrates the power of the open-source development model.
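The processor-affinity work he mentions has a user-visible end in the sched_setaffinity() call that arrived in the 2.5/2.6 timeframe. A minimal sketch of pinning the calling process to one CPU -- keeping in mind that real NUMA placement also involves memory policy, not just CPU binding:

    /* Pin the calling process to a single CPU with the Linux
     * affinity API.  Illustrative error handling only. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }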

FOR MORE INFORMATION:

SearchEnterpriseLinux.com expert response: "Has Linux lagged behind?"

SearchEnterpriseLinux.com expert response: "What role does scalability play in planning?"

SearchEnterpriseLinux.com expert response: "What's the difference between 'scalability' and 'enterprise-ready?'

FEEDBACK: What scalability issues are you hoping the 2.6 kernel answers?
Send your feedback to the SearchEnterpriseLinux.com news team.

