Unscrupulous competitors have convinced many businesses that Linux comes up short on scalability, according to SearchEnterpriseLinux.com's resident Linux experts. In this interview, they set the record straight about the 2.6 Linux kernel's scalability prowess.
Our Ask the Expert panelists on scalability are Sam Greenblatt, Kenneth Milberg, Matt O'Keefe and John H. Terpstra. Greenblatt is senior vice president and chief architect with the Linux technology group at Computer Associates, and Matt O'Keefe is chief technology officer of Sistina Software Inc. Kenneth Milberg is president of Unix Solutions, a Unix consulting firm. John H. Terpstra is co-founder of the Samba Team and president of PrimaStasys Inc. These experts offer free advice to those who pose questions via our Ask the Expert section.
Has Linux really lagged behind in scalability?
Greenblatt: Linux has not lagged behind in scalability, [but] some vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux's] horizontal scaling. The National Oceanographic and Atmospheric Administration (NOAA) has had similar results in predicting weather. In the commercial world, Shell Exploration is doing seismic work on Linux that was once done on a Cray [supercomputer].
Terpstra: Accusations have been made that Unix and Windows scale where Linux does not. The success stories above suggest otherwise.
Milberg: There are many other examples of Linux scalability. Unfortunately, once you get a bad reputation in this industry, it is hard to shake. Because Linux has a rep of being wonderful for e-mail or Web servers but not for scaling well in database environments -- regardless of the truth -- it will take time and lots of publicized success stories to break this rep. Perhaps the Linux folks can hire some Microsoft advertising people.
How have Linux technologies evolved during the past few years in the area of scalability?
Greenblatt: Cluster technologies have evolved. Beowulf builds on the Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM), together with channel-bonding network software, distributed inter-process communication and a distributed file system. These are the key technologies that allow Linux to scale. Other technologies include Giganet cLAN, Legion, Cplant, Jessica-2 and PARIS.
The horizontal scaling in Beowulf will continue to improve through better use of virtualization for shared workloads, distributed file systems, bonded dual networks and routed mesh networks. This will make Linux scalable almost without limit.
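In a Beowulf cluster that distribution of work is done with MPI or PVM across many machines; the divide-and-conquer idea behind horizontal scaling can be illustrated on a single box. The sketch below (our illustration, not Beowulf code) splits a sum across worker processes and has a coordinator gather the partial results -- the same pattern a cluster applies across nodes:

```python
from multiprocessing import Pool

def partial_sum(args):
    # Each "node" sums a strided slice of the problem.
    start, stop, step = args
    return sum(range(start, stop, step))

def parallel_sum(n=1_000_000, workers=4):
    # The coordinator farms out slices and gathers the parts,
    # mirroring the scatter/gather pattern of MPI programs.
    with Pool(workers) as pool:
        parts = pool.map(partial_sum, [(w, n, workers) for w in range(workers)])
    return sum(parts)

if __name__ == "__main__":
    print(parallel_sum())  # 499999500000
```

Doubling the worker count roughly halves each slice, which is why this style of workload scales horizontally: adding machines adds capacity without changing the program's structure.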
O'Keefe: Increasing memory size, storage device sizes, file system sizes, SMP support, networking technology -- and the list goes on and on.
Terpstra: Network drivers are more mature, and a greater range of hardware is fully supported. Many kernel I/O bottlenecks have been removed. USB support has been added, and much USB hardware is now supported in fully transparent plug-and-play mode -- more so than with any competing operating system. Memory management has been improved; process and task scheduling has improved greatly. The range of hardware supported overall is much greater. The TCP/IP protocol stack is more responsive and faster performing than any Unix TCP/IP stack on similar hardware.
What is the state of Linux scalability now?
Greenblatt: Excellent, with fault tolerance being built into the Intel blade servers. The Linux-HA 'Heartbeat' project and the Open Clustering Framework established under the Free Standards Group (FSG) are targeted at scaling to even higher ground.
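The failover idea behind Linux-HA's Heartbeat is simple: each node periodically announces that it is alive, and a peer that misses enough heartbeats is presumed dead so its services can be taken over. A minimal sketch of that detection logic (the class, names and timeout are ours for illustration, not Heartbeat's API):

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before a node is presumed dead

class HeartbeatMonitor:
    def __init__(self):
        self.last_seen = {}

    def beat(self, node, now=None):
        """Record a heartbeat announcement from a node."""
        self.last_seen[node] = time.monotonic() if now is None else now

    def dead_nodes(self, now=None):
        """Nodes whose last heartbeat is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

monitor = HeartbeatMonitor()
monitor.beat("node-a", now=0.0)
monitor.beat("node-b", now=2.5)
print(monitor.dead_nodes(now=4.0))  # node-a is past its deadline -> ['node-a']
```

In a real cluster the heartbeats travel over a dedicated serial or network link, and a declared-dead node triggers resource takeover rather than a printed list.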
Terpstra: During the past five years, Linux has grown up. In 1998, Linux would successfully run on one to two CPUs. Today, it can handle eight or more CPUs. In 1998, it could handle 2 GB of memory on 32-bit Intel hardware. Today, it can handle up to 64 GB of memory. Many Linux 2.6 kernel features have been back-ported to the 2.4 kernel and are already offered for use in key products. SuSE, Mandrake and Red Hat already provide support for several features that are now part of the default Linux kernel in 2.6. Examples include improved I/O, access control lists (ACLs), extended attributes, improved scheduling and much more.
The result is that existing Linux products are able to scale well beyond the needs of most sites wishing to use Linux.
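Whether a given machine sits comfortably inside the CPU and memory ranges Terpstra cites is easy to check from userspace. A quick sketch using only Python's standard library (the `os.sysconf` names are POSIX variables available on Linux):

```python
import os

def system_capacity():
    """Report online CPUs and physical memory, the two axes cited above."""
    cpus = os.sysconf("SC_NPROCESSORS_ONLN")
    mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return cpus, mem_bytes

cpus, mem_bytes = system_capacity()
print(f"{cpus} CPUs, {mem_bytes / 2**30:.1f} GiB RAM")
```

On the hardware of the day, most sites would see figures well under the eight-CPU, 64 GB ceilings described above.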
How will the 2.6 kernel change the scalability picture for Linux?
O'Keefe: Larger file system sizes, larger logical devices, more logical devices, larger main memories, better and more scalable SMP support, etc. With the release of Linux 2.6, large, multi-terabyte file systems become possible. Linux 2.6 supports symmetric multiprocessing beyond eight CPUs, which is sufficient for most systems today. It adds support for large-memory systems, offers a far more expansive feature set suitable for enterprise deployments, and has effectively caught up with most other Unix operating systems in the area of scalability.
Greenblatt: With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling.
Terpstra: The default 2.6 kernel provides more functionality than has been available to date in Linux distributions, other than those targeted at large servers. It improves plug-and-play support, extends hardware driver support, and significantly improves overall system performance.
Particular improvement will be noticed in database I/O performance and in systems where large numbers of users compete for scarce system resources.
Two years from now, where will Linux be, scalability-wise, in comparison to Windows and Unix?
O'Keefe: It will be at least comparable in most areas. In some areas, like maximum cluster sizes, Linux will be way beyond Unix and Windows, supporting thousands of machines in a single cluster.
FEEDBACK: What scalability issues are you hoping the 2.6 kernel answers?
Send your feedback to the SearchEnterpriseLinux.com news team.