The following tip is excerpted from the chapter on choosing your server in TechTarget's upcoming e-book, "Servers and Storage." In this tip, the authors analyze the issues involved in choosing servers.
Choosing a server is a major decision that typically requires a significant investment of time and money, and making the wrong choice can have lasting consequences. In this tip, we cover a few planning considerations and then discuss how to choose between Windows and Unix.
A key element in choosing a server is determining which operating system (OS) to deploy on it. Here are a few points to consider:
- Is the server to be added to an existing server/computing environment?
- What constraints do existing or selected applications place on the OS selection?
Adding the server to an existing environment usually means the new server must use the same OS as the existing servers, due to application constraints and the limited skill set of existing operations staff.
If new applications are needed, the software vendor may require or strongly recommend a particular OS.
Windows vs. Unix and Linux
Except when you are forced into a proprietary OS, such as OS/400 or z/OS on certain IBM systems, or even Novell's NetWare, you have three possible OS candidates: Windows, Linux or Unix. By Unix, we refer to any of the Unix systems provided by systems vendors, such as IBM's AIX, Hewlett-Packard's HP-UX or Sun's Solaris.
The first choice is between Windows and an OS from the extended Unix or Linux families. As noted above, you may sometimes be forced into one over the other. But when you have freedom of choice, it is best to compare offerings using selection criteria based on their capabilities in areas such as scalability, robustness and cost of ownership.
How many clients must the server support?
A basic first step in server selection is identifying, application by application, how many clients must be supported simultaneously. Different applications consume different amounts of server resources, and this information will be used in configuring the server.
Whatever these client and application numbers turn out to be initially, you can expect them to grow over time due to business activity growth or an increase in computer-based activities. This natural growth makes the system's scalability important.
Scalability is a measure of the system's ability to grow in a variety of dimensions, including processing power, storage capacity, main memory size and network connectivity and bandwidth. A scalable system can be grown, which is usually a cheaper and less disruptive choice than replacing it with a bigger system.
Although scalability is a key property, little published information exists, and no useful rule of thumb is available, for reasonably estimating the scalability of a server system. Neither is there an established benchmark for measuring scalability.
It is important to note that an OS is tuned for better performance through a long cycle of measurement, experimentation and re-measurement. As a result, especially among multiprocessor operating systems, the one with the longest history usually offers more favorable characteristics than more recent systems.
What kinds of applications must be supported?
In considering which OS best supports your applications, it is useful to split applications up into four major classes:
- File, print or communications server
- Database server
- Application server
- Intensive computation server
This classification doesn't mean that applications from different classes can't coexist on a single server. But mixed-use systems can hit their limits quickly, because it is difficult to assign useful resource-allocation priorities among the application classes unless a resource manager is used.
In order to estimate resource consumption by client, first note which clients will use each application. Each application can demand a different resource mix from the server.
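As an illustration of this estimate (our sketch, not from the original text), the per-application demand can be summed into an aggregate figure. All the per-client resource numbers and client counts below are invented example values:

```python
# Hypothetical sketch: estimate aggregate server resource demand by summing,
# per application, the per-client resource mix weighted by the number of
# simultaneous clients. All figures are invented for illustration.

# Per-client resource mix for three of the application classes above:
# cpu in relative units, memory in MB, disk I/O in operations/second.
per_client = {
    "file_print": {"cpu": 0.05, "mem_mb": 2,  "io_ops": 20},
    "database":   {"cpu": 0.50, "mem_mb": 16, "io_ops": 80},
    "app_server": {"cpu": 0.30, "mem_mb": 8,  "io_ops": 10},
}

# Expected number of simultaneous clients for each application.
clients = {"file_print": 200, "database": 50, "app_server": 100}

def total_demand(per_client, clients):
    """Sum each resource across applications, weighted by client count."""
    totals = {}
    for app, mix in per_client.items():
        n = clients.get(app, 0)
        for resource, amount in mix.items():
            totals[resource] = totals.get(resource, 0) + amount * n
    return totals

print(total_demand(per_client, clients))
# e.g. {'cpu': 65.0, 'mem_mb': 2000, 'io_ops': 9000}
```

A real sizing exercise would replace these invented figures with measured per-client profiles, but the arithmetic stays the same.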
The nature of the work
The next question to address is the nature of the work to be supported by the server. Will it be used for business-critical applications, or workgroup, departmental or company-wide applications? Or something else?
A server can be dedicated to a workgroup, a department or some number of applications for the entire company. Even though putting all the needed applications on one server appears to be economical, it makes the system more vulnerable, because if one application takes down the system, all applications will fail.
As we have noted, when a server runs a heterogeneous workload, it can be difficult to balance resource demands among the applications. Although resource-management tools are available to ease this balancing dilemma, the problem may remain difficult to alleviate.
Generally, it is better to deploy several servers, with each dedicated to one or a small group of activities (and thus applications). This approach also reduces vulnerabilities, because with multiple servers, you have the possibility of moving work to the remaining machines if one fails. This use of server redundancy is the basis for high-availability systems.
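A quick back-of-the-envelope calculation (our illustration, with an assumed 99% per-server availability) shows why redundancy underpins high availability:

```python
# Illustrative sketch: if each of n independent servers is up with
# probability a, the probability that at least one is still running is
# 1 - (1 - a)**n. The 0.99 figure below is an assumed example value.

def availability_at_least_one(a: float, n: int) -> float:
    """Probability that at least one of n independent servers is up."""
    return 1 - (1 - a) ** n

single = availability_at_least_one(0.99, 1)  # one server: 99%
paired = availability_at_least_one(0.99, 2)  # two servers: 99.99%
print(single, paired)
```

Going from one server to two cuts the expected unavailable fraction from 1% to 0.01%, assuming independent failures; correlated failures (shared power, shared storage) reduce the benefit in practice.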
Total cost of ownership
While the purchase price of a server and necessary software is obviously a large component of its overall cost, we strongly recommend also evaluating the total cost of ownership (TCO) in the purchasing decision.
TCO involves both direct and indirect costs.
Primary direct costs include:
- Purchase price of hardware and software
- Running and administering the system
- Application development, support and communications
- Electrical power to run and cool the system
- Space to hold the system
Indirect costs are harder to quantify and include the costs of system downtime. These cover not just the direct effects of unavailability (lost productivity, for example) but also its soft costs: customers or suppliers who are suddenly unable to do business with you may leave you for a competitor.
When a system goes down or becomes very unresponsive, its users may simply wait for it to come back up. More often, they will attempt to work around the problem by calling on each other for help. In either case, they spend time and resources figuring out what to do, which also costs the company money.
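The direct-cost categories listed above can be combined with an estimated downtime cost into a simple annual TCO figure. This is our own sketch; every dollar amount and the downtime figures are invented for illustration:

```python
# Hypothetical annual TCO sketch combining the direct-cost categories
# from the list above with an estimated indirect cost of downtime.
# All amounts are invented example values.

direct_costs = {
    "hardware_software": 40_000,  # amortized purchase price per year
    "administration":    30_000,  # running and administering the system
    "app_support":       15_000,  # application development, support, comms
    "power_cooling":      6_000,  # electrical power to run and cool
    "floor_space":        4_000,  # space to hold the system
}

# Indirect cost: hours of downtime per year times the estimated cost
# per hour of lost productivity and lost business.
downtime_hours_per_year = 8
cost_per_downtime_hour = 2_500

def annual_tco(direct, hours_down, cost_per_hour):
    """Direct costs plus an estimated downtime cost."""
    return sum(direct.values()) + hours_down * cost_per_hour

print(annual_tco(direct_costs, downtime_hours_per_year,
                 cost_per_downtime_hour))
```

Even this crude model makes the point of the section: the purchase price is only one line item, and the downtime term can dominate if availability is poor.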
Benchmarks exist to compare system price and performance. But because they measure servers in specific ways, your usage may well differ. While the resulting benchmark numbers are real, some care must be taken in their use. Our advice is to employ them as useful indicators, or to winnow a large field of candidates down to a manageable number. Do not rely solely on them for purchase decisions.
Systems with good scalability are more expensive than those without. But buying more computing power than you currently need is almost always cheaper than replacing a server that turns out to be incapable of running the necessary workload.
About the authors:
René J. Chevance is an independent consultant. He formerly served as Chief Scientist of Bull, a European-based global IT supplier.
Pete Wilson is Chief Scientist of Kiva Design, a small consultancy and research company specializing in issues surrounding the move to multi-core computing platforms, with special emphasis on the embedded space. Prior to that, he spent seven years at Motorola/Freescale.
This was first published in October 2006