The key word here for me is infrastructure, so I won't delve into programming or other internal details. While there are dozens of factors to consider, we'll look at the following:
- High availability and clustering
- Bits and bytes (endianness, data-type lengths and alignment)
- Virtualization
- Support
When clustering your Unix servers, one usually thinks of high availability. In the Unix world, this comes at a price. Whether you are using VERITAS for your Sun servers, PowerHA for IBM or Serviceguard for your HP-UX servers, they are all proprietary and carry hefty licensing costs. With Linux, there are many open source solutions available to you; one example is the Linux-HA project, which provides clustering solutions for Linux. While some of the Unix clustering vendors also offer Linux versions of their products, staying on the same proprietary clustering platform defeats, in a sense, some of the objectives of moving to Linux in the first place: moving toward open source and away from proprietary systems.
Another important factor is understanding the critical differences between the source and target systems, particularly when moving between 32- and 64-bit environments. The three big issues here are endianness (byte ordering), differences in data-type lengths and differences in data alignment between architectures. You must study these issues in great detail. Byte-ordering issues are often encountered by developers when migrating applications, device drivers or data files from 64-bit RISC architectures to the x86 architecture of Linux running on a PC. Of course, if you are going to run Linux on a RISC system such as the IBM Power servers, you won't have this issue when migrating from IBM's Unix, AIX, as you'll be staying on the same hardware platform.
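A few lines of Python make the byte-ordering and data-type-length pitfalls concrete. This is only an illustrative sketch: it packs the same 32-bit value in both byte orders and shows how the size of a native C `long` varies by platform.

```python
import struct
import sys

# Pack the 32-bit value 0x01020304 with explicit byte orders.
value = 0x01020304
big = struct.pack(">I", value)     # big-endian, typical of RISC Unix systems
little = struct.pack("<I", value)  # little-endian, as on x86

print(big.hex())      # 01020304
print(little.hex())   # 04030201 -- same value, bytes reversed on disk/wire
print(sys.byteorder)  # byte order of the host running this script

# Data-type lengths also differ: a native C long is commonly 8 bytes on
# 64-bit Linux but 4 bytes on 32-bit systems (and on 64-bit Windows).
print(struct.calcsize("@l"))
```

A binary data file written by code that assumes one of these layouts will be misread by code built with the other, which is exactly why migrated data files need explicit, fixed formats.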
Virtualization is another important factor. When moving to Linux, you will not be using the same virtualization systems you have grown accustomed to in the Unix world. Red Hat offers KVM in their new release of RHEL 5, and SUSE uses Xen. Sun has many different types of virtualization offerings, as does HP, while IBM offers PowerVM on their Unix servers. The only way you might be able to keep the same virtualization system would be to stay on the same hardware platform (i.e., IBM Power servers) - which is not always an option, as many organizations look to migrate away from Unix to move toward horizontally scaling clustered PCs.
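When evaluating KVM on x86 targets, one early check is whether the CPU exposes hardware virtualization extensions, which KVM requires. The helper below is a hypothetical sketch of my own, not part of any migration toolkit: it scans a CPU-flags line of the kind found in /proc/cpuinfo on Linux for the Intel (vmx) or AMD (svm) flags.

```python
# Sketch: does a CPU-flags line indicate hardware virtualization support?
# Intel VT-x shows up as "vmx", AMD-V as "svm" -- KVM needs one of them.
def has_hw_virt(flags_line: str) -> bool:
    flags = flags_line.split()
    return "vmx" in flags or "svm" in flags

# On a Linux host you could feed it the "flags" lines of /proc/cpuinfo:
# with open("/proc/cpuinfo") as f:
#     ok = any(has_hw_virt(l.partition(":")[2]) for l in f if l.startswith("flags"))

print(has_hw_virt("fpu vme de pse tsc msr vmx sse2"))  # True
print(has_hw_virt("fpu vme de pse tsc msr sse2"))      # False
```

Without these extensions, RHEL's KVM cannot run guests, so a check like this belongs early in any migration assessment.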
Finally, support is another important factor. As a Unix user, you may be accustomed to round-the-clock support from your hardware vendor. Things work a little differently with Linux. Unless you've signed a deal with your hardware vendor to provide Linux support for your systems running on their hardware, you will need to be more self-sufficient. This is not necessarily a bad thing, as Linux encourages collaboration, research and innovation. As a general rule (though this is changing, with increased support by hardware and software vendors, including Oracle for Linux), you will need to do more on your own, so be prepared.